[Bugfix] Fix incorrect import of CacheConfig #24631
Conversation
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Code Review
This pull request fixes an incorrect import of CacheConfig in vllm/attention/layers/cross_attention.py, replacing the import from the transformers library with the correct one from vllm.config. The change is clean and directly addresses the bug described. I have no further comments.
Sorry, the same problem happened again. #23459
Once we upgrade vLLM's transformers version, mypy should be able to detect the missing import.
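For context, a hedged illustration of what that detection would look like, assuming a transformers release that no longer exports CacheConfig (the file name is made up for this sketch):

```python
# bad_import.py -- illustrative only.
# Against a transformers version that has dropped CacheConfig, running
#     mypy bad_import.py
# reports an [attr-defined] error along the lines of:
#     error: Module "transformers" has no attribute "CacheConfig"
# i.e. the bug fixed here would be caught at type-check time rather than
# surfacing as a runtime ImportError when the Whisper encoder loads.
from transformers import CacheConfig  # noqa: F401
```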
```diff
 from vllm.attention.layer import Attention
 from vllm.attention.selector import get_attn_backend
-from vllm.config import VllmConfig
+from vllm.config import CacheConfig, VllmConfig
```
Small nit, but canonical imports are preferred:
```diff
-from vllm.config import CacheConfig, VllmConfig
+from vllm.config import VllmConfig
+from vllm.config.cache import CacheConfig
```
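(For context: importing CacheConfig from its defining submodule, vllm.config.cache, rather than relying on the re-export in the vllm.config package namespace keeps the import stable even if the package's re-export list changes; presumably that is the rationale behind the nit.)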
Let's address this separately.
Sorry, and thank you!
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: xuebwang-amd <xuebwang@amd.com>
Purpose
#21088 incorrectly imports the deprecated CacheConfig from the transformers library instead of from vLLM, which breaks the Whisper model (and any model that uses vLLM's Whisper implementation as an encoder) when using the latest transformers version.

FIX https://buildkite.com/vllm/ci/builds/30279/steps/canvas?sid=019936ef-4094-4ebf-85c3-047098b7a6ee
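For reference, the change in vllm/attention/layers/cross_attention.py amounts to a one-line import swap, sketched below (the comments are ours, not part of the diff):

```python
# Broken: transformers' CacheConfig is a deprecated, unrelated class, and
# the latest transformers releases no longer export it, so this import
# fails when the Whisper encoder is loaded.
# from transformers import CacheConfig

# Fixed: the vLLM-native CacheConfig that the attention layers expect.
from vllm.config import CacheConfig
```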
Test Plan
Test Result
Essential Elements of an Effective PR Description Checklist
supported_models.md and examples for a new model.