[Bugfix] when set offline model running error #23711
Conversation
Signed-off-by: rongfu.leng <rongfu.leng@daocloud.io>
Code Review
This pull request addresses a crash that occurs when running vLLM in offline mode (HF_HUB_OFFLINE=1). The root cause was that creating a default EngineArgs instance for logging purposes would trigger network access. The fix cleverly avoids this by initializing the default arguments with the already-resolved model path from the provided arguments, which prevents the crash. The logic for detecting a non-default model is also correctly adjusted. This is a solid and well-targeted fix for the reported bug.
@Isotr0py please take a look. thanks ~
Purpose
Fix: #23684
Test Plan
Test Result
HF_HUB_OFFLINE=1 python3 offline_test.py