Conversation

@CSWYF3634076 (Contributor) commented Aug 8, 2025

Purpose

Support the Baidu Ernie4.5 VL model in vLLM.

Note: torch.compile is not supported. Because the model splits its experts into separate multimodal and text experts, enabling torch.compile may cause startup to fail.
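
As a usage illustration only (not from this PR), here is a minimal offline-inference sketch for trying the model; the checkpoint path and image URL are placeholders, and `enforce_eager=True` is one way to avoid compilation given the note above:

```python
# Minimal sketch, not part of this PR: offline inference with the Ernie4.5 VL
# model on vLLM. The model path and image URL are placeholders.
from vllm import LLM, SamplingParams

llm = LLM(
    model="/path/to/ernie4.5-vl-checkpoint",  # placeholder: local or HF checkpoint
    trust_remote_code=True,  # tokenizer/processor are loaded as remote code
    enforce_eager=True,      # skip compilation/CUDA graphs; torch.compile is unsupported here
)

messages = [{
    "role": "user",
    "content": [
        {"type": "image_url", "image_url": {"url": "https://example.com/cat.jpg"}},
        {"type": "text", "text": "Describe this image."},
    ],
}]

outputs = llm.chat(messages, SamplingParams(max_tokens=128))
print(outputs[0].outputs[0].text)
```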

github-actions bot commented Aug 8, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify bot added the documentation (Improvements or additions to documentation) and new-model (Requests to new models) labels Aug 8, 2025
@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request adds support for the Baidu Ernie4.5 VL model. The changes include new model implementation files, rotary embedding logic, and a custom processor. While the overall structure seems correct, there are several critical issues that need to be addressed. These include hardcoded values that make the code brittle, unsafe tensor operations that could lead to runtime errors, incorrect function calls in the processor, and some incomplete logic indicated by TODOs. Addressing these points will significantly improve the robustness and maintainability of the new model support.

@ywang96 (Member) left a comment

Thanks for the contribution! Is this PR ready for review? If not, could you please convert it to a draft PR?

@CSWYF3634076 CSWYF3634076 marked this pull request as draft August 9, 2025 07:21
@CSWYF3634076 CSWYF3634076 marked this pull request as ready for review August 12, 2025 08:41
@CSWYF3634076 (Contributor, Author) commented:

> Thanks for the contribution! Is this PR ready for review? If not, could you please convert it to a draft PR?

@ywang96 Hi, the PR is ready for review. Could you help review it?

@CSWYF3634076 CSWYF3634076 requested a review from ywang96 August 12, 2025 08:46
@CSWYF3634076 CSWYF3634076 changed the title [WIP][Model] Add Ernie4.5 VL Model Support [Model] Add Ernie4.5 VL Model Support Aug 12, 2025
@DarkLight1337 (Member) commented:

Sorry for the delay, I'll take a look later today

@mergify mergify bot added the ci/build label Aug 26, 2025
@Isotr0py Isotr0py enabled auto-merge (squash) August 26, 2025 13:51
@vllm-bot vllm-bot merged commit 644d57d into vllm-project:main Aug 27, 2025
64 of 71 checks passed
tc-mb pushed a commit to tc-mb/vllm that referenced this pull request Aug 27, 2025
@hmellor (Member) commented Aug 27, 2025

Why has decord been re-added as a dependency? In the past we went to the effort of removing it because it's been unmaintained for 3 years.

As far as I can tell nothing that was added in this PR needs it.

@DarkLight1337 (Member) commented:

It is only in the test dependencies, and is needed to load the model from HF Hub

@hmellor (Member) commented Aug 27, 2025

Ah I see. That'll be why nothing in vLLM appears to be using it. Thanks for explaining.

epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 28, 2025
xiao-llm pushed a commit to xiao-llm/vllm that referenced this pull request Aug 28, 2025
zhewenl pushed a commit to zhewenl/vllm that referenced this pull request Aug 28, 2025
zhewenl pushed a commit to zhewenl/vllm that referenced this pull request Sep 3, 2025
jinyouzhi pushed a commit to jinyouzhi/vllm that referenced this pull request Sep 11, 2025
jinyouzhi pushed a commit to jinyouzhi/vllm that referenced this pull request Sep 12, 2025
FeiDaLI pushed a commit to FeiDaLI/vllm that referenced this pull request Sep 25, 2025
jinyouzhi pushed a commit to jinyouzhi/vllm that referenced this pull request Sep 26, 2025
jinyouzhi pushed a commit to jinyouzhi/vllm that referenced this pull request Oct 24, 2025
@justicel commented:

It appears the decord requirement breaks this in production when trying to load the model. The decord lib isn't included in the vLLM Docker images, so this happens:

INFO 11-24 16:04:03 [scheduler.py:207] Chunked prefill is enabled with max_num_batched_tokens=2048.
(APIServer pid=1) INFO 11-24 16:04:03 [api_server.py:2056] vLLM API server version 0.11.2.dev201+g55c21c883
(APIServer pid=1) INFO 11-24 16:04:03 [utils.py:253] non-default args: {'model': '/models/ernie54-vl-28b-395bb358-6394-4721-be96-2fd42fd77299', 'trust_remote_code': True, 'served_model_name': ['ernie54-vl-28b'], 'gpu_memory_utilization': 0.95}
(APIServer pid=1) The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
(APIServer pid=1) INFO 11-24 16:04:03 [config.py:508] Replacing legacy 'type' key with 'rope_type'
(APIServer pid=1) INFO 11-24 16:04:10 [model.py:630] Resolved architecture: Ernie4_5_VLMoeForConditionalGeneration
(APIServer pid=1) INFO 11-24 16:04:10 [model.py:1745] Using max model len 131072
(APIServer pid=1) INFO 11-24 16:04:10 [scheduler.py:207] Chunked prefill is enabled with max_num_batched_tokens=8192.
(APIServer pid=1) Encountered exception while importing decord: No module named 'decord'
(APIServer pid=1) Traceback (most recent call last):
(APIServer pid=1)   File "<frozen runpy>", line 198, in _run_module_as_main
(APIServer pid=1)   File "<frozen runpy>", line 88, in _run_code
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 2178, in <module>
(APIServer pid=1)     uvloop.run(run_server(args))
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 96, in run
(APIServer pid=1)     return __asyncio.run(
(APIServer pid=1)            ^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/lib/python3.12/asyncio/runners.py", line 195, in run
(APIServer pid=1)     return runner.run(main)
(APIServer pid=1)            ^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
(APIServer pid=1)     return self._loop.run_until_complete(task)
(APIServer pid=1)            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 48, in wrapper
(APIServer pid=1)     return await main
(APIServer pid=1)            ^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 2106, in run_server
(APIServer pid=1)     await run_server_worker(listen_address, sock, args, **uvicorn_kwargs)
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 2125, in run_server_worker
(APIServer pid=1)     async with build_async_engine_client(
(APIServer pid=1)                ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=1)     return await anext(self.gen)
(APIServer pid=1)            ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 196, in build_async_engine_client
(APIServer pid=1)     async with build_async_engine_client_from_engine_args(
(APIServer pid=1)                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=1)     return await anext(self.gen)
(APIServer pid=1)            ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 237, in build_async_engine_client_from_engine_args
(APIServer pid=1)     async_llm = AsyncLLM.from_vllm_config(
(APIServer pid=1)                 ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/utils/func_utils.py", line 116, in inner
(APIServer pid=1)     return fn(*args, **kwargs)
(APIServer pid=1)            ^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/async_llm.py", line 219, in from_vllm_config
(APIServer pid=1)     return cls(
(APIServer pid=1)            ^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/async_llm.py", line 114, in __init__
(APIServer pid=1)     tokenizer = init_tokenizer_from_configs(self.model_config)
(APIServer pid=1)                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/tokenizer.py", line 295, in init_tokenizer_from_configs
(APIServer pid=1)     return get_tokenizer(
(APIServer pid=1)            ^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/tokenizer.py", line 221, in get_tokenizer
(APIServer pid=1)     tokenizer = AutoTokenizer.from_pretrained(
(APIServer pid=1)                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/transformers/models/auto/tokenization_auto.py", line 1122, in from_pretrained
(APIServer pid=1)     tokenizer_class = get_class_from_dynamic_module(class_ref, pretrained_model_name_or_path, **kwargs)
(APIServer pid=1)                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/transformers/dynamic_module_utils.py", line 604, in get_class_from_dynamic_module
(APIServer pid=1)     final_module = get_cached_module_file(
(APIServer pid=1)                    ^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/transformers/dynamic_module_utils.py", line 427, in get_cached_module_file
(APIServer pid=1)     modules_needed = check_imports(resolved_module_file)
(APIServer pid=1)                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/transformers/dynamic_module_utils.py", line 260, in check_imports
(APIServer pid=1)     raise ImportError(
(APIServer pid=1) ImportError: This modeling file requires the following packages that were not found in your environment: decord. Run `pip install decord`

@DarkLight1337 (Member) commented:

Yes, that's intentional. You have to install these additional dependencies manually because of licensing constraints. See #28722
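
For deployments hitting the error above, a minimal pre-flight check (a sketch, not from this PR) could fail fast when the optional dependency is missing:

```python
# Sketch: confirm the optional `decord` dependency is importable before
# starting vLLM, since the Ernie4.5 VL remote tokenizer code imports it.
import importlib.util

if importlib.util.find_spec("decord") is None:
    raise SystemExit(
        "Missing optional dependency `decord` (imported by the Ernie4.5 VL "
        "remote code). Install it manually, e.g. `pip install decord`; it is "
        "not bundled with vLLM for licensing reasons."
    )
```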

@justicel commented:

@DarkLight1337 Thanks. By any chance do you know who to talk to (or should I just submit a pull request), about adding details about this requirement to the Ernie 4.5 page(s)? https://docs.vllm.ai/projects/recipes/en/latest/Ernie/Ernie4.5-VL.html

@ywang96 (Member) commented Nov 25, 2025

> @DarkLight1337 Thanks. By any chance do you know who to talk to (or should I just submit a pull request), about adding details about this requirement to the Ernie 4.5 page(s)? https://docs.vllm.ai/projects/recipes/en/latest/Ernie/Ernie4.5-VL.html

Feel free to submit a PR! vLLM-recipe is a community project


Labels

ci/build; documentation (Improvements or additions to documentation); multi-modality (Related to multi-modality (#4194)); new-model (Requests to new models); ready (ONLY add when PR is ready to merge/full CI is needed)

7 participants