
Conversation

@ZhengWG (Contributor) commented Dec 19, 2025

Motivation

This PR adds video input support for Qwen-series models as a follow-up to #12263, addressing the requirements outlined in #15118.

Modifications

  1. Added a video processing pipeline
  2. Handled multimodal embedding aggregation with the new MultiModalEmbeddingData container (see the sketch below)
  3. Supported modality-based assignment of items across encoders
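
For orientation, here is a minimal sketch of what the MultiModalEmbeddingData container might look like. Only the req_id, num_parts, part_idx, and modality fields are visible in the review diff further down this page; the embedding and grid_thw fields and the Modality values are assumptions inferred from the summary below, not the PR's actual implementation.

from dataclasses import dataclass
from enum import Enum
from typing import Optional

import torch


class Modality(Enum):
    # In sglang this enum lives in sglang.srt.managers.schedule_batch;
    # it is redefined here only to keep the sketch self-contained.
    IMAGE = "image"
    VIDEO = "video"


@dataclass
class MultiModalEmbeddingData:
    req_id: str         # request this embedding chunk belongs to
    num_parts: int      # total number of chunks the request's mm items were split into
    part_idx: int       # index of this chunk within num_parts
    modality: Modality  # IMAGE or VIDEO
    embedding: Optional[torch.Tensor] = None  # encoder output for this chunk (assumed field)
    grid_thw: Optional[torch.Tensor] = None   # (t, h, w) grid dims for Qwen-VL M-RoPE (assumed field)

    def __repr__(self):
        return (
            f"MultiModalEmbeddingData(req_id={self.req_id}, "
            f"num_parts={self.num_parts}, part_idx={self.part_idx}, "
            f"modality={self.modality})"
        )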

Accuracy Tests

Launch scripts:

# language
MODEL_PATH=Qwen3-VL-30B-A3B-Instruct  # or Qwen2.5-VL-7B-Instruct
CUDA_VISIBLE_DEVICES=4 python3 -m sglang.launch_server --model-path ${MODEL_PATH} --disable-radix-cache \
        --host $HOST_IP --port 8003 --trust-remote-code --tp-size 1 \
        --enable-cache-report --log-level info  --encoder-urls 'http://127.0.0.1:8001' 'http://127.0.0.1:8002' \
        --mem-fraction-static 0.7 --chunked-prefill-size 8192 --attention-backend fa3 \
        --enable-multimodal --language-only
# encoder
CUDA_VISIBLE_DEVICES=0 python3 -m sglang.launch_server --model-path ${MODEL_PATH} \
        --host $HOST_IP --port 8001 --trust-remote-code --tp-size 1 \
        --enable-cache-report --log-level info  \
        --mem-fraction-static 0.7 --chunked-prefill-size 8192 --attention-backend fa3 \
        --mm-attention-backend fa3 --encoder-only
CUDA_VISIBLE_DEVICES=1 python3 -m sglang.launch_server --model-path ${MODEL_PATH} \
        --host $HOST_IP --port 8002 --trust-remote-code --tp-size 1 \
        --enable-cache-report --log-level info  \
        --mem-fraction-static 0.7 --chunked-prefill-size 8192 --attention-backend fa3 \
        --mm-attention-backend fa3 --encoder-only

Test scripts:

curl "http://127.0.0.1:8003/v1/chat/completions" \
--header 'Content-Type: application/json' \
--data '{
    "model": "auto",
    "messages": [
        {
            "role": "system", 
            "content": "You are a helpful assistant."
        },
        {
            "role": "user",
            "content": [
                {
                    "type": "video_url",
                    "video_url": {
                        "url": "http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerBlazes.mp4"
                    }
                },
                {
                    "type": "text",
                    "text": "Desribe the video."
                }
            ]
        }
    ],
    "stream": false,
    "max_tokens": 512
}'
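
The same request can also be issued from Python. The sketch below mirrors the curl payload exactly and assumes the language server launched above is listening on 127.0.0.1:8003; it uses the requests library.

# Python equivalent of the curl request above; assumes the language
# server from the launch scripts is listening on 127.0.0.1:8003.
import requests

payload = {
    "model": "auto",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": [
                {
                    "type": "video_url",
                    "video_url": {
                        "url": "http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerBlazes.mp4"
                    },
                },
                {"type": "text", "text": "Describe the video."},
            ],
        },
    ],
    "stream": False,
    "max_tokens": 512,
}

resp = requests.post("http://127.0.0.1:8003/v1/chat/completions", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])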
(Screenshot of the model's response omitted.)

Benchmarking and Profiling

Checklist

@gemini-code-assist bot commented

Summary of Changes

Hello @ZhengWG, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the system's multimodal capabilities by integrating native video input support for Qwen-series models. It involves a comprehensive refactoring of how multimodal data, including both images and videos, is handled from ingestion and encoding to processing and tokenization. Key changes include a new data structure for managing diverse multimodal embeddings, a dynamic load-balancing strategy for distributing multimodal items across encoders, and updates to the core processing logic to seamlessly incorporate video data alongside existing image functionality. This generalization allows for more flexible and powerful multimodal interactions.

Highlights

  • Multimodal Data Handling Refactor: The EmbeddingData class has been refactored to be more generic, and a new MultiModalEmbeddingData class was introduced to handle the aggregation of embeddings and grid dimensions for various modalities, including images and videos.
  • Video Input Support for Qwen-series: The system now supports video input for Qwen-series models, incorporating video loading, preprocessing using AutoProcessor and preprocess_video, and modality-specific feature extraction within the encoding server.
  • Dynamic Modality Assignment and Load Balancing: A new mechanism, _assign_items_by_modality, has been implemented to dynamically assign multimodal items (images and videos) across encoders, ensuring cross-modality load balancing (a hypothetical sketch follows this list).
  • Generalized Input Processing and Tokenization: Updates to the tokenizer and multimodal processors enable building input IDs and handling embeddings for both image and video data, correctly integrating modality-specific tokens and grid dimensions into the model's input.
  • API and Internal Flag Updates: Internal flags like need_wait_for_image have been renamed to need_wait_for_mm_inputs, and data structures such as num_items_assigned now support modality-specific assignments to reflect the generalized multimodal capabilities.
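
The PR diff shown on this page does not include _assign_items_by_modality itself, so the following is only a hypothetical sketch of a modality-aware round-robin assignment consistent with the description above; the actual implementation may differ.

# Hypothetical sketch only: the real _assign_items_by_modality is not
# shown on this page and may use a different strategy.
from collections import defaultdict


def _assign_items_by_modality(items, num_encoders):
    """Distribute multimodal items across encoders with per-modality balance.

    items: iterable of (item_id, modality) pairs, e.g. ("img-0", "image").
    Returns one list of item ids per encoder.
    """
    assignments = [[] for _ in range(num_encoders)]
    cursor = defaultdict(int)  # independent round-robin position per modality
    for item_id, modality in items:
        enc = cursor[modality] % num_encoders
        assignments[enc].append(item_id)
        cursor[modality] += 1
    return assignments


# Example: two encoders, mixed image/video items.
print(_assign_items_by_modality(
    [("img-0", "image"), ("vid-0", "video"), ("img-1", "image"), ("vid-1", "video")],
    num_encoders=2,
))
# -> [['img-0', 'vid-0'], ['img-1', 'vid-1']]

Because each modality keeps its own round-robin cursor, a burst of items in one modality does not pile onto a single encoder.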


@gemini-code-assist bot left a comment

Code Review

This pull request adds video input support for Qwen-series models, a significant feature enhancement. The changes are extensive, touching data structures for multimodal data, the encoding server, I/O structures, and the multimodal processors that handle video inputs alongside images.

The refactoring to introduce MultiModalEmbeddingData and generalize data handling for different modalities is well-executed. The logic for assigning multimodal items to encoders and processing them seems correct.

I have found two issues:

  1. A critical bug in python/sglang/srt/multimodal/processors/qwen_vl.py where the Modality enum is used without being imported, which would raise a NameError at runtime.
  2. A medium-severity bug in python/sglang/srt/disaggregation/encode_receiver.py where a __repr__ method returns a string with a missing closing parenthesis.

I've provided code suggestions to fix both issues. After addressing these points, the pull request should be in good shape.

  from sglang.srt.environ import envs
  from sglang.srt.layers.rotary_embedding import MRotaryEmbedding
- from sglang.srt.managers.schedule_batch import Modality, MultimodalDataItem
+ from sglang.srt.managers.schedule_batch import MultimodalDataItem

critical

The Modality enum is used in this file (e.g., in get_mm_data), but it is no longer imported. This will lead to a NameError at runtime.

Suggested change
- from sglang.srt.managers.schedule_batch import MultimodalDataItem
+ from sglang.srt.managers.schedule_batch import Modality, MultimodalDataItem

        return mm_data

    def __repr__(self):
        return f"MultiModalEmbeddingData(req_id={self.req_id}, num_parts={self.num_parts}, part_idx={self.part_idx}, modality={self.modality}"

medium

The f-string returned by __repr__ is missing its closing parenthesis, so the printed representation will be unbalanced whenever this method is called during debugging.

Suggested change
- return f"MultiModalEmbeddingData(req_id={self.req_id}, num_parts={self.num_parts}, part_idx={self.part_idx}, modality={self.modality}"
+ return f"MultiModalEmbeddingData(req_id={self.req_id}, num_parts={self.num_parts}, part_idx={self.part_idx}, modality={self.modality})"

@ZhengWG ZhengWG changed the title [EPD][VLM] support video input(qwen-series) [WIP][EPD][VLM] support video input(qwen-series) Dec 19, 2025