
Conversation

@ziyixiong-nv ziyixiong-nv (Collaborator) commented Sep 22, 2025

Summary by CodeRabbit

  • New Features
    • More robust handling of speculative decoding drafts, ensuring correct initialization and cleanup based on dynamic draft lengths.
  • Performance
    • Reduced scheduler overhead by removing redundant draft-token preprocessing, improving efficiency during generation.
  • Bug Fixes
    • Prevented stale or incorrect draft tokens from appearing in responses when draft length is zero or unavailable.
  • Refactor
    • Centralized draft-length evaluation to improve consistency across scheduling and response handling.

Description

Scheduling uses request.draft_tokens, but in the overlap path its initialization was delayed.
In the non-overlap path it is set to [0] * max_draft_len, so we need to do the same before scheduling.
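As a rough sketch of the fix (illustrative only, not the exact diff; the state names and helper come from the walkthrough below, and the import path and function name are assumptions):

    # Illustrative sketch of the pre-scheduling initialization; not the exact diff.
    # Before scheduling, give in-flight generation requests placeholder draft
    # tokens so the scheduler accounts for them, matching the non-overlap path.
    from tensorrt_llm._torch.pyexecutor.llm_request import LlmRequestState

    def _init_draft_tokens_before_scheduling(active_requests, max_draft_len):
        drafting_states = (LlmRequestState.GENERATION_IN_PROGRESS,
                           LlmRequestState.DISAGG_GENERATION_INIT)
        for request in active_requests:
            if request.state in drafting_states and not request.draft_tokens:
                request.draft_tokens = [0] * max_draft_len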

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
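For example, to run only a single test stage (stage name taken from the examples above):

    /bot run --stage-list "A10-PyTorch-1"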

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping validation without care can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since reusing results without care can break the top of tree.

@ziyixiong-nv ziyixiong-nv requested a review from a team as a code owner September 22, 2025 08:19
Signed-off-by: ziyixiong-nv <219238287+ziyixiong-nv@users.noreply.github.com>
@ziyixiong-nv (Collaborator Author)

/bot run


coderabbitai bot commented Sep 22, 2025

📝 Walkthrough

Centralizes speculative decoding draft-token handling in py_executor by initializing and deriving draft_tokens based on dynamic draft length. Removes corresponding preprocessing from the scheduler. Adds import of get_draft_token_length in py_executor and removes it from scheduler. Response handling now sets draft_tokens based on computed draft length, not directly from py_draft_tokens.

Changes

  • PyExecutor draft-token management (tensorrt_llm/_torch/pyexecutor/py_executor.py): Imports get_draft_token_length. Initializes draft_tokens during prepare/schedule for active requests with drafting enabled (states: GENERATION_IN_PROGRESS, DISAGG_GENERATION_INIT) using the model config's max_draft_len. On response, sets draft_tokens from py_draft_tokens only when the computed draft length > 0; otherwise sets it to []. (See the sketch after this list.)
  • Scheduler cleanup (tensorrt_llm/_torch/pyexecutor/scheduler.py): Removes the pre-processing loop that copied py_draft_tokens to draft_tokens based on draft length. Drops the import of and dependency on get_draft_token_length; retains only LlmRequest and LlmRequestState.
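In code terms, the response-side change amounts to roughly the following (a sketch based on the summary above; the surrounding response object and exact attribute access are assumptions):

    # Sketch of the response-side handling (illustrative, not the exact diff).
    # Only surface draft tokens when the computed draft length is positive;
    # otherwise return [] so stale tokens never leak into responses.
    if get_draft_token_length(request) > 0:
        response.draft_tokens = request.py_draft_tokens
    else:
        response.draft_tokens = []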

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Client
  participant PyExecutor
  participant Scheduler
  participant Model

  Client->>PyExecutor: Submit request
  Note over PyExecutor: Prepare/Schedule phase<br/>Initialize draft_tokens if drafting enabled
  PyExecutor->>Scheduler: Enqueue request(s)
  Scheduler->>Model: Run generation (speculative capable)
  Model-->>Scheduler: Tokens + optional py_draft_tokens
  Scheduler-->>PyExecutor: Responses
  Note over PyExecutor: On response<br/>draft_tokens = py_draft_tokens if get_draft_token_length(request)>0 else []
  PyExecutor-->>Client: LlmResponse

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, which is insufficient; the required threshold is 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Title Check ✅ Passed: The PR title "[https://nvbugs/5528405][fix] Set up draft_tokens before scheduling" is concise, follows the repository's ticket/type template, and accurately summarizes the primary change (initializing request.draft_tokens prior to scheduling), so a reviewer scanning history will understand the main intent.
  • Description Check ✅ Passed: The PR description explains the bug (delayed initialization of request.draft_tokens in the overlap path) and the intended fix (initialize draft_tokens to [0] * max_draft_len before scheduling), so the main template requirement is met; however, the Test Coverage section is empty and the body does not list concrete tests or verification steps. These omissions are non-critical to understanding the change but are important for review and CI validation. Overall the description is mostly complete but missing test/verification details.

@tensorrt-cicd (Collaborator)

PR_Github #19548 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #19548 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #14694 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

@mikeiovine (Collaborator)

We should add a test to cover this case

@ziyixiong-nv (Collaborator Author)

We should add a test to cover this case

@mikeiovine There is already a test, tests/unittest/_torch/speculative/test_eagle3.py::test_multi_eagle3[False], that should cover this issue, but I'm not sure why it doesn't catch it. Even when I revert my fix and set max_tokens to 2048, the issue can't be reproduced in this UT.
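(For reference, that parametrization can be run directly, quoting the test id for the shell:)

    pytest "tests/unittest/_torch/speculative/test_eagle3.py::test_multi_eagle3[False]"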

@mikeiovine (Collaborator)

Can you try lowering max_num_tokens? One of the ways that failing to do this can cause issues is that we can potentially go above the token limit if the scheduler is not aware of the draft tokens.
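(A made-up illustration of that failure mode: with max_draft_len = 3, each scheduled request actually contributes 1 + 3 = 4 tokens per step, so 16 requests need 64 tokens; a scheduler that counts only the single target token would see 16 and could overcommit a max_num_tokens budget of, say, 32.)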

@mikeiovine mikeiovine left a comment (Collaborator)

Accept to unblock

@ziyixiong-nv (Collaborator Author)

Can you try lowering max_num_tokens? One of the ways that failing to do this can cause issues is that we can potentially go above the token limit if the scheduler is not aware of the draft tokens.
After trying several different configurations (batch_size in [16, 32], max_num_tokens in [32, 64, 128, 256, 1024, 2048]), the issue still can't be reproduced in the UT. I'm going to merge this PR, and we can look into how to add the test later.
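For anyone retrying later, the shape of that sweep as a sketch (run_multi_eagle3_case is a hypothetical stand-in for the UT's setup; the parameter names are assumptions):

    # Hypothetical repro sweep; run_multi_eagle3_case is a made-up stand-in
    # for the unit test's setup, and the parameter names are assumptions.
    import itertools

    for batch_size, max_num_tokens in itertools.product(
            [16, 32], [32, 64, 128, 256, 1024, 2048]):
        run_multi_eagle3_case(max_batch_size=batch_size,
                              max_num_tokens=max_num_tokens)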

@ziyixiong-nv ziyixiong-nv merged commit 31ef03f into NVIDIA:main Sep 24, 2025
6 of 9 checks passed