
Conversation

@ziyixiong-nv
Collaborator

@ziyixiong-nv ziyixiong-nv commented Nov 17, 2025

Summary by CodeRabbit

  • Bug Fixes
    • Enhanced precision in KV cache length tracking by properly distinguishing between actual cached data and internal buffer management allocations, ensuring accurate memory state reporting during model execution.

Description

The variable mMaxSeqLenKv in the attention kernel should exclude the draft_tokens, since it refers to host_past_kv_len.

With this change, GPQA testing with 1-model speculative decoding at reasoning high reaches a score of 0.76, similar to the score without speculative decoding.

There is no dedicated test for this fix. In GPQA testing, a repetition issue occurs more often when 1-model speculative decoding is enabled without the fix. However, in an individual run, the test without speculative decoding can also hit the repetition issue when using certain specific tactics.

This PR also removes unnecessary code in prepare_flash_mla.
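To make the intent of the fix concrete, here is a minimal sketch of the idea: runtime KV lengths should be derived from the actually cached data, excluding the extra slots reserved for draft tokens. The function and variable names below are illustrative stand-ins based on the walkthrough, not the exact TRT-LLM code.

```python
# Illustrative sketch, not the actual TRT-LLM implementation.
# The padded kv_lens include slots reserved for draft tokens; the
# attention kernel's mMaxSeqLenKv should be computed from lengths
# that exclude those internal-only slots.

def compute_kv_lens_actual(kv_lens: list[int], num_extra_kv_tokens: int) -> list[int]:
    """Return per-request KV lengths without internally reserved extra tokens."""
    return [length - num_extra_kv_tokens for length in kv_lens]

kv_lens = [130, 258]     # cached length plus reserved draft-token slots
num_extra_kv_tokens = 2  # e.g., draft tokens for 1-model spec dec
kv_lens_actual = compute_kv_lens_actual(kv_lens, num_extra_kv_tokens)

# Before the fix, the kernel effectively saw max(kv_lens); after the
# fix it sees max(kv_lens_actual), which matches host_past_kv_len.
max_seq_len_kv = max(kv_lens_actual)
```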

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
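Putting the flags above together, a few illustrative invocations (posted as PR comments, not shell commands) might look like:

```
/bot run
/bot run --disable-fail-fast --gpu-type "A30, H100_PCIe"
/bot run --stage-list "A10-PyTorch-1" --debug
/bot run --reuse-test --extra-stage "H100_PCIe-TensorRT-Post-Merge-1"
```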

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@ziyixiong-nv ziyixiong-nv marked this pull request as ready for review November 17, 2025 07:52
@ziyixiong-nv ziyixiong-nv requested a review from a team as a code owner November 17, 2025 07:52
@coderabbitai
Contributor

coderabbitai bot commented Nov 17, 2025

📝 Walkthrough

Walkthrough

Introduces a new kv_lens_actual attribute to the TrtllmAttentionWrapper to track actual KV cache lengths separately from internal cache management tokens, and updates runtime KV length references to derive from this actual length instead of the original kv_lens.

Changes

Cohort: KV Cache Length Tracking
File(s): tensorrt_llm/_torch/attention_backend/trtllm.py
Summary: Adds a kv_lens_actual attribute to store the KV length without extra tokens; documents that num_extra_kv_tokens are internal only; updates kv_lens_runtime to derive from kv_lens_actual while kv_lens_cuda_runtime remains unchanged.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

  • Single file with localized, consistent pattern changes
  • Attribute addition and reference updates without logic modifications
  • Changes are straightforward data flow rearrangement with accompanying documentation
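The attribute-and-derivation pattern described in the walkthrough can be sketched as follows. This is a simplified illustration under assumed names (the class shape and helper names are hypothetical, not the real `TrtllmAttentionWrapper`):

```python
# Hypothetical sketch of the described change: the wrapper keeps both
# the padded kv_lens used for internal cache management and a
# kv_lens_actual that excludes the extra tokens, and the runtime KV
# lengths now derive from the actual lengths.

class TrtllmAttentionWrapperSketch:
    def __init__(self, kv_lens: list[int], num_extra_kv_tokens: int):
        # kv_lens includes num_extra_kv_tokens internal-only slots.
        self.kv_lens = kv_lens
        # kv_lens_actual tracks only the data that is really cached.
        self.kv_lens_actual = [n - num_extra_kv_tokens for n in kv_lens]

    @property
    def kv_lens_runtime(self) -> list[int]:
        # Derived from the actual lengths, so the kernel's max KV
        # sequence length no longer counts draft tokens.
        return self.kv_lens_actual

w = TrtllmAttentionWrapperSketch([34, 66], num_extra_kv_tokens=2)
```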

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
  • Description check (❓ Inconclusive): The PR description partially addresses the issue but lacks comprehensive detail about the technical changes and their impact. Resolution: provide a clearer explanation of what mMaxSeqLenKv is, why draft_tokens should be excluded, and confirm test coverage details for the GPQA validation.
✅ Passed checks (1 passed)
  • Title check (✅ Passed): The pull request title clearly specifies the fix: excluding draft tokens from mMaxSeqLenKv, with proper NVBugs ticket reference and [fix] type indicator.


@ziyixiong-nv
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #24752 [ run ] triggered by Bot. Commit: 58aaac0

@tensorrt-cicd
Collaborator

PR_Github #24752 [ run ] completed with state FAILURE. Commit: 58aaac0

@ziyixiong-nv
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #24754 [ run ] triggered by Bot. Commit: 8fa324d

@ziyixiong-nv
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #24756 [ run ] triggered by Bot. Commit: 80dfd67

@tensorrt-cicd
Collaborator

PR_Github #24754 [ run ] completed with state ABORTED. Commit: 8fa324d
LLM/main/L0_MergeRequest_PR #18671 (Blue Ocean) completed with status: ABORTED

Collaborator

@schetlur-nv schetlur-nv left a comment


@ziyixiong-nv looking over this change, it seems like a very subtle bug that is not restricted to gpt-oss (or maybe not even to 2-model), is that right?
Can we add a unit test for this issue in the MR, please?

@mikeiovine
Collaborator

Great work @ziyixiong-nv. Seems pretty hard to test this by verifying accuracy end-to-end - shall we add a test that injects a mock model and validates the values of AttentionMetadata.kv_lens instead?
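A sketch of the suggested test shape, under assumed names: the helper `make_metadata` below is a hypothetical stand-in for constructing `AttentionMetadata` from a mock model run, not a real TRT-LLM test utility.

```python
# Hypothetical pytest-style sketch of the suggested test: build
# attention metadata with extra draft-token slots reserved, then assert
# that the actual KV lengths exclude them.

def make_metadata(kv_lens: list[int], num_extra_kv_tokens: int) -> dict:
    """Stand-in for building AttentionMetadata from a mock model run."""
    return {
        "kv_lens": kv_lens,
        "kv_lens_actual": [n - num_extra_kv_tokens for n in kv_lens],
    }

def test_kv_lens_excludes_draft_tokens():
    metadata = make_metadata(kv_lens=[18, 34], num_extra_kv_tokens=2)
    # The lengths fed to the kernel must not count draft tokens.
    assert metadata["kv_lens_actual"] == [16, 32]
    # Every actual length is strictly below its padded counterpart.
    assert all(a < p for a, p in zip(metadata["kv_lens_actual"],
                                     metadata["kv_lens"]))
```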

@tensorrt-cicd
Collaborator

PR_Github #24756 [ run ] completed with state SUCCESS. Commit: 80dfd67
/LLM/main/L0_MergeRequest_PR pipeline #18673 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

@longlee0622 longlee0622 enabled auto-merge (squash) November 18, 2025 02:04
Signed-off-by: ziyixiong-nv <219238287+ziyixiong-nv@users.noreply.github.com>
Signed-off-by: ziyixiong-nv <219238287+ziyixiong-nv@users.noreply.github.com>
@ziyixiong-nv
Collaborator Author

/bot run

@ziyixiong-nv
Collaborator Author

Great work @ziyixiong-nv. Seems pretty hard to test this by verifying accuracy end-to-end - shall we add a test that injects a mock model and validates the values of AttentionMetadata.kv_lens instead?

Thanks. This is a good idea. I've added a test for it.

@tensorrt-cicd
Collaborator

PR_Github #24845 [ run ] triggered by Bot. Commit: c4976a5

Signed-off-by: ziyixiong-nv <219238287+ziyixiong-nv@users.noreply.github.com>
@ziyixiong-nv
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #24846 [ run ] triggered by Bot. Commit: 9e682cc

@tensorrt-cicd
Collaborator

PR_Github #24845 [ run ] completed with state ABORTED. Commit: c4976a5
LLM/main/L0_MergeRequest_PR #18754 (Blue Ocean) completed with status: ABORTED

@tensorrt-cicd
Collaborator

PR_Github #24846 [ run ] completed with state SUCCESS. Commit: 9e682cc
/LLM/main/L0_MergeRequest_PR pipeline #18755 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

@longlee0622 longlee0622 merged commit 7c4344b into NVIDIA:main Nov 18, 2025
6 of 7 checks passed
lkomali pushed a commit to lkomali/TensorRT-LLM that referenced this pull request Nov 19, 2025
…qLenKv (NVIDIA#9210)

Signed-off-by: ziyixiong-nv <219238287+ziyixiong-nv@users.noreply.github.com>
Signed-off-by: lkomali <lkomali@nvidia.com>