
Conversation

@leslie-fang25
Collaborator

@leslie-fang25 leslie-fang25 commented Jul 24, 2025

Summary by CodeRabbit

  • Documentation
    • Updated the compatibility matrix to indicate that the "MTP" feature now supports "Chunked Prefill."
  • Tests
    • Expanded test coverage for the "Chunked Prefill" feature in the DeepSeekV3Lite test suite.
    • Added new test cases and updated existing ones to explicitly include the "enable_chunked_prefill" parameter.
    • Adjusted test lists and configurations to reflect the new parameter across multiple test environments.

Description

This PR adds an accuracy test combining MTP and chunked prefill to TestDeepSeekV3Lite and also updates the feature combination matrix.

Test Coverage

python -u -m pytest -s -v tests/integration/defs/accuracy/test_llm_api_pytorch.py::TestDeepSeekV3Lite::test_bfloat16
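
For reference, a minimal sketch of how the new parameterization might look. This is an illustrative reconstruction, not the repository's exact code: the model path is a placeholder, the pre-existing parameters and the accuracy-task call are elided, and only the `enable_chunked_prefill` plumbing described in the walkthrough below (passing the flag through to the `LLM` constructor) is shown.

```python
import pytest

from tensorrt_llm import LLM  # the LLM API the PyTorch-backend accuracy tests drive


class TestDeepSeekV3Lite:

    @pytest.mark.parametrize("enable_chunked_prefill", [False, True],
                             ids=lambda v: f"enable_chunked_prefill={v}")
    def test_bfloat16(self, enable_chunked_prefill):
        # The pre-existing parameters (mtp_nextn, attention_dp, cuda_graph,
        # overlap_scheduler, torch_compile) are unchanged and omitted here.
        llm = LLM(model="DeepSeek-V3-Lite/bf16",  # placeholder model path
                  enable_chunked_prefill=enable_chunked_prefill)
        # ... run the shared accuracy task against `llm` (elided) ...
```

With ids generated this way, the command above collects both variants; a single variant can be selected by appending the full bracketed parameter id (other ids elided here), e.g. `::TestDeepSeekV3Lite::test_bfloat16[...-enable_chunked_prefill=True]`.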

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
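
For example (the stage name here is illustrative), commenting `/bot run --disable-fail-fast --stage-list "A10-PyTorch-1"` launches a pipeline that runs only that stage and does not stop early on failures; as noted above, a --stage-list run does not update the GitHub check status.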

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@coderabbitai
Contributor

coderabbitai bot commented Jul 24, 2025

📝 Walkthrough

Walkthrough

The changes add and parameterize the enable_chunked_prefill option for the TestDeepSeekV3Lite::test_bfloat16 test across integration test definitions, test lists, and duration records. The compatibility matrix documentation is updated to reflect "Chunked Prefill" support for the "MTP" feature. No changes are made to core logic or public APIs.

Changes

| Cohort / File(s) | Change Summary |
|---|---|
| **Feature Documentation Update**<br>`docs/source/torch/features/feature_combination_matrix.md` | Updated the "MTP" feature's compatibility matrix entry to indicate support ("Yes") for "Chunked Prefill" instead of "Untested". |
| **Test Implementation**<br>`tests/integration/defs/accuracy/test_llm_api_pytorch.py` | Parameterized the `test_bfloat16` method in `TestDeepSeekV3Lite` to accept `enable_chunked_prefill` (True/False), updated the test signature, and passed the parameter to the `LLM` constructor. |
| **Test Duration Records**<br>`tests/integration/defs/.test_durations` | Appended `enable_chunked_prefill=False` to all relevant `TestDeepSeekV3Lite::test_bfloat16` test entries; duration values unchanged. |
| **Test List: l0_b200**<br>`tests/integration/test_lists/test-db/l0_b200.yml` | Updated existing `test_bfloat16` entries to include `enable_chunked_prefill=False` and added a new entry with `enable_chunked_prefill=True`. |
| **Test List: l0_h100**<br>`tests/integration/test_lists/test-db/l0_h100.yml` | Replaced all original `test_bfloat16` entries with versions explicitly setting `enable_chunked_prefill=False`. |
| **Test List: l0_gb200**<br>`tests/integration/test_lists/test-db/l0_gb200.yml` | Added three new `test_bfloat16` entries with various combinations of `mtp_nextn` and `enable_chunked_prefill` (False/True). |
| **Test List: QA Function Full**<br>`tests/integration/test_lists/qa/llm_function_full.txt` | Appended `enable_chunked_prefill=False` to the `test_bfloat16` test identifier. |
| **Test List: QA Function RTX6KD**<br>`tests/integration/test_lists/qa/llm_function_rtx6kd.txt` | Appended `enable_chunked_prefill=False` to all `test_bfloat16` entries. |
| **Test List: QA Function Sanity**<br>`tests/integration/test_lists/qa/llm_function_sanity.txt` | Appended `enable_chunked_prefill=False` to the relevant `test_bfloat16` test identifier. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant TestRunner
    participant TestDeepSeekV3Lite
    participant LLM

    TestRunner->>TestDeepSeekV3Lite: test_bfloat16(..., enable_chunked_prefill)
    TestDeepSeekV3Lite->>LLM: LLM(..., enable_chunked_prefill)
    LLM-->>TestDeepSeekV3Lite: (runs test with/without chunked prefill)
    TestDeepSeekV3Lite-->>TestRunner: Test result
```

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Suggested labels

Community want to contribute

Suggested reviewers

  • litaotju
  • yilin-void
  • yizhang-nv
  • crazydemo
  • pamelap-nvidia


📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cb7a677 and 639b7b8.

📒 Files selected for processing (9)
  • docs/source/torch/features/feature_combination_matrix.md (1 hunks)
  • tests/integration/defs/.test_durations (2 hunks)
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py (3 hunks)
  • tests/integration/test_lists/qa/llm_function_full.txt (1 hunks)
  • tests/integration/test_lists/qa/llm_function_rtx6kd.txt (1 hunks)
  • tests/integration/test_lists/qa/llm_function_sanity.txt (1 hunks)
  • tests/integration/test_lists/test-db/l0_b200.yml (1 hunks)
  • tests/integration/test_lists/test-db/l0_gb200.yml (1 hunks)
  • tests/integration/test_lists/test-db/l0_h100.yml (1 hunks)
✅ Files skipped from review due to trivial changes (3)
  • tests/integration/test_lists/qa/llm_function_sanity.txt
  • tests/integration/test_lists/qa/llm_function_rtx6kd.txt
  • tests/integration/test_lists/qa/llm_function_full.txt
🚧 Files skipped from review as they are similar to previous changes (6)
  • docs/source/torch/features/feature_combination_matrix.md
  • tests/integration/test_lists/test-db/l0_h100.yml
  • tests/integration/defs/.test_durations
  • tests/integration/test_lists/test-db/l0_gb200.yml
  • tests/integration/test_lists/test-db/l0_b200.yml
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check

@leslie-fang25
Collaborator Author

/bot run

@coderabbitai coderabbitai bot requested review from chzblych and yiqingy0 July 24, 2025 01:33
@leslie-fang25 leslie-fang25 requested review from QiJune and removed request for chzblych and yiqingy0 July 24, 2025 01:33
@tensorrt-cicd
Collaborator

PR_Github #12766 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #12766 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #9506 completed with status: 'FAILURE'

@leslie-fang25 leslie-fang25 force-pushed the leslie/add_accuracy_test_mtp_chunk_prefill branch from d604a98 to c17f0ac on July 24, 2025 06:08
@leslie-fang25
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #12813 [ run ] triggered by Bot

@leslie-fang25 leslie-fang25 requested a review from QiJune July 24, 2025 06:20
@tensorrt-cicd
Collaborator

PR_Github #12813 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #9547 completed with status: 'FAILURE'

@coderabbitai coderabbitai bot requested a review from yiqingy0 July 24, 2025 07:46
@leslie-fang25
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #12823 [ run ] triggered by Bot

@yizhang-nv
Member

I think you should also enable tests in l0_gb200.yml and skip tests with torch.compile enabled since we do not support chunked prefill for torch.compile right now.

@tensorrt-cicd
Collaborator

PR_Github #12823 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #9557 completed with status: 'FAILURE'

@leslie-fang25
Collaborator Author

leslie-fang25 commented Jul 24, 2025

> I think you should also enable tests in l0_gb200.yml and skip tests with torch.compile enabled since we do not support chunked prefill for torch.compile right now.

Thanks for your suggestion. I copied 3 test configurations (all with torch.compile disabled) from l0_b200.yml to l0_gb200.yml. To my understanding, B200 and GB200 share the same architecture; may I know why you suggest enabling these cases on GB200 in addition to B200?
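
For the skip part, a guard inside the parameterized test would be one way to express it (a sketch only: it assumes the test receives torch_compile and enable_chunked_prefill as parameters, as the test ids suggest, and the repository may handle this differently):

```python
# Inside test_bfloat16: skip the combination the backend does not support yet.
if torch_compile and enable_chunked_prefill:
    pytest.skip("chunked prefill is not yet supported together with torch.compile")
```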

@leslie-fang25
Collaborator Author

/bot run

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
tests/integration/test_lists/test-db/l0_gb200.yml (1)

23-25: Symmetric coverage gap: mtp_nextn=0 with enable_chunked_prefill=True missing

For completeness you added both enable_chunked_prefill states when mtp_nextn=2, but only False when mtp_nextn=0.
If chunked prefill is expected to work with mtp_nextn=0, add the missing case to prevent silent regressions.

```diff
+  - accuracy/test_llm_api_pytorch.py::TestDeepSeekV3Lite::test_bfloat16[mtp_nextn=0-attention_dp=True-cuda_graph=True-overlap_scheduler=True-torch_compile=False-enable_chunked_prefill=True]
```
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b7b120f and ece128d.

📒 Files selected for processing (1)
  • tests/integration/test_lists/test-db/l0_gb200.yml (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (1)
tests/integration/test_lists/test-db/l0_gb200.yml (1)

23-25: Check world-size expectation vs. test identifier

These three new entries don’t carry the _4gpus suffix that the rest of the suite uses to indicate a 4-GPU world size, yet they run under a matrix whose system_gpu_count hard-filters to 4.
If the underlying test fixture implicitly assumes world_size==1, the jobs will waste three GPUs; if it expects four, the missing suffix can make grepping / reporting inconsistent.

Please verify that these variants actually spawn four ranks (or adjust the identifier to _4gpus for consistency).

@tensorrt-cicd
Collaborator

PR_Github #12850 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #12850 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #9579 completed with status: 'FAILURE'

@leslie-fang25 leslie-fang25 force-pushed the leslie/add_accuracy_test_mtp_chunk_prefill branch from ece128d to b64732c on July 24, 2025 23:47
@leslie-fang25
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #13918 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10480 completed with status: 'SUCCESS'

@leslie-fang25 leslie-fang25 force-pushed the leslie/add_accuracy_test_mtp_chunk_prefill branch from fd53953 to cb7a677 on August 4, 2025 06:16
@leslie-fang25
Collaborator Author

/bot reuse-pipeline

@tensorrt-cicd
Collaborator

PR_Github #13944 [ reuse-pipeline ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #13944 [ reuse-pipeline ] completed with state SUCCESS
Reusing PR_Github #13918 for commit cb7a677

@leslie-fang25 leslie-fang25 force-pushed the leslie/add_accuracy_test_mtp_chunk_prefill branch from cb7a677 to 639b7b8 on August 6, 2025 05:13
@leslie-fang25 leslie-fang25 requested a review from a team as a code owner August 6, 2025 05:13
@leslie-fang25 leslie-fang25 requested a review from kaiyux August 6, 2025 05:13
@leslie-fang25
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #14240 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #14240 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10753 completed with status: 'SUCCESS'

@leslie-fang25 leslie-fang25 force-pushed the leslie/add_accuracy_test_mtp_chunk_prefill branch from 639b7b8 to 8efeda2 on August 15, 2025 03:38
@leslie-fang25
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #15389 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15389 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11600 completed with status: 'FAILURE'

@leslie-fang25 leslie-fang25 force-pushed the leslie/add_accuracy_test_mtp_chunk_prefill branch from 8efeda2 to 1929e82 on August 15, 2025 06:01
@leslie-fang25
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #15407 [ run ] triggered by Bot

Collaborator

@QiJune QiJune left a comment


LGTM

@tensorrt-cicd
Collaborator

PR_Github #15407 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11613 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

Signed-off-by: leslie-fang25 <leslief@nvidia.com>
@leslie-fang25 leslie-fang25 force-pushed the leslie/add_accuracy_test_mtp_chunk_prefill branch from 1929e82 to 3d5dc9b on August 18, 2025 01:34
@leslie-fang25
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #15558 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15558 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11718 completed with status: 'ABORTED'

@leslie-fang25
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #15601 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15601 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11745 completed with status: 'SUCCESS'

@leslie-fang25 leslie-fang25 merged commit e76e5c6 into NVIDIA:main Aug 18, 2025
4 checks passed