
Conversation

@JunyiXu-nv JunyiXu-nv commented Nov 3, 2025

Summary by CodeRabbit

  • Chores
    • Updated example Slurm launch script configuration with additional batch size parameter for LLM API invocation.

Description

The original test runs into an OOM issue during CUDA graph warmup after #7999. The max batch size should be reduced so that warmup does not use too much memory.
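
For reference, a minimal sketch of the launch invocation in examples/llm-api/llm_mgmn_llm_distributed.sh with the new flag. Assumptions: SCRIPT and MODEL are placeholders for the example's actual script and arguments, which are not shown here; only --tp_size 2 and --max_batch_size 256 come from this PR's change summary.

    # Sketch only: SCRIPT and MODEL are placeholders, not the real example's arguments.
    trtllm-llmapi-launch \
        python3 "${SCRIPT}" \
            --model "${MODEL}" \
            --tp_size 2 \
            --max_batch_size 256  # capped so CUDA graph warmup stays within GPU memory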

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
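
For example, composing only flags documented above, typical trigger comments look like the following (illustrative; the stage name is the example value from the help text):

    /bot run                               # default L0 pre-merge pipeline, reusing the last build where possible
    /bot run --disable-fail-fast           # do not fail fast on build/test/infra failures
    /bot run --stage-list "A10-PyTorch-1"  # run only the named test stage (does not update GitHub check status)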

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause the top of tree to break.
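
For completeness, the remaining subcommands are used the same way (the skip reason below reuses the placeholder string from the help text):

    /bot kill                                              # kill all builds currently running for this pull request
    /bot skip --comment "Reason for skipping build/test"   # skip testing for the latest commit (use with care)
    /bot reuse-pipeline                                    # validate the current commit against a previous pipeline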

Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com>
@JunyiXu-nv JunyiXu-nv requested a review from a team as a code owner November 3, 2025 09:24
@JunyiXu-nv JunyiXu-nv requested a review from liji-nv November 3, 2025 09:24
coderabbitai bot commented Nov 3, 2025

📝 Walkthrough

Walkthrough

A Slurm launch script for the LLM API is modified to include an additional command-line argument --max_batch_size 256 passed to the trtllm-llmapi-launch command, alongside the existing --tp_size 2 parameter.

Changes

Cohort / File(s): LLM API Launch Script Configuration (examples/llm-api/llm_mgmn_llm_distributed.sh)
Change Summary: Added --max_batch_size 256 parameter to the Python LLM API invocation command

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes

Pre-merge checks and finishing touches

✅ Passed checks (2 passed)
  • Title Check: ✅ Passed. The pull request title follows the required format [NVBugs ID][type] Summary as specified in the template, using a valid type [fix] with a valid NVBugs ticket reference. The title "Adjust max batch size to prevent OOM" is clear, concise, and specific, directly corresponding to the actual code change shown in the raw summary where --max_batch_size 256 is added to the Slurm launch script. The title avoids vague language and effectively communicates the main purpose of the change to address an out-of-memory issue.
  • Description Check: ✅ Passed. The pull request description is mostly complete, with the required Description section clearly filled in, explaining both the issue (OOM during CUDA graph warmup following PR 7999) and the solution (reducing max batch size to prevent excessive memory usage). The PR follows the template structure, with the PR Checklist marked as reviewed and the GitHub Bot Help section included. However, the Test Coverage section is left empty without specific test identification, which is a notable gap. Despite this missing section, the core description content is sufficiently detailed and informative to understand the PR's purpose and rationale.

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e9e1a36 and 200f810.

📒 Files selected for processing (1)
  • examples/llm-api/llm_mgmn_llm_distributed.sh (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6768
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:577-579
Timestamp: 2025-08-20T06:56:02.889Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, maxSequenceLength is now enforced as a non-optional argument in the BlockManager constructor, so concerns about std::nullopt defaulting to 0 are not applicable. When windowSize > maxSequenceLength, a warning should be added instead of handling optional parameter cases.
Learnt from: thorjohnsen
Repo: NVIDIA/TensorRT-LLM PR: 6910
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-14T21:04:50.248Z
Learning: In KV cache onboarding logic during prefill in cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, when calculating which blocks fall within the attention window, use getTokensPerBlock() to advance token indices rather than block->getUniqueTokens().size(), because the calculation needs to consider the post-prefill state where blocks will be filled to capacity, not their current token count.
Learnt from: dbari
Repo: NVIDIA/TensorRT-LLM PR: 6714
File: tests/integration/defs/triton_server/build_model.sh:724-726
Timestamp: 2025-08-13T08:21:13.006Z
Learning: Mistral Small 3.1 multimodal (pixtral model type) supports any batch size for the multimodal encoder, not just batch size 1. The max_batch_size parameter can be set to values like 2 without conflicts with the runtime batching logic.
📚 Learning: 2025-08-20T06:56:02.889Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6768
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:577-579
Timestamp: 2025-08-20T06:56:02.889Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, maxSequenceLength is now enforced as a non-optional argument in the BlockManager constructor, so concerns about std::nullopt defaulting to 0 are not applicable. When windowSize > maxSequenceLength, a warning should be added instead of handling optional parameter cases.

Applied to files:

  • examples/llm-api/llm_mgmn_llm_distributed.sh
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (1)
examples/llm-api/llm_mgmn_llm_distributed.sh (1)

54-55: Verify that 256 is a reduction from the default batch size.

The syntax and placement of the --max_batch_size 256 argument are correct. However, to confirm this fix addresses the OOM issue effectively, please verify:

  1. Whether 256 is indeed lower than the previous or default maximum batch size
  2. Whether this value was validated to prevent OOM during CUDA graph warmup without causing other test failures

Additionally, consider the following questions:

  • Are there other LLM-API launch scripts (in examples/llm-api/ or elsewhere) that should receive the same --max_batch_size configuration?
  • Should an inline comment be added to explain why this specific value is used and reference the bug/PR context (e.g., bug 5569754, PR 7999)?
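
If such a comment were added, it might read roughly like the fragment below (illustrative wording only; just the comment and the flag it annotates are shown, not the full launch command):

    # Cap the max batch size so CUDA graph warmup does not run out of GPU memory
    # (context: nvbug 5569754; needed after PR #7999).
    --max_batch_size 256 \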

@JunyiXu-nv (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #23371 [ run ] triggered by Bot. Commit: 200f810

@JunyiXu-nv JunyiXu-nv changed the title [https://nvbugs/5569754][fix] Adjust max batch size to prevent OOM [https://nvbugs/5569754][chore] Adjust max batch size to prevent OOM Nov 3, 2025
@tensorrt-cicd (Collaborator)

PR_Github #23371 [ run ] completed with state SUCCESS. Commit: 200f810
/LLM/release-1.1/L0_MergeRequest_PR pipeline #381 completed with status: 'FAILURE'

@JunyiXu-nv (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #23440 [ run ] triggered by Bot. Commit: 200f810

@Superjomn (Collaborator) left a comment

LGTM

@tensorrt-cicd (Collaborator)

PR_Github #23440 [ run ] completed with state SUCCESS. Commit: 200f810
/LLM/release-1.1/L0_MergeRequest_PR pipeline #392 completed with status: 'SUCCESS'

@MartinMarciniszyn MartinMarciniszyn merged commit c329f5f into NVIDIA:release/1.1 Nov 4, 2025
10 of 11 checks passed
mikeiovine pushed a commit to mikeiovine/TensorRT-LLM that referenced this pull request Nov 20, 2025
…VIDIA#8876)

Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com>
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Nov 27, 2025
…VIDIA#8876)

Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com>
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Signed-off-by: Mike Iovine <miovine@nvidia.com>
Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>