
Conversation

zerollzeng (Collaborator) commented Nov 10, 2025

Summary by CodeRabbit

Release Notes

  • New Features

    • Added accuracy evaluation workflow with configurable model, tasks, and parameters
    • Added support for custom TensorRT-LLM wheel paths
    • Expanded configuration options for performance tuning including parallelism, batching, and caching controls
  • Chores

    • Simplified process cleanup behavior in benchmark execution

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update the tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
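
For example, a typical invocation that combines several of the options above (the stage and GPU names here are illustrative, not a recommended configuration):

```
/bot run --disable-fail-fast --stage-list "A10-PyTorch-1" --gpu-type "H100_PCIe"
```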

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

coderabbitai bot (Contributor) commented Nov 10, 2025

📝 Walkthrough

The changes introduce a new accuracy evaluation workflow for disaggregated benchmarking by adding a dedicated evaluation script, extending configuration schemas with accuracy parameters, integrating accuracy testing into SLURM submission pipelines, and modifying the TensorRT-LLM installation to support both wheel-based and repository-based builds.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **Accuracy Evaluation Workflow**<br>`examples/disaggregated/slurm/benchmark/accuracy_eval.sh` | New Bash script that parses input arguments, waits for `server_config.yaml`, extracts the hostname and port, polls the health endpoint (30 s cadence, 1800 s timeout), installs `lm_eval[api]==0.4.8`, and executes lm_eval with the model, tasks, and model_args including base_url, concurrency, retries, tokenization, timeout, max tokens, and max length. Includes error handling for a missing config, missing hostname/port, and server health timeout. (A polling sketch follows this table.) |
| **Configuration Schema**<br>`examples/disaggregated/slurm/benchmark/config.yaml` | Adds a `trtllm_wheel_path` environment option for specifying a pre-built wheel path. Introduces an accuracy configuration block with enable/disable, model, tasks, concurrency, retries, timeouts, and generation/sequence limits. Expands the `worker_config` gen and ctx sections with parallelism controls (`tensor_parallel_size`, `moe_expert_parallel_size`, `pipeline_parallel_size`, etc.), token/length limits (`max_batch_size`, `max_num_tokens`, `max_seq_len`), and extensive CUDA graph, kv_cache, moe, and streaming settings. |
| **SLURM Workflow Integration**<br>`examples/disaggregated/slurm/benchmark/disaggr_torch.slurm` | Introduces configuration variables for `trtllm_wheel_path` and accuracy parameters (`enable_accuracy_test`, model, tasks, concurrency, retries, tokenization, timeout, max tokens/length). Implements dual-path TensorRT-LLM installation: prioritizes wheel installation if `trtllm_wheel_path` is set, otherwise falls back to a repository build. Adds a conditional accuracy evaluation step after server startup, running `accuracy_eval.sh` with parsed parameters and logging to `accuracy_eval.log`. Adjusts the argument index for `nsys_on` from 28 to 29. Adds runtime reporting and minor logging enhancements. |
| **Benchmark Execution**<br>`examples/disaggregated/slurm/benchmark/run_benchmark.sh` | Removes the final cleanup block that terminated server and worker processes, waited for cleanup, and reported remaining processes. The script now ends after recording the job node list. |
| **Job Submission**<br>`examples/disaggregated/slurm/benchmark/submit.py` | Adds a `shutil` import. Replaces lazy log-directory creation with aggressive removal and recreation (`shutil.rmtree` then `mkdir`) to ensure a clean state. Adds a `--reservation=oos_fix_gdrdrv_dkms` flag to the sbatch command. Extends the sbatch configuration payload with accuracy test fields (`enable_accuracy_test`, model, tasks, `num_concurrent`, `max_retries`, `tokenized_requests`, timeout, `max_gen_toks`, `max_length`) and the `trtllm_wheel_path` environment variable. |
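
The health-polling behavior described for `accuracy_eval.sh` reduces to a loop of roughly this shape. This is a minimal sketch assuming the documented 30 s cadence, 1800 s timeout, and `/health` route; `HOSTNAME` and `PORT` stand for the values parsed from `server_config.yaml`, and the variable names are illustrative, not the script's actual contents:

```bash
#!/bin/bash
# Minimal sketch of the server health-polling loop (variable names illustrative).
TIMEOUT=1800    # give up after 30 minutes
INTERVAL=30     # poll every 30 seconds
elapsed=0
until curl -sf "http://${HOSTNAME}:${PORT}/health" > /dev/null; do
    if [ "${elapsed}" -ge "${TIMEOUT}" ]; then
        echo "Server did not become healthy within ${TIMEOUT}s" >&2
        exit 1
    fi
    sleep "${INTERVAL}"
    elapsed=$((elapsed + INTERVAL))
done
echo "Server healthy after ${elapsed}s"
```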

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User
    participant Submit as submit.py
    participant Slurm as disaggr_torch.slurm
    participant TrtLLM as TensorRT-LLM<br/>Installation
    participant Server as Server
    participant EvalScript as accuracy_eval.sh
    participant LMEval as lm_eval

    User->>Submit: Invoke with config
    Submit->>Submit: Clean log directory
    Submit->>Slurm: Submit job via sbatch

    Slurm->>Slurm: Parse arguments & config
    Slurm->>TrtLLM: Check trtllm_wheel_path

    alt Wheel path provided
        TrtLLM->>TrtLLM: Install from wheel
    else Repo exists
        TrtLLM->>TrtLLM: Build from repository
    end

    TrtLLM->>Server: Start server/workers
    Slurm->>EvalScript: Trigger accuracy_eval.sh<br/>(if enabled)

    EvalScript->>EvalScript: Wait for server_config.yaml
    EvalScript->>EvalScript: Extract hostname/port
    EvalScript->>Server: Poll /health endpoint<br/>(1800s timeout)
    EvalScript->>EvalScript: Install lm_eval[api]==0.4.8
    EvalScript->>LMEval: Execute with model/tasks
    LMEval->>Server: Request evaluations
    Server->>LMEval: Return results
    LMEval->>EvalScript: Report completion
    EvalScript->>Slurm: Log results
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~45 minutes

Areas requiring extra attention:

  • Dual-path TensorRT-LLM installation logic in disaggr_torch.slurm: Verify the conditional flow correctly prioritizes wheel over repo-based builds and handles edge cases (a rough sketch follows this list)
  • Argument index shift (nsys_on from position 28 to 29): Ensure all downstream argument parsing correctly references the new indices
  • Health endpoint polling implementation in accuracy_eval.sh: Validate timeout/cadence logic and error handling for server unavailability scenarios
  • Log directory handling change in submit.py: Confirm aggressive shutil.rmtree doesn't conflict with concurrent runs or user workflows
  • Accuracy workflow integration points between disaggr_torch.slurm and run_benchmark.sh: Ensure evaluation timing doesn't conflict with benchmark execution phases
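
For reference, the dual-path installation branch has roughly this shape. This is a sketch under stated assumptions: `trtllm_wheel_path` is the config.yaml option named above, while `trtllm_repo_dir` and the `pip install -e .` build command are placeholders, not the slurm script's actual contents:

```bash
# Sketch of the wheel-vs-repo installation branch (not the exact script).
if [ -n "${trtllm_wheel_path}" ]; then
    # Wheel path set: install the pre-built wheel directly.
    echo "Installing TensorRT-LLM from wheel: ${trtllm_wheel_path}"
    pip install "${trtllm_wheel_path}"
elif [ -d "${trtllm_repo_dir}" ]; then
    # Fall back to building from a source checkout.
    echo "Building TensorRT-LLM from repository: ${trtllm_repo_dir}"
    (cd "${trtllm_repo_dir}" && pip install -e .)
else
    echo "Neither a wheel path nor a repository checkout was provided" >&2
    exit 1
fi
```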

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)

| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, which is below the required threshold of 80.00%. | Run `@coderabbitai generate docstrings` to improve docstring coverage. |
| Description check | ⚠️ Warning | The PR description is incomplete: the title, Description, and Test Coverage sections are empty with only placeholder comments, and only one PR Checklist item is checked. | Fill in the PR title following the template format, provide a clear description of the changes and their purpose, and list the relevant test coverage for the accuracy testing and wheel installation features. |

✅ Passed checks (1 passed)

| Check name | Status | Explanation |
|---|---|---|
| Title check | ✅ Passed | The title clearly and specifically summarizes the two main changes: accuracy test support and wheel-based installation. It is concise, uses descriptive terms, and directly relates to the primary modifications across all changed files. |


coderabbitai bot (Contributor) left a comment

Actionable comments posted: 3

🧹 Nitpick comments (2)
examples/disaggregated/slurm/benchmark/accuracy_eval.sh (1)

36-43: Consider using a YAML parser for more robust configuration parsing.

The current grep/awk approach is fragile and may fail if the YAML structure changes (e.g., extra whitespace, different formatting, nested keys). However, this requires adding a YAML parser dependency (e.g., yq or Python with PyYAML).

If you want to avoid external dependencies, the current approach is acceptable with the existing error handling.
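
A sketch of what a parser-based extraction could look like, assuming PyYAML is available in the container and that server_config.yaml exposes top-level `hostname` and `port` keys (both assumptions; `SERVER_CONFIG` is an illustrative variable holding the config path):

```bash
# Hypothetical replacement for the grep/awk extraction.
read -r HOSTNAME PORT < <(python3 - "$SERVER_CONFIG" <<'EOF'
import sys, yaml
cfg = yaml.safe_load(open(sys.argv[1]))
print(cfg.get("hostname", ""), cfg.get("port", ""))
EOF
)
if [ -z "${HOSTNAME}" ] || [ -z "${PORT}" ]; then
    echo "Failed to parse hostname/port from ${SERVER_CONFIG}" >&2
    exit 1
fi
```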

examples/disaggregated/slurm/benchmark/disaggr_torch.slurm (1)

128-140: Add validation to check that the wheel file exists before attempting installation.

The script attempts to install from trtllm_wheel_path without verifying the file exists, which will result in a cryptic pip error if the path is invalid.

Apply this diff to add file existence validation:

```diff
 # Install TensorRT-LLM
 if [ -n "${trtllm_wheel_path}" ]; then
     # Install from pre-built wheel if path is provided
+    if [ ! -f "${trtllm_wheel_path}" ]; then
+        cleanup_on_failure "TensorRT-LLM wheel file not found: ${trtllm_wheel_path}"
+    fi
     echo "Installing TensorRT-LLM from wheel: ${trtllm_wheel_path}..."
     if ! srun --container-name=${container_name} \
```
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e8d4a56 and e337ce8.

📒 Files selected for processing (5)
  • examples/disaggregated/slurm/benchmark/accuracy_eval.sh (1 hunks)
  • examples/disaggregated/slurm/benchmark/config.yaml (1 hunks)
  • examples/disaggregated/slurm/benchmark/disaggr_torch.slurm (5 hunks)
  • examples/disaggregated/slurm/benchmark/run_benchmark.sh (0 hunks)
  • examples/disaggregated/slurm/benchmark/submit.py (4 hunks)
💤 Files with no reviewable changes (1)
  • examples/disaggregated/slurm/benchmark/run_benchmark.sh
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • examples/disaggregated/slurm/benchmark/submit.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • examples/disaggregated/slurm/benchmark/submit.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • examples/disaggregated/slurm/benchmark/submit.py
🧠 Learnings (13)
📓 Common learnings
Learnt from: achartier
Repo: NVIDIA/TensorRT-LLM PR: 6763
File: tests/integration/defs/triton_server/conftest.py:16-22
Timestamp: 2025-08-11T20:09:24.389Z
Learning: In the TensorRT-LLM test infrastructure, the team prefers simple, direct solutions (like hard-coding directory traversal counts) over more complex but robust approaches when dealing with stable directory structures. They accept the maintenance cost of updating tests if the layout changes.
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Learnt from: samuellees
Repo: NVIDIA/TensorRT-LLM PR: 6974
File: tensorrt_llm/serve/scripts/benchmark_dataset.py:558-566
Timestamp: 2025-08-18T08:42:02.640Z
Learning: In TensorRT-LLM's RandomDataset (tensorrt_llm/serve/scripts/benchmark_dataset.py), when using --random-token-ids option, sequence length accuracy is prioritized over semantic correctness for benchmarking purposes. The encode/decode operations should use skip_special_tokens=True and add_special_tokens=False to ensure exact target token lengths.
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7192
File: tests/integration/test_lists/test-db/l0_dgx_b200.yml:56-72
Timestamp: 2025-08-26T09:49:04.956Z
Learning: In TensorRT-LLM test configuration files, the test scheduling system handles wildcard matching with special rules that prevent duplicate test execution even when the same tests appear in multiple yaml files with overlapping GPU wildcards (e.g., "*b200*" and "*gb200*").
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.
📚 Learning: 2025-08-26T09:49:04.956Z
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7192
File: tests/integration/test_lists/test-db/l0_dgx_b200.yml:56-72
Timestamp: 2025-08-26T09:49:04.956Z
Learning: In TensorRT-LLM test configuration files, the test scheduling system handles wildcard matching with special rules that prevent duplicate test execution even when the same tests appear in multiple yaml files with overlapping GPU wildcards (e.g., "*b200*" and "*gb200*").

Applied to files:

  • examples/disaggregated/slurm/benchmark/config.yaml
  • examples/disaggregated/slurm/benchmark/disaggr_torch.slurm
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which can contain default `cuda_graph_config` values, so `llm_args` may already have this config before the extra options processing.

Applied to files:

  • examples/disaggregated/slurm/benchmark/config.yaml
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.

Applied to files:

  • examples/disaggregated/slurm/benchmark/config.yaml
  • examples/disaggregated/slurm/benchmark/disaggr_torch.slurm
📚 Learning: 2025-08-11T20:09:24.389Z
Learnt from: achartier
Repo: NVIDIA/TensorRT-LLM PR: 6763
File: tests/integration/defs/triton_server/conftest.py:16-22
Timestamp: 2025-08-11T20:09:24.389Z
Learning: In the TensorRT-LLM test infrastructure, the team prefers simple, direct solutions (like hard-coding directory traversal counts) over more complex but robust approaches when dealing with stable directory structures. They accept the maintenance cost of updating tests if the layout changes.

Applied to files:

  • examples/disaggregated/slurm/benchmark/disaggr_torch.slurm
📚 Learning: 2025-08-21T00:16:56.457Z
Learnt from: farshadghodsian
Repo: NVIDIA/TensorRT-LLM PR: 7101
File: docs/source/blogs/tech_blog/blog9_Deploying_GPT_OSS_on_TRTLLM.md:36-36
Timestamp: 2025-08-21T00:16:56.457Z
Learning: TensorRT-LLM container release tags in documentation should only reference published NGC container images. The README badge version may be ahead of the actual published container versions.

Applied to files:

  • examples/disaggregated/slurm/benchmark/disaggr_torch.slurm
📚 Learning: 2025-08-18T08:42:02.640Z
Learnt from: samuellees
Repo: NVIDIA/TensorRT-LLM PR: 6974
File: tensorrt_llm/serve/scripts/benchmark_dataset.py:558-566
Timestamp: 2025-08-18T08:42:02.640Z
Learning: In TensorRT-LLM's RandomDataset (tensorrt_llm/serve/scripts/benchmark_dataset.py), when using --random-token-ids option, sequence length accuracy is prioritized over semantic correctness for benchmarking purposes. The encode/decode operations should use skip_special_tokens=True and add_special_tokens=False to ensure exact target token lengths.

Applied to files:

  • examples/disaggregated/slurm/benchmark/disaggr_torch.slurm
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • examples/disaggregated/slurm/benchmark/disaggr_torch.slurm
📚 Learning: 2025-08-06T13:58:07.506Z
Learnt from: galagam
Repo: NVIDIA/TensorRT-LLM PR: 6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.

Applied to files:

  • examples/disaggregated/slurm/benchmark/disaggr_torch.slurm
📚 Learning: 2025-09-23T15:12:38.312Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/thop/allreduceOp.cpp:352-446
Timestamp: 2025-09-23T15:12:38.312Z
Learning: In TensorRT-LLM NCCL device implementation, NCCL version 2.28+ requirements are handled at runtime in the nccl_device/config layer rather than with compile-time guards. This allows the allreduceOp to remain version-agnostic and delegates version compatibility validation to the appropriate lower-level components that can gracefully handle unsupported configurations.

Applied to files:

  • examples/disaggregated/slurm/benchmark/disaggr_torch.slurm
📚 Learning: 2025-08-01T15:14:45.673Z
Learnt from: yibinl-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 6506
File: examples/models/core/mixtral/requirements.txt:3-3
Timestamp: 2025-08-01T15:14:45.673Z
Learning: In TensorRT-LLM, examples directory can have different dependency versions than the root requirements.txt file. Version conflicts between root and examples dependencies are acceptable because examples are designed to be standalone and self-contained.

Applied to files:

  • examples/disaggregated/slurm/benchmark/disaggr_torch.slurm
📚 Learning: 2025-08-18T09:08:07.687Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 6984
File: cpp/tensorrt_llm/CMakeLists.txt:297-299
Timestamp: 2025-08-18T09:08:07.687Z
Learning: In the TensorRT-LLM project, artifacts are manually copied rather than installed via `cmake --install`, so INSTALL_RPATH properties are not needed - only BUILD_RPATH affects the final artifacts.

Applied to files:

  • examples/disaggregated/slurm/benchmark/disaggr_torch.slurm
📚 Learning: 2025-09-16T09:30:09.716Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 7763
File: cpp/tensorrt_llm/CMakeLists.txt:297-301
Timestamp: 2025-09-16T09:30:09.716Z
Learning: In the TensorRT-LLM project, NCCL libraries are loaded earlier by PyTorch libraries or the bindings library, so the main shared library doesn't need NCCL paths in its RPATH - the libraries will already be available in the process address space when needed.

Applied to files:

  • examples/disaggregated/slurm/benchmark/disaggr_torch.slurm
🪛 Ruff (0.14.3)
examples/disaggregated/slurm/benchmark/submit.py

118-118: f-string without any placeholders

Remove extraneous f prefix

(F541)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (6)
examples/disaggregated/slurm/benchmark/accuracy_eval.sh (1)

65-67: Verify that lm_eval[api]==0.4.8 is the intended version.

The script installs a specific version of lm_eval. Ensure this version is compatible with the evaluation tasks and models being used.

Consider making the version configurable via the accuracy configuration block in config.yaml if different versions might be needed for different evaluation scenarios.
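
If configurability is desired, one minimal option is an environment-variable override with the current pin as the default (a sketch only; the variable name is hypothetical, not part of the existing scripts):

```bash
# Hypothetical override knob; defaults to the currently pinned version.
LM_EVAL_VERSION="${LM_EVAL_VERSION:-0.4.8}"
pip install "lm_eval[api]==${LM_EVAL_VERSION}"
```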

examples/disaggregated/slurm/benchmark/config.yaml (2)

37-37: LGTM!

The trtllm_wheel_path configuration option is well-documented and provides flexibility for using pre-built wheels instead of building from the repository.


45-55: Verify that num_concurrent: 512 is appropriate for typical hardware configurations.

The default concurrency of 512 requests might overwhelm servers with limited resources. Consider whether a more conservative default (e.g., 64 or 128) would be safer for initial testing.

The other defaults appear reasonable for accuracy evaluation tasks.
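
For context, these settings feed into the lm_eval model_args. The call assembled by the script plausibly looks something like the following sketch, which assumes lm_eval's local-completions backend and an OpenAI-style endpoint; the task name, endpoint path, and numeric values other than num_concurrent=512 are illustrative, while the model_args keys are those named in the walkthrough:

```bash
# Illustrative lm_eval invocation against the disaggregated server.
lm_eval --model local-completions \
    --tasks gsm8k \
    --model_args "base_url=http://${HOSTNAME}:${PORT}/v1/completions,num_concurrent=512,max_retries=3,tokenized_requests=False,timeout=600,max_gen_toks=256,max_length=4096"
```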

examples/disaggregated/slurm/benchmark/submit.py (1)

155-169: LGTM!

The new arguments for TensorRT-LLM wheel installation and accuracy evaluation are properly extracted from the configuration and passed to the SLURM script. The use of .get() for the optional wheel path and proper boolean-to-string conversions are good practices.

examples/disaggregated/slurm/benchmark/disaggr_torch.slurm (2)

240-254: LGTM!

The accuracy evaluation step is well-integrated into the workflow. It properly:

  • Checks the enable flag before execution
  • Passes all required parameters to the accuracy_eval.sh script
  • Handles errors with cleanup_on_failure
  • Runs after server startup but before benchmarking

Note: The accuracy_eval.sh script has a critical health check issue (flagged in a separate comment) that should be addressed for this integration to work correctly.


281-281: LGTM!

Adding total runtime reporting is useful for performance tracking and debugging.

zerollzeng force-pushed the accuracy_test_and_wheel_installation branch from e337ce8 to 72ae407 on November 12, 2025 09:40
zerollzeng changed the title from [TRTLLM-9053, TRTLLM-9054][feat] Draft: support accuracy test and install from wheel to [TRTLLM-9053][TRTLLM-9054][feat] Support accuracy test and install from wheel on Nov 12, 2025
zerollzeng changed the title from [TRTLLM-9053][TRTLLM-9054][feat] Support accuracy test and install from wheel to [TRTLLM-9053][feat] Support accuracy test and install from wheel on Nov 12, 2025
zerollzeng force-pushed the accuracy_test_and_wheel_installation branch from d0139d8 to b7471bd on November 14, 2025 01:53
chuangz0 (Collaborator) commented: LGTM

Signed-off-by: Zero Zeng <38289304+zerollzeng@users.noreply.github.com>
zerollzeng force-pushed the accuracy_test_and_wheel_installation branch from b7471bd to 7d736fd on November 14, 2025 06:48
kaiyux enabled auto-merge (squash) on November 14, 2025 06:58
kaiyux (Member) commented Nov 14, 2025:

/bot skip --comment "slurm scripts are not tested in CI pipeline yet"

tensorrt-cicd (Collaborator) commented:
PR_Github #24568 [ skip ] triggered by Bot. Commit: 7d736fd

tensorrt-cicd (Collaborator) commented:
PR_Github #24568 [ skip ] completed with state SUCCESS. Commit: 7d736fd
Skipping testing for commit 7d736fd

kaiyux merged commit c6cce39 into NVIDIA:main on Nov 14, 2025
5 checks passed
zheyuf pushed a commit to zheyuf/TensorRT-LLM that referenced this pull request Nov 19, 2025
…DIA#9038)

Signed-off-by: Zero Zeng <38289304+zerollzeng@users.noreply.github.com>
greg-kwasniewski1 pushed a commit to nv-auto-deploy/TensorRT-LLM that referenced this pull request Nov 20, 2025
…DIA#9038)

Signed-off-by: Zero Zeng <38289304+zerollzeng@users.noreply.github.com>