Conversation


@chenfeiz0326 chenfeiz0326 commented Aug 8, 2025

Summary by CodeRabbit

  • New Features

    • Added a configurable performance benchmarking system for LLM serving with selective test execution, automated server lifecycle, concurrency sweeps, and CSV report generation; supports local and SLURM runs.
  • Documentation

    • Added a detailed README with usage examples, skip/select syntax, CLI options, and expected output structure.
  • Tests

    • Added ready-to-run benchmark configurations and a results parser to streamline performance sanity checks across models and hardware.
  • Bug Fixes

    • Improved kernel launch sizing for quantization paths to enhance stability and performance with swizzled layouts.

Description

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since a lack of user care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since a lack of user care and validation can break the top of tree.


coderabbitai bot commented Aug 8, 2025

📝 Walkthrough

Walkthrough

Adds a TensorRT-LLM serving perf-sanity benchmark framework under tests/scripts/perf-sanity (runner, YAML configs, parser, SLURM launcher, README) and updates CUDA quantization kernel grid sizing to be padding-aware for SWIZZLED layouts.

Changes

Cohort / File(s) Summary
Perf Benchmark Docs & Config
tests/scripts/perf-sanity/README.md, tests/scripts/perf-sanity/benchmark_config.yaml
Adds README describing benchmark system and usage; introduces YAML with ~24 test cases covering models, GPU/topology params, backends, memory/batch/token settings, and concurrency iterations.
Benchmark Runner
tests/scripts/perf-sanity/run_benchmark_serve.py
New runner that loads YAML, builds execution plan with skip/select semantics, generates extra LLM API config, manages trtllm-serve lifecycle, runs concurrency/iteration benchmarks, logs output, and handles errors/timeouts.
Results Parser
tests/scripts/perf-sanity/parse_benchmark_results.py
New parser extracting configuration and throughput metrics from serve logs (falls back to filename parsing), sorts results, inserts group separators, and writes a CSV report plus console summary.
SLURM/Docker Launcher
tests/scripts/perf-sanity/benchmark-serve.sh
New SLURM-ready script that launches a Docker container to run the benchmark runner, builds timestamped output folders, passes select/skip patterns, and optionally invokes the parser to produce a CSV.
CUDA Quantization Grid Sizing
cpp/tensorrt_llm/kernels/quantization.cu
Adjusts grid-sizing to pad M to 128 when layout is SWIZZLED and clamps blocks by occupancy for FP4 and MXFP8 quantization paths (no change to block sizes or kernel logic).
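The padding-aware grid sizing described for the SWIZZLED layout can be illustrated with a small arithmetic sketch. The helper names, block shape, and occupancy cap below are illustrative assumptions, not the actual kernel code in quantization.cu:

```python
def ceil_div(a: int, b: int) -> int:
    """Integer ceiling division."""
    return (a + b - 1) // b

def grid_blocks(m: int, rows_per_block: int, swizzled: bool,
                max_blocks_by_occupancy: int) -> int:
    # For SWIZZLED layouts the row count M is padded to a multiple of 128
    # so the grid covers the padded region the kernel writes.
    padded_m = ceil_div(m, 128) * 128 if swizzled else m
    blocks = ceil_div(padded_m, rows_per_block)
    # Clamp by how many blocks the device can keep resident (illustrative cap).
    return min(blocks, max_blocks_by_occupancy)

print(grid_blocks(m=130, rows_per_block=32, swizzled=True,
                  max_blocks_by_occupancy=1024))  # 130 pads to 256 rows -> 8 blocks
```

Without the padding, a grid sized from the raw M=130 would launch too few blocks to initialize the padded tail rows of the swizzled output.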

Sequence Diagram(s)

sequenceDiagram
  participant U as User
  participant BR as BenchmarkRunner
  participant S as trtllm-serve
  participant B as Benchmark Script
  participant L as Log Files
  participant P as Parser

  U->>BR: run_benchmark_serve.py (config, skip/select, output)
  BR->>S: start server (with extra LLM API config)
  BR->>BR: wait for server readiness
  loop For each test case and concurrency
    BR->>B: invoke benchmark (concurrency, iterations)
    B->>S: send inference requests
    S-->>B: responses
    B-->>L: append serve.*.log
  end
  BR->>S: terminate server
  U->>P: parse_benchmark_results.py <output_folder>
  P->>L: read logs and extract metrics
  P-->>U: CSV report + console summary

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested reviewers

  • litaotju
  • LarryXFly
  • StanleySun639
  • chzblych
  • kaiyux

@coderabbitai coderabbitai bot left a comment
Actionable comments posted: 9

🧹 Nitpick comments (15)
tests/scripts/perf-sanity/benchmark_config.yaml (1)

466-468: Remove trailing whitespace to keep YAML lint-clean

Line 468 has a trailing space after # - [4096, 2].
Many CI linters treat this as an error and will fail the pipeline.

-    #  - [4096, 2] 
+    #  - [4096, 2]
tests/scripts/perf-sanity/run_benchmark_serve.sh (1)

218-229: Use a trap to ensure the server is always terminated

If the benchmark loop exits early (e.g., Ctrl-C or an error despite set -e), the background trtllm-serve may stay alive.
Registering a trap keeps nodes clean and avoids port conflicts on the next run.

+# Ensure background server is cleaned up on exit
+cleanup() {
+  [[ -n "$server_pid" ]] && kill -9 "$server_pid" 2>/dev/null || true
+}
+trap cleanup EXIT
tests/scripts/perf-sanity/benchmark-job.sh (1)

1-1: Prefer env-based shebang for portability

/usr/bin/bash is not guaranteed on all systems; /usr/bin/env bash is the common portable form.

-#! /usr/bin/bash
+#!/usr/bin/env bash
tests/scripts/perf-sanity/benchmark-bench-prerelease.sh (2)

1-1: Use a portable shebang

Replace hard-coded path with env lookup.

-#! /usr/bin/bash
+#!/usr/bin/env bash

13-14: script_dir is calculated but never used

Remove it or make use of it to avoid reader confusion and silence SC2034.

tests/scripts/perf-sanity/benchmark-serve.sh (1)

1-1: Prefer /usr/bin/env for portability.

Use env to locate bash across environments.

-#! /usr/bin/bash
+#!/usr/bin/env bash
tests/scripts/perf-sanity/benchmark-serve-prerelease.sh (3)

1-1: Prefer /usr/bin/env for portability.

-#! /usr/bin/bash
+#!/usr/bin/env bash

46-47: Quote subshells in report for safety (SC2046).

-    echo "Report path" $(realpath ${results})
-    echo "START" $start_time "-" "END" ${end_time} $(hostname)
+    echo "Report path $(realpath "${results}")"
+    echo "START ${start_time} - END ${end_time} $(hostname)"

14-19: Hard-coded DEFAULT_COMMIT will become stale; prefer dynamic default or explicit arg.

Consider defaulting to the current repo HEAD when available, else "unknown". Keeps tracking accurate without frequent edits.

-DEFAULT_COMMIT="4d040b50b77a737dd5a87d5888babbc364eca557"
+DEFAULT_COMMIT="$(git rev-parse --short HEAD 2>/dev/null || echo unknown)"
tests/scripts/perf-sanity/README.md (2)

195-195: Add language tag to fenced code block (MD040).

Improves rendering and lint compliance.

-```
+```text

236-243: Align Python version with repo guidelines (3.8+).

Project guideline specifies Python 3.8+; update dependency note accordingly.

-- Python 3.7+
+- Python 3.8+
tests/scripts/perf-sanity/parse_benchmark_results.py (2)

289-289: Portability: os.uname() isn’t available on all platforms.

Use platform.node() to avoid AttributeError outside POSIX.

-    print(f"Parsed at: {end_time} on {os.uname().nodename}")
+    import platform
+    print(f"Parsed at: {end_time} on {platform.node()}")

3-6: Docstring nits (punctuation/format).

Minor docstring cleanup for D205/D415/D200; optional for tests, but improves readability.

-"""
-Script to parse benchmark metrics from a specified folder and generate Excel table
-Usage: python parse_benchmark_results.py <folder_name>
-"""
+"""
+Parse benchmark metrics from a specified folder and generate a report.
+
+Usage: python parse_benchmark_results.py <folder_name>
+"""
@@
-    Extract configuration from log file content using "Completed benchmark with Configuration:" pattern
+    Extract configuration from log content using the "Completed benchmark with Configuration:" pattern.
@@
-    Extract basic configuration from filename as fallback
-    Expected format: serve.{model_name}.tp{tp}.ep{ep}.isl{isl}.osl{osl}.concurrency{concurrency}.log
+    Extract basic configuration from filename as fallback.
+    Expected format:
+      serve.{model_name}.tp{tp}.ep{ep}.isl{isl}.osl{osl}.concurrency{concurrency}.log
@@
-    Return default configuration values when log content parsing fails
+    Return default configuration values when log content parsing fails.

Also applies to: 18-21, 93-98, 128-141
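The filename-fallback format mentioned in the docstring can be sketched with a named-group regex. This is a hypothetical helper for illustration, not the parser's actual implementation, and the model name in the example is made up:

```python
import re

# Expected: serve.{model_name}.tp{tp}.ep{ep}.isl{isl}.osl{osl}.concurrency{concurrency}.log
FILENAME_RE = re.compile(
    r"serve\.(?P<model>.+)\.tp(?P<tp>\d+)\.ep(?P<ep>\d+)"
    r"\.isl(?P<isl>\d+)\.osl(?P<osl>\d+)\.concurrency(?P<concurrency>\d+)\.log$"
)

def config_from_filename(name: str) -> dict:
    """Fallback: recover basic config fields from a serve log filename."""
    m = FILENAME_RE.match(name)
    if m is None:
        return {}
    # Keep the model name as a string; convert the numeric fields to int.
    return {k: (v if k == "model" else int(v)) for k, v in m.groupdict().items()}

print(config_from_filename(
    "serve.DeepSeek-R1.tp8.ep4.isl1024.osl2048.concurrency64.log"))
# {'model': 'DeepSeek-R1', 'tp': 8, 'ep': 4, 'isl': 1024, 'osl': 2048, 'concurrency': 64}
```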

tests/scripts/perf-sanity/run_benchmark_serve.py (2)

2-9: Wrap the long header docstring to satisfy linters and improve readability.

-"""
-Script to run benchmarks from YAML configuration file
-Usage: python run_benchmark_serve.py --output_folder <output_folder> --commit <commit> --config_file <config_file> [--skip <skip_pattern>] [--select <select_pattern>]
-Skip pattern format: "2,4-1" means skip test case 2 and test case 4's 1st concurrency
-Select pattern format: "1,3,5" means only run test cases 1, 3, and 5
-If select_pattern is empty, all test cases are selected
-If skip_pattern is empty, no test cases are skipped
-"""
+"""
+Run benchmarks from a YAML configuration file.
+
+Usage:
+  python run_benchmark_serve.py --output_folder <output_folder> --commit <commit> \
+    --config_file <config_file> [--skip <skip_pattern>] [--select <select_pattern>]
+
+Notes:
+  - Skip pattern: "2,4-1" skips test case 2 and test case 4's 1st concurrency.
+  - Select pattern: "1,3,5" runs only test cases 1, 3, and 5.
+  - If select_pattern is empty, all test cases are selected.
+  - If skip_pattern is empty, no test cases are skipped.
+"""
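The skip/select semantics described in the docstring can be sketched as follows. The helper names are hypothetical and the runner's real implementation may differ; this is a minimal sketch of the pattern grammar only:

```python
def parse_skip(pattern: str):
    """Parse e.g. "2,4-1": case 2 fully skipped; case 4's 1st concurrency skipped."""
    full, partial = set(), set()
    for token in filter(None, (t.strip() for t in pattern.split(","))):
        if "-" in token:
            case, conc = token.split("-", 1)
            partial.add((int(case), int(conc)))
        else:
            full.add(int(token))
    return full, partial

def should_run(case: int, conc_idx: int, select: str, skip: str) -> bool:
    """Apply select first (empty = all selected), then skip."""
    selected = {int(t) for t in select.split(",") if t.strip()} if select else None
    full, partial = parse_skip(skip)
    if selected is not None and case not in selected:
        return False
    return case not in full and (case, conc_idx) not in partial

print(should_run(2, 1, "", "2,4-1"))  # False: case 2 skipped entirely
print(should_run(4, 1, "", "2,4-1"))  # False: case 4's 1st concurrency skipped
print(should_run(4, 2, "", "2,4-1"))  # True
print(should_run(3, 1, "1,5", ""))    # False: not in the select list
```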

185-201: Benchmark log capture: ensure both stdout/stderr preserved in order (optional).

Subprocess capture merges stdout and stderr only when stderr is redirected to stdout; the current approach writes them sequentially, which can reorder timestamps. Optional: redirect stderr to stdout in the command to keep ordering in a single stream, or run without capture and tee to a file.

-            result = subprocess.run(benchmark_cmd, capture_output=True, text=True, check=True)
-            
-            # Write output to log file
-            with open(log_filename, 'w') as f:
-                f.write(result.stdout)
-                f.write(result.stderr)
+            result = subprocess.run(
+                benchmark_cmd,
+                stdout=subprocess.PIPE,
+                stderr=subprocess.STDOUT,
+                text=True,
+                check=True,
+            )
+            # Write output to log file (preserves ordering)
+            with open(log_filename, 'w') as f:
+                f.write(result.stdout)
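The merging behavior can be demonstrated with a self-contained snippet: redirecting stderr into stdout yields one interleaved stream in emission order. This is a standalone illustration, not the runner's code:

```python
import subprocess
import sys

# A child process that writes to both streams, flushing so ordering is deterministic.
child = (
    "import sys; "
    "print('line 1 (stdout)'); sys.stdout.flush(); "
    "print('line 2 (stderr)', file=sys.stderr); sys.stderr.flush(); "
    "print('line 3 (stdout)')"
)
result = subprocess.run(
    [sys.executable, "-c", child],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,  # merge: stderr lines land in result.stdout
    text=True,
    check=True,
)
print(result.stdout, end="")
```

With `capture_output=True` instead, the two streams arrive in separate buffers and any interleaving between them is lost.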
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9687bb4 and c4c6c3e.

📒 Files selected for processing (10)
  • tests/scripts/perf-sanity/README.md (1 hunks)
  • tests/scripts/perf-sanity/benchmark-bench-prerelease.sh (1 hunks)
  • tests/scripts/perf-sanity/benchmark-job.sh (1 hunks)
  • tests/scripts/perf-sanity/benchmark-serve-prerelease.sh (1 hunks)
  • tests/scripts/perf-sanity/benchmark-serve.sh (1 hunks)
  • tests/scripts/perf-sanity/benchmark_config.yaml (1 hunks)
  • tests/scripts/perf-sanity/parse_benchmark_results.py (1 hunks)
  • tests/scripts/perf-sanity/run_benchmark_bench.sh (1 hunks)
  • tests/scripts/perf-sanity/run_benchmark_serve.py (1 hunks)
  • tests/scripts/perf-sanity/run_benchmark_serve.sh (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.py: Python code should conform to Python 3.8+.
Indent Python code with 4 spaces. Do not use tabs.
Always maintain the namespace when importing in Python, even if only one class or function from a module is used.
Python filenames should use snake_case (e.g., some_file.py).
Python classes should use PascalCase (e.g., class SomeClass).
Python functions and methods should use snake_case (e.g., def my_awesome_function():).
Python local variables should use snake_case. Prefix k for variable names that start with a number (e.g., k_99th_percentile).
Python global variables should use upper snake_case and prefix G (e.g., G_MY_GLOBAL).
Python constants should use upper snake_case (e.g., MY_CONSTANT).
Avoid shadowing variables declared in an outer scope in Python.
Initialize all externally visible members of a Python class in the constructor.
For interfaces that may be used outside a Python file, prefer docstrings over comments.
Comments in Python should be reserved for code within a function, or interfaces that are local to a file.
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx.
Attributes and variables in Python can be documented inline; attribute docstrings will be rendered under the class docstring.
Avoid using reflection in Python when functionality can be easily achieved without it.
When using try-except blocks in Python, limit the except to the smallest set of errors possible.
When using try-except blocks to handle multiple possible variable types in Python, keep the body of the try as small as possible, using the else block to implement the logic.

Files:

  • tests/scripts/perf-sanity/parse_benchmark_results.py
  • tests/scripts/perf-sanity/run_benchmark_serve.py
**/*.{cpp,h,hpp,cc,cxx,cu,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code should contain an NVIDIA copyright header that includes the current year. This includes .cpp, .h, .cu, .py, and any other source files which are compiled or interpreted.

Files:

  • tests/scripts/perf-sanity/parse_benchmark_results.py
  • tests/scripts/perf-sanity/run_benchmark_serve.py
🧠 Learnings (4)
📓 Common learnings
Learnt from: galagam
PR: NVIDIA/TensorRT-LLM#6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Learnt from: yibinl-nvidia
PR: NVIDIA/TensorRT-LLM#6506
File: examples/models/core/mixtral/requirements.txt:3-3
Timestamp: 2025-08-01T15:14:45.673Z
Learning: In TensorRT-LLM, examples directory can have different dependency versions than the root requirements.txt file. Version conflicts between root and examples dependencies are acceptable because examples are designed to be standalone and self-contained.
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • tests/scripts/perf-sanity/benchmark-serve.sh
  • tests/scripts/perf-sanity/benchmark-job.sh
  • tests/scripts/perf-sanity/benchmark_config.yaml
  • tests/scripts/perf-sanity/README.md
  • tests/scripts/perf-sanity/run_benchmark_bench.sh
  • tests/scripts/perf-sanity/run_benchmark_serve.py
  • tests/scripts/perf-sanity/run_benchmark_serve.sh
📚 Learning: 2025-08-06T13:58:07.506Z
Learnt from: galagam
PR: NVIDIA/TensorRT-LLM#6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.

Applied to files:

  • tests/scripts/perf-sanity/benchmark-serve.sh
  • tests/scripts/perf-sanity/benchmark-job.sh
  • tests/scripts/perf-sanity/benchmark_config.yaml
  • tests/scripts/perf-sanity/README.md
  • tests/scripts/perf-sanity/benchmark-bench-prerelease.sh
  • tests/scripts/perf-sanity/run_benchmark_bench.sh
  • tests/scripts/perf-sanity/run_benchmark_serve.sh
📚 Learning: 2025-08-01T15:14:45.673Z
Learnt from: yibinl-nvidia
PR: NVIDIA/TensorRT-LLM#6506
File: examples/models/core/mixtral/requirements.txt:3-3
Timestamp: 2025-08-01T15:14:45.673Z
Learning: In TensorRT-LLM, examples directory can have different dependency versions than the root requirements.txt file. Version conflicts between root and examples dependencies are acceptable because examples are designed to be standalone and self-contained.

Applied to files:

  • tests/scripts/perf-sanity/README.md
  • tests/scripts/perf-sanity/run_benchmark_bench.sh
🧬 Code Graph Analysis (5)
tests/scripts/perf-sanity/benchmark-serve.sh (2)
tests/scripts/perf-sanity/benchmark-serve-prerelease.sh (1)
  • run_benchmark (28-39)
tests/scripts/perf-sanity/benchmark-job.sh (2)
  • run_benchmark (26-28)
  • report (30-41)
tests/scripts/perf-sanity/benchmark-job.sh (3)
tests/scripts/perf-sanity/benchmark-serve-prerelease.sh (1)
  • run_benchmark (28-39)
tests/scripts/perf-sanity/benchmark-bench-prerelease.sh (1)
  • run_benchmark (21-30)
tests/scripts/perf-sanity/benchmark-serve.sh (2)
  • run_benchmark (15-17)
  • report (19-31)
tests/scripts/perf-sanity/benchmark-bench-prerelease.sh (2)
tests/scripts/perf-sanity/benchmark-job.sh (2)
  • run (43-58)
  • run_benchmark (26-28)
tests/scripts/perf-sanity/benchmark-serve-prerelease.sh (2)
  • run_benchmark (28-39)
  • parse_report (41-82)
tests/scripts/perf-sanity/run_benchmark_serve.py (2)
tests/scripts/perf-sanity/run_benchmark_serve.sh (1)
  • wait_for_server (37-71)
tests/scripts/perf-sanity/benchmark-serve-prerelease.sh (1)
  • run_benchmark (28-39)
tests/scripts/perf-sanity/run_benchmark_serve.sh (1)
tests/scripts/perf-sanity/run_benchmark_serve.py (1)
  • wait_for_server (134-159)
🪛 Shellcheck (0.10.0)
tests/scripts/perf-sanity/benchmark-serve.sh

[warning] 23-23: Quote this to prevent word splitting.

(SC2046)


[warning] 24-24: Quote this to prevent word splitting.

(SC2046)

tests/scripts/perf-sanity/benchmark-job.sh

[warning] 34-34: Quote this to prevent word splitting.

(SC2046)


[warning] 35-35: Quote this to prevent word splitting.

(SC2046)

tests/scripts/perf-sanity/benchmark-serve-prerelease.sh

[warning] 33-33: Quote this to prevent word splitting.

(SC2046)


[warning] 33-33: Quote this to prevent word splitting.

(SC2046)


[warning] 34-34: Quote this to prevent word splitting.

(SC2046)


[warning] 46-46: Quote this to prevent word splitting.

(SC2046)


[warning] 47-47: Quote this to prevent word splitting.

(SC2046)

tests/scripts/perf-sanity/benchmark-bench-prerelease.sh

[warning] 13-13: script_dir appears unused. Verify use (or export if used externally).

(SC2034)


[warning] 24-24: Quote this to prevent word splitting.

(SC2046)


[warning] 24-24: Quote this to prevent word splitting.

(SC2046)


[warning] 25-25: Quote this to prevent word splitting.

(SC2046)


[warning] 38-38: Quote this to prevent word splitting.

(SC2046)


[warning] 39-39: Quote this to prevent word splitting.

(SC2046)

tests/scripts/perf-sanity/run_benchmark_bench.sh

[error] 1-1: Tips depend on target shell and yours is unknown. Add a shebang or a 'shell' directive.

(SC2148)

tests/scripts/perf-sanity/run_benchmark_serve.sh

[error] 1-1: Tips depend on target shell and yours is unknown. Add a shebang or a 'shell' directive.

(SC2148)

🪛 YAMLlint (1.37.1)
tests/scripts/perf-sanity/benchmark_config.yaml

[error] 468-468: trailing spaces

(trailing-spaces)

🪛 markdownlint-cli2 (0.17.2)
tests/scripts/perf-sanity/README.md

195-195: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🪛 Ruff (0.12.2)
tests/scripts/perf-sanity/parse_benchmark_results.py

3-5: 1 blank line required between summary line and description

(D205)


3-5: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


16-17: One-line docstring should fit on one line

Reformat to one line

(D200)


16-17: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


46-46: Line too long (128 > 120)

(E501)


91-93: 1 blank line required between summary line and description

(D205)


91-93: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


125-126: One-line docstring should fit on one line

Reformat to one line

(D200)


125-126: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


140-141: One-line docstring should fit on one line

Reformat to one line

(D200)


140-141: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


163-164: One-line docstring should fit on one line

Reformat to one line

(D200)


163-164: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


289-289: Line too long (531 > 120)

(E501)

tests/scripts/perf-sanity/run_benchmark_serve.py

2-8: 1 blank line required between summary line and description

(D205)


2-8: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


3-3: Line too long (166 > 120)

(E501)


22-22: Line too long (128 > 120)

(E501)


56-56: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


81-81: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


95-95: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


104-104: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


109-109: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


135-135: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


162-162: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


187-187: Line too long (165 > 120)

(E501)


199-199: Line too long (593 > 120)

(E501)


216-216: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


258-258: Line too long (147 > 120)

(E501)


281-281: Line too long (124 > 120)

(E501)


300-300: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


332-332: Line too long (180 > 120)

(E501)


345-345: Line too long (121 > 120)

(E501)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check

@chenfeiz0326 chenfeiz0326 force-pushed the dev/chenfeiz/sa-perf-sweep branch from c4c6c3e to 99e2fab on August 8, 2025 at 06:45

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (3)
tests/scripts/perf-sanity/benchmark-serve-prerelease.sh (1)

32-35: Quote $(pwd) instead of back-ticks.

Same remark as the previous review:

-        -v `pwd`:`pwd` \
-        -w `pwd`  \
+        -v "$(pwd)":"$(pwd)" \
+        -w "$(pwd)" \

Prevents word-splitting (SC2046).

tests/scripts/perf-sanity/parse_benchmark_results.py (1)

206-208: GPU count falls back to TP – still incorrect.

gpus should default to a sane value (e.g., 1 or the default config) instead of tp. This was raised earlier but not fixed.

tests/scripts/perf-sanity/run_benchmark_serve.py (1)

260-299: Handle server_process safely and terminate gracefully.

server_process may be undefined if Popen fails, and the cleanup always sends kill -9. Initialise to None, use terminate()/kill() guards, then wait. See prior review for diff.

🧹 Nitpick comments (3)
tests/scripts/perf-sanity/benchmark_config.yaml (1)

458-468: Add missing newline & remove trailing spaces.

Line 468 has trailing whitespace and the file lacks a terminating newline, tripping YAML-lint (new-line-at-end-of-file).
Minor, but fixing avoids false CI noise.

tests/scripts/perf-sanity/README.md (1)

175-183: Specify a language on fenced block.

Add “bash” (or “text”) after the triple-backticks to satisfy MD040 and improve syntax highlighting.

tests/scripts/perf-sanity/benchmark-serve-prerelease.sh (1)

46-47: Quote variable expansion in echo/realpath.

Wrap ${results} in quotes to avoid path issues with spaces:

-    echo "Report path" $(realpath ${results})
+    echo "Report path" "$(realpath "${results}")"
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c4c6c3e and 99e2fab.

📒 Files selected for processing (5)
  • tests/scripts/perf-sanity/README.md (1 hunks)
  • tests/scripts/perf-sanity/benchmark-serve-prerelease.sh (1 hunks)
  • tests/scripts/perf-sanity/benchmark_config.yaml (1 hunks)
  • tests/scripts/perf-sanity/parse_benchmark_results.py (1 hunks)
  • tests/scripts/perf-sanity/run_benchmark_serve.py (1 hunks)
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: galagam
PR: NVIDIA/TensorRT-LLM#6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Learnt from: yibinl-nvidia
PR: NVIDIA/TensorRT-LLM#6506
File: examples/models/core/mixtral/requirements.txt:3-3
Timestamp: 2025-08-01T15:14:45.673Z
Learning: In TensorRT-LLM, examples directory can have different dependency versions than the root requirements.txt file. Version conflicts between root and examples dependencies are acceptable because examples are designed to be standalone and self-contained.
📚 Learning: 2025-08-06T13:58:07.506Z
Learnt from: galagam
PR: NVIDIA/TensorRT-LLM#6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.

Applied to files:

  • tests/scripts/perf-sanity/README.md
  • tests/scripts/perf-sanity/benchmark_config.yaml
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • tests/scripts/perf-sanity/README.md
  • tests/scripts/perf-sanity/benchmark_config.yaml
📚 Learning: 2025-08-01T15:14:45.673Z
Learnt from: yibinl-nvidia
PR: NVIDIA/TensorRT-LLM#6506
File: examples/models/core/mixtral/requirements.txt:3-3
Timestamp: 2025-08-01T15:14:45.673Z
Learning: In TensorRT-LLM, examples directory can have different dependency versions than the root requirements.txt file. Version conflicts between root and examples dependencies are acceptable because examples are designed to be standalone and self-contained.

Applied to files:

  • tests/scripts/perf-sanity/README.md
🪛 markdownlint-cli2 (0.17.2)
tests/scripts/perf-sanity/README.md

175-175: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🪛 Ruff (0.12.2)
tests/scripts/perf-sanity/run_benchmark_serve.py

2-8: 1 blank line required between summary line and description

(D205)


2-8: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


3-3: Line too long (166 > 120)

(E501)


22-22: Line too long (128 > 120)

(E501)


56-56: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


81-81: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


95-95: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


104-104: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


109-109: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


135-135: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


162-162: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


187-187: Line too long (165 > 120)

(E501)


199-199: Line too long (593 > 120)

(E501)


216-216: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


258-258: Line too long (147 > 120)

(E501)


281-281: Line too long (124 > 120)

(E501)


300-300: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


332-332: Line too long (180 > 120)

(E501)


345-345: Line too long (121 > 120)

(E501)

tests/scripts/perf-sanity/parse_benchmark_results.py

3-5: 1 blank line required between summary line and description

(D205)


3-5: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


16-17: One-line docstring should fit on one line

Reformat to one line

(D200)


16-17: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


46-46: Line too long (128 > 120)

(E501)


91-93: 1 blank line required between summary line and description

(D205)


91-93: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


125-126: One-line docstring should fit on one line

Reformat to one line

(D200)


125-126: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


140-141: One-line docstring should fit on one line

Reformat to one line

(D200)


140-141: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


163-164: One-line docstring should fit on one line

Reformat to one line

(D200)


163-164: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


289-289: Line too long (531 > 120)

(E501)

🪛 Shellcheck (0.10.0)
tests/scripts/perf-sanity/benchmark-serve-prerelease.sh

[warning] 33-33: Quote this to prevent word splitting.

(SC2046)


[warning] 33-33: Quote this to prevent word splitting.

(SC2046)


[warning] 34-34: Quote this to prevent word splitting.

(SC2046)


[warning] 46-46: Quote this to prevent word splitting.

(SC2046)


[warning] 47-47: Quote this to prevent word splitting.

(SC2046)

🪛 YAMLlint (1.37.1)
tests/scripts/perf-sanity/benchmark_config.yaml

[error] 468-468: no new line character at the end of file

(new-line-at-end-of-file)


[error] 468-468: trailing spaces

(trailing-spaces)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check

@chenfeiz0326 chenfeiz0326 force-pushed the dev/chenfeiz/sa-perf-sweep branch from 99e2fab to 9a9191b Compare August 8, 2025 13:35
@chenfeiz0326
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #14612 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #14612 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #11038 completed with status: 'FAILURE'

@chenfeiz0326 chenfeiz0326 force-pushed the dev/chenfeiz/sa-perf-sweep branch from 9a9191b to 8483ff7 Compare August 11, 2025 02:04
@chenfeiz0326 chenfeiz0326 force-pushed the dev/chenfeiz/sa-perf-sweep branch from 6fe5ccb to 2ed58f2 Compare August 11, 2025 10:01
@chenfeiz0326
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #14792 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #14792 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11168 completed with status: 'SUCCESS'

chenfeiz0326 and others added 4 commits August 12, 2025 19:22
Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com>
Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com>
Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com>
Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
@chenfeiz0326 chenfeiz0326 force-pushed the dev/chenfeiz/sa-perf-sweep branch from f7dab8c to 446b316 Compare August 13, 2025 10:56
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (5)
cpp/tensorrt_llm/kernels/quantization.cu (5)

165-168: Mirror the padding-aware grid; same guard/constant concerns apply

Same feedback as above for the non-FP8 path: great to pad m when SWIZZLED and clamp by occupancy. Please use the shared constexpr and confirm kernel-side row bounds checks.

Apply within this hunk:

-        // The number of blocks for m. The m dimension will be padded to 128 for swizzled layout.
-        int numBlocksForM = layout == QuantizationSFLayout::SWIZZLED ? PadUpFn(m, 128) : m;
-        dim3 grid(std::min(numBlocksForM, multiProcessorCount * numBlocksPerSM));
+        // The number of blocks for m. The m dimension will be padded to kSWIZZLE_ROWS_PER_TILE for swizzled layout.
+        int const numBlocksForM =
+            (layout == QuantizationSFLayout::SWIZZLED) ? PadUpFn(m, kSWIZZLE_ROWS_PER_TILE) : m;
+        dim3 grid(std::min(numBlocksForM, multiProcessorCount * numBlocksPerSM));

203-206: MXFP8 path: consistent padding; prefer named constexpr and verify kernel bounds

Consistency here is good. Please switch 128 to a named constexpr and confirm quantize kernels safely ignore padded rows (blockIdx.x >= m).

Apply within this hunk:

-    // The number of blocks for m. The m dimension will be padded to 128 for swizzled layout.
-    int numBlocksForM = layout == QuantizationSFLayout::SWIZZLED ? PadUpFn(m, 128) : m;
-    dim3 grid(std::min(numBlocksForM, multiProcessorCount * numBlocksPerSM));
+    // The number of blocks for m. The m dimension will be padded to kSWIZZLE_ROWS_PER_TILE for swizzled layout.
+    int const numBlocksForM =
+        (layout == QuantizationSFLayout::SWIZZLED) ? PadUpFn(m, kSWIZZLE_ROWS_PER_TILE) : m;
+    dim3 grid(std::min(numBlocksForM, multiProcessorCount * numBlocksPerSM));

141-149: De-duplicate grid computation logic across the three call sites

The same grid sizing logic is repeated three times. Suggest extracting a small helper to reduce duplication and keep the occupancy arithmetic in one place.

Example helper (add in an internal namespace in this TU):

namespace {
inline dim3 makeGridForM(int m, QuantizationSFLayout layout, int blockX, int multiProcessorCount) {
    int const numBlocksPerSM = std::max(1, 2048 / std::max(1, blockX)); // blockX cannot be 0
    int const mBlocks = (layout == QuantizationSFLayout::SWIZZLED) ? PadUpFn(m, kSWIZZLE_ROWS_PER_TILE) : m;
    return dim3(std::min(mBlocks, multiProcessorCount * numBlocksPerSM));
}
} // anonymous namespace

Then replace:

dim3 grid = makeGridForM(m, layout, block.x, multiProcessorCount);

Note: If numBlocksPerSM differs per path in the future, pass the divisor as a parameter or create variants.

Also applies to: 160-168, 198-206


1-15: Update copyright year

Per the coding guidelines, source files should carry the current year. Please update to include 2025.

- * Copyright (c) 2019-2023, NVIDIA CORPORATION.  All rights reserved.
+ * Copyright (c) 2019-2025, NVIDIA CORPORATION.  All rights reserved.

146-149: Ensure OOB safety and replace the magic 128 with a named constant

Before landing this change, please:

• Centralize the swizzle-tile size and remove the literal 128. For example, near the top of cpp/tensorrt_llm/kernels/quantization.cu (after your namespace opens) add:

// Swizzled layout uses 128-row tiles; keep this centralized to avoid magic numbers.
constexpr int kSWIZZLE_ROWS_PER_TILE = 128;

• In the block that computes numBlocksForM (around lines 146–149), update to:

-        // The number of blocks for m. The m dimension will be padded to 128 for swizzled layout.
-        int numBlocksForM = layout == QuantizationSFLayout::SWIZZLED ? PadUpFn(m, 128) : m;
+        // The number of blocks for m. The m dimension will be padded to kSWIZZLE_ROWS_PER_TILE for swizzled layout.
+        int const numBlocksForM =
+            (layout == QuantizationSFLayout::SWIZZLED)
+                ? PadUpFn(m, kSWIZZLE_ROWS_PER_TILE)
+                : m;
         dim3 grid(std::min(numBlocksForM, multiProcessorCount * numBlocksPerSM));

Verify that each quantize_with_block_size<…> kernel (in the corresponding .cuh file) guards against extra padded rows when blockIdx.x ≥ m—either via a strided loop (for (rowIdx = blockIdx.x; rowIdx < paddedRows; rowIdx += gridDim.x)) or an early return. If it doesn’t, add an explicit if (blockIdx.x >= m) return; (or similar) at kernel entry.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 99e2fab and 446b316.

📒 Files selected for processing (1)
  • cpp/tensorrt_llm/kernels/quantization.cu (3 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh}: In C++, close namespaces with a comment naming the namespace (e.g., } // namespace foo)
Prefer const/constexpr variables over #define for constants
Declare variables const if not modified after initialization
Use Allman brace style in C++
C++ filenames use lowerCamelCase and must be case-insensitively unique within a build target
C++ type names use UpperCamelCase
Local variables, methods, and namespaces use lowerCamelCase
Global non-static variables not in anonymous namespace use gPrefix lowerCamelCase (e.g., gExample)
Static globals or globals in anonymous namespaces use sPrefix lowerCamelCase
Locally visible static variables start with 's' (e.g., static std::once_flag sFlag;)
Member variables use mPrefix lowerCamelCase; public members may omit but are encouraged to use 'm'
Constants (enums, global/static/function-scope magic numbers) use kPREFIXED_UPPER_SNAKE (e.g., kDIGIT_NUM)
If macros are unavoidable, use UPPER_SNAKE_CASE (prefer constants over #define)
Constructor parameter that conflicts with a public member name gets trailing underscore (foo_)
Literal suffixes should be uppercase (e.g., 1234L not 1234l)
C++: use spaces only; indent 4 spaces
Run clang-format (LLVM style) before submitting; wrap lines at 120 characters
If formatting must be bypassed, use // clang-format off/on around the section
Prefer smart pointers; use unique_ptr for sole ownership, shared_ptr for shared; weak_ptr only in exceptional cases
Do not use deprecated pre-C++11 smart pointers
Use C++ style comments; avoid C comments except special inline cases; prefer // single-line
Capitalize and punctuate full-sentence comments
Follow Doxygen rules: use //! for comments and //!< for members in C++
Disable code with #if/#endif and mnemonic conditions; avoid commented-out code; avoid dead code
Do not throw exceptions across library boundaries
Use least-forceful casts; avoid removing const/volatile; avoid C-style and functional casts (except constructors); p...

Files:

  • cpp/tensorrt_llm/kernels/quantization.cu
**/*.{cpp,cxx,cc,cu}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.{cpp,cxx,cc,cu}: Avoid literal values except for 0, nullptr, true, false; use named constexpr for other literals
Place semicolon of empty for/while loop on a new line
Always use brace-delimited bodies for switch/while/do-for/if/else
Use inline C comments in argument lists when parameter meaning is unclear (e.g., /* checkForErrors = */ false)
Do not use assignment in subexpressions (e.g., if (x = y) ... is forbidden)
Switch on enums should enumerate all values and omit default to catch new values at compile time
Structure switch statements; prohibit fallthrough except between empty cases; each case ends with break or throw; return at end of case not allowed; put break inside braces for compound case
Prefer anonymous namespaces over static for internal linkage of functions
Every defined function must be called at least once (no unused methods)

Files:

  • cpp/tensorrt_llm/kernels/quantization.cu
**/*.{h,hpp,hxx,hh,cuh,cpp,cxx,cc,cu}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Parameter names must be consistent between declarations and definitions

Files:

  • cpp/tensorrt_llm/kernels/quantization.cu
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • cpp/tensorrt_llm/kernels/quantization.cu
🧠 Learnings (1)
📓 Common learnings
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Learnt from: achartier
PR: NVIDIA/TensorRT-LLM#6763
File: tests/integration/defs/triton_server/conftest.py:16-22
Timestamp: 2025-08-11T20:09:24.378Z
Learning: In the TensorRT-LLM test infrastructure, the team prefers simple, direct solutions (like hard-coding directory traversal counts) over more complex but robust approaches when dealing with stable directory structures. They accept the maintenance cost of updating tests if the layout changes.
Learnt from: galagam
PR: NVIDIA/TensorRT-LLM#6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check

Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com>
@chenfeiz0326 chenfeiz0326 force-pushed the dev/chenfeiz/sa-perf-sweep branch from 446b316 to c6a442c Compare August 13, 2025 12:12
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

♻️ Duplicate comments (3)
tests/scripts/perf-sanity/run_benchmark_serve.py (3)

600-606: Extract log filename generation into a helper method.

The log filename generation logic is duplicated between server and benchmark logs. Consider extracting it into a helper method for better maintainability.

Add a helper method to generate log filenames:

def generate_log_filename(self, test_case: Dict[str, Any], prefix: str, suffix: str = "") -> str:
    """Generate consistent log filename based on test case configuration."""
    filename = (
        f"{prefix}.{test_case['model']}.tp{test_case['tp']}.ep{test_case['ep']}."
        f"attn{test_case['attn_backend']}.moe{test_case['moe_backend']}."
        f"gpu{test_case['free_gpu_mem_fraction']}.batch{test_case['max_batch_size']}."
        f"isl{test_case['isl']}.osl{test_case['osl']}."
        f"tokens{test_case['max_num_tokens']}.moetokens{test_case['moe_max_num_tokens']}"
    )
    if suffix:
        filename = f"{filename}.{suffix}"
    return f"{filename}.log"

Then update the usage:

-        server_log_filename = (
-            f"trtllm-serve.{model_label}.tp{test_case['tp']}.ep{test_case['ep']}."
-            f"attn{test_case['attn_backend']}.moe{test_case['moe_backend']}."
-            f"gpu{test_case['free_gpu_mem_fraction']}.batch{test_case['max_batch_size']}."
-            f"isl{test_case['isl']}.osl{test_case['osl']}."
-            f"tokens{test_case['max_num_tokens']}.moetokens{test_case['moe_max_num_tokens']}.log"
-        )
+        server_log_filename = self.generate_log_filename(test_case, "trtllm-serve")

46-58: Use an external configuration for model paths.

The hard-coded model paths should be moved to an external configuration file for better maintainability and to avoid exposing internal directory structures in the code.

Consider moving the model paths to a separate YAML configuration file (e.g., model_paths.yaml):

model_paths:
  70B-FP4: "/home/scratch.trt_llm_data/llm-models/llama-3.3-models/Llama-3.3-70B-Instruct-FP4"
  70B-FP8: "/home/scratch.trt_llm_data/llm-models/llama-3.3-models/Llama-3.3-70B-Instruct-FP8"
  Scout-FP4: "/home/scratch.trt_llm_data/llm-models/llama4-models/Llama-4-Scout-17B-16E-Instruct-FP4"
  Scout-FP8: "/home/scratch.trt_llm_data/llm-models/llama4-models/Llama-4-Scout-17B-16E-Instruct-FP8"
  R1-FP8: "/home/scratch.trt_llm_data/llm-models/DeepSeek-R1/DeepSeek-R1/"
  R1-FP4: "/home/scratch.trt_llm_data/llm-models/DeepSeek-R1/DeepSeek-R1-0528-FP4"

Then update the initialization to load from the configuration:

-        # Model path mapping
-        self.model_paths = {
-            "70B-FP4":
-            "/home/scratch.trt_llm_data/llm-models/llama-3.3-models/Llama-3.3-70B-Instruct-FP4",
-            "70B-FP8":
-            "/home/scratch.trt_llm_data/llm-models/llama-3.3-models/Llama-3.3-70B-Instruct-FP8",
-            "Scout-FP4":
-            "/home/scratch.trt_llm_data/llm-models/llama4-models/Llama-4-Scout-17B-16E-Instruct-FP4",
-            "Scout-FP8":
-            "/home/scratch.trt_llm_data/llm-models/llama4-models/Llama-4-Scout-17B-16E-Instruct-FP8",
-            "R1-FP8":
-            "/home/scratch.trt_llm_data/llm-models/DeepSeek-R1/DeepSeek-R1/",
-            "R1-FP4":
-            "/home/scratch.trt_llm_data/llm-models/DeepSeek-R1/DeepSeek-R1-0528-FP4"
-        }
+        # Load model path mapping from configuration
+        model_paths_file = self.config_file.parent / "model_paths.yaml"
+        if model_paths_file.exists():
+            with open(model_paths_file, 'r') as f:
+                model_config = yaml.safe_load(f)
+                self.model_paths = model_config.get('model_paths', {})
+        else:
+            self.model_paths = {}
+            print(f"Warning: Model paths configuration file not found: {model_paths_file}")

653-665: Fix UnboundLocalError and improve server cleanup.

The server_process variable may be undefined if subprocess.Popen fails, causing an UnboundLocalError in the finally block. Also, using kill -9 immediately is too aggressive.

Initialize server_process before the try block and use a more graceful shutdown approach:

+        server_process = None
         try:
             with open(server_log_filename, 'w') as log_file:
                 log_file.write(f"extra-llm-api-config.yml:\n")
                 log_file.write(config_content)
                 log_file.write("\n")

             with open(server_log_filename, 'a') as log_file:
                 server_process = subprocess.Popen(serve_cmd,
                                                   stdout=log_file,
                                                   stderr=subprocess.STDOUT)
 
             # ... rest of the try block ...
 
         finally:
-            # Cleanup: Kill server process using shell commands like in the original bash script
             print(f"Stopping server for {model_label}")
             try:
-                # Use shell commands for more reliable process killing
-                subprocess.run(f"kill -9 {server_process.pid}",
-                               shell=True,
-                               check=False)
-                subprocess.run(f"wait {server_process.pid} 2>/dev/null || true",
-                               shell=True,
-                               check=False)
+                if server_process and server_process.poll() is None:
+                    # Try graceful termination first
+                    server_process.terminate()
+                    try:
+                        server_process.wait(timeout=10)
+                    except subprocess.TimeoutExpired:
+                        print("Server didn't exit on SIGTERM; sending SIGKILL.")
+                        server_process.kill()
+                        server_process.wait(timeout=5)
             except Exception as e:
                 print(f"Warning: Error during server cleanup: {e}")
🧹 Nitpick comments (4)
tests/scripts/perf-sanity/run_benchmark_serve.py (2)

16-20: Add copyright header to the file.

All source files should include the NVIDIA copyright header as per the coding guidelines.

Add the following copyright header at the beginning of the file:

# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

384-384: Consider more specific error patterns.

The error checking is comprehensive, but consider checking for more specific patterns to avoid false positives.

-                    if "RuntimeError" in content or "runtime error" in content or "illegal memory access" in content or "terminate called" in content:
+                    error_patterns = [
+                        "RuntimeError:",
+                        "runtime error:",
+                        "CUDA error: an illegal memory access",
+                        "terminate called after throwing",
+                        "Segmentation fault",
+                        "Aborted (core dumped)"
+                    ]
+                    if any(pattern in content for pattern in error_patterns):
                         return True
tests/scripts/perf-sanity/benchmark-serve.sh (2)

1-1: Add copyright header to the file.

All source files should include the NVIDIA copyright header as per the coding guidelines.

Add the following copyright header after the shebang:

# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

51-51: Consider making --pull configurable.

The --pull always flag forces pulling the image every time, which may not be desirable in all environments (e.g., offline or bandwidth-limited scenarios).

Make the pull behavior configurable:

+DOCKER_PULL_POLICY=${DOCKER_PULL_POLICY:-always}
+
 run_benchmark_and_parse() {
     # Run benchmark and parse results in a single Docker container
     docker run --rm --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 \
         --gpus all \
         -v /home/scratch.trt_llm_data:/home/scratch.trt_llm_data:ro \
         -v $output_dir:$output_dir:rw \
         -v $bench_dir:$bench_dir:ro \
         -w "$(pwd)"  \
-        --pull always \
+        --pull ${DOCKER_PULL_POLICY} \
         ${IMAGE} \
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 446b316 and c6a442c.

📒 Files selected for processing (3)
  • tests/scripts/perf-sanity/README.md (1 hunks)
  • tests/scripts/perf-sanity/benchmark-serve.sh (1 hunks)
  • tests/scripts/perf-sanity/run_benchmark_serve.py (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • tests/scripts/perf-sanity/README.md
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+
Python indentation: 4 spaces, no tabs
Maintain module namespace in imports (from package.subpackage import foo; then use foo.SomeClass())
Python file names use snake_case
Python class names use PascalCase
Python functions/methods and local variables use snake_case; variables starting with a number get k_ prefix (e.g., k_99th_percentile)
Global variables use G_ prefixed UPPER_SNAKE_CASE (e.g., G_MY_GLOBAL)
Constants use UPPER_SNAKE_CASE in Python
Avoid shadowing variables from outer scopes in Python
Initialize all externally visible members of a Python class in init
Prefer docstrings for interfaces used outside a file; comments for local code
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes/variables inline with short docstrings
Avoid reflection when simple alternatives exist (e.g., prefer explicit parameters over dict(**locals()))
In try/except, catch the narrowest exceptions possible
For duck-typing with try/except, keep try body minimal and put logic in else

Files:

  • tests/scripts/perf-sanity/run_benchmark_serve.py
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • tests/scripts/perf-sanity/run_benchmark_serve.py
🧠 Learnings (1)
📓 Common learnings
Learnt from: achartier
PR: NVIDIA/TensorRT-LLM#6763
File: tests/integration/defs/triton_server/conftest.py:16-22
Timestamp: 2025-08-11T20:09:24.378Z
Learning: In the TensorRT-LLM test infrastructure, the team prefers simple, direct solutions (like hard-coding directory traversal counts) over more complex but robust approaches when dealing with stable directory structures. They accept the maintenance cost of updating tests if the layout changes.
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
🪛 Shellcheck (0.10.0)
tests/scripts/perf-sanity/benchmark-serve.sh

[warning] 50-50: Quote this to prevent word splitting.

(SC2046)
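The SC2046 warning refers to an unquoted command substitution, whose output the shell splits on whitespace before passing it on. A minimal sketch of the difference (the string is illustrative, not taken from benchmark-serve.sh):

```shell
# Unquoted $(...) output is split into multiple arguments (SC2046):
set -- $(echo "two words")
echo "unquoted arg count: $#"   # 2 arguments

# Quoting the substitution keeps it as a single argument:
set -- "$(echo "two words")"
echo "quoted arg count: $#"     # 1 argument
```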

🪛 Ruff (0.12.2)
tests/scripts/perf-sanity/run_benchmark_serve.py

72-72: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


84-84: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


98-98: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


137-137: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


179-179: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


247-247: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


267-267: Line too long (179 > 120)

(E501)


295-295: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


330-330: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


379-379: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


384-384: Line too long (150 > 120)

(E501)


393-393: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


464-464: Line too long (127 > 120)

(E501)


557-557: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


671-671: First line should end with a period, question mark, or exclamation point

Add closing punctuation

(D415)


713-713: Line too long (177 > 120)

(E501)


753-753: Line too long (163 > 120)

(E501)
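The D415 findings ask that a docstring's first line end with a period, question mark, or exclamation point; the E501 findings flag lines over 120 characters. A hedged sketch of a compliant function (the name and behavior are hypothetical, not code from the PR):

```python
def launch_server(config_path):
    """Launch the benchmark server and wait until it is ready.

    The summary line above ends with a period, satisfying D415. Long
    argument lists or log messages are wrapped across lines instead of
    exceeding the 120-character limit flagged by E501.

    Args:
        config_path: Path to the benchmark YAML configuration.

    Returns:
        True once the server is considered ready.
    """
    return config_path is not None


print(launch_server("config.yaml"))
```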

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check

@chenfeiz0326
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #15139 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15139 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11432 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com>
@chenfeiz0326
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #15204 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15204 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11484 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

@kaiyux kaiyux merged commit 5cd8c0f into NVIDIA:main Aug 14, 2025
4 checks passed
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Aug 17, 2025
Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com>
Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
Co-authored-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Aug 17, 2025
Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com>
Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
Co-authored-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>