
Conversation

@QiJune (Collaborator) commented Dec 10, 2025

Summary by CodeRabbit

  • Documentation
    • Updated documentation references and URLs throughout guides from TensorRT Model Optimizer to NVIDIA Model Optimizer for consistency.
    • Added JSON metadata example snippets to performance benchmarking documentation to clarify quantization configuration and behavior.
    • Improved link targets in feature documentation and example deployment guides to reflect updated NVIDIA Model Optimizer resources.


Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR follows the TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update the tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id] [--disable-reuse-test] [--disable-fail-fast] [--skip-test]
    [--stage-list "A10-PyTorch-1, xxx"] [--gpu-type "A30, H100_PCIe"] [--test-backend "pytorch, cpp"]
    [--add-multi-gpu-test] [--only-multi-gpu-test] [--disable-multi-gpu-test] [--post-merge]
    [--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"] [--detailed-log] [--debug (experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
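
For illustration, the documented flags compose into invocations like the following (editor-added examples; these exact commands do not appear in this thread):

/bot run
/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast
/bot run --reuse-test --gpu-type "A30, H100_PCIe"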

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since a lack of care and validation can cause top of tree to break.

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since a lack of care and validation can cause top of tree to break.

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
@QiJune QiJune requested a review from a team as a code owner December 10, 2025 06:10
@QiJune QiJune requested a review from StanleySun639 December 10, 2025 06:10
@QiJune (Collaborator, Author) commented Dec 10, 2025

/bot run

@coderabbitai bot (Contributor) commented Dec 10, 2025

📝 Walkthrough

Documentation links and references updated across multiple files to redirect from TensorRT Model Optimizer to NVIDIA Model Optimizer. A JSON snippet demonstrating quantization metadata was added to enhance documentation clarity in one file.

Changes

  • Developer Guide Documentation (docs/source/developer-guide/perf-benchmarking.md, docs/source/developer-guide/perf-overview.md): Updated TensorRT Model Optimizer references to Model Optimizer with new deployment documentation URLs; added a JSON snippet for quantization metadata (producer and modelopt fields) in perf-benchmarking.md; minor whitespace adjustments.
  • Features Documentation (docs/source/features/quantization.md): Updated external links for pre-quantized models and the ModelOpt Support Matrix from TensorRT Model Optimizer URLs to NVIDIA Model Optimizer URLs.
  • Examples Documentation (examples/auto_deploy/README.md): Updated external documentation links from TensorRT-Model-Optimizer to NVIDIA Model-Optimizer in the AutoQuantize and model optimizer documentation sections.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

  • Verify all updated URLs are accurate and accessible in the NVIDIA Model Optimizer documentation paths
  • Confirm the added JSON snippet in perf-benchmarking.md is syntactically valid and properly formatted
  • Ensure no broken links remain from the old TensorRT Model Optimizer references

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

  • Description check (⚠️ Warning): The PR description is largely empty, containing only the template structure and placeholders; critical sections such as Description and Test Coverage are not filled out. Resolution: fill in the Description section explaining why the ModelOpt links were updated, and the Test Coverage section confirming that the documentation changes are adequate.

✅ Passed checks (2 passed)

  • Title check (✅ Passed): The title clearly and specifically identifies the main change: fixing broken links to ModelOpt in documentation across multiple files.
  • Docstring Coverage (✅ Passed): No functions found in the changed files to evaluate docstring coverage; skipping the docstring coverage check.

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8550abf and 0e2d12e.

📒 Files selected for processing (4)
  • docs/source/developer-guide/perf-benchmarking.md (1 hunks)
  • docs/source/developer-guide/perf-overview.md (3 hunks)
  • docs/source/features/quantization.md (3 hunks)
  • examples/auto_deploy/README.md (1 hunks)
🧰 Additional context used
🧠 Learnings (17)
📓 Common learnings
Learnt from: venkywonka
Repo: NVIDIA/TensorRT-LLM PR: 6029
File: .github/pull_request_template.md:45-53
Timestamp: 2025-08-27T17:50:13.264Z
Learning: For PR templates in TensorRT-LLM, avoid suggesting changes that would increase developer overhead, such as converting plain bullets to mandatory checkboxes. The team prefers guidance-style bullets that don't require explicit interaction to reduce friction in the PR creation process.
Learnt from: farshadghodsian
Repo: NVIDIA/TensorRT-LLM PR: 7101
File: docs/source/blogs/tech_blog/blog9_Deploying_GPT_OSS_on_TRTLLM.md:36-36
Timestamp: 2025-08-21T00:16:56.457Z
Learning: TensorRT-LLM container release tags in documentation should only reference published NGC container images. The README badge version may be ahead of the actual published container versions.
📚 Learning: 2025-08-21T00:16:56.457Z
Learnt from: farshadghodsian
Repo: NVIDIA/TensorRT-LLM PR: 7101
File: docs/source/blogs/tech_blog/blog9_Deploying_GPT_OSS_on_TRTLLM.md:36-36
Timestamp: 2025-08-21T00:16:56.457Z
Learning: TensorRT-LLM container release tags in documentation should only reference published NGC container images. The README badge version may be ahead of the actual published container versions.

Applied to files:

  • docs/source/features/quantization.md
  • docs/source/developer-guide/perf-benchmarking.md
  • examples/auto_deploy/README.md
  • docs/source/developer-guide/perf-overview.md
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • docs/source/features/quantization.md
  • docs/source/developer-guide/perf-benchmarking.md
  • docs/source/developer-guide/perf-overview.md
📚 Learning: 2025-08-06T13:58:07.506Z
Learnt from: galagam
Repo: NVIDIA/TensorRT-LLM PR: 6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.

Applied to files:

  • docs/source/features/quantization.md
  • docs/source/developer-guide/perf-benchmarking.md
  • docs/source/developer-guide/perf-overview.md
📚 Learning: 2025-08-01T15:14:45.673Z
Learnt from: yibinl-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 6506
File: examples/models/core/mixtral/requirements.txt:3-3
Timestamp: 2025-08-01T15:14:45.673Z
Learning: In TensorRT-LLM, examples directory can have different dependency versions than the root requirements.txt file. Version conflicts between root and examples dependencies are acceptable because examples are designed to be standalone and self-contained.

Applied to files:

  • docs/source/features/quantization.md
  • docs/source/developer-guide/perf-benchmarking.md
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.

Applied to files:

  • docs/source/features/quantization.md
  • docs/source/developer-guide/perf-benchmarking.md
  • docs/source/developer-guide/perf-overview.md
📚 Learning: 2025-08-11T20:09:24.389Z
Learnt from: achartier
Repo: NVIDIA/TensorRT-LLM PR: 6763
File: tests/integration/defs/triton_server/conftest.py:16-22
Timestamp: 2025-08-11T20:09:24.389Z
Learning: In the TensorRT-LLM test infrastructure, the team prefers simple, direct solutions (like hard-coding directory traversal counts) over more complex but robust approaches when dealing with stable directory structures. They accept the maintenance cost of updating tests if the layout changes.

Applied to files:

  • docs/source/features/quantization.md
  • docs/source/developer-guide/perf-benchmarking.md
  • docs/source/developer-guide/perf-overview.md
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which can contain default `cuda_graph_config` values, so `llm_args` may already have this config before the extra options processing.

Applied to files:

  • docs/source/features/quantization.md
  • docs/source/developer-guide/perf-benchmarking.md
📚 Learning: 2025-08-14T15:43:23.107Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: tensorrt_llm/_torch/attention_backend/trtllm.py:259-262
Timestamp: 2025-08-14T15:43:23.107Z
Learning: In TensorRT-LLM's attention backend, tensor parameters in the plan() method are assigned directly without validation (dtype, device, contiguity checks). This maintains consistency across all tensor inputs and follows the pattern of trusting callers to provide correctly formatted tensors.

Applied to files:

  • docs/source/features/quantization.md
📚 Learning: 2025-09-16T09:30:09.716Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 7763
File: cpp/tensorrt_llm/CMakeLists.txt:297-301
Timestamp: 2025-09-16T09:30:09.716Z
Learning: In the TensorRT-LLM project, NCCL libraries are loaded earlier by PyTorch libraries or the bindings library, so the main shared library doesn't need NCCL paths in its RPATH - the libraries will already be available in the process address space when needed.

Applied to files:

  • docs/source/features/quantization.md
📚 Learning: 2025-09-23T15:12:38.312Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/thop/allreduceOp.cpp:352-446
Timestamp: 2025-09-23T15:12:38.312Z
Learning: In TensorRT-LLM NCCL device implementation, NCCL version 2.28+ requirements are handled at runtime in the nccl_device/config layer rather than with compile-time guards. This allows the allreduceOp to remain version-agnostic and delegates version compatibility validation to the appropriate lower-level components that can gracefully handle unsupported configurations.

Applied to files:

  • docs/source/features/quantization.md
📚 Learning: 2025-08-27T14:23:55.566Z
Learnt from: ixlmar
Repo: NVIDIA/TensorRT-LLM PR: 7294
File: tensorrt_llm/_torch/modules/rms_norm.py:17-17
Timestamp: 2025-08-27T14:23:55.566Z
Learning: The TensorRT-LLM project requires Python 3.10+ as evidenced by the use of TypeAlias from typing module, match/case statements, and union type | syntax throughout the codebase, despite some documentation still mentioning Python 3.8+.

Applied to files:

  • docs/source/features/quantization.md
📚 Learning: 2025-09-18T05:41:45.847Z
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7120
File: tensorrt_llm/llmapi/llm.py:690-697
Timestamp: 2025-09-18T05:41:45.847Z
Learning: Kimi model support is currently focused on the PyTorch backend path, with TRT path support potentially coming later.

Applied to files:

  • docs/source/features/quantization.md
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM's bench configuration, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which is a Dict[str, Any] that can contain default values including `cuda_graph_config`, making the fallback `llm_args["cuda_graph_config"]` safe to use.

Applied to files:

  • docs/source/developer-guide/perf-benchmarking.md
  • docs/source/developer-guide/perf-overview.md
📚 Learning: 2025-11-27T09:23:18.742Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 9511
File: tests/integration/defs/examples/serve/test_serve.py:136-186
Timestamp: 2025-11-27T09:23:18.742Z
Learning: In TensorRT-LLM testing, when adding test cases based on RCCA commands, the command format should be copied exactly as it appears in the RCCA case, even if it differs from existing tests. For example, some RCCA commands for trtllm-serve may omit the "serve" subcommand while others include it.

Applied to files:

  • docs/source/developer-guide/perf-benchmarking.md
  • docs/source/developer-guide/perf-overview.md
📚 Learning: 2025-08-26T09:49:04.956Z
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7192
File: tests/integration/test_lists/test-db/l0_dgx_b200.yml:56-72
Timestamp: 2025-08-26T09:49:04.956Z
Learning: In TensorRT-LLM test configuration files, the test scheduling system handles wildcard matching with special rules that prevent duplicate test execution even when the same tests appear in multiple yaml files with overlapping GPU wildcards (e.g., "*b200*" and "*gb200*").

Applied to files:

  • docs/source/developer-guide/perf-benchmarking.md
  • docs/source/developer-guide/perf-overview.md
📚 Learning: 2025-08-13T11:07:11.772Z
Learnt from: Funatiq
Repo: NVIDIA/TensorRT-LLM PR: 6754
File: tests/integration/test_lists/test-db/l0_a30.yml:41-47
Timestamp: 2025-08-13T11:07:11.772Z
Learning: In TensorRT-LLM test configuration files like tests/integration/test_lists/test-db/l0_a30.yml, TIMEOUT values are specified in minutes, not seconds.

Applied to files:

  • docs/source/developer-guide/perf-overview.md
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (5)
docs/source/developer-guide/perf-overview.md (2)

29-29: Content updates are contextually appropriate.

These changes add clarity to hardware support information and metrics sourcing without altering the documentation's accuracy or integrity.

Also applies to: 67-67


27-28: The Model Optimizer link in line 27 is valid and currently maintained. The URL https://nvidia.github.io/Model-Optimizer/ is active and serving current documentation.

examples/auto_deploy/README.md (1)

93-93: Verify NVIDIA Model Optimizer documentation and examples links are current.

Four references were updated to point to NVIDIA Model Optimizer. Confirm the AutoQuantize documentation URL is currently valid and accessible, and verify that the example path referenced (examples/llm_autodeploy/README.md) exists in the NVIDIA/Model-Optimizer repository.

Also applies to: 95-95, 98-98, 102-102

docs/source/developer-guide/perf-benchmarking.md (1)

449-459: Update version to a current stable ModelOpt release and clarify the context of the example.

The JSON structure is valid for hf_quant_config.json, but version "0.23.0rc1" is not an official PyPI release. Update to a current stable version (0.23.0, 0.23.1, or 0.23.2). Additionally, note that this format is being deprecated in favor of config.json during export; consider adding a brief note indicating when users should expect this transition or refer them to the canonical schema in the ModelOpt documentation.
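
For reference, a minimal sketch of the hf_quant_config.json metadata shape under discussion (an editor-added illustration: the layout follows the common ModelOpt export format, and the quant_algo values are assumptions, not copied from the PR diff):

{
  "producer": {
    "name": "modelopt",
    "version": "0.23.0"
  },
  "quantization": {
    "quant_algo": "FP8",
    "kv_cache_quant_algo": "FP8"
  }
}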

docs/source/features/quantization.md (1)

26-26: HuggingFace Model Optimizer collection and support matrix URLs are current and valid.

Line 26 correctly references NVIDIA's official HuggingFace collection for pre-quantized models at https://huggingface.co/collections/nvidia/model-optimizer-66aa84f7966b3150262481a4, and line 112's reference to the Model Optimizer support matrix at https://nvidia.github.io/Model-Optimizer/guides/0_support_matrix.html is also current. Both URLs point to active, maintained resources from NVIDIA.

@QiJune QiJune enabled auto-merge (squash) December 10, 2025 06:25
@tensorrt-cicd (Collaborator) commented:

PR_Github #27652 [ run ] triggered by Bot. Commit: 0e2d12e

@QiJune (Collaborator, Author) commented Dec 10, 2025

/bot skip --comment "doc changes"

@chzblych added the Release Blocker label (PRs that block the final release build or branching out of the release branch) Dec 10, 2025
@tensorrt-cicd (Collaborator) commented:

PR_Github #27663 [ skip ] triggered by Bot. Commit: 0e2d12e

@tensorrt-cicd (Collaborator) commented:

PR_Github #27652 [ run ] completed with state ABORTED. Commit: 0e2d12e
LLM/release-1.1/L0_MergeRequest_PR #567 (Blue Ocean) completed with status: ABORTED

@tensorrt-cicd (Collaborator) commented:

PR_Github #27663 [ skip ] completed with state SUCCESS. Commit: 0e2d12e
Release Check Pipeline #2675 failed

@yiqingy0 (Collaborator) commented:

/bot skip --comment "doc changes"

@tensorrt-cicd (Collaborator) commented:

PR_Github #27692 [ skip ] triggered by Bot. Commit: 0e2d12e

@tensorrt-cicd (Collaborator) commented:

PR_Github #27692 [ skip ] completed with state SUCCESS. Commit: 0e2d12e
Skipping testing for commit 0e2d12e

@QiJune QiJune merged commit 67ffa90 into NVIDIA:release/1.1 Dec 10, 2025
8 of 9 checks passed
mikeiovine pushed a commit to mikeiovine/TensorRT-LLM that referenced this pull request Dec 12, 2025
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
mikeiovine pushed a commit to mikeiovine/TensorRT-LLM that referenced this pull request Dec 12, 2025
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
mikeiovine pushed a commit to mikeiovine/TensorRT-LLM that referenced this pull request Dec 12, 2025
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
mikeiovine pushed a commit to mikeiovine/TensorRT-LLM that referenced this pull request Dec 13, 2025
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Signed-off-by: Mike Iovine <miovine@nvidia.com>
mikeiovine pushed a commit to mikeiovine/TensorRT-LLM that referenced this pull request Dec 13, 2025
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Signed-off-by: Mike Iovine <miovine@nvidia.com>
mikeiovine pushed a commit to mikeiovine/TensorRT-LLM that referenced this pull request Dec 13, 2025
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Signed-off-by: Mike Iovine <miovine@nvidia.com>
mikeiovine pushed a commit to mikeiovine/TensorRT-LLM that referenced this pull request Dec 14, 2025
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Signed-off-by: Mike Iovine <miovine@nvidia.com>
mikeiovine pushed a commit to mikeiovine/TensorRT-LLM that referenced this pull request Dec 15, 2025
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Signed-off-by: Mike Iovine <miovine@nvidia.com>
mikeiovine pushed a commit to mikeiovine/TensorRT-LLM that referenced this pull request Dec 15, 2025
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Signed-off-by: Mike Iovine <miovine@nvidia.com>
mikeiovine pushed a commit to mikeiovine/TensorRT-LLM that referenced this pull request Dec 16, 2025
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Signed-off-by: Mike Iovine <miovine@nvidia.com>
mikeiovine pushed a commit to mikeiovine/TensorRT-LLM that referenced this pull request Dec 16, 2025
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Signed-off-by: Mike Iovine <miovine@nvidia.com>
mikeiovine pushed a commit that referenced this pull request Dec 16, 2025
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Signed-off-by: Mike Iovine <miovine@nvidia.com>
codego7250 pushed a commit to codego7250/TensorRT-LLM that referenced this pull request Dec 19, 2025
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Signed-off-by: Mike Iovine <miovine@nvidia.com>

Labels

Release Blocker: PRs that block the final release build or branching out of the release branch


6 participants