
Conversation

@govind-ramnarayan (Collaborator) commented Dec 3, 2025

Summary by CodeRabbit

  • New Features

    • Added configurable sampler selection to AutoDeploy, allowing users to choose between TorchSampler and TRTLLMSampler for generation workflows.
    • Exposed model configuration properties for improved model introspection and metadata access.
    • Implemented TRTLLMSampler pathway with enhanced decoding capabilities.
  • Tests

    • Added unit tests validating TRTLLMSampler functionality in AutoDeploy execution pipeline.


Description

Support for TRTLLMSampler in AutoDeploy. Fixes: #9602

The main issue is that logits must be upcast to FP32 for the TRTLLMSampler to run without error. This may be fairly inefficient; performance still needs to be measured and validated.
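For reference, a minimal sketch of the cast (the helper name and placement here are illustrative; in this PR the cast is applied in the engine's logits path when the TRTLLM sampler is selected):

import torch

from tensorrt_llm.llmapi.llm_args import SamplerType

def maybe_upcast_logits(logits: torch.Tensor, sampler_type: SamplerType) -> torch.Tensor:
    # TRTLLMSampler expects float32 logits, so upcast fp16/bf16 model outputs
    # before sampling. Sketch only; the PR applies this inside _compute_logits.
    if sampler_type == SamplerType.TRTLLMSampler and logits.dtype != torch.float32:
        logits = logits.to(torch.float32)
    return logits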

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update the tava architecture diagram if the PR contains a significant design change.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with the Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
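
For example, the invocation used later in this thread, plus a stage-filtered variant:

/bot run --disable-fail-fast
/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast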

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping validation without care can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since reusing results without care and validation can break the top of tree.

@govind-ramnarayan requested a review from a team as a code owner December 3, 2025 00:11
@govind-ramnarayan marked this pull request as draft December 3, 2025 00:11
coderabbitai bot (Contributor) commented Dec 3, 2025

📝 Walkthrough

This change introduces support for an alternative sampling strategy (TRTLLMSampler) in the AutoDeploy framework. A new configuration field allows users to select between TorchSampler and TRTLLMSampler. Helper functions and classes manage sampler instantiation, model configuration exposure, and dtype handling. A unit test validates the TRTLLMSampler pathway.
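
As a rough sketch of the user-facing selection (the field and enum names come from this PR; the surrounding config plumbing is simplified):

from tensorrt_llm.llmapi.llm_args import SamplerType

# sampler_type is the new AutoDeployConfig field; it accepts a string or a
# SamplerType member and defaults to SamplerType.TorchSampler.
overrides = {"sampler_type": SamplerType.TRTLLMSampler}
# Plain strings are also accepted, since the field is Union[str, SamplerType]:
overrides = {"sampler_type": "TRTLLMSampler"}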

Changes

Cohort / File(s) Summary
Configuration & Type Exposure
tensorrt_llm/_torch/auto_deploy/llm_args.py
Added sampler_type field to AutoDeployConfig with type Union[str, SamplerType], defaulting to SamplerType.TorchSampler. Exported SamplerType to public import surface.
Model Factory Properties
tensorrt_llm/_torch/auto_deploy/models/hf.py
Added four new accessor properties to AutoModelForCausalLMFactory: dtype, num_hidden_layers, hidden_size, and num_attention_heads. Each retrieves values from model config via _get_model_config().
Sampler Instantiation & Engine Integration
tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py
Introduced sampler selection logic: added TRTLLMSamplerModelConfig class, get_torch_dtype() and instantiate_sampler() functions to conditionally create TorchSampler or TRTLLMSampler. Modified ADEngine API to accept and store sampler_type parameter. Updated build_from_config() and create_autodeploy_executor() to propagate sampler type. Added dtype casting for logits in the TRTLLMSampler path (see the sketch after this table).
Testing
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py
Added new smoke test for TRTLLMSampler pathway. Configures ExperimentConfig with sampler_type: SamplerType.TRTLLMSampler, executes generation, and validates output.
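
A minimal sketch of the selection logic summarized above (the real instantiate_sampler in ad_executor.py takes engine/config objects; the constructor arguments here are placeholders):

from tensorrt_llm.llmapi.llm_args import SamplerType

# TorchSampler and TRTLLMSampler are assumed to come from the pyexecutor
# sampler module (tensorrt_llm/_torch/pyexecutor/sampler.py).

def instantiate_sampler(sampler_type, **sampler_kwargs):
    # Dispatch on the configured sampler type; anything else raises, which is
    # the ValueError the Ruff TRY003 hint below points at.
    if sampler_type == SamplerType.TorchSampler:
        return TorchSampler(**sampler_kwargs)  # placeholder arguments
    if sampler_type == SamplerType.TRTLLMSampler:
        return TRTLLMSampler(**sampler_kwargs)  # placeholder arguments
    raise ValueError(f"Unsupported sampler type: {sampler_type}")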

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Areas requiring attention:
    • ad_executor.py: Verify sampler instantiation logic correctly handles both TorchSampler and TRTLLMSampler paths; check dtype casting and error handling for unsupported sampler types
    • ad_executor.py: Confirm ADEngine signature changes and parameter threading through build_from_config() and create_autodeploy_executor() are correct and backward compatible
    • test_ad_trtllm_sampler.py: Validate test coverage adequately exercises the TRTLLMSampler pathway

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings, 1 inconclusive)
Check name Status Explanation Resolution
Linked Issues check ⚠️ Warning PR description does not reference any JIRA ticket, NVBugs ID, or GitHub issue, despite the template requiring one in a format like "[TRTLLM-1234]" (or "[None]" if no ticket exists). Add a ticket reference to the PR title/description in the required format (e.g., "[TRTLLM-xxxx][feat]" or "[None][feat]") to indicate whether this addresses a tracked issue.
Docstring Coverage ⚠️ Warning Docstring coverage is 40.00%, which is below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
Description check ❓ Inconclusive The PR description provides a clear issue reference (#9602) and explains the main technical change (upcasting logits to FP32), but the Test Coverage section is empty and some PR checklist items are incomplete. Fill in the Test Coverage section with the specific test names/paths that validate the TRTLLMSampler integration, and verify all checklist items are properly addressed before merging.
✅ Passed checks (2 passed)
Check name Status Explanation
Out of Scope Changes check ✅ Passed The PR introduces a new configuration option and exposes model properties that alter the public API surface; the scope appears appropriate for the stated objective of adding TRTLLMSampler support.
Title check ✅ Passed The title clearly and concisely describes the main feature addition: support for TRTLLM Sampler in AutoDeploy, which aligns with the core changes across multiple files.

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

🧹 Nitpick comments (5)
tensorrt_llm/_torch/auto_deploy/llm_args.py (1)

133-136: Consider documenting the "auto" option in the description.

The SamplerType enum (from tensorrt_llm/llmapi/llm_args.py lines 2474-2478) includes an auto option in addition to TRTLLMSampler and TorchSampler. The field description only mentions the latter two. If auto is a valid selection for AutoDeploy, consider updating the description to include it. Otherwise, the field definition looks good.

     sampler_type: Union[str, SamplerType] = Field(
         default=SamplerType.TorchSampler,
-        description="The type of sampler to use. Options are TRTLLMSampler or TorchSampler. Defaults to TorchSampler.",
+        description="The type of sampler to use. Options are TRTLLMSampler, TorchSampler, or auto. Defaults to TorchSampler.",
     )
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py (1)

44-48: Remove debug print statements.

The print statements on lines 44 and 47 appear to be debug artifacts. Consider removing them or replacing with proper test logging if debug output is needed.

-    print(f"Experiment config: {experiment_config}")
     cfg = ExperimentConfig(**experiment_config)
 
-    print("Running smoke test with TRTLLMSampler...")
     results = main(cfg)
tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py (3)

312-318: Logging inside forward path will be noisy; consider logging once or using debug level.

The ad_logger.info() call is inside _compute_logits(), which is called on every forward pass. This will produce repeated log messages. Consider either:

  1. Logging once during initialization
  2. Using ad_logger.debug() for per-iteration logging
  3. Using ad_logger.info_once() if available

Also, as noted in the TODO on line 313, integrating this cast into the AD graph would be more efficient.

         # Ensure logits are float32 as TRTLLMSampler expects float32
         # TODO(govind): Should this be put into the AD graph so it can be fused with other operations?
         if self.sampler_type == SamplerType.TRTLLMSampler and logits.dtype != torch.float32:
-            ad_logger.info(
-                f"Logits type {logits.dtype} is not supported by TRTLLMSampler. Casting to float32."
-            )
             logits = logits.to(torch.float32)

Alternatively, log once during __init__ after checking if the sampler type is TRTLLMSampler:

if self.sampler_type == SamplerType.TRTLLMSampler:
    ad_logger.info("TRTLLMSampler requires float32 logits; casting will be applied if needed.")

354-363: Add docstring to TRTLLMSamplerModelConfig class.

Per coding guidelines, Python interfaces should have docstrings that can be parsed by Sphinx. This class exposes model configuration for the TRTLLMSampler.

 class TRTLLMSamplerModelConfig:
+    """Configuration wrapper exposing model attributes required by TRTLLMSampler.
+
+    Args:
+        ad_config: The LlmArgs configuration containing model settings.
+    """
+
     def __init__(self, ad_config: LlmArgs):
         factory = ad_config.create_factory()
 
         self.config = SimpleNamespace()
         self.config.vocab_size = factory.vocab_size_padded
         self.config.num_hidden_layers = factory.num_hidden_layers
         self.config.hidden_size = factory.hidden_size
         self.config.num_attention_heads = factory.num_attention_heads

365-372: Potential redundant factory creation; add docstring.

The get_torch_dtype function may call ad_config.create_factory() even when model_dtype is not "auto". While the factory creation is internally cached via prefetch, consider:

  1. Adding a docstring for clarity
  2. Verifying that factory creation overhead is acceptable
 def get_torch_dtype(ad_config: LlmArgs):
-    # if the model dtype is "auto", we infer it from the model config
+    """Get the torch dtype from the AutoDeploy configuration.
+
+    If the model dtype is "auto", it is inferred from the model config via the factory.
+
+    Args:
+        ad_config: The LlmArgs configuration.
+
+    Returns:
+        The torch.dtype for the model.
+    """
     model_dtype = ad_config.dtype
     if model_dtype == "auto":
         model_dtype = ad_config.create_factory().dtype
     if isinstance(model_dtype, str):
         model_dtype = str_dtype_to_torch(model_dtype)
     return model_dtype
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 642dfae and 1257e29.

📒 Files selected for processing (4)
  • tensorrt_llm/_torch/auto_deploy/llm_args.py (2 hunks)
  • tensorrt_llm/_torch/auto_deploy/models/hf.py (1 hunks)
  • tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py (8 hunks)
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces; do not use tabs
Always maintain the namespace when importing in Python, even if only one class or function from a module is used (e.g., use from package.subpackage import foo and then foo.SomeClass() instead of from package.subpackage.foo import SomeClass)
Python filenames should use snake_case (e.g., some_file.py)
Python class names should use PascalCase (e.g., class SomeClass)
Python function and method names should use snake_case (e.g., def my_awesome_function():)
Python local variable names should use snake_case, with prefix k for variable names that start with a number (e.g., k_99th_percentile = ...)
Python global variables should use upper snake_case with prefix G (e.g., G_MY_GLOBAL = ...)
Python constants should use upper snake_case (e.g., MY_CONSTANT = ...)
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Python comments should be reserved for code within a function, or interfaces that are local to a file
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with type and description (e.g., self.x = 5 followed by """<type>: Description of 'x'""" )
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except clause to the smallest set of specific errors possible instead of catching all exceptions
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible and use the else block to implement the logic

Files:

  • tensorrt_llm/_torch/auto_deploy/models/hf.py
  • tensorrt_llm/_torch/auto_deploy/llm_args.py
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py
  • tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py
**/*.{cpp,h,cu,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code files should contain an NVIDIA copyright header that includes the current year at the top

Files:

  • tensorrt_llm/_torch/auto_deploy/models/hf.py
  • tensorrt_llm/_torch/auto_deploy/llm_args.py
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py
  • tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py
🧠 Learnings (11)
📓 Common learnings
Learnt from: venkywonka
Repo: NVIDIA/TensorRT-LLM PR: 6029
File: .github/pull_request_template.md:45-53
Timestamp: 2025-08-27T17:50:13.264Z
Learning: For PR templates in TensorRT-LLM, avoid suggesting changes that would increase developer overhead, such as converting plain bullets to mandatory checkboxes. The team prefers guidance-style bullets that don't require explicit interaction to reduce friction in the PR creation process.
Learnt from: ixlmar
Repo: NVIDIA/TensorRT-LLM PR: 7294
File: tensorrt_llm/_torch/pyexecutor/sampler.py:368-392
Timestamp: 2025-08-27T15:03:57.149Z
Learning: In TensorRT-LLM's sampler.py, int32 usage for softmax_indices and related tensor indexing is intentional and should not be changed to int64. The torch.IntTensor type hint is correct for the sample() function's softmax_indices parameter.
Learnt from: achartier
Repo: NVIDIA/TensorRT-LLM PR: 6763
File: tests/integration/defs/triton_server/conftest.py:16-22
Timestamp: 2025-08-11T20:09:24.389Z
Learning: In the TensorRT-LLM test infrastructure, the team prefers simple, direct solutions (like hard-coding directory traversal counts) over more complex but robust approaches when dealing with stable directory structures. They accept the maintenance cost of updating tests if the layout changes.
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.
📚 Learning: 2025-08-09T02:04:49.623Z
Learnt from: Fridah-nv
Repo: NVIDIA/TensorRT-LLM PR: 6760
File: tensorrt_llm/_torch/auto_deploy/models/quant_config_reader.py:81-98
Timestamp: 2025-08-09T02:04:49.623Z
Learning: In TensorRT-LLM's auto_deploy module, torch.dtype values in configuration dictionaries must be stored as string representations (e.g., "float16" instead of torch.float16) because OmegaConf.merge does not support torch.dtype types. These string representations are converted to actual torch.dtype objects in downstream code.

Applied to files:

  • tensorrt_llm/_torch/auto_deploy/models/hf.py
  • tensorrt_llm/_torch/auto_deploy/llm_args.py
  • tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py
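
In code, the dtype-as-string learning above looks roughly like the following (a sketch; str_dtype_to_torch is the converter get_torch_dtype uses in this PR, and the import path is assumed):

from omegaconf import OmegaConf

from tensorrt_llm._utils import str_dtype_to_torch  # import path assumed

# Store dtypes as strings so OmegaConf.merge works...
base = OmegaConf.create({"model": {"dtype": "float16"}})  # not torch.float16
override = OmegaConf.create({"model": {"dtype": "bfloat16"}})
merged = OmegaConf.merge(base, override)

# ...and convert to an actual torch.dtype downstream.
torch_dtype = str_dtype_to_torch(merged.model.dtype)  # torch.bfloat16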
📚 Learning: 2025-08-27T15:03:57.149Z
Learnt from: ixlmar
Repo: NVIDIA/TensorRT-LLM PR: 7294
File: tensorrt_llm/_torch/pyexecutor/sampler.py:368-392
Timestamp: 2025-08-27T15:03:57.149Z
Learning: In TensorRT-LLM's sampler.py, int32 usage for softmax_indices and related tensor indexing is intentional and should not be changed to int64. The torch.IntTensor type hint is correct for the sample() function's softmax_indices parameter.

Applied to files:

  • tensorrt_llm/_torch/auto_deploy/llm_args.py
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which can contain default `cuda_graph_config` values, so `llm_args` may already have this config before the extra options processing.

Applied to files:

  • tensorrt_llm/_torch/auto_deploy/llm_args.py
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM's bench configuration, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which is a Dict[str, Any] that can contain default values including `cuda_graph_config`, making the fallback `llm_args["cuda_graph_config"]` safe to use.

Applied to files:

  • tensorrt_llm/_torch/auto_deploy/llm_args.py
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • tensorrt_llm/_torch/auto_deploy/llm_args.py
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py
📚 Learning: 2025-08-14T15:38:01.771Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: cpp/tensorrt_llm/pybind/thop/bindings.cpp:55-57
Timestamp: 2025-08-14T15:38:01.771Z
Learning: In TensorRT-LLM Python bindings, tensor parameter collections like mla_tensor_params and spec_decoding_tensor_params are kept as required parameters without defaults to maintain API consistency, even when it might affect backward compatibility.

Applied to files:

  • tensorrt_llm/_torch/auto_deploy/llm_args.py
📚 Learning: 2025-08-06T13:58:07.506Z
Learnt from: galagam
Repo: NVIDIA/TensorRT-LLM PR: 6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.

Applied to files:

  • tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.

Applied to files:

  • tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py
📚 Learning: 2025-08-26T09:49:04.956Z
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7192
File: tests/integration/test_lists/test-db/l0_dgx_b200.yml:56-72
Timestamp: 2025-08-26T09:49:04.956Z
Learning: In TensorRT-LLM test configuration files, the test scheduling system handles wildcard matching with special rules that prevent duplicate test execution even when the same tests appear in multiple yaml files with overlapping GPU wildcards (e.g., "*b200*" and "*gb200*").

Applied to files:

  • tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py
📚 Learning: 2025-08-19T12:45:11.997Z
Learnt from: amitz-nv
Repo: NVIDIA/TensorRT-LLM PR: 7033
File: tensorrt_llm/_torch/pyexecutor/model_engine.py:0-0
Timestamp: 2025-08-19T12:45:11.997Z
Learning: In tensorrt_llm/_torch/pyexecutor/model_engine.py, DoRA (Delta Orthogonal Rank Adaptation) functionality was removed from the PyTorch flow to eliminate issues with inverted DoRA detection logic. The original is_dora condition was checking if scaling_vec_pointer == 0, which was potentially incorrect.

Applied to files:

  • tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py
🧬 Code graph analysis (2)
tensorrt_llm/_torch/auto_deploy/llm_args.py (2)
tensorrt_llm/llmapi/llm_args.py (2)
  • SamplerType (2475-2479)
  • Field (63-90)
tensorrt_llm/builder.py (1)
  • default (45-50)
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py (3)
tests/unittest/_torch/auto_deploy/_utils_test/_model_test_utils.py (1)
  • get_small_model_config (514-553)
examples/auto_deploy/build_and_run_ad.py (1)
  • ExperimentConfig (126-239)
tensorrt_llm/llmapi/llm_args.py (1)
  • SamplerType (2475-2479)
🪛 Ruff (0.14.7)
tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py

411-411: Avoid specifying long messages outside the exception class

(TRY003)
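
For context, the usual TRY003 remedy is to move the long message into a dedicated exception class (hypothetical class name; whether to adopt it here is a style call):

class UnsupportedSamplerTypeError(ValueError):
    """Raised when an unknown sampler type is configured."""

    def __init__(self, sampler_type):
        # Keep the long message inside the exception class rather than at the
        # raise site, which is what Ruff's TRY003 asks for.
        super().__init__(f"Unsupported sampler type: {sampler_type!r}")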

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (8)
tensorrt_llm/_torch/auto_deploy/llm_args.py (1)

11-11: LGTM on import addition.

The import of SamplerType follows the coding guideline to maintain namespace when importing.

tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py (1)

50-53: LGTM on test assertions.

The assertions verify that the TRTLLMSampler path produces output, which is appropriate for a smoke test.

tensorrt_llm/_torch/auto_deploy/models/hf.py (2)

149-162: LGTM on new config accessor properties.

The num_hidden_layers, hidden_size, and num_attention_heads properties follow the established pattern used by vocab_size_padded and provide a consistent interface for accessing model configuration attributes.


144-148: Return type annotation may not match actual behavior from model_config.

The dtype property is typed as Optional[torch.dtype], but getattr(model_config, "dtype", None) retrieves the raw configuration value. Based on learnings from this codebase, torch.dtype values in configuration dictionaries are stored as string representations (e.g., "float16") due to OmegaConf.merge limitations, with conversion happening in downstream code. Either the return type annotation should reflect Optional[Union[str, torch.dtype]], or verify that model_config.dtype is already converted to torch.dtype before retrieval.
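
A sketch of the annotation the comment suggests, with the property body mirroring the quoted getattr call:

from typing import Optional, Union

import torch

class AutoModelForCausalLMFactory:  # excerpt; the real class lives in models/hf.py
    @property
    def dtype(self) -> Optional[Union[str, torch.dtype]]:
        # The raw config value may still be a string such as "float16" here;
        # conversion to torch.dtype happens downstream (see get_torch_dtype).
        return getattr(self._get_model_config(), "dtype", None)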

tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py (4)

21-26: LGTM on import additions.

The new imports follow the coding guidelines and are necessary for the TRTLLMSampler integration.


147-154: LGTM on ADEngine constructor changes for sampler_type.

The sampler_type parameter is properly passed through build_from_config to __init__ and stored on the instance for later use in _compute_logits.

Also applies to: 163-179


512-518: LGTM on instantiate_sampler usage.

The refactored sampler instantiation correctly passes all required parameters and uses the new instantiate_sampler helper function.


375-413: Consider handling SamplerType.auto explicitly if it exists in the enum.

The SamplerType enum may include an auto option, but instantiate_sampler will raise a ValueError for it. If auto should be a valid option (perhaps defaulting to TorchSampler), consider adding explicit handling. If auto is intentionally unsupported in AutoDeploy, the error message could be more descriptive.
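
One way to make that explicit (assuming the enum member is spelled auto, as the comment suggests; the TorchSampler fallback is an illustrative choice, not the PR's actual behavior):

# Inside instantiate_sampler, before dispatching:
if sampler_type == SamplerType.auto:
    # AutoDeploy's documented default is TorchSampler, so "auto" could simply
    # resolve to it (assumption only).
    sampler_type = SamplerType.TorchSampler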

@govind-ramnarayan changed the title from "TRTLLMSampler in AD" to "[#9602][feat] AutoDeploy: Support TRTLLM Sampler" on Dec 3, 2025
@govind-ramnarayan marked this pull request as ready for review December 3, 2025 06:19
@govind-ramnarayan force-pushed the gramnarayan/trtllm-sampler branch from 1257e29 to 3bfe777 on December 3, 2025 06:23
@lucaslie linked an issue on Dec 3, 2025 that may be closed by this pull request
@govind-ramnarayan force-pushed the gramnarayan/trtllm-sampler branch 2 times, most recently from b17b2b9 to 4627ad1 on December 4, 2025 20:44
@govind-ramnarayan (Collaborator Author):

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator):

PR_Github #27016 [ run ] triggered by Bot. Commit: 6cd9d38

@govind-ramnarayan (Collaborator Author):

/bot kill

@govind-ramnarayan (Collaborator Author):

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator):

PR_Github #27017 [ run ] triggered by Bot. Commit: 6cd9d38

@tensorrt-cicd (Collaborator):

PR_Github #27016 [ run ] completed with state ABORTED. Commit: 6cd9d38
LLM/main/L0_MergeRequest_PR #20600 (Blue Ocean) completed with status: ABORTED

@tensorrt-cicd (Collaborator):

PR_Github #27018 [ kill ] triggered by Bot. Commit: 6cd9d38

@tensorrt-cicd (Collaborator):

PR_Github #27017 [ run ] completed with state ABORTED. Commit: 6cd9d38

@tensorrt-cicd (Collaborator):

PR_Github #27018 [ kill ] completed with state SUCCESS. Commit: 6cd9d38
Successfully killed previous jobs for commit 6cd9d38

@govind-ramnarayan (Collaborator Author):

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator):

PR_Github #27023 [ run ] triggered by Bot. Commit: 6cd9d38

@govind-ramnarayan force-pushed the gramnarayan/trtllm-sampler branch from ae173b4 to 7e69d70 on December 4, 2025 23:43
Signed-off-by: Govind Ramnarayan <105831528+govind-ramnarayan@users.noreply.github.com>
@govind-ramnarayan force-pushed the gramnarayan/trtllm-sampler branch from 7e69d70 to cdf7652 on December 4, 2025 23:47
@govind-ramnarayan (Collaborator Author):

/bot run --disable-fail-fast

@govind-ramnarayan enabled auto-merge (squash) December 4, 2025 23:47
@tensorrt-cicd (Collaborator):

PR_Github #27029 [ run ] triggered by Bot. Commit: cdf7652

@tensorrt-cicd (Collaborator):

PR_Github #27023 [ run ] completed with state ABORTED. Commit: 6cd9d38
LLM/main/L0_MergeRequest_PR #20605 (Blue Ocean) completed with status: ABORTED

@tensorrt-cicd (Collaborator):

PR_Github #27029 [ run ] completed with state SUCCESS. Commit: cdf7652
/LLM/main/L0_MergeRequest_PR pipeline #20611 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

@govind-ramnarayan merged commit 74df9b1 into NVIDIA:main on Dec 5, 2025
5 checks passed
MinaHuai pushed a commit to davidmlw/TensorRT-LLM that referenced this pull request Dec 10, 2025
…VIDIA#8779)

The performance results of some kernels could be easily affected by the warm/cold L2 cache status. To achieve more precise profiling results, the L2 cache is cleared for every execution by the circular buffer method for better benchmarking during autotuning.

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>

[None][infra] Waive failed cases for main branch on 11/25 (NVIDIA#9429)

Signed-off-by: qqiao <qqiao@nvidia.com>

[NVIDIA#8391][chore] test_perf.py to lock clocks read from gpu_configs.yml instead of max freq (NVIDIA#9409)

Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com>

[None][ci] Move more test stages to use OCI machines (NVIDIA#9395)

Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>
Co-authored-by: Matt Lefebvre <matthewelefebvre@gmail.com>

[None][feat] Improve TRTLLM MoE in small hidden size throughput cases (NVIDIA#9377)

Signed-off-by: Anthony Chang <27950904+rosenrodt@users.noreply.github.com>

[https://nvbugs/5537996][fix] Let KV cache manager block initialization be aware whether it is doing a dry run or not (NVIDIA#9093)

Before this commit, the kv cache manager does the same regardless, which causes a mis-calculation in free memory available to allocate for the KV cache manager, hence causing a crash.

This commit fixes this by letting KV cache manager initialization be aware whether it is doing the dry run or not. If it is a dry run, use the max_tokens setting that is already pre-calculated and filled into kv_cache_config.max_tokens.

Signed-off-by: eopXD <yuehtingc@nvidia.com>

[https://nvbugs/5667922][fix] Update long context evaluation config (NVIDIA#9426)

Signed-off-by: mni <125171826+baize97@users.noreply.github.com>

[None][fix] Mitigate test timeout issues (NVIDIA#9445)

Signed-off-by: Shixiaowei02 <39303645+Shixiaowei02@users.noreply.github.com>

[None][chore] Fix trtllm-eval for PyTorchLLM (NVIDIA#9427)

Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>

[None][feat] Add a parser to layer-wise benchmarks (NVIDIA#9440)

Signed-off-by: Tailing Yuan <yuantailing@gmail.com>

[None][feat] Support custom chat template for tool calling (NVIDIA#9297)

Signed-off-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com>

[TRTLLM-8160][feat] Add draft token tree runtime on CDL (NVIDIA#8586)

Signed-off-by: Yue Weng <25103990+yweng0828@users.noreply.github.com>

[None][ci] waive a test (NVIDIA#9458)

Signed-off-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com>

[https://nvbugs/5680905][fix] Relax the MMLU accuracy requirement for DS-v3.2 (NVIDIA#9439)

Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>

[TRTLLM-8376][feat] top-p optimization (removes redundant softmax) (NVIDIA#9411)

Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>

[TRTLLM-9490][feat] use FlashInfer's top_k_sampling_from_probs (NVIDIA#9457)

Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>

[https://nvbugs/5647400] [fix] Enlarged the AllReduce workspace size to 64MB. Added AllReduce strategy to AD config. (NVIDIA#9145)

Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com>

[TRTLLM-909][feat] Overlap context chunks in pipeline parallel mode (NVIDIA#9308)

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

[None][chore] AutoDeploy add multi stream moe pass to default.yaml (NVIDIA#9430)

Signed-off-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>

[https://nvbugs/5685143][fix] avoid cudaFree overlap with cuda graph (NVIDIA#9438)

Signed-off-by: Chuang Zhu <111838961+chuangz0@users.noreply.github.com>

[None][chore] Bump version to 1.2.0rc5 (NVIDIA#9455)

Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>

[TRTLLM-8936][test] Add disagg and wideep multi-node multi-gpu test cases (NVIDIA#9356)

Signed-off-by: FredricZ-2007 <226039983+fredricz-20070104@users.noreply.github.com>

[None][ci] move some slow test cases of DGX-B200 to post merge (NVIDIA#9467)

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

[TRTLLM-9293][feat] Enable partial weight loading to support streaming update weights (NVIDIA#9224)

Signed-off-by: shuyix <219646547+shuyixiong@users.noreply.github.com>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>

[TRTLLM-9264][fix] Add accuracy/unit tests/doc for phi4mm (NVIDIA#9246)

Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com>

[https://nvbugs/5580099][fix] Cherry pick IMA issue fix from release/1.1 (NVIDIA#9032)

Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com>

[None][chore] Upgrade CuteDSL to 4.3.0 (NVIDIA#9444)

Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>

[None][feat] Support MLA chunked prefill for DeepSeek V3.2 model (NVIDIA#9376)

Signed-off-by: Chang Liu (Enterprise Products) <9713593+chang-l@users.noreply.github.com>

[None][feat] Add environment variable to force spec-dec number of accepted tokens (NVIDIA#9371)

Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com>

[None][infra] Update allowed list 2025.11.25 (NVIDIA#9468)

Signed-off-by: Yuanjing Xue <197832395+yuanjingx87@users.noreply.github.com>

[None][infra] Fail the pipeline when slurm ssh dropped (NVIDIA#9157)

Signed-off-by: Yuanjing Xue <197832395+yuanjingx87@users.noreply.github.com>

[None][feat] AutoDeploy: Remove redundant copies in mamba layers (NVIDIA#9461)

Signed-off-by: Chenghao Zhang <211069071+nvchenghaoz@users.noreply.github.com>
Co-authored-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>

[None][feat] AutoDeploy: Add A_log fusion for Mamba layers (NVIDIA#9422)

Signed-off-by: Chenghao Zhang <211069071+nvchenghaoz@users.noreply.github.com>

[None][ci] Waive blackwell test on spec gate. (NVIDIA#9502)

Signed-off-by: Zheyu Fu <zheyuf@NVIDIA.com>

[https://nvbugs/5608930][fix] Fix a typo (NVIDIA#9487)

Signed-off-by: Shixiaowei02 <39303645+Shixiaowei02@users.noreply.github.com>

[NVIDIA#9463][feat] Add revision option to trtllm commands (NVIDIA#9498)

Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com>

[TRTLLM-9085][doc] fix math formula rendering issues (NVIDIA#9481)

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

[None][chore] update comments in llm_args.py (NVIDIA#9472)

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>

[https://nvbugs/5680310][fix] Fix ctx only timed out test (NVIDIA#9410)

Signed-off-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com>

[https://nvbugs/5547414][fix] enable case after using local cache model (NVIDIA#9473)

Signed-off-by: Hui Gao <huig@nvidia.com>

[None][fix] Replace PYTORCH_CUDA_ALLOC_CONF with PYTORCH_ALLOC_CONF to fix deprecation warning (NVIDIA#9294)

Signed-off-by: Jiagan Cheng <jiaganc@nvidia.com>

[https://nvbugs/5698581][fix] Init draft tokens for CUDA graph dummy request (NVIDIA#9505)

Signed-off-by: ziyixiong-nv <219238287+ziyixiong-nv@users.noreply.github.com>

[None][infra] Waive failed case in pre-merge on 11/27 (NVIDIA#9507)

Signed-off-by: qqiao <qqiao@nvidia.com>

[TRTLLM-9513][docs] Qwen3 deployment guide (NVIDIA#9488)

Signed-off-by: Lanyu Liao <laliao@laliao-mlt.client.nvidia.com>
Co-authored-by: Lanyu Liao <laliao@laliao-mlt.client.nvidia.com>

[None][chore] revert batch_size=1 to prevent timeout and lower accuracy reference by 0.12% as a WAR (NVIDIA#9447)

Signed-off-by: Lizhi Zhou <1432185+reasonsolo@users.noreply.github.com>
Co-authored-by: Shi Xiaowei <39303645+Shixiaowei02@users.noreply.github.com>

[TRTLLM-9279][infra] Use flexcache for gh200 nodes since they locate in Austin (NVIDIA#9405)

Signed-off-by: qqiao <qqiao@nvidia.com>
Signed-off-by: Emma Qiao <qqiao@nvidia.com>
Co-authored-by: Yanchao Lu <yanchaol@nvidia.com>

[cherry-pick][https://nvbugs/5670793][fix] Solve trtllm-serve launch_disaggregated issue (NVIDIA#9346)

Signed-off-by: xxi <xxi@nvidia.com>

[None][infra] Fix Slurm job script (NVIDIA#9508)

Signed-off-by: Yuanjing Xue <197832395+yuanjingx87@users.noreply.github.com>

[None][fix] change allreduce workspace dtype to torch.int64 to avoid overflow (NVIDIA#9479)

Signed-off-by: Zhenhuan Chen <zhenhuanc@nvidia.com>

[None][feat] add qwen3-next CI test of accuracy on BF16 and NVFP4 (NVIDIA#9330)

Signed-off-by: jiant <107457950+JadoTu@users.noreply.github.com>

[None][fix] fix TP support for DeepSeek-V3.2 on hopper (NVIDIA#9484)

Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>

[TRTLLM-9389][chore] Refactor AlltoallMethodType. (NVIDIA#9388)

Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com>

[https://nvbugs/5674665][chore] Add test coverage for https://nvbugspro.nvidia.com/bug/5674665 (NVIDIA#9518)

Signed-off-by: eopXD <yuehtingc@nvidia.com>

[TRTLLM-7288][infra] Download merged waive list in slurm script (NVIDIA#8999)

Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>
Co-authored-by: Yanchao Lu <yanchaol@nvidia.com>

[https://nvbugs/5687820][fix] Remove self.abort() in DetokenizedGenerationResult (NVIDIA#9449)

Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>

[NVIDIA#9150][feat] AutoDeploy Nemotron-Flash support (NVIDIA#9504)

Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>

[None] [chore] Update to cutlass 4.3 (NVIDIA#8637)

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>

[https://nvbugs/5637037][chore] Update waive lists. (NVIDIA#9386)

Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com>
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
Co-authored-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>

[TRTLLM-8970][infra] Fix generate report when has isolation test result (NVIDIA#8861)

Signed-off-by: qqiao <qqiao@nvidia.com>
Signed-off-by: Emma Qiao <qqiao@nvidia.com>

[https://nvbugs/5685015][fix] Update invalid max_token test (NVIDIA#9435)

Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com>

[None][fix] Fix on-disk cache and revise logger/statistics for AutoTuner. (NVIDIA#9211)

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>

[https://nvbugs/5689658][test] Fix gpu lock issue running on cluster (NVIDIA#9441)

Signed-off-by: yufeiwu <230315618+yufeiwu-nv@users.noreply.github.com>

[None][chore] add spec_decoding configs in perf benchmark scripts and fix typos (NVIDIA#9533)

Signed-off-by: Lanyu Liao <lancelly@users.noreply.github.com>
Co-authored-by: Lanyu Liao <lancelly@users.noreply.github.com>

[None][fix] Remove FP8 K/V buffer from TRTLLM sparse MLA attention kernel (NVIDIA#9529)

Signed-off-by: Chang Liu (Enterprise Products) <9713593+chang-l@users.noreply.github.com>

[None] [chore] Enhancements and clean up to slurm scripts (NVIDIA#9493)

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>

[None][chore] Revert "[None][fix] change allreduce workspace dtype to torch.int64 t… (NVIDIA#9538)

Signed-off-by: Zhenhuan Chen <zhenhuanc@nvidia.com>

[None][infra] Waive failed cases for main branch on 11/28 (NVIDIA#9539)

Signed-off-by: qqiao <qqiao@nvidia.com>

[None][fix] Pass checkpoint_format to create_input_processor (NVIDIA#9521)

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

[TRTLLM-9541][infra] Use artifactory mirror for download.pytorch.org (NVIDIA#9477)

Signed-off-by: ZhanruiSunCh <184402041+ZhanruiSunCh@users.noreply.github.com>
Signed-off-by: Zhanrui Sun <184402041+ZhanruiSunCh@users.noreply.github.com>
Co-authored-by: Yanchao Lu <yanchaol@nvidia.com>

[TRTLLM-9488][feat] add 'disable_flashinfer_sampling' config option (NVIDIA#9454)

Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>

[None][infra] Waive failed case in pre-merge on 11/28 (NVIDIA#9537)

Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>

[None][perf] Helix: improve all-to-all perf for large CP size (NVIDIA#9494)

Signed-off-by: Matthias Jouanneaux <mjoux@nvidia.com>
Signed-off-by: Zheyu Fu <zheyuf@NVIDIA.com>
Co-authored-by: Zheyu Fu <zheyuf@nvidia.com>

[None][feat] support for more accurate AR calculation (NVIDIA#9323)

Signed-off-by: binghanc <176802681+binghanc@users.noreply.github.com>

[TRTLLM-9488][fix] llmapi references (NVIDIA#9547)

Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>

[NVIDIA#8948][feat] Support custom sharding config (NVIDIA#9143)

Signed-off-by: greg-kwasniewski1 <213329731+greg-kwasniewski1@users.noreply.github.com>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>

[None][chore] Weekly mass integration of release/1.1 -- rebase (NVIDIA#9522)

Signed-off-by: yunruis <205571022+yunruis@users.noreply.github.com>
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Signed-off-by: Mike Iovine <miovine@nvidia.com>
Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>
Signed-off-by: qgai <qgai@nvidia.com>
Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>
Signed-off-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com>
Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com>
Signed-off-by: Simeng Liu <simengl@nvidia.com>
Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com>
Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com>
Signed-off-by: Ivy Zhang <25222398+crazydemo@users.noreply.github.com>
Signed-off-by: Vincent Zhang <vinczhang@nvidia.com>
Signed-off-by: peaceh <103117813+peaceh-nv@users.noreply.github.com>
Signed-off-by: Michal Guzek <mguzek@nvidia.com>
Signed-off-by: Michal Guzek <moraxu@users.noreply.github.com>
Signed-off-by: Chang Liu (Enterprise Products) <9713593+chang-l@users.noreply.github.com>
Signed-off-by: leslie-fang25 <leslief@nvidia.com>
Signed-off-by: Shunkang <182541032+Shunkangz@users.noreply.github.co>
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
Co-authored-by: yunruis <205571022+yunruis@users.noreply.github.com>
Co-authored-by: sunnyqgg <159101675+sunnyqgg@users.noreply.github.com>
Co-authored-by: brb-nv <169953907+brb-nv@users.noreply.github.com>
Co-authored-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com>
Co-authored-by: JunyiXu-nv <219237550+JunyiXu-nv@users.noreply.github.com>
Co-authored-by: Simeng Liu <109828133+SimengLiu-nv@users.noreply.github.com>
Co-authored-by: Guoming Zhang <137257613+nv-guomingz@users.noreply.github.com>
Co-authored-by: Jin Li <59594262+liji-nv@users.noreply.github.com>
Co-authored-by: Ivy Zhang <25222398+crazydemo@users.noreply.github.com>
Co-authored-by: Vincent Zhang <vcheungyi@163.com>
Co-authored-by: peaceh-nv <103117813+peaceh-nv@users.noreply.github.com>
Co-authored-by: Michal Guzek <moraxu@users.noreply.github.com>
Co-authored-by: Chang Liu <9713593+chang-l@users.noreply.github.com>
Co-authored-by: Leslie Fang <leslief@nvidia.com>
Co-authored-by: Shunkangz <182541032+Shunkangz@users.noreply.github.com>
Co-authored-by: Shunkang <182541032+Shunkangz@users.noreply.github.co>
Co-authored-by: QI JUN <22017000+QiJune@users.noreply.github.com>

[TRTLLM-5971][feat] Integrate helix parallelism (NVIDIA#9342)

Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>

[None][infra] - Request idle time exemption for OCI jobs (NVIDIA#9528)

Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>

[None][infra] Wiave failed tests for main branch on 11/30 (NVIDIA#9555)

Signed-off-by: qqiao <qqiao@nvidia.com>

[None][fix] Fix port conflict in disagg tests (NVIDIA#9474)

Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com>

[None][ci] Split H100_PCIe-PyTorch-Post-Merge test stage (NVIDIA#9558)

Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>

[None][ci] Split H100_PCIe-PyTorch-Post-Merge test stage (NVIDIA#9559)

Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>

[TRTLLM-8958][feat] and [TRTLLM-8960]: create ConfigurableMoE and support TRTLLMGenFusedMoE as backend (NVIDIA#9486)

[None] [feat] Optimize the algorithm part of RocketKV (NVIDIA#9333)

Signed-off-by: yuhangh <58161490+heyuhhh@users.noreply.github.com>

[https://nvbugs/5690172][fix] Fix Qwen3-235B ATP accuracy issue with PDL (NVIDIA#9530)

Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>

[TRTLLM-6222][feat] Extend cute_dsl_nvfp4_gemm to sm103. (NVIDIA#9543)

Signed-off-by: Mindy Li <11663212+limin2021@users.noreply.github.com>

[None][fix] Correct virtual memory allocation alignment (NVIDIA#9491)

Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>

[https://nvbugs/5684703][fix] Unwaive disagg guided decoding test (NVIDIA#9466)

Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>

[https://nvbugs/5503479][fix] Temporarily lower reference accuracy to stabilize CI (NVIDIA#9398)

Signed-off-by: Pengbo Wang <221450789+pengbowang-nv@users.noreply.github.com>

[None][chore] remove qwen3-next accuracy tests (NVIDIA#9534)

Signed-off-by: jiant <107457950+JadoTu@users.noreply.github.com>

[None][doc] fix mtp.py typo (NVIDIA#9307)

Signed-off-by: liugaoji <757394026@qq.com>

[None][feat] add chat template kwargs support to longbench-v2 (NVIDIA#9544)

Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>

[NVIDIA#9496][fix] AutoDeploy: remove auto-tuner from nvfp4_gemm forward (NVIDIA#9497)

Signed-off-by: Neta Zmora <96238833+nzmora-nvidia@users.noreply.github.com>

[None][fix] Replace hash method with unique_id for cutedsl MoE runners. (NVIDIA#9569)

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>

[None][chore] refactor disaggregated scripts to use named arguments (NVIDIA#9581)

Signed-off-by: Zhenhuan Chen <zhenhuanc@nvidia.com>

[TRTLLM-6222][feat] Several perf opt for cuteDSL nvf4 gemm (NVIDIA#9428)

Signed-off-by: Yuhan Li <51736452+liyuhannnnn@users.noreply.github.com>

[None][chore] reduce the layers of the `devel` docker image (NVIDIA#9077)

Signed-off-by: Martin Marciniszyn Mehringer <11665257+MartinMarciniszyn@users.noreply.github.com>

[https://nvbugs/5651854][infra] Enable perf metrics during accuracy testing (NVIDIA#9140)

[None][fix] Skip Allreduce init for Attention DP (NVIDIA#9542)

Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>

[None][test] [None][test] Waive main branch test failures 12/1 (NVIDIA#9566)

Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>

[None][ci] Minor change for Slurm scripts (NVIDIA#9561)

Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>

[TRTLLM-6768][infra] Fix params for not updating github status (NVIDIA#6747)

Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>

[None][infra] Update the pytest options after MI (NVIDIA#9579)

Signed-off-by: qqiao <qqiao@nvidia.com>

[TRTLLM-6756][feat] Add Beam Search to TorchSampler (NVIDIA#8509)

Signed-off-by: Stefan Niebler <82932102+stnie@users.noreply.github.com>

[None][chore] Defer exposing context parallel configs (NVIDIA#9552)

Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>

[TRTC-1943][feat] Env vars override support in LLM API (NVIDIA#9104)

Signed-off-by: Venky Ganesh <23023424+venkywonka@users.noreply.github.com>

[None][feat] AutoDeploy: Use the router gemm op for nemotron MOE (NVIDIA#9500)

Signed-off-by: Chenghao Zhang <211069071+nvchenghaoz@users.noreply.github.com>

[NVIDIA#9198][feat] Refactor dist ops in AutoDeploy (NVIDIA#9301)

Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com>

[None][fix] Prevent YAML partial kv_cache_config from incorrectly overriding the complete kv_cache_config (NVIDIA#9262)

Signed-off-by: Yuening Li <62227368+Yuening-wa@users.noreply.github.com>

[TRTLLM-9085][doc] fix math formula rendering issues in github (NVIDIA#9605)

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

[None][feat] Unify nvfp4 gemm backend (NVIDIA#8963)

Signed-off-by: Shijie Wang <jaywan@nvidia.com>
Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
Signed-off-by: Shijie <jaywan@nvidia.com>
Co-authored-by: Yukun He <23156053+hyukn@users.noreply.github.com>

[None][feat] Add support for KVCache reuse for DSv32 (NVIDIA#9383)

Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>

[None][chroe] Polish qwen3-next modeling code. (NVIDIA#8902)

Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com>

[https://nvbugs/5703953][fix] Use random port for disagg tests (NVIDIA#9582)

Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com>

[None][fix] Waive gb200 (NVIDIA#9580)

Signed-off-by: Xin He (SW-GPU) <200704525+xinhe-nv@users.noreply.github.com>

[FMDL-1328][feat] Add support for nano-v3 and super-v3 with pytorch backend (NVIDIA#9261)

Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com>

[https://nvbugs/5582091][test] increase warmup times in testing for multi-gpu cases (NVIDIA#9578)

Signed-off-by: Ruodi Lu <ruodil@users.noreply.github.com>
Co-authored-by: Ruodi Lu <ruodil@users.noreply.github.com>

[None][chore] Add failed cases into waives.txt (NVIDIA#9588)

Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com>

[https://nvbugs/5702793][fix] Fix uncontiguous tensor view (NVIDIA#9576)

Signed-off-by: shuyix <219646547+shuyixiong@users.noreply.github.com>

[None][infra] Waive failed cases for main branch (NVIDIA#9615)

Signed-off-by: qqiao <qqiao@nvidia.com>

[TRTLLM-9488][feat] use FlashInfer.sampling by default (NVIDIA#9545)

Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>

[None][infra] Update allowlist 2025/12/01 (NVIDIA#9616)

Signed-off-by: Yuanjing Xue <197832395+yuanjingx87@users.noreply.github.com>

[None][infra] Remove an invalid test name in waives.txt (NVIDIA#9620)

Signed-off-by: qqiao <qqiao@nvidia.com>

Lock the gpu clocks in L0 perf tests (NVIDIA#9585)

Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com>

[TRTLLM-9466][test] Evaluate helix parallelism with DSV3 Lite (NVIDIA#9597)

Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>

[None][fix] Extract GPU count from single-node stage names (NVIDIA#9599)

Signed-off-by: Chang Liu (Enterprise Products) <9713593+chang-l@users.noreply.github.com>

[https://nvbugs/5667774][fix] Refine Piecewise Cuda Graph Condition for DP (NVIDIA#9393)

Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com>

[TRTLLM-9144][fix] enhance RPC robustness (NVIDIA#8711)

Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>
Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>
Signed-off-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com>
Co-authored-by: Erin Ho <14718778+hchings@users.noreply.github.com>

[https://nvbugs/5627710][fix] Fix synchronization bugs in KvCacheTransferManager that can cause corrupted blocks (NVIDIA#9056)

Signed-off-by: thorjohnsen <41591019+thorjohnsen@users.noreply.github.com>
Signed-off-by: Thor Johnsen <41591019+thorjohnsen@users.noreply.github.com>
Co-authored-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com>
Co-authored-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

[TRTLLM-8980][test] Clean up spec dec tests in test_llm_api_pytorch (NVIDIA#8889)

Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Signed-off-by: Mike Iovine <miovine@nvidia.com>

[NVIDIA#9150][feat] Add code for nano v3 to custom implementation in AD (NVIDIA#9465)

* Why?

We would like to show an alternative to monkey-patching in AutoDeploy.

* What?

This commit builds on the existing custom model implementation for
NemotronH and adds the bits relevant for MoE layers.

Part of NVIDIA#9150.

Signed-off-by: William Zhang <133824995+2ez4bz@users.noreply.github.com>

[NVIDIA#9150][feat] AutoDeploy: reviewer comments for NVIDIA#9150 (NVIDIA#9527)

Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>

[https://nvbugs/5651854][fix] Fix dist-serving perf by clearing CPU affinity (NVIDIA#9549)

Signed-off-by: Shixiaowei02 <39303645+Shixiaowei02@users.noreply.github.com>

[NVIDIA#9550][feat] AutoDeploy: Add NVFP4 Cutlass MoE kernels  (NVIDIA#9551)

Signed-off-by: Neta Zmora <96238833+nzmora-nvidia@users.noreply.github.com>

[https://nvbugs/5688388][fix] Reduce the number of requests in the disagg test to speed it up (NVIDIA#9598)

Signed-off-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com>

[TRTLLM-8946][feat] Improved heuristics to detect shardable regions (NVIDIA#9200)

Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
Signed-off-by: greg-kwasniewski1 <213329731+greg-kwasniewski1@users.noreply.github.com>
Co-authored-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>

[NVIDIA#9632][feat] Support EXTRA_WHEEL_BUILD_ARGS during wheel build (NVIDIA#9633)

Signed-off-by: Yu Chi Li <yuchil@nvidia.com>

[None][chore] Waive test failing on pre-merge (NVIDIA#9638)

Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>

[None][chore] Remove traceback dump for multimodal input processor (NVIDIA#9634)

Signed-off-by: Chang Liu (Enterprise Products) <9713593+chang-l@users.noreply.github.com>

[None][chore] Fix trtllm-eval and move GroupedGemmInputsHelper (NVIDIA#9612)

Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>

[https://nvbugs/5698434][fix] Use separate weight mapper for draft (NVIDIA#9607)

Signed-off-by: Anurag Mukkara <134339030+amukkara@users.noreply.github.com>

[TRTLLM-7101][infra] Reuse passed tests (NVIDIA#6894)

Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
Co-authored-by: Yanchao Lu <yanchaol@nvidia.com>

[None][test] Remove duplicate test cases (NVIDIA#9623)

Signed-off-by: yufeiwu <230315618+yufeiwu-nv@users.noreply.github.com>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>

[None][feat] Add RocketKV usage doc and e2e accuracy test on LongBenchV2 (NVIDIA#9572)

Signed-off-by: yuhangh <58161490+heyuhhh@users.noreply.github.com>

[TRTLLM-9242][doc] Add examples showcasing openai compatible APIs (NVIDIA#9520)

Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com>

[None][chore] AutoDeploy update cuda stream manager for multi-device (NVIDIA#9575)

Signed-off-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>

[TRTLLM-9391][chore] Automatically estimate required workspace. (NVIDIA#9535)

Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com>

[https://nvbugs/5708475][fix] Fix e2e eval accuracy for helix parallelism (NVIDIA#9647)

Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>

[https://nvbugs/5561153][test] Fix log error for perf test (NVIDIA#9622)

Signed-off-by: FredricZ-2007 <226039983+fredricz-20070104@users.noreply.github.com>

[TRTLLM-8241][feat] Aliasing to comply with LlmArgs (NVIDIA#9586)

Signed-off-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com>

[None][chore] Add failed cases into waives.txt (NVIDIA#9593)

Signed-off-by: Jie Li <lijie@nvidia.com>
Co-authored-by: Jie Li <lijie@nvidia.com>

[TRTLLM-6842][feat] Support Response API for general purpose (NVIDIA#9392)

Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com>

[None][test] Update Qwen3-next accuracy testing by setting the cuda … (NVIDIA#9613)

Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com>

[None][feat] update trtllm-gen nvfp4 kernels with better performance (NVIDIA#9510)

Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>

[None][doc] Replace the tensorrt icon with torch icon on overview.md (NVIDIA#9644)

Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com>

[https://nvbugs/5705197][chore] Unwaive timeout disagg tests (NVIDIA#9637)

Signed-off-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com>

[https://nvbugs/5552132][fix] Enable LoRa for GPT OSS Torch (NVIDIA#8253)

Signed-off-by: Michal Guzek <mguzek@nvidia.com>

[None][fix] Fix wide ep MoE error (NVIDIA#9642)

Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com>

[https://nvbugs/5702795][fix] Remove the warning message for aten.log. (NVIDIA#9665)

Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com>

[https://nvbugs/5693853][fix] Fix error handling when querying machin… (NVIDIA#9483)

Signed-off-by: Gal Hubara Agam <96368689+galagam@users.noreply.github.com>

[OMNIML-2932][feat] NVFP4 AWQ support (NVIDIA#8698)

Signed-off-by: weimingc <17592131+meenchen@users.noreply.github.com>

[NVIDIA#9643][fix] AutoDeploy: fix nano sharding config (NVIDIA#9668)

Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>

[NVIDIA#9147][feat] AutoDeploy: Draft Target Speculative Decoding (NVIDIA#9275)

Signed-off-by: Govind Ramnarayan <105831528+govind-ramnarayan@users.noreply.github.com>

[None][feat] Update Qwen3CodeToolParser to align tool-calling parameters (NVIDIA#9540)

Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com>

[TRTLLM-7181][infra] Generate test results when pytest timeout happens (NVIDIA#9396)

Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>

[TRTLLM-9522][fix] restore `trtllm-serve mm_embedding_serve` (NVIDIA#9669)

[TRTLLM-5093][infra] Write env variables to a file in the interactive debug session (NVIDIA#6792)

Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>

[None][fix] fix error when processing batches containing both text and mm data (NVIDIA#8381)

Signed-off-by: Nekofish-L <liuxiangyang@mail.ustc.edu.cn>

[TRTLLM-7073][feat] Support torch compile for PP for Llama and DeepSeekV3 (NVIDIA#7838)

Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com>
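
For context, the `torch.compile` entry point itself is a one-liner (real PyTorch API; the model below is a toy). Making compilation interact correctly with pipeline-parallel scheduling is the substance of the commit and is not shown here:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 16), nn.GELU(), nn.Linear(16, 16))
compiled = torch.compile(model)      # compiles lazily, on the first call
out = compiled(torch.randn(2, 16))   # later calls reuse the compiled graph
print(out.shape)                     # torch.Size([2, 16])
```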

[None][feat] Add weights initialization and context phase parser to layer-wise benchmarks (NVIDIA#9667)

Signed-off-by: Tailing Yuan <yuantailing@gmail.com>

[TRTLLM-8274][feat] Check if executor is shut down in /health entrypoint (NVIDIA#9057)

Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com>
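
The idea is easy to illustrate: once the executor is shut down, /health should stop reporting healthy so load balancers drain the replica. FastAPI is shown for illustration only; the handler and the `is_shutdown` field are assumptions, not the actual trtllm-serve implementation:

```python
from fastapi import FastAPI, Response

app = FastAPI()


class DummyExecutor:
    """Stand-in for the engine executor; only tracks shutdown state."""

    def __init__(self) -> None:
        self.is_shutdown = False


executor = DummyExecutor()


@app.get("/health")
def health(response: Response) -> dict:
    # Return 503 once the executor has been shut down so that traffic
    # stops being routed to this replica.
    if executor.is_shutdown:
        response.status_code = 503
        return {"status": "shutting down"}
    return {"status": "ok"}
```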

[NVIDIA#8733][feat] Add Llama4 MoE handling to AutoDeploy (NVIDIA#9556)

Signed-off-by: Tal Cherckez <127761168+tcherckez-nvidia@users.noreply.github.com>
Signed-off-by: tcherckez-nvidia <127761168+tcherckez-nvidia@users.noreply.github.com>
Co-authored-by: Neta Zmora <nzmora@nvidia.com>

[None][ci] unwaive tests (NVIDIA#9651)

Signed-off-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com>

[None][feat] Add NIXL-LIBFABRIC support (NVIDIA#9225)

Signed-off-by: Yoray Zack <62789610+zackyoray@users.noreply.github.com>
Signed-off-by: zackyoray <yorayz@nvidia.com>

[None][test] rename wide EP and disagg metric names in perf test (NVIDIA#9704)

Signed-off-by: Ruodi Lu <ruodil@users.noreply.github.com>
Co-authored-by: Ruodi Lu <ruodil@users.noreply.github.com>

[https://nvbugs/5467531][fix] Unwaive fused_moe all to all test with … (NVIDIA#9617)

Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com>

[None][fix] Recover TRTLLM MoE Perf for DEP (NVIDIA#9562)

Signed-off-by: Anthony Chang <27950904+rosenrodt@users.noreply.github.com>

[None][chore] Add failed cases into waives.txt (NVIDIA#9662)

Signed-off-by: Xin He (SW-GPU) <200704525+xinhe-nv@users.noreply.github.com>
Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com>
Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>
Co-authored-by: Yanchao Lu <yanchaol@nvidia.com>

[None][fix] Fix TLLM_SPEC_DECODE_FORCE_NUM_ACCEPTED_TOKENS for MTP/EAGLE (NVIDIA#9608)

Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com>
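
Only the environment variable name above comes from the commit; the parsing helper below and its default behavior are illustrative assumptions about how such a debug override is typically consumed:

```python
# Illustrative only: reading a debug override like the env var named above.
import os
from typing import Optional


def forced_num_accepted_tokens() -> Optional[int]:
    """Return the forced accepted-token count, or None when unset."""
    raw = os.environ.get("TLLM_SPEC_DECODE_FORCE_NUM_ACCEPTED_TOKENS")
    return int(raw) if raw is not None else None


num_accepted = forced_num_accepted_tokens()
if num_accepted is not None:
    print(f"forcing {num_accepted} accepted draft tokens per step")
```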

[None][infra] Add container notices and documentation (NVIDIA#9185)

Signed-off-by: Parker Drake <pdrake@nvidia.com>

[TRTLLM-5312][infra] Add triton trigger rules (NVIDIA#6440)

Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>

[None][doc] Add feature docs for helix parallelism (NVIDIA#9684)

Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>

[TRTLLM-9579][infra] Set mergeWaiveList stage UNSTABLE when there is any issue (NVIDIA#9692)

Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>

[None][doc] Added line about partial reuse (NVIDIA#7846)

Signed-off-by: thorjohnsen <41591019+thorjohnsen@users.noreply.github.com>

[TRTLLM-8920][feat] decouple disagg service from fastapi (NVIDIA#8714)

Signed-off-by: Lizhi Zhou <1432185+reasonsolo@users.noreply.github.com>

[https://nvbugs/5633340][fix] start disagg workers and servers on free ports (NVIDIA#9694)

Signed-off-by: Lizhi Zhou <1432185+reasonsolo@users.noreply.github.com>

[TRTLLM-9562][doc] Add Deployment Guide for Kimi K2 Thinking on TensorRT LLM - Blackwell (NVIDIA#9711)

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>

[NVIDIA#9602][feat] AutoDeploy: Support TRTLLM Sampler (NVIDIA#9641)

Signed-off-by: Govind Ramnarayan <105831528+govind-ramnarayan@users.noreply.github.com>
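
Samplers that compute softmax/log-probabilities are commonly run in float32, so half-precision model logits get upcast before sampling. A minimal sketch of that step follows; the function name is illustrative, not the sampler's actual interface:

```python
import torch


def sample_greedy(logits: torch.Tensor) -> torch.Tensor:
    """Greedy sampling with a defensive upcast of half-precision logits."""
    if logits.dtype in (torch.float16, torch.bfloat16):
        # Upcast before the softmax/log-prob math; cheap next to the forward
        # pass, but worth profiling at large batch * vocab sizes.
        logits = logits.float()
    return torch.softmax(logits, dim=-1).argmax(dim=-1)


tokens = sample_greedy(torch.randn(2, 8, 32000, dtype=torch.bfloat16))
print(tokens.shape)  # torch.Size([2, 8])
```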

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>

[None][tests] Unwaive EPLB tests (NVIDIA#9625)

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>

[https://nvbugs/5518713][test] Refactor core test lists by merging with llm_perf_cluster.yml (NVIDIA#9714)

Signed-off-by: yufeiwu <230315618+yufeiwu-nv@users.noreply.github.com>

[TRTLLM-7136][feat] Update load_weights method to include mapping parameter in checkpoint loaders (NVIDIA#9583)

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
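
A hedged sketch of what threading a mapping (parallel rank/layout) parameter through a loader's load_weights might look like; the `Mapping` dataclass and the naive dim-0 sharding below are illustrative, not the actual checkpoint-loader interface:

```python
# Illustrative only: a load_weights that shards per-rank using a mapping.
from dataclasses import dataclass
from typing import Dict

import torch


@dataclass
class Mapping:
    tp_rank: int = 0
    tp_size: int = 1


def load_weights(checkpoint: Dict[str, torch.Tensor], mapping: Mapping) -> Dict[str, torch.Tensor]:
    """Shard each weight along dim 0 for this rank (naive TP split)."""
    shards = {}
    for name, w in checkpoint.items():
        chunk = w.shape[0] // mapping.tp_size
        shards[name] = w[mapping.tp_rank * chunk:(mapping.tp_rank + 1) * chunk]
    return shards


ckpt = {"lm_head.weight": torch.randn(8, 4)}
print(load_weights(ckpt, Mapping(tp_rank=1, tp_size=2))["lm_head.weight"].shape)
```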

[None][refactor] Improve request processing function in sampler (NVIDIA#9671)

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

[https://nvbugs/5670672][fix] Fix flaky KV connector tests (NVIDIA#9676)

Signed-off-by: jthomson04 <jwillthomson19@gmail.com>

[None][infra] Update allowed list 20251204 (NVIDIA#9718)

Signed-off-by: Yuanjing Xue <197832395+yuanjingx87@users.noreply.github.com>

[None][feat] AutoDeploy: Perf optimization for Attention and rmsnorm (NVIDIA#9719)

Signed-off-by: Chenghao Zhang <211069071+nvchenghaoz@users.noreply.github.com>

[None][chore] Waive flakey disagg tests (NVIDIA#9749)

Signed-off-by: Mike Iovine <miovine@nvidia.com>

[https://nvbugs/5601682][fix] Fix cacheTransceiver hang (NVIDIA#9311)

Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com>
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Signed-off-by: Mike Iovine <miovine@nvidia.com>

[TRTLLM-9199][docs] KV Connector Docs (NVIDIA#9325)

Signed-off-by: jthomson04 <jwillthomson19@gmail.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Signed-off-by: Mike Iovine <miovine@nvidia.com>

[TRTLLM-9160][doc] add doc to llm_runtime.py (NVIDIA#9482)

Signed-off-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com>
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Signed-off-by: Mike Iovine <miovine@nvidia.com>

[None][doc] VDR 1.0 trtllm-serve doc enhancement (NVIDIA#9443)

Signed-off-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com>
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Signed-off-by: Mike Iovine <miovine@nvidia.com>

[TRTLLM-9086][doc] Clean up TODOs in documentation (NVIDIA#9292)

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Signed-off-by: Mike Iovine <miovine@nvidia.com>

[TRTLLM-9157][doc] Guided decoding doc improvement (NVIDIA#9359)

Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Signed-off-by: Mike Iovine <miovine@nvidia.com>

[None][infra] Updated Linux installation guide (NVIDIA#9485)

Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
Co-authored-by: Yanchao Lu <yanchaol@nvidia.com>
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Signed-off-by: Mike Iovine <miovine@nvidia.com>

[TRTLLM-9075][doc] refine the slurm examples (NVIDIA#9548)

Signed-off-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com>
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Signed-off-by: Mike Iovine <miovine@nvidia.com>

[TRTLLM-9093][doc] update hyper links in overview (NVIDIA#9568)

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Signed-off-by: Mike Iovine <miovine@nvidia.com>

[TRTLLM-9092][doc] link to modelopt checkpoints in quick start guide (NVIDIA#9571)

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Signed-off-by: Mike Iovine <miovine@nvidia.com>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>

[None][fix] Fix triton moe load_weight (NVIDIA#9649)

Signed-off-by: shuyix <219646547+shuyixiong@users.noreply.github.com>

[None][fix] Fix a bug where deepseek_fp8_block_scales in TRTLLMGEN-MoE used 2D x_sf instead of 1D (NVIDIA#9658)

Signed-off-by: xxi <xxi@nvidia.com>

[TRTLLM-9372][feat] Enable CuteDSL MoE with Large EP (NVIDIA#9592)

Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>

[TRTLLM-9522][chore] implement default `attach_multimodal_embeddings` (NVIDIA#9664)

Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>

[TRTLLM-9660][feat] Convert cuteDSL GEMM to opt-in feature (NVIDIA#9682)

Signed-off-by: Jonas Li <6110159+longlee0622@users.noreply.github.com>
Co-authored-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>

[None][fix] enable hmac in RPC (NVIDIA#9745)

Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>

[https://nvbugs/5703953][fix] Preserving ip:port for trtllm-serve before initializing llm (NVIDIA#9646)

Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com>

[None][infra] Waive failed cases for main branch on 12/07 (NVIDIA#9769)

Signed-off-by: qqiao <qqiao@nvidia.com>

[None][fix] Several minor fixes to CI setting (NVIDIA#9765)

Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>

[OMNIML-3036][doc] Re-branding TensorRT-Model-Optimizer as Nvidia Model-Optimizer (NVIDIA#9679)

Signed-off-by: Chenjie Luo <chenjiel@nvidia.com>

[None][feat] Enable NCCL_SYMMETRIC as default fallback for AllReduce (NVIDIA#9314)

Signed-off-by: Ludwig Schneider <lschneider@nvidia.com>

[TRTLLM-9000][feat] Add multi-node Perf Tests into CI (NVIDIA#8800)

Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com>

[None][test] add NTP tolerance in time metrics verification (NVIDIA#9741)

Signed-off-by: zhengd-nv <200704041+zhengd-nv@users.noreply.github.com>

[TRTLLM-9603][feat] Enable ConfigurableMoE test in the CI (NVIDIA#9645)

[https://nvbugs/5422621][test] Add GB 200 WIDEEP test case for RCCA 5422621 (NVIDIA#9506)

Signed-off-by: FredricZ-2007 <226039983+fredricz-20070104@users.noreply.github.com>

[None][fix] Fix two tuning cache miss issues. (NVIDIA#9743)

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>

[TRTLLM-9706][doc] Update wide EP documents (NVIDIA#9724)

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>

[https://nvbugs/5666804][test] only adding sampler config for limited models (NVIDIA#9512)

Signed-off-by: Ruodi Lu <ruodil@users.noreply.github.com>
Co-authored-by: Ruodi Lu <ruodil@users.noreply.github.com>
Co-authored-by: yufeiwu-nv <230315618+yufeiwu-nv@users.noreply.github.com>
Co-authored-by: Larry Xu <197874197+LarryXFly@users.noreply.github.com>

[None][infra] Waive failed cases for main on 12/08 (NVIDIA#9773)

Signed-off-by: qqiao <qqiao@nvidia.com>

[None][chore] Move the rocketkv e2e test to post-merge (NVIDIA#9768)

Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>

[None][chore] Enable tvm_ffi for cute dsl nvfp4_gemm to reduce host overhead. (NVIDIA#9690)

Signed-off-by: Mindy Li <11663212+limin2021@users.noreply.github.com>

[TRTLLM-9431][perf] Enable multistream for Linear Attention in Qwen3-… (NVIDIA#9696)

Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com>

[None][chore] Remove closed bugs (NVIDIA#9770)

Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com>

[None][infra] update mooncake in docker images (NVIDIA#9584)

Signed-off-by: zhengd-nv <200704041+zhengd-nv@users.noreply.github.com>
Signed-off-by: Zheng Duan <200704041+zhengd-nv@users.noreply.github.com>

[None][test] Add Kimi k2 WIDEEP perf and accuracy cases (NVIDIA#9686)

Signed-off-by: FredricZ-2007 <226039983+fredricz-20070104@users.noreply.github.com>
Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
Co-authored-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>

[https://nvbugs/5527655][test] Add test case for RCCA 5527655 (NVIDIA#9511)

Signed-off-by: FredricZ-2007 <226039983+fredricz-20070104@users.noreply.github.com>

[https://nvbugs/5649010][fix] fix test_auto_scaling.py::test_worker_restart timeout (NVIDIA#9775)

Signed-off-by: Lizhi Zhou <1432185+reasonsolo@users.noreply.github.com>

[None][fix] Switch AutoDeploy's default allreduce strategy to NCCL (NVIDIA#9666)

Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com>
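
A conceptual sketch of what an allreduce-strategy selection looks like; the enum and dispatch below are illustrative stand-ins, not TensorRT-LLM's AllReduceStrategy or AutoDeploy's actual config path:

```python
# Conceptual sketch: NCCL as the robust default allreduce strategy.
import os
from enum import Enum

import torch
import torch.distributed as dist


class AllReduceStrategy(Enum):
    NCCL = "nccl"      # plain library collective: robust across topologies
    CUSTOM = "custom"  # fused kernels: faster in some regimes, more fragile


def all_reduce_sum(t: torch.Tensor, strategy: AllReduceStrategy) -> torch.Tensor:
    if strategy is AllReduceStrategy.NCCL:
        dist.all_reduce(t, op=dist.ReduceOp.SUM)  # in-place sum across ranks
        return t
    raise NotImplementedError("custom kernels omitted from this sketch")


if __name__ == "__main__":
    # Launch with torchrun so RANK/WORLD_SIZE/LOCAL_RANK are populated.
    dist.init_process_group(backend="nccl")
    x = torch.ones(4, device=f"cuda:{os.environ['LOCAL_RANK']}")
    print(all_reduce_sum(x, AllReduceStrategy.NCCL))
```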

[TRTLLM-9506][fix] Fix AR for DeepSeek-R1 2 model path (NVIDIA#9661)

Signed-off-by: qgai <qgai@nvidia.com>

ray + update_weights works

trtllm works in async env

trtllm works in sync and async env

ray + update_weights works

rebase onto the updated verl

server mode

still cherry-picking

still cherry-picking

still cherry-picking

integrated HTTP interface

hang at RayExecutor worker creation via ray.remote

clean code

use tensorrt_llm.rlhf_utils

Signed-off-by: Liwei Ma <liweim@nvidia.com>

placement, asyncllm, and basic tests
Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>

connect sleep and wakeup; Add support to pass None to update_weights
Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>

Batching ctx for IFB scheduler

Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com>

accuracy WAR for TP>1: always use AllReduceStrategy.NCCL, refactored
Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>

fix e2e integration

Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>

update asyncllm, other nits
Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>

fix init setup

Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>

Fix TRTLLMSampler logprobs perf

Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com>

fix and cleanup
Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>

fix server

Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>

Revert "Batching ctx for IFB scheduler"

This reverts commit b51aac0

Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com>

update & address comments

Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>
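
The commits above wire an inference engine into an RLHF trainer loop: generation alternates with training, sleep()/wakeup() release and reclaim GPU memory, and update_weights() pushes fresh weights. The sketch below shows the hypothetical shape of that loop; the `Engine` protocol and method names are assumptions, not the actual TensorRT-LLM API:

```python
# Hypothetical RLHF integration loop; Engine is an illustrative interface.
from typing import Dict, Optional, Protocol

import torch


class Engine(Protocol):
    def generate(self, prompts: list[str]) -> list[str]: ...
    def update_weights(self, state_dict: Optional[Dict[str, torch.Tensor]]) -> None: ...
    def sleep(self) -> None: ...
    def wakeup(self) -> None: ...


def rlhf_step(engine: Engine, trainer, prompts: list[str]) -> None:
    rollouts = engine.generate(prompts)  # inference phase
    engine.sleep()                       # free KV cache/weights for training
    trainer.step(rollouts)               # training phase owns the GPU now
    engine.wakeup()
    # Passing None (per the commit above) can signal "weights already in
    # place", e.g. when they were updated in-place via shared handles.
    engine.update_weights(trainer.exported_state_dict())
```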