[TRTC-1943][feat] Env vars override support in LLM API #9104
Conversation
📝 Walkthrough

This PR adds environment-variable override support (`env_overrides`) to the LLM API.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant CLI
    participant LLM as BaseLLM.__init__
    participant Worker as worker_main
    participant Env as os.environ
    CLI->>LLM: Pass env_overrides in kwargs
    LLM->>LLM: Process env_overrides dict
    LLM->>Env: Update os.environ with overrides
    LLM->>LLM: Log old→new values
    Note over Worker: Spawned MPI Process
    Worker->>Worker: Check llm_args.env_overrides
    Worker->>Env: Apply overrides to process env
    Worker->>Worker: Continue initialization
```
```mermaid
sequenceDiagram
    participant Config as Config File
    participant Serve as trtllm-serve
    participant Env as os.environ
    participant PDL as get_env_enable_pdl()
    participant Flashinfer as flashinfer ops
    Config->>Serve: Load env_overrides (TRTLLM_ENABLE_PDL)
    Serve->>Env: Apply env overrides
    Flashinfer->>PDL: Query PDL state at runtime
    PDL->>PDL: Read env var dynamically
    PDL->>PDL: Log "PDL enabled" once per state change
    PDL->>Flashinfer: Return boolean
```
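The dynamic-read behavior sketched in the diagram can be illustrated with a toy version (a hypothetical sketch; the PR's actual `get_env_enable_pdl` uses the project logger and may differ in detail):

```python
import os

_last_logged_state = None  # last PDL state we logged, so we log once per change


def get_env_enable_pdl() -> bool:
    """Re-read TRTLLM_ENABLE_PDL on every call; log only on state changes."""
    global _last_logged_state
    enabled = os.environ.get("TRTLLM_ENABLE_PDL", "0") == "1"
    if enabled != _last_logged_state:
        print(f"PDL {'enabled' if enabled else 'disabled'}")
        _last_logged_state = enabled
    return enabled
```

Because the variable is read at call time rather than at import time, overrides applied after process start (as in the worker path above) still take effect.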
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (9)
tensorrt_llm/llmapi/llm_utils.py (1)

1-1: Add NVIDIA Apache-2.0 header (2025) at file top. Coding guidelines require the NVIDIA Apache-2.0 copyright header on all source files.

Apply this header:

```diff
+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#     http://www.apache.org/licenses/LICENSE-2.0
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
```

As per coding guidelines.
tests/integration/defs/examples/serve/test_serve.py (1)

1-1: Add NVIDIA Apache-2.0 header (2025) at file top.

```diff
+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# ...
```

As per coding guidelines.
tensorrt_llm/llmapi/llm_args.py (2)

2996-3003: Ensure env_overrides is removed when loading YAML by path. Bench/serve paths that pass a file path will reload YAML and reintroduce env_overrides into llm_args, causing StrictBaseModel validation errors. Apply env overrides and strip the key centrally.

Apply:

```diff
 def update_llm_args_with_extra_options(llm_args: Dict,
                                        extra_llm_api_options: str) -> Dict:
     if extra_llm_api_options is not None:
         with open(extra_llm_api_options, 'r') as f:
             llm_args_dict = yaml.safe_load(f)
+        # Apply env overrides and strip 'env_overrides' from the dict to avoid
+        # unknown-field validation failures downstream.
+        apply_env_overrides(llm_args_dict, extra_llm_api_options)
         llm_args = update_llm_args_with_extra_dict(llm_args, llm_args_dict,
                                                    extra_llm_api_options)
     return llm_args
```

This makes the feature work with --config/--extra_llm_api_options uniformly.
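For reference, the precedence rule this suggestion relies on reduces to a few lines. This is a simplified standalone sketch, not the PR's exact implementation: config-file overrides apply only when the shell has not already set the variable, and the key is popped so strict validation never sees it.

```python
import os
from typing import Dict, Optional


def apply_env_overrides_sketch(config_dict: Dict,
                               config_path: Optional[str] = None) -> None:
    """Pop 'env_overrides' from the parsed config and apply it to os.environ.

    Shell environment variables win: keys already present in os.environ are
    left untouched. Popping the key keeps StrictBaseModel-style validation
    from ever seeing an unknown field.
    """
    env_overrides = config_dict.pop("env_overrides", None)
    if not isinstance(env_overrides, dict):
        return
    for key, value in env_overrides.items():
        if key not in os.environ:
            os.environ[key] = str(value)


os.environ.pop("DEMO_VAR", None)
os.environ["SHELL_VAR"] = "shell"
cfg = {"env_overrides": {"DEMO_VAR": 1, "SHELL_VAR": "config"}, "model": "m"}
apply_env_overrides_sketch(cfg)
```

After the call, `DEMO_VAR` is set from the config while the shell's `SHELL_VAR` is preserved, and `cfg` contains only fields the downstream model knows about.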
1-1: Add NVIDIA Apache-2.0 header (2025) at file top.

```diff
+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# ...
```

As per coding guidelines.
tensorrt_llm/bench/benchmark/low_latency.py (1)

1-1: Add NVIDIA Apache-2.0 header (2025) at file top.

```diff
+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# ...
```

As per coding guidelines.

tensorrt_llm/bench/benchmark/throughput.py (1)

1-1: Add NVIDIA Apache-2.0 header (2025) at file top.

```diff
+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# ...
```

As per coding guidelines.

tests/unittest/llmapi/test_llm_args.py (1)

1-1: Add NVIDIA Apache-2.0 header (2025) at file top.

```diff
+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# ...
```

As per coding guidelines.
tensorrt_llm/commands/serve.py (2)

392-413: Fix: get_llm_args does not accept moe_cluster_parallel_size (runtime TypeError). The call passes an unknown kwarg; add the parameter to get_llm_args and wire it into llm_args.

Apply:

```diff
 @@ def get_llm_args(
-    moe_expert_parallel_size: Optional[int] = None,
+    moe_expert_parallel_size: Optional[int] = None,
+    moe_cluster_parallel_size: Optional[int] = None,
 @@
-    llm_args = {
+    llm_args = {
         "model": model,
 @@
-        "moe_expert_parallel_size": moe_expert_parallel_size,
+        "moe_expert_parallel_size": moe_expert_parallel_size,
+        "moe_cluster_parallel_size": moe_cluster_parallel_size,
```

This aligns with BaseLlmArgs.moe_cluster_parallel_size.
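The underlying failure is plain Python: a keyword argument not declared in the signature raises TypeError at call time, before the function body ever runs. A minimal illustration with stand-in names:

```python
def get_llm_args_sketch(model, moe_expert_parallel_size=None):
    # Stand-in for the real signature before the fix: no
    # moe_cluster_parallel_size parameter is declared.
    return {
        "model": model,
        "moe_expert_parallel_size": moe_expert_parallel_size,
    }


raised = False
try:
    # Passing an undeclared keyword fails immediately with TypeError.
    get_llm_args_sketch("llama", moe_cluster_parallel_size=2)
except TypeError:
    raised = True
```

This is why the bug only surfaces at runtime on the code path that actually forwards the extra kwarg.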
1-1: Add NVIDIA Apache-2.0 header (2025) at file top.

```diff
+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# ...
```

As per coding guidelines.
🧹 Nitpick comments (3)
tests/integration/defs/examples/serve/test_serve.py (2)

133-169: Good coverage for the --config alias. Consider binding to localhost in tests. Binding to all interfaces is unnecessary in CI and triggers S104. Prefer localhost for a reduced attack surface and fewer flakes.

Apply:

```diff
-        "--host",
-        "0.0.0.0",
+        "--host",
+        "127.0.0.1",
```

Also adjust the health check to hit localhost:

```diff
-    url = f"http://0.0.0.0:{http_port}/health"
+    url = f"http://127.0.0.1:{http_port}/health"
```

Deduplicate the common serve-launch logic between the two tests via a small helper to keep them in sync. Based on learnings.
11-19: Prefer localhost in the readiness probe. Using 0.0.0.0 can be unreliable on some stacks; localhost is sufficient for integration tests.

```diff
-    url = f"http://0.0.0.0:{http_port}/health"
+    url = f"http://127.0.0.1:{http_port}/health"
```

tests/unittest/llmapi/test_llm_args.py (1)
783-949: Solid unit coverage for env overrides. Cases cover precedence, types, and invalid inputs well.

Use pytest's monkeypatch for env var isolation:

```python
def test_apply_env_overrides_basic(monkeypatch):
    monkeypatch.delenv("TEST_ENV_VAR_BASIC", raising=False)
    ...
```

This avoids cross-test leakage.
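Outside pytest, the standard library offers the same isolation: `unittest.mock.patch.dict` snapshots `os.environ` on entry and restores it on exit, including removing keys added inside the block.

```python
import os
from unittest import mock

os.environ["TEST_ENV_VAR_BASIC"] = "outer"
# patch.dict with no replacement values simply snapshots the dict on entry
# and restores it on exit, undoing any mutation made inside the block.
with mock.patch.dict(os.environ):
    os.environ["TEST_ENV_VAR_BASIC"] = "inner"
    os.environ["TEST_ENV_VAR_EXTRA"] = "temp"
    assert os.environ["TEST_ENV_VAR_BASIC"] == "inner"
assert os.environ["TEST_ENV_VAR_BASIC"] == "outer"
assert "TEST_ENV_VAR_EXTRA" not in os.environ
```

Either mechanism keeps one test's environment mutations from leaking into the next.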
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- tensorrt_llm/bench/benchmark/low_latency.py (3 hunks)
- tensorrt_llm/bench/benchmark/throughput.py (4 hunks)
- tensorrt_llm/commands/serve.py (5 hunks)
- tensorrt_llm/llmapi/llm_args.py (1 hunks)
- tensorrt_llm/llmapi/llm_utils.py (2 hunks)
- tests/integration/defs/examples/serve/test_serve.py (1 hunks)
- tests/unittest/llmapi/test_llm_args.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Use only spaces, no tabs; indent with 4 spaces.
Files:
- tensorrt_llm/llmapi/llm_utils.py
- tensorrt_llm/bench/benchmark/low_latency.py
- tensorrt_llm/llmapi/llm_args.py
- tests/integration/defs/examples/serve/test_serve.py
- tests/unittest/llmapi/test_llm_args.py
- tensorrt_llm/bench/benchmark/throughput.py
- tensorrt_llm/commands/serve.py
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.
Files:
- tensorrt_llm/llmapi/llm_utils.py
- tensorrt_llm/bench/benchmark/low_latency.py
- tensorrt_llm/llmapi/llm_args.py
- tests/integration/defs/examples/serve/test_serve.py
- tests/unittest/llmapi/test_llm_args.py
- tensorrt_llm/bench/benchmark/throughput.py
- tensorrt_llm/commands/serve.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).
Files:
- tensorrt_llm/llmapi/llm_utils.py
- tensorrt_llm/bench/benchmark/low_latency.py
- tensorrt_llm/llmapi/llm_args.py
- tests/integration/defs/examples/serve/test_serve.py
- tests/unittest/llmapi/test_llm_args.py
- tensorrt_llm/bench/benchmark/throughput.py
- tensorrt_llm/commands/serve.py
🧠 Learnings (4)
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which can contain default `cuda_graph_config` values, so `llm_args` may already have this config before the extra options processing.
Applied to files:
- tensorrt_llm/bench/benchmark/low_latency.py
- tensorrt_llm/bench/benchmark/throughput.py
- tensorrt_llm/commands/serve.py
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM's bench configuration, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which is a Dict[str, Any] that can contain default values including `cuda_graph_config`, making the fallback `llm_args["cuda_graph_config"]` safe to use.
Applied to files:
- tensorrt_llm/bench/benchmark/low_latency.py
- tensorrt_llm/bench/benchmark/throughput.py
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Applied to files:
- tensorrt_llm/bench/benchmark/low_latency.py
- tests/integration/defs/examples/serve/test_serve.py
- tensorrt_llm/bench/benchmark/throughput.py
📚 Learning: 2025-08-14T15:38:01.771Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: cpp/tensorrt_llm/pybind/thop/bindings.cpp:55-57
Timestamp: 2025-08-14T15:38:01.771Z
Learning: In TensorRT-LLM Python bindings, tensor parameter collections like mla_tensor_params and spec_decoding_tensor_params are kept as required parameters without defaults to maintain API consistency, even when it might affect backward compatibility.
Applied to files:
tensorrt_llm/bench/benchmark/throughput.py
🧬 Code graph analysis (7)
tensorrt_llm/llmapi/llm_utils.py (1)
tensorrt_llm/llmapi/llm_args.py (3)
- _ModelWrapper (1473-1509)
- _ParallelConfig (322-385)
- apply_env_overrides (2907-2940)
tensorrt_llm/bench/benchmark/low_latency.py (1)
tensorrt_llm/llmapi/llm_args.py (1)
apply_env_overrides(2907-2940)
tensorrt_llm/llmapi/llm_args.py (1)
tensorrt_llm/logger.py (2)
- warning (132-133)
- debug (144-145)
tests/integration/defs/examples/serve/test_serve.py (2)
tests/integration/defs/conftest.py (2)
- serve_test_root (2536-2540)
- llm_models_root (80-94)

tests/integration/defs/trt_test_alternative.py (2)

- print_info (300-306)
- popen (199-218)
tests/unittest/llmapi/test_llm_args.py (1)
tensorrt_llm/llmapi/llm_args.py (1)
apply_env_overrides(2907-2940)
tensorrt_llm/bench/benchmark/throughput.py (1)
tensorrt_llm/llmapi/llm_args.py (1)
apply_env_overrides(2907-2940)
tensorrt_llm/commands/serve.py (1)
tensorrt_llm/llmapi/llm_args.py (2)
- apply_env_overrides (2907-2940)
- update_llm_args_with_extra_dict (2943-2993)
🪛 Ruff (0.14.4)
tensorrt_llm/bench/benchmark/low_latency.py
208-210: Avoid specifying long messages outside the exception class
(TRY003)
tests/integration/defs/examples/serve/test_serve.py
154-154: Possible binding to all interfaces
(S104)
tensorrt_llm/bench/benchmark/throughput.py
311-313: Avoid specifying long messages outside the exception class
(TRY003)
tensorrt_llm/commands/serve.py
385-387: Avoid specifying long messages outside the exception class
(TRY003)
🔇 Additional comments (3)
tensorrt_llm/llmapi/llm_utils.py (1)
37-39: Exporting apply_env_overrides via llm_utils is correct. The import and `__all__` exposure align public API usage across modules. No other concerns here.
tensorrt_llm/bench/benchmark/throughput.py (1)
307-324: Verification inspected throughput.py lines 307-324 and the update_llm_args_with_extra_options implementation (to check whether it includes apply_env_overrides); the concern was that apply_env_overrides must run early to prevent unknown-field issues when the YAML is reloaded downstream.

Mutual exclusivity validation and early environment overrides are correctly implemented. The code shows proper practices for --config and --extra_llm_api_options handling: validating mutual exclusivity, loading YAML early, applying environment overrides immediately, and storing the config file path. Centralizing apply_env_overrides within update_llm_args_with_extra_options is a sound architectural pattern already established in the codebase (the function is called consistently across eval.py, configuration.py, and the benchmarking modules), and throughput.py follows it correctly.

tensorrt_llm/commands/serve.py (1)
414-421: Env overrides handling in serve.py lines 414-421 is correct. Verification confirms the implementation properly strips env_overrides before merging. The apply_env_overrides function removes the 'env_overrides' key from the dictionary, ensuring it cannot reach the merged llm_args. The sequence (load YAML, apply env_overrides, then update llm_args) prevents any bypass of the central strip.
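The verified sequence (load YAML, apply and strip env_overrides, then merge) can be condensed into a toy flow. This is a simplified sketch with a plain dict standing in for the parsed YAML and hypothetical config keys; the real logic lives in llm_args.py and serve.py.

```python
import os


def load_and_merge(llm_args: dict, config_dict: dict) -> dict:
    """Toy version of the verified flow: apply overrides first, then merge."""
    env_overrides = config_dict.pop("env_overrides", None) or {}
    for key, value in env_overrides.items():
        os.environ.setdefault(key, str(value))  # shell values still win
    merged = dict(llm_args)
    merged.update(config_dict)  # 'env_overrides' can no longer leak in
    return merged


os.environ.pop("TRTLLM_ENABLE_PDL", None)
args = load_and_merge(
    {"model": "m"},
    {"env_overrides": {"TRTLLM_ENABLE_PDL": "1"}, "kv_cache_dtype": "fp8"},
)
```

Because the key is popped before the merge, strict argument validation downstream never encounters it.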
/bot run
PR_Github #24419 [ run ] triggered by Bot. Commit: |
PR_Github #24419 [ run ] completed with state |
@coderabbitai help
Chat

There are 3 ways to chat with CodeRabbit:
CodeRabbit commands
Other keywords and placeholders
Status, support, documentation and community
@coderabbitai full review
✅ Actions performed

Full review triggered.
Actionable comments posted: 0
♻️ Duplicate comments (1)
tensorrt_llm/llmapi/llm_args.py (1)
2952-2972: Critical: Redact secrets in environment variable logs. Logging full environment variable values can expose sensitive credentials. The past review comment correctly identifies this security risk.

Apply the redaction logic suggested in the past review:

```diff
 def apply_env_overrides(config_dict: Dict,
                         config_path: Optional[str] = None) -> None:
     """Apply environment variable overrides from config file.

     Shell environment variables take precedence over config file values.
     """
     env_overrides = config_dict.pop('env_overrides', None)
     if not env_overrides:
         return
     if not isinstance(env_overrides, dict):
         logger.warning(
             f"env_overrides must be dict, got {type(env_overrides).__name__}")
         return
     context = f" from {config_path}" if config_path else ""
+
+    def _is_sensitive(k: str) -> bool:
+        k_upper = k.upper()
+        SENSITIVE_MARKERS = ("SECRET", "TOKEN", "KEY", "PASS", "PASSWORD",
+                             "CREDENTIAL", "AUTH", "API")
+        return any(m in k_upper for m in SENSITIVE_MARKERS)
+
     for key, value in env_overrides.items():
+        str_value = str(value)
+        redacted = "<redacted>" if _is_sensitive(key) else str_value
         if key not in os.environ:
-            os.environ[key] = str(value)
-            logger.info(f"Set {key}={value}{context}")
+            os.environ[key] = str_value
+            logger.info(f"Set {key}={redacted}{context}")
```

This keeps behavior while preventing secrets leakage. As per coding guidelines.
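The proposed marker check is easy to exercise on its own. A sketch using the same marker list as the suggestion; note that broad markers such as KEY and API deliberately over-redact, which is the safe direction for logs:

```python
SENSITIVE_MARKERS = ("SECRET", "TOKEN", "KEY", "PASS", "PASSWORD",
                     "CREDENTIAL", "AUTH", "API")


def redact(key: str, value: str) -> str:
    """Return the loggable form of value: '<redacted>' for sensitive keys."""
    key_upper = key.upper()
    if any(marker in key_upper for marker in SENSITIVE_MARKERS):
        return "<redacted>"
    return value
```

Matching is substring-based on the upper-cased key, so `HF_TOKEN`, `openai_api_key`, and `DB_PASSWORD` are all caught regardless of case, while ordinary tuning flags pass through unchanged.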
🧹 Nitpick comments (3)
tensorrt_llm/commands/click_utils.py (1)
65-70: Address alias case-insensitivity handling.

When ChoiceWithAlias is instantiated with case_sensitive=False, the alias lookup remains case-sensitive because we check membership before Click normalizes the value. Inputs like "TP" will fail even if "tp" is declared as an alias. Please normalize the alias keys (or the lookup value) the same way Click does before delegating to super().convert.

```diff
 def __init__(
     self, choices: Sequence[str], aliases: Mapping[str, str], case_sensitive: bool = True
 ) -> None:
     super().__init__(choices, case_sensitive)
-    self.aliases = aliases
+    if case_sensitive:
+        self.aliases = dict(aliases)
+    else:
+        self.aliases = {str(key).lower(): value for key, value in aliases.items()}

 def to_info_dict(self) -> Dict[str, Any]:
     info_dict = super().to_info_dict()
@@
 def convert(
     self, value: Any, param: Optional["click.Parameter"], ctx: Optional["click.Context"]
 ) -> Any:
-    if value in self.aliases:
-        value = self.aliases[value]
+    lookup = value
+    if isinstance(value, str) and not self.case_sensitive:
+        lookup = value.lower()
+    if lookup in self.aliases:
+        value = self.aliases[lookup]
     return super().convert(value, param, ctx)
```

tensorrt_llm/bench/benchmark/throughput.py (1)
267-270: LGTM! Identical config extraction pattern. The config extraction logic is identical to low_latency.py (lines 166-169), ensuring consistent behavior across benchmark commands. Consider extracting this pattern into a shared utility if more commands need it.

tensorrt_llm/commands/serve.py (1)
448-453: Consider unifying serve_encoder with the new config flow.

While the manual YAML loading followed by apply_env_overrides is correct, serve_encoder uses a different pattern than serve (which delegates to update_llm_args_with_extra_options, handling both internally). This works but creates maintenance asymmetry.

Consider migrating serve_encoder to use the same decorator-based config_options and update_llm_args_with_extra_options approach in a follow-up refactor, for consistency across all serve commands.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
- tensorrt_llm/bench/benchmark/low_latency.py (3 hunks)
- tensorrt_llm/bench/benchmark/throughput.py (3 hunks)
- tensorrt_llm/bench/benchmark/utils/general.py (2 hunks)
- tensorrt_llm/commands/click_utils.py (1 hunks)
- tensorrt_llm/commands/eval.py (5 hunks)
- tensorrt_llm/commands/serve.py (12 hunks)
- tensorrt_llm/llmapi/disagg_utils.py (3 hunks)
- tensorrt_llm/llmapi/llm_args.py (2 hunks)
- tensorrt_llm/llmapi/llm_utils.py (2 hunks)
- tests/integration/defs/examples/serve/test_serve.py (2 hunks)
- tests/integration/test_lists/qa/llm_function_core.txt (1 hunks)
- tests/unittest/bench/test_env_overrides.py (1 hunks)
- tests/unittest/commands/__init__.py (1 hunks)
- tests/unittest/commands/test_click_utils.py (1 hunks)
- tests/unittest/llmapi/test_llm_args.py (1 hunks)
🧰 Additional context used
🧠 Learnings (14)
📚 Learning: 2025-08-21T02:39:12.009Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 7104
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:1475-1480
Timestamp: 2025-08-21T02:39:12.009Z
Learning: The min latency mode functionality in TensorRT-LLM MOE kernels (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu) is deprecated and no longer being maintained/updated, as confirmed by djns99. Bug reports and optimization suggestions for the computeStridesTmaWarpSpecializedLowLatencyKernel and related min latency code paths should be deprioritized.
Applied to files:
tensorrt_llm/bench/benchmark/low_latency.py
📚 Learning: 2025-08-14T23:23:27.449Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 6915
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:4010-4012
Timestamp: 2025-08-14T23:23:27.449Z
Learning: For MOE (Mixture of Experts) code reviews in TensorRT-LLM, avoid repeatedly suggesting finalize fusion validation checks and safety assertions. The user djns99 has indicated these suggestions are repetitive and unwanted across multiple MOE-related changes.
Applied to files:
tensorrt_llm/bench/benchmark/low_latency.py
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Applied to files:
- tensorrt_llm/bench/benchmark/low_latency.py
- tensorrt_llm/commands/serve.py
- tests/unittest/commands/test_click_utils.py
- tests/integration/defs/examples/serve/test_serve.py
- tensorrt_llm/bench/benchmark/throughput.py
- tests/integration/test_lists/qa/llm_function_core.txt
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM's bench configuration, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which is a Dict[str, Any] that can contain default values including `cuda_graph_config`, making the fallback `llm_args["cuda_graph_config"]` safe to use.
Applied to files:
- tensorrt_llm/bench/benchmark/low_latency.py
- tensorrt_llm/bench/benchmark/throughput.py
- tensorrt_llm/commands/eval.py
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which can contain default `cuda_graph_config` values, so `llm_args` may already have this config before the extra options processing.
Applied to files:
- tensorrt_llm/bench/benchmark/low_latency.py
- tensorrt_llm/commands/serve.py
- tensorrt_llm/llmapi/llm_args.py
- tensorrt_llm/commands/eval.py
📚 Learning: 2025-08-14T15:38:01.771Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: cpp/tensorrt_llm/pybind/thop/bindings.cpp:55-57
Timestamp: 2025-08-14T15:38:01.771Z
Learning: In TensorRT-LLM Python bindings, tensor parameter collections like mla_tensor_params and spec_decoding_tensor_params are kept as required parameters without defaults to maintain API consistency, even when it might affect backward compatibility.
Applied to files:
tensorrt_llm/bench/benchmark/utils/general.py
📚 Learning: 2025-09-23T15:13:48.819Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/multimem.h:20-30
Timestamp: 2025-09-23T15:13:48.819Z
Learning: TRT-LLM targets modern CUDA toolkits that support FP8 datatypes, so cuda_fp8.h can be included unconditionally without version guards in TRT-LLM code.
Applied to files:
tests/unittest/commands/__init__.py
📚 Learning: 2025-10-22T06:53:47.017Z
Learnt from: xinhe-nv
Repo: NVIDIA/TensorRT-LLM PR: 8534
File: scripts/format_test_list.py:1-6
Timestamp: 2025-10-22T06:53:47.017Z
Learning: The file `scripts/format_test_list.py` in the TensorRT-LLM repository does not require the NVIDIA Apache-2.0 copyright header.
Applied to files:
tests/unittest/commands/__init__.py
📚 Learning: 2025-08-01T15:14:45.673Z
Learnt from: yibinl-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 6506
File: examples/models/core/mixtral/requirements.txt:3-3
Timestamp: 2025-08-01T15:14:45.673Z
Learning: In TensorRT-LLM, examples directory can have different dependency versions than the root requirements.txt file. Version conflicts between root and examples dependencies are acceptable because examples are designed to be standalone and self-contained.
Applied to files:
tensorrt_llm/commands/serve.py
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.
Applied to files:
- tensorrt_llm/commands/serve.py
- tests/integration/test_lists/qa/llm_function_core.txt
📚 Learning: 2025-08-29T14:07:45.863Z
Learnt from: EmmaQiaoCh
Repo: NVIDIA/TensorRT-LLM PR: 7370
File: tests/unittest/trt/model_api/test_model_quantization.py:24-27
Timestamp: 2025-08-29T14:07:45.863Z
Learning: In TensorRT-LLM's CI infrastructure, pytest skip markers (pytest.mark.skip) are properly honored even when test files have __main__ blocks that call test functions directly. The testing system correctly skips tests without requiring modifications to the __main__ block execution pattern.
Applied to files:
tests/integration/defs/examples/serve/test_serve.py
📚 Learning: 2025-09-17T02:48:52.732Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 7781
File: tests/integration/test_lists/waives.txt:313-313
Timestamp: 2025-09-17T02:48:52.732Z
Learning: In TensorRT-LLM, `tests/integration/test_lists/waives.txt` is specifically for waiving/skipping tests, while other test list files like those in `test-db/` and `qa/` directories are for different test execution contexts (pre-merge, post-merge, QA tests). The same test appearing in both waives.txt and execution list files is intentional - the test is part of test suites but will be skipped due to the waiver.
Applied to files:
tests/integration/test_lists/qa/llm_function_core.txt
📚 Learning: 2025-08-06T13:58:07.506Z
Learnt from: galagam
Repo: NVIDIA/TensorRT-LLM PR: 6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.
Applied to files:
tests/integration/test_lists/qa/llm_function_core.txt
📚 Learning: 2025-08-26T09:49:04.956Z
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7192
File: tests/integration/test_lists/test-db/l0_dgx_b200.yml:56-72
Timestamp: 2025-08-26T09:49:04.956Z
Learning: In TensorRT-LLM test configuration files, the test scheduling system handles wildcard matching with special rules that prevent duplicate test execution even when the same tests appear in multiple yaml files with overlapping GPU wildcards (e.g., "*b200*" and "*gb200*").
Applied to files:
tests/unittest/bench/test_env_overrides.py
🪛 Ruff (0.14.4)
tests/unittest/commands/test_click_utils.py
174-174: Unused method argument: alias_flag
(ARG002)
174-174: Unused method argument: default_value
(ARG002)
184-184: Unused method argument: canonical_flag
(ARG002)
184-184: Unused method argument: default_value
(ARG002)
194-194: Unused method argument: canonical_flag
(ARG002)
194-194: Unused method argument: alias_flag
(ARG002)
194-194: Unused method argument: test_value
(ARG002)
222-222: Unused function argument: gpus_per_node
(ARG001)
tests/integration/defs/examples/serve/test_serve.py
108-108: Possible binding to all interfaces
(S104)
tensorrt_llm/commands/click_utils.py
43-43: Avoid specifying long messages outside the exception class
(TRY003)
🔇 Additional comments (20)
tests/unittest/commands/__init__.py (1)
1-14: Remove license header from test package initialization file to match the codebase pattern. The file is inconsistent with other test package __init__.py files in the codebase, which are all empty. Check tests/unittest/llmapi/__init__.py, tests/unittest/others/__init__.py, and tests/unittest/scaffolding/__init__.py; they contain no content. Test package initialization files should remain empty, following the established pattern.

Likely an incorrect or invalid review comment.
tensorrt_llm/llmapi/disagg_utils.py (2)
109-109: LGTM! Env overrides applied at the right time. The call to apply_env_overrides after YAML loading and before dataclass construction ensures config-driven environment variables are set correctly while respecting shell precedence.

332-332: LGTM! Consistent env override handling. The pattern matches the disagg config parsing, ensuring consistent behavior across both configuration paths.
tensorrt_llm/llmapi/llm_args.py (2)
2979-2980: LGTM! Centralized stripping prevents env_overrides leakage. The pop('env_overrides', None) ensures the key is removed before merging into llm_args, addressing the concern raised in past reviews about double-processing when YAML files are reloaded.

3035-3036: LGTM! Env overrides applied before merge. The call to apply_env_overrides ensures config-driven environment variables are set before merging options, maintaining proper precedence.

tests/unittest/bench/test_env_overrides.py (2)
30-45: LGTM! Robust test fixture with proper cleanup. The context manager correctly handles both file cleanup and environment variable restoration, preventing test pollution.

48-74: LGTM! Comprehensive test coverage. The three test cases cover the key scenarios:

- End-to-end via update_llm_args_with_extra_options
- Direct stripping via update_llm_args_with_extra_dict
- Integration with other config options

All tests verify both environment variable setting and removal of the env_overrides key.

tests/integration/defs/examples/serve/test_serve.py (1)

97-114: LGTM! Parameterization enables backward-compatibility testing. The test now validates both --config (new) and --extra_llm_api_options (legacy) flags, ensuring backward compatibility while simplifying the model-name extraction logic.

tests/unittest/commands/test_click_utils.py (3)
32-37: LGTM! Clean test helper. The `invoke_and_check` helper reduces boilerplate and ensures consistent test structure across all test cases.
40-122: LGTM! Thorough config_options testing. Excellent coverage of the config_options decorator:
- Basic flag usage (--config and --extra_llm_api_options)
- Mutual exclusivity validation
- Default behavior
- Metadata preservation
- Interaction with other options
158-202: LGTM! Parameterized tests validate alias consistency. The parameterized approach efficiently tests both `kv_cache_option` and `beam_width_option` decorators, ensuring canonical flags, aliases, and defaults work correctly across both decorators. Note: static analysis warnings about unused parameters are false positives; these parameters are required by the test framework's parametrization.
tensorrt_llm/commands/eval.py (2)
28-29: LGTM! Decorator-based CLI surface reduces duplication. The migration to decorator-based options (`@log_level_option`, `@parallelism_options`, `@kv_cache_option`, `@trust_remote_code_option`, `@config_options`) unifies the CLI surface across eval, serve, and bench commands, reducing code duplication while maintaining functionality. Also applies to: 49-49, 71-74
85-85: LGTM! Config parameter aligns with unified approach. The `config` parameter replaces `extra_llm_api_options` consistently with the decorator-based flow. The call to `update_llm_args_with_extra_options` handles env_overrides internally. Also applies to: 109-110
tensorrt_llm/bench/benchmark/low_latency.py (2)
21-23: LGTM! Decorator migration consistent with eval.py. The decorator-based approach (`@config_options`, `@kv_cache_option`, `@parallelism_options`, `@beam_width_option`) unifies the CLI surface across benchmark commands, matching the pattern in eval.py and throughput.py. Also applies to: 40-43
166-169: LGTM! Config extraction preserves backward compatibility. Mapping `config` to `extra_llm_api_options` maintains compatibility with the existing configuration flow while enabling the new `--config` alias. The centralized `apply_env_overrides` in `llm_args.py` (line 3036) ensures env overrides are handled correctly.
tensorrt_llm/bench/benchmark/throughput.py (1)
17-19: LGTM! Consistent decorator usage across benchmark commands. The decorator application matches the pattern in `low_latency.py`, ensuring consistent CLI behavior across all benchmark commands (latency and throughput). Also applies to: 40-43
tensorrt_llm/commands/serve.py (4)
21-37: LGTM! Clean import consolidation. The new imports support the decorator-based CLI option approach and env override functionality. Moving `ChoiceWithAlias` to centralized utilities and importing the env override functions are appropriate refactoring steps.
231-266: Excellent refactor to decorator-based CLI options. The replacement of manual option definitions with centralized decorators (`log_level_option`, `parallelism_options`, `kv_cache_option`, `trust_remote_code_option`, `config_options`) reduces duplication and enables consistent CLI surfaces across commands. The new `config` parameter properly integrates the --config alias.
353-354: Clean integration of unified config handling. The delegation to `update_llm_args_with_extra_options` centralizes config file loading and env override application. The inline comment clearly documents that env_overrides are processed internally.
400-400: LGTM! Consistent decorator usage across commands. The application of `log_level_option` and `trust_remote_code_option` decorators to `mm_embedding_serve`, `disaggregated`, and `disaggregated_mpi_worker` commands successfully unifies CLI option definitions across all entry points. Also applies to: 417-417, 482-482, 549-549
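The shared-decorator approach praised throughout this review can be illustrated dependency-free (names are illustrative; the real helpers wrap `@click.option`):

```python
# Each helper attaches one shared CLI option; commands compose a common
# surface by stacking decorators instead of re-declaring options.
def log_level_option(func):
    func.options = getattr(func, "options", []) + ["--log_level"]
    return func


def trust_remote_code_option(func):
    func.options = getattr(func, "options", []) + ["--trust_remote_code"]
    return func


@log_level_option
@trust_remote_code_option
def serve():
    pass


@log_level_option
@trust_remote_code_option
def disaggregated():
    pass


# Both commands expose identical flags without duplicated definitions.
assert serve.options == disaggregated.options == ["--trust_remote_code", "--log_level"]
```

Adding a flag to every entry point then means editing one decorator rather than every command.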
Force-pushed from 4061c65 to 17c34a6 (Compare)
|
/bot run |
|
@coderabbitai full review |
✅ Actions performed: Full review triggered. |
Actionable comments posted: 2
♻️ Duplicate comments (1)
tensorrt_llm/llmapi/llm_args.py (1)
2952-2986: Do not log secret values; redact in apply_env_overrides. Current logs print full environment values (old and new), risking credential leakage. Log keys only (and redact), while still applying the unredacted value.
Apply this diff:

```diff
 def apply_env_overrides(config_dict: Dict, config_path: Optional[str] = None) -> None:
@@
-    logger.info(f"Processing environment variable overrides{config_context}")
+    logger.info(f"Processing environment variable overrides{config_context}")
+
+    def _is_sensitive(k: str) -> bool:
+        k_upper = k.upper()
+        SENSITIVE_MARKERS = ("SECRET", "TOKEN", "KEY", "PASS", "PASSWORD",
+                             "CREDENTIAL", "AUTH", "API", "AWS", "PRIVATE")
+        return any(m in k_upper for m in SENSITIVE_MARKERS)
@@
-    for key, value in env_overrides.items():
-        str_value = str(value)
-        if key in os.environ:
-            old_value = os.environ[key]
-            os.environ[key] = str_value
-            logger.info(f"Overriding {key}: '{old_value}' -> '{str_value}'")
-        else:
-            os.environ[key] = str_value
-            logger.info(f"Setting {key}='{str_value}'")
+    for key, value in env_overrides.items():
+        str_value = str(value)
+        redacted = "<redacted>" if _is_sensitive(key) else str_value
+        # Check for an existing value before mutating os.environ so the
+        # logged action is accurate, without exposing old or new values.
+        action = "Overriding" if key in os.environ else "Setting"
+        os.environ[key] = str_value
+        logger.info(f"{action} {key}={redacted}")
```
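The key-based redaction idea can be exercised standalone; the marker list below is an assumption for illustration, not the project's final choice:

```python
# Keys whose names suggest they carry credentials are logged redacted.
SENSITIVE_MARKERS = ("SECRET", "TOKEN", "KEY", "PASS", "CREDENTIAL", "AUTH")


def redact_for_log(key: str, value: str) -> str:
    """Return the value as-is unless the key name looks secret-bearing."""
    upper = key.upper()
    return "<redacted>" if any(m in upper for m in SENSITIVE_MARKERS) else value


assert redact_for_log("HF_TOKEN", "hf_abc123") == "<redacted>"
assert redact_for_log("TRTLLM_ENABLE_PDL", "1") == "1"
```

Matching on key names rather than values keeps the check cheap and avoids ever inspecting the secret itself, at the cost of occasional over-redaction (e.g. any key containing `KEY`).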
🧹 Nitpick comments (4)
tensorrt_llm/bench/benchmark/low_latency.py (1)
280-281: Prefer params.pop(...) for consistency and cleanliness. Use pop (as in throughput.py) to avoid leaving extra_llm_api_options in params and ensure a single source of truth.

```diff
-    exec_settings["extra_llm_api_options"] = params.get("extra_llm_api_options")
+    exec_settings["extra_llm_api_options"] = params.pop("extra_llm_api_options", None)
```
tensorrt_llm/llmapi/llm_args.py (1)
3050-3067: Validate YAML top-level is a mapping to avoid TypeError later. If YAML is empty or not a dict, apply_env_overrides will raise obscure errors. Add a type check with a clear message.

```diff
 def load_yaml_maybe_env_override(file_path: str) -> Dict:
@@
-    with open(file_path, 'r') as f:
-        config = yaml.safe_load(f)
+    with open(file_path, 'r') as f:
+        config = yaml.safe_load(f)
+    if not isinstance(config, dict):
+        raise ValueError(
+            f"YAML file {file_path} must contain a top-level mapping, got {type(config).__name__}"
+        )
     apply_env_overrides(config, file_path)
     return config
```
tensorrt_llm/commands/serve.py (2)
305-312: Consider adding `--config` alias to `serve_encoder` for consistency. The `--config` alias is correctly implemented for the main `serve` command using Click's multiple option names feature. However, the `serve_encoder` function (line 471) uses `--extra_encoder_options` without a similar `--config` alias. For consistency and to align with the PR objective of unified configuration across serve commands, consider adding a `--config` alias to the `--extra_encoder_options` parameter as well. Apply this diff to add the alias:

```diff
 @click.option(
+    "--config",
     "--extra_encoder_options",
+    "extra_encoder_options",
     type=str,
     default=None,
     help=
-    "Path to a YAML file that overwrites the parameters specified by trtllm-serve."
+    "Path to a YAML file that overwrites the parameters specified by trtllm-serve. "
+    "Can be specified as either --config or --extra_encoder_options."
 )
```
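The two-flags-one-destination mechanism has a direct stdlib analogue; here is an argparse sketch of the same idea (the real code uses `@click.option` as shown in the diff above):

```python
import argparse

parser = argparse.ArgumentParser()
# Two option strings feed a single destination: downstream code only ever
# reads `args.config`, regardless of which flag the user typed.
parser.add_argument("--config", "--extra_encoder_options",
                    dest="config", default=None,
                    help="Path to a YAML file overriding serve parameters.")

assert parser.parse_args(["--config", "a.yaml"]).config == "a.yaml"
assert parser.parse_args(["--extra_encoder_options", "b.yaml"]).config == "b.yaml"
assert parser.parse_args([]).config is None
```

Keeping a single destination is what makes the alias backward compatible: existing scripts that pass the legacy flag keep working unchanged.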
501-502: Refactor to use `update_llm_args_with_extra_options` for consistency. The `serve_encoder` function uses a two-step pattern (load YAML with `load_yaml_maybe_env_override`, then merge with `update_llm_args_with_extra_dict`), while the `serve` function uses a one-step pattern with `update_llm_args_with_extra_options`. For consistency and maintainability, consider refactoring `serve_encoder` to use the same pattern as the `serve` function. Apply this diff to align with the `serve` function pattern:

```diff
-    encoder_args_extra_dict = {}
-    if extra_encoder_options is not None:
-        encoder_args_extra_dict = load_yaml_maybe_env_override(
-            extra_encoder_options)
-    encoder_args = update_llm_args_with_extra_dict(llm_args,
-                                                   encoder_args_extra_dict)
+    encoder_args = update_llm_args_with_extra_options(llm_args,
+                                                      extra_encoder_options)
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- `tensorrt_llm/bench/benchmark/low_latency.py` (2 hunks)
- `tensorrt_llm/bench/benchmark/throughput.py` (2 hunks)
- `tensorrt_llm/bench/benchmark/utils/general.py` (1 hunks)
- `tensorrt_llm/commands/eval.py` (1 hunks)
- `tensorrt_llm/commands/serve.py` (4 hunks)
- `tensorrt_llm/llmapi/llm_args.py` (2 hunks)
- `tensorrt_llm/llmapi/llm_utils.py` (2 hunks)
- `tests/integration/defs/examples/serve/test_serve.py` (1 hunks)
- `tests/unittest/llmapi/test_llm_args.py` (2 hunks)
🧰 Additional context used
🧠 Learnings (6)
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Applied to files:
- tests/unittest/llmapi/test_llm_args.py
- tensorrt_llm/bench/benchmark/throughput.py
- tensorrt_llm/commands/eval.py
- tensorrt_llm/bench/benchmark/low_latency.py
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.
Applied to files:
tests/unittest/llmapi/test_llm_args.py
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which can contain default `cuda_graph_config` values, so `llm_args` may already have this config before the extra options processing.
Applied to files:
- tensorrt_llm/bench/benchmark/utils/general.py
- tensorrt_llm/llmapi/llm_args.py
- tensorrt_llm/commands/serve.py
- tensorrt_llm/bench/benchmark/low_latency.py
📚 Learning: 2025-08-14T15:38:01.771Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: cpp/tensorrt_llm/pybind/thop/bindings.cpp:55-57
Timestamp: 2025-08-14T15:38:01.771Z
Learning: In TensorRT-LLM Python bindings, tensor parameter collections like mla_tensor_params and spec_decoding_tensor_params are kept as required parameters without defaults to maintain API consistency, even when it might affect backward compatibility.
Applied to files:
tensorrt_llm/bench/benchmark/utils/general.py
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM's bench configuration, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which is a Dict[str, Any] that can contain default values including `cuda_graph_config`, making the fallback `llm_args["cuda_graph_config"]` safe to use.
Applied to files:
- tensorrt_llm/bench/benchmark/utils/general.py
- tensorrt_llm/bench/benchmark/low_latency.py
📚 Learning: 2025-08-14T23:23:27.449Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 6915
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:4010-4012
Timestamp: 2025-08-14T23:23:27.449Z
Learning: For MOE (Mixture of Experts) code reviews in TensorRT-LLM, avoid repeatedly suggesting finalize fusion validation checks and safety assertions. The user djns99 has indicated these suggestions are repetitive and unwanted across multiple MOE-related changes.
Applied to files:
tensorrt_llm/bench/benchmark/low_latency.py
🪛 Ruff (0.14.4)
tests/unittest/llmapi/test_llm_args.py
805-805: os may be undefined, or defined from star imports
(F405)
806-806: os may be undefined, or defined from star imports
(F405)
812-812: os may be undefined, or defined from star imports
(F405)
813-813: os may be undefined, or defined from star imports
(F405)
832-832: os may be undefined, or defined from star imports
(F405)
845-845: os may be undefined, or defined from star imports
(F405)
848-848: apply_env_overrides may be undefined, or defined from star imports
(F405)
850-850: os may be undefined, or defined from star imports
(F405)
865-865: apply_env_overrides may be undefined, or defined from star imports
(F405)
867-867: os may be undefined, or defined from star imports
(F405)
868-868: os may be undefined, or defined from star imports
(F405)
869-869: os may be undefined, or defined from star imports
(F405)
884-884: apply_env_overrides may be undefined, or defined from star imports
(F405)
886-886: os may be undefined, or defined from star imports
(F405)
887-887: os may be undefined, or defined from star imports
(F405)
888-888: os may be undefined, or defined from star imports
(F405)
893-893: apply_env_overrides may be undefined, or defined from star imports
(F405)
899-899: apply_env_overrides may be undefined, or defined from star imports
(F405)
907-907: Unused method argument: enable_tllm_logger_propagation
(ARG002)
915-915: os may be undefined, or defined from star imports
(F405)
923-923: apply_env_overrides may be undefined, or defined from star imports
(F405)
927-927: os may be undefined, or defined from star imports
(F405)
952-952: update_llm_args_with_extra_options may be undefined, or defined from star imports
(F405)
955-955: os may be undefined, or defined from star imports
(F405)
956-956: os may be undefined, or defined from star imports
(F405)
957-957: os may be undefined, or defined from star imports
(F405)
tensorrt_llm/llmapi/llm_utils.py
928-928: Undefined name `apply_env_overrides` in `__all__`
(F822)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (7)
tests/integration/defs/examples/serve/test_serve.py (1)
97-97: Docstring addition looks good. Accurately documents the --config alias usage for this test.
tensorrt_llm/bench/benchmark/low_latency.py (1)
49-56: CLI alias addition is correct. The --config alias maps to extra_llm_api_options cleanly; help text is clear.
tensorrt_llm/bench/benchmark/throughput.py (1)
64-71: Alias wiring LGTM. --config and --extra_llm_api_options map to the same destination with clear help text; later code pops it for use.
tensorrt_llm/bench/benchmark/utils/general.py (1)
86-93: Good centralization to env-aware loader. Switching to load_yaml_maybe_env_override ensures env_overrides are applied and stripped once.
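A rough sketch of what such an env-aware loader does, including the top-level mapping check suggested in the nitpicks (illustrative only; the real `load_yaml_maybe_env_override` reads a file path and uses `yaml.safe_load`):

```python
import os
from typing import Any, Dict


def load_config_maybe_env_override(parsed: Any, path: str) -> Dict:
    """`parsed` plays the role of yaml.safe_load's result in this sketch."""
    # Guard against empty or non-mapping documents before touching keys.
    if not isinstance(parsed, dict):
        raise ValueError(f"YAML file {path} must contain a top-level mapping, "
                         f"got {type(parsed).__name__}")
    # Apply and strip env overrides exactly once, at load time.
    for key, value in (parsed.pop("env_overrides", None) or {}).items():
        os.environ[key] = str(value)
    return parsed


cfg = load_config_maybe_env_override(
    {"env_overrides": {"DEMO_BENCH_FLAG": "x"}, "max_batch_size": 8}, "bench.yaml")
assert cfg == {"max_batch_size": 8}
assert os.environ["DEMO_BENCH_FLAG"] == "x"
```

Doing both the apply and the strip inside the loader is what guarantees callers cannot double-process `env_overrides` when a config file is reloaded.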
tensorrt_llm/commands/eval.py (1)
95-101: Alias addition LGTM. --config cleanly aliases --extra_llm_api_options with clear help text; downstream flow remains unchanged.
tensorrt_llm/commands/serve.py (2)
30-31: LGTM! New imports support the env_overrides feature. The imported functions `load_yaml_maybe_env_override` and `update_llm_args_with_extra_options` are correctly used later in the file to implement the new configuration flow with environment variable override support.
399-400: No issues found; None handling is correct. The function `update_llm_args_with_extra_options` at `tensorrt_llm/llmapi/llm_args.py:3041-3047` explicitly checks `if extra_llm_api_options is not None:` before processing. When None is passed, the function safely returns the original `llm_args` unchanged. The implementation is covered by unit tests and works correctly.
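The None-guard behavior verified here reduces to this pattern (illustrative body; the real function loads the YAML file, applies env overrides, and merges it):

```python
from typing import Dict, Optional


def update_llm_args_with_extra_options(llm_args: Dict,
                                       extra_llm_api_options: Optional[str]) -> Dict:
    if extra_llm_api_options is not None:
        # Stand-in for: load YAML, apply env overrides, merge into llm_args.
        llm_args = {**llm_args, "loaded_from": extra_llm_api_options}
    return llm_args


args = {"model": "m"}
assert update_llm_args_with_extra_options(args, None) is args  # untouched
assert update_llm_args_with_extra_options(args, "c.yaml")["loaded_from"] == "c.yaml"
```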
|
/bot run --disable-multi-gpu-test |
|
PR_Github #25774 [ run ] triggered by Bot. Commit: |
|
PR_Github #25774 [ run ] completed with state |
|
/bot run --disable-multi-gpu-test |
|
PR_Github #25783 [ run ] triggered by Bot. Commit: |
|
PR_Github #25783 [ run ] completed with state |
|
/bot run --disable-multi-gpu-test |
|
PR_Github #25800 [ run ] triggered by Bot. Commit: |
yizhang-nv
left a comment
LGTM
|
PR_Github #25800 [ run ] completed with state |
|
/bot run --disable-multi-gpu-test |
|
/bot run --disable-multi-gpu-test |
|
/bot run --disable-multi-gpu-test |
|
PR_Github #25896 [ run ] triggered by Bot. Commit: |
|
PR_Github #25896 [ run ] completed with state |
|
/bot skip --comment "pipeline main/job/L0_MergeRequest_PR/19639 passed for commit b6057f0" |
Signed-off-by: Venky <23023424+venkywonka@users.noreply.github.com>
|
/bot reuse-pipeline |
|
PR_Github #25926 [ reuse-pipeline ] triggered by Bot. Commit: |
|
PR_Github #25926 [ reuse-pipeline ] completed with state |
…VIDIA#8779) The performance results of some kernels could be easily affected by the warm/cold L2 cache status. To achieve more precise profiling results, the L2 cache is cleared for every execution by the circular buffer method for better benchmarking during autotuning. Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
[None][infra] Waive failed cases for main branch on 11/25 (NVIDIA#9429) Signed-off-by: qqiao <qqiao@nvidia.com>
[NVIDIA#8391][chore] test_perf.py to lock clocks read from gpu_configs.yml instead of max freq (NVIDIA#9409) Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com>
[None][ci] Move more test stages to use OCI machines (NVIDIA#9395) Signed-off-by: Yanchao Lu <yanchaol@nvidia.com> Co-authored-by: Matt Lefebvre <matthewelefebvre@gmail.com>
[None][feat] Improve TRTLLM MoE in small hidden size throughput cases (NVIDIA#9377) Signed-off-by: Anthony Chang <27950904+rosenrodt@users.noreply.github.com>
[https://nvbugs/5537996][fix] Let KV cache manager block initialization be aware whether it is doing a dry run or not (NVIDIA#9093) Before this commit, the kv cache manager does the same regardless, which causes a mis-calculation in free memory available to allocate for the KV cache manager, hence causing a crash. This commit fixes this by letting KV cache manager initialization be aware whether it is doing the dry run or not. If it is a dry run, use the max_tokens setting that is already pre-calculated and filled into kv_cache_config.max_tokens. Signed-off-by: eopXD <yuehtingc@nvidia.com>
[https://nvbugs/5667922][fix] Update long context evaluation config (NVIDIA#9426) Signed-off-by: mni <125171826+baize97@users.noreply.github.com>
[None][fix] Mitigate test timeout issues (NVIDIA#9445) Signed-off-by: Shixiaowei02 <39303645+Shixiaowei02@users.noreply.github.com>
[None][chore] Fix trtllm-eval for PyTorchLLM (NVIDIA#9427) Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
[None][feat] Add a parser to layer-wise benchmarks (NVIDIA#9440) Signed-off-by: Tailing Yuan <yuantailing@gmail.com>
[None][feat] Support custom chat template for tool calling (NVIDIA#9297) Signed-off-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com>
[TRTLLM-8160][feat] Add draft token tree runtime on CDL (NVIDIA#8586) Signed-off-by: Yue Weng <25103990+yweng0828@users.noreply.github.com>
[None][ci] waive a test (NVIDIA#9458) Signed-off-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com>
[https://nvbugs/5680905][fix] Relax the MMLU accuracy requirement for DS-v3.2 (NVIDIA#9439) Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
[TRTLLM-8376][feat] top-p optimization (removes redundant softmax) (NVIDIA#9411) Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
[TRTLLM-9490][feat] use FlashInfer's top_k_sampling_from_probs (NVIDIA#9457) Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
[https://nvbugs/5647400] [fix] Enlarged the AllReduce workspace size to 64MB. Added AllReduce strategy to AD config. (NVIDIA#9145) Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com>
[TRTLLM-909][feat] Overlap context chunks in pipeline parallel mode (NVIDIA#9308) Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
[None][chore] AutoDeploy add multi stream moe pass to default.yaml (NVIDIA#9430) Signed-off-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>
[https://nvbugs/5685143][fix] avoid cudaFree overlap with cuda graph (NVIDIA#9438) Signed-off-by: Chuang Zhu <111838961+chuangz0@users.noreply.github.com>
[None][chore] Bump version to 1.2.0rc5 (NVIDIA#9455) Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
[TRTLLM-8936][test] Add disagg and wideep multi-node multi-gpu test cases (NVIDIA#9356) Signed-off-by: FredricZ-2007 <226039983+fredricz-20070104@users.noreply.github.com>
[None][ci] move some slow test cases of DGX-B200 to post merge (NVIDIA#9467) Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
[TRTLLM-9293][feat] Enable partial weight loading to support streaming update weights (NVIDIA#9224) Signed-off-by: shuyix <219646547+shuyixiong@users.noreply.github.com>
[None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>
[TRTLLM-9264][fix] Add accuracy/unit tests/doc for phi4mm (NVIDIA#9246) Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com>
[https://nvbugs/5580099][fix] Cherry pick IMA issue fix from release/1.1 (NVIDIA#9032) Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com>
[None][chore] Upgrade CuteDSL to 4.3.0 (NVIDIA#9444) Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
[None][feat] Support MLA chunked prefill for DeepSeek V3.2 model (NVIDIA#9376) Signed-off-by: Chang Liu (Enterprise Products) <9713593+chang-l@users.noreply.github.com>
[None][feat] Add environment variable to force spec-dec number of accepted tokens (NVIDIA#9371) Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com>
[None][infra] Update allowed list 2025.11.25 (NVIDIA#9468) Signed-off-by: Yuanjing Xue <197832395+yuanjingx87@users.noreply.github.com>
[None][infra] Fail the pipeline when slurm ssh dropped (NVIDIA#9157) Signed-off-by: Yuanjing Xue <197832395+yuanjingx87@users.noreply.github.com>
[None][feat] AutoDeploy: Remove redundant copies in mamba layers (NVIDIA#9461) Signed-off-by: Chenghao Zhang <211069071+nvchenghaoz@users.noreply.github.com> Co-authored-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>
[None][feat] AutoDeploy: Add A_log fusion for Mamba layers (NVIDIA#9422) Signed-off-by: Chenghao Zhang <211069071+nvchenghaoz@users.noreply.github.com>
[None][ci] Waive blackwell test on spec gate. (NVIDIA#9502) Signed-off-by: Zheyu Fu <zheyuf@NVIDIA.com>
[https://nvbugs/5608930][fix] Fix a typo (NVIDIA#9487) Signed-off-by: Shixiaowei02 <39303645+Shixiaowei02@users.noreply.github.com>
[NVIDIA#9463][feat] Add revision option to trtllm commands (NVIDIA#9498) Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com>
[TRTLLM-9085][doc] fix math formula rendering issues (NVIDIA#9481) Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
[None][chore] update comments in llm_args.py (NVIDIA#9472) Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
[None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>
[https://nvbugs/5680310][fix] Fix ctx only timed out test (NVIDIA#9410) Signed-off-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com>
[https://nvbugs/5547414][fix] enable case after using local cache model (NVIDIA#9473) Signed-off-by: Hui Gao <huig@nvidia.com>
[None][fix] Replace PYTORCH_CUDA_ALLOC_CONF with PYTORCH_ALLOC_CONF to fix deprecation warning (NVIDIA#9294) Signed-off-by: Jiagan Cheng <jiaganc@nvidia.com>
[https://nvbugs/5698581][fix] Init draft tokens for CUDA graph dummy request (NVIDIA#9505) Signed-off-by: ziyixiong-nv <219238287+ziyixiong-nv@users.noreply.github.com>
[None][infra] Waive failed case in pre-merge on 11/27 (NVIDIA#9507) Signed-off-by: qqiao <qqiao@nvidia.com>
[TRTLLM-9513][docs] Qwen3 deployment guide (NVIDIA#9488) Signed-off-by: Lanyu Liao <laliao@laliao-mlt.client.nvidia.com> Co-authored-by: Lanyu Liao <laliao@laliao-mlt.client.nvidia.com>
[None][chore] revert batch_size=1 to prevent timeout and lower accuracy reference by 0.12% as a WAR (NVIDIA#9447) Signed-off-by: Lizhi Zhou <1432185+reasonsolo@users.noreply.github.com> Co-authored-by: Shi Xiaowei <39303645+Shixiaowei02@users.noreply.github.com>
[TRTLLM-9279][infra] Use flexcache for gh200 nodes since they locate in Austin (NVIDIA#9405) Signed-off-by: qqiao <qqiao@nvidia.com> Signed-off-by: Emma Qiao <qqiao@nvidia.com> Co-authored-by: Yanchao Lu <yanchaol@nvidia.com>
[cherry-pick][https://nvbugs/5670793][fix] Solve trtllm-serve launch_disaggregated issue (NVIDIA#9346) Signed-off-by: xxi <xxi@nvidia.com>
[None][infra] Fix Slurm job script (NVIDIA#9508) Signed-off-by: Yuanjing Xue <197832395+yuanjingx87@users.noreply.github.com>
[None][fix] change allreduce workspace dtype to torch.int64 to avoid overflow (NVIDIA#9479) Signed-off-by: Zhenhuan Chen <zhenhuanc@nvidia.com>
[None][feat] add qwen3-next CI test of accuracy on BF16 and NVFP4 (NVIDIA#9330) Signed-off-by: jiant <107457950+JadoTu@users.noreply.github.com>
[None][fix] fix TP support for DeepSeek-V3.2 on hopper (NVIDIA#9484) Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
[TRTLLM-9389][chore] Refactor AlltoallMethodType. (NVIDIA#9388) Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com>
[https://nvbugs/5674665][chore] Add test coverage for https://nvbugspro.nvidia.com/bug/5674665 (NVIDIA#9518) Signed-off-by: eopXD <yuehtingc@nvidia.com>
[TRTLLM-7288][infra] Download merged waive list in slurm script (NVIDIA#8999) Signed-off-by: Yiqing Yan <yiqingy@nvidia.com> Signed-off-by: Yanchao Lu <yanchaol@nvidia.com> Co-authored-by: Yanchao Lu <yanchaol@nvidia.com>
[https://nvbugs/5687820][fix] Remove self.abort() in DetokenizedGenerationResult (NVIDIA#9449) Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
[NVIDIA#9150][feat] AutoDeploy Nemotron-Flash support (NVIDIA#9504) Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
[None] [chore] Update to cutlass 4.3 (NVIDIA#8637) Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
[https://nvbugs/5637037][chore] Update waive lists. (NVIDIA#9386) Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com> Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com> Co-authored-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
[None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>
[TRTLLM-8970][infra] Fix generate report when has isolation test result (NVIDIA#8861) Signed-off-by: qqiao <qqiao@nvidia.com> Signed-off-by: Emma Qiao <qqiao@nvidia.com>
[https://nvbugs/5685015][fix] Update invalid max_token test (NVIDIA#9435) Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com>
[None][fix] Fix on-disk cache and revise logger/statistics for AutoTuner. (NVIDIA#9211) Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
[https://nvbugs/5689658][test] Fix gpu lock issue running on cluster (NVIDIA#9441) Signed-off-by: yufeiwu <230315618+yufeiwu-nv@users.noreply.github.com>
[None][chore] add spec_decoding configs in perf benchmark scripts and fix typos (NVIDIA#9533) Signed-off-by: Lanyu Liao <lancelly@users.noreply.github.com> Co-authored-by: Lanyu Liao <lancelly@users.noreply.github.com>
[None][fix] Remove FP8 K/V buffer from TRTLLM sparse MLA attention kernel (NVIDIA#9529) Signed-off-by: Chang Liu (Enterprise Products) <9713593+chang-l@users.noreply.github.com>
[None] [chore] Enhancements and clean up to slurm scripts (NVIDIA#9493) Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
[None][chore] Revert "[None][fix] change allreduce workspace dtype to torch.int64 t… (NVIDIA#9538) Signed-off-by: Zhenhuan Chen <zhenhuanc@nvidia.com>
[None][infra] Waive failed cases for main branch on 11/28 (NVIDIA#9539) Signed-off-by: qqiao <qqiao@nvidia.com>
[None][fix] Pass checkpoint_format to create_input_processor (NVIDIA#9521) Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
[TRTLLM-9541][infra] Use artifactory mirror for download.pytorch.org (NVIDIA#9477) Signed-off-by: ZhanruiSunCh <184402041+ZhanruiSunCh@users.noreply.github.com> Signed-off-by: Zhanrui Sun <184402041+ZhanruiSunCh@users.noreply.github.com> Co-authored-by: Yanchao Lu <yanchaol@nvidia.com>
[TRTLLM-9488][feat] add 'disable_flashinfer_sampling' config option (NVIDIA#9454) Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
[None][infra] Waive failed case in pre-merge on 11/28 (NVIDIA#9537) Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>
[None][perf] Helix: improve all-to-all perf for large CP size (NVIDIA#9494) Signed-off-by: Matthias Jouanneaux <mjoux@nvidia.com> Signed-off-by: Zheyu Fu <zheyuf@NVIDIA.com> Co-authored-by: Zheyu Fu <zheyuf@nvidia.com>
[None][feat] support for more accurate AR calculation (NVIDIA#9323) Signed-off-by: binghanc <176802681+binghanc@users.noreply.github.com>
[TRTLLM-9488][fix] llmapi references (NVIDIA#9547) Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
[NVIDIA#8948][feat] Support custom sharding config (NVIDIA#9143) Signed-off-by: greg-kwasniewski1 <213329731+greg-kwasniewski1@users.noreply.github.com>
[None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>
[None][chore] Weekly mass integration of release/1.1 -- rebase (NVIDIA#9522) Signed-off-by: yunruis <205571022+yunruis@users.noreply.github.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com> Signed-off-by: qgai <qgai@nvidia.com> Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com> Signed-off-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com> Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com> Signed-off-by: Simeng Liu <simengl@nvidia.com> Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com> Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com> Signed-off-by: Ivy Zhang <25222398+crazydemo@users.noreply.github.com> Signed-off-by: Vincent Zhang <vinczhang@nvidia.com> Signed-off-by: peaceh <103117813+peaceh-nv@users.noreply.github.com> Signed-off-by: Michal Guzek <mguzek@nvidia.com> Signed-off-by: Michal Guzek <moraxu@users.noreply.github.com> Signed-off-by: Chang Liu (Enterprise Products) <9713593+chang-l@users.noreply.github.com> Signed-off-by: leslie-fang25 <leslief@nvidia.com> Signed-off-by: Shunkang <182541032+Shunkangz@users.noreply.github.co> Signed-off-by: junq <22017000+QiJune@users.noreply.github.com> Co-authored-by: yunruis <205571022+yunruis@users.noreply.github.com> Co-authored-by: sunnyqgg <159101675+sunnyqgg@users.noreply.github.com> Co-authored-by: brb-nv <169953907+brb-nv@users.noreply.github.com> Co-authored-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com> Co-authored-by: JunyiXu-nv <219237550+JunyiXu-nv@users.noreply.github.com> Co-authored-by: Simeng Liu <109828133+SimengLiu-nv@users.noreply.github.com> Co-authored-by: Guoming Zhang <137257613+nv-guomingz@users.noreply.github.com> Co-authored-by: Jin Li <59594262+liji-nv@users.noreply.github.com> Co-authored-by: Ivy Zhang <25222398+crazydemo@users.noreply.github.com> Co-authored-by: Vincent Zhang <vcheungyi@163.com> Co-authored-by: peaceh-nv <103117813+peaceh-nv@users.noreply.github.com> Co-authored-by: Michal Guzek <moraxu@users.noreply.github.com> Co-authored-by: Chang Liu <9713593+chang-l@users.noreply.github.com> Co-authored-by: Leslie Fang <leslief@nvidia.com> Co-authored-by: Shunkangz <182541032+Shunkangz@users.noreply.github.com> Co-authored-by: Shunkang <182541032+Shunkangz@users.noreply.github.co> Co-authored-by: QI JUN <22017000+QiJune@users.noreply.github.com>
[TRTLLM-5971][feat] Integrate helix parallelism (NVIDIA#9342) Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>
[None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>
[None][infra] - Request idle time exemption for OCI jobs (NVIDIA#9528) Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>
[None][infra] Wiave failed tests for main branch on 11/30 (NVIDIA#9555) Signed-off-by: qqiao <qqiao@nvidia.com>
[None][fix] Fix port conflict in disagg tests (NVIDIA#9474) Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com>
[None][ci] Split H100_PCIe-PyTorch-Post-Merge test stage (NVIDIA#9558) Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>
[None][ci] Split H100_PCIe-PyTorch-Post-Merge test stage (NVIDIA#9559) Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>
[TRTLLM-8958][feat] and [TRTLLM-8960]: create ConfigurableMoE and support TRTLLMGenFusedMoE as backend (NVIDIA#9486)
[None] [feat] Optimize the algorithm part of RocketKV (NVIDIA#9333) Signed-off-by: yuhangh <58161490+heyuhhh@users.noreply.github.com>
[https://nvbugs/5690172][fix] Fix Qwen3-235B ATP accuracy issue with PDL (NVIDIA#9530) Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
[TRTLLM-6222][feat] Extend cute_dsl_nvfp4_gemm to sm103. (NVIDIA#9543) Signed-off-by: Mindy Li <11663212+limin2021@users.noreply.github.com>
[None][fix] Correct virtual memory allocation alignment (NVIDIA#9491) Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com>
[None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>
[https://nvbugs/5684703][fix] Unwaive disagg guided decoding test (NVIDIA#9466) Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
[https://nvbugs/5503479][fix] Temporarily lower reference accuracy to stabilize CI (NVIDIA#9398) Signed-off-by: Pengbo Wang <221450789+pengbowang-nv@users.noreply.github.com>
[None][chore] remove qwen3-next accuracy tests (NVIDIA#9534) Signed-off-by: jiant <107457950+JadoTu@users.noreply.github.com>
[None][doc] fix mtp.py typo (NVIDIA#9307) Signed-off-by: liugaoji <757394026@qq.com>
[None][feat] add chat template kwargs support to longbench-v2 (NVIDIA#9544) Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
[NVIDIA#9496][fix] AutoDeploy: remove auto-tuner from nvfp4_gemm forward (NVIDIA#9497) Signed-off-by: Neta Zmora <96238833+nzmora-nvidia@users.noreply.github.com>
[None][fix] Replace hash method with unique_id for cutedsl MoE runners. (NVIDIA#9569) Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
[None][chore] refactor disaggregated scripts to use named arguments (NVIDIA#9581) Signed-off-by: Zhenhuan Chen <zhenhuanc@nvidia.com>
[TRTLLM-6222][feat] Several perf opt for cuteDSL nvf4 gemm (NVIDIA#9428) Signed-off-by: Yuhan Li <51736452+liyuhannnnn@users.noreply.github.com>
[None][chore] reduce the layers of the `devel` docker image (NVIDIA#9077) Signed-off-by: Martin Marciniszyn Mehringer <11665257+MartinMarciniszyn@users.noreply.github.com>
[https://nvbugs/5651854][infra] Enable perf metrics during accuracy testing (NVIDIA#9140)
[None][fix] Skip Allreduce init for Attention DP (NVIDIA#9542) Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
[None][test] [None][test] Waive main branch test failures 12/1 (NVIDIA#9566) Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>
[None][ci] Minor change for Slurm scripts (NVIDIA#9561) Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>
[TRTLLM-6768][infra] Fix params for not updating github status (NVIDIA#6747) Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
[None][infra] Update the pytest options after MI (NVIDIA#9579) Signed-off-by: qqiao <qqiao@nvidia.com>
[TRTLLM-6756][feat] Add Beam Search to TorchSampler (NVIDIA#8509) Signed-off-by: Stefan Niebler <82932102+stnie@users.noreply.github.com>
[None][chore] Defer exposing context parallel configs (NVIDIA#9552) Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>
[TRTC-1943][feat] Env vars override support in LLM API (NVIDIA#9104) Signed-off-by: Venky Ganesh <23023424+venkywonka@users.noreply.github.com>
[None][feat] AutoDeploy: Use the router gemm op for nemotron MOE (NVIDIA#9500) Signed-off-by: Chenghao Zhang <211069071+nvchenghaoz@users.noreply.github.com>
[NVIDIA#9198][feat] Refactor dist ops in AutoDeploy (NVIDIA#9301) Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com>
[None][fix] Prevent YAML partial kv_cache_config from incorrectly
overriding the complete kv_cache_config (NVIDIA#9262) Signed-off-by: Yuening Li <62227368+Yuening-wa@users.noreply.github.com> [TRTLLM-9085][doc] fix math formula rendering issues in github (NVIDIA#9605) Signed-off-by: junq <22017000+QiJune@users.noreply.github.com> [None][feat] Unify nvfp4 gemm backend (NVIDIA#8963) Signed-off-by: Shijie Wang <jaywan@nvidia.com> Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com> Signed-off-by: Shijie <jaywan@nvidia.com> Co-authored-by: Yukun He <23156053+hyukn@users.noreply.github.com> [None][feat] Add support for KVCache reuse for DSv32 (NVIDIA#9383) Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com> [None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com> [None][chroe] Polish qwen3-next modeling code. (NVIDIA#8902) Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com> [https://nvbugs/5703953][fix] Use random port for disagg tests (NVIDIA#9582) Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com> [None][fix] Waive gb200 (NVIDIA#9580) Signed-off-by: Xin He (SW-GPU) <200704525+xinhe-nv@users.noreply.github.com> [FMDL-1328][feat] Add support for nano-v3 and super-v3 with pytorch backend (NVIDIA#9261) Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com> [https://nvbugs/5582091][test] increase warmup times in testing for multi-gpu cases (NVIDIA#9578) Signed-off-by: Ruodi Lu <ruodil@users.noreply.github.com> Co-authored-by: Ruodi Lu <ruodil@users.noreply.github.com> [None][chore] Add failed cases into waives.txt (NVIDIA#9588) Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com> [https://nvbugs/5702793][fix] Fix uncontiguous tensor view (NVIDIA#9576) Signed-off-by: shuyix <219646547+shuyixiong@users.noreply.github.com> [None][infra] Waive failed cases for main branch (NVIDIA#9615) Signed-off-by: qqiao <qqiao@nvidia.com> 
[TRTLLM-9488][feat] use FlashInfer.sampling by default (NVIDIA#9545) Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com> [None][infra] Update allowlist 2025/12/01 (NVIDIA#9616) Signed-off-by: Yuanjing Xue <197832395+yuanjingx87@users.noreply.github.com> [None][infra] Remove an invalid test name in waives.txt (NVIDIA#9620) Signed-off-by: qqiao <qqiao@nvidia.com> Lock the gpu clocks in L0 perf tests (NVIDIA#9585) Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com> [TRTLLM-9466][test] Evaluate helix parallelism with DSV3 Lite (NVIDIA#9597) Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com> [None][fix] Extract GPU count from single-node stage names (NVIDIA#9599) Signed-off-by: Chang Liu (Enterprise Products) <9713593+chang-l@users.noreply.github.com> [https://nvbugs/5667774][fix] Refine Piecewise Cuda Graph Condition for DP (NVIDIA#9393) Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com> [TRTLLM-9144][fix] enhance RPC robustness (NVIDIA#8711) Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com> Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com> Signed-off-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com> Co-authored-by: Erin Ho <14718778+hchings@users.noreply.github.com> [https://nvbugs/5627710][fix] Fix synchronization bugs in KvCacheTransferManager that can cause corrupted blocks (NVIDIA#9056) Signed-off-by: thorjohnsen <41591019+thorjohnsen@users.noreply.github.com> Signed-off-by: Thor Johnsen <41591019+thorjohnsen@users.noreply.github.com> Co-authored-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com> Co-authored-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com> [TRTLLM-8980][test] Clean up spec dec tests in test_llm_api_pytorch (NVIDIA#8889) Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [NVIDIA#9150][feat] Add code for nano v3 to custom implementation in 
AD (NVIDIA#9465) * Why? We would like to show an alternative to monkey-patching in AutoDeploy. * What? This commit builds on the existing custom model implementation for NemotronH and adds the bits relevant for MoE layers. Part of NVIDIA#9150. Signed-off-by: William Zhang <133824995+2ez4bz@users.noreply.github.com> [NVIDIA#9150][feat] AutoDeploy: reviewer comments for NVIDIA#9150 (NVIDIA#9527) Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com> [https://nvbugs/5651854][fix] Fix dist-serving perf by clearing CPU affinity (NVIDIA#9549) Signed-off-by: Shixiaowei02 <39303645+Shixiaowei02@users.noreply.github.com> [NVIDIA#9550][feat] AutoDeploy: Add NVFP4 Cutlass MoE kernels (NVIDIA#9551) Signed-off-by: Neta Zmora <96238833+nzmora-nvidia@users.noreply.github.com> [https://nvbugs/5688388][fix] fix: Reducing num request in disagg test to speed up (NVIDIA#9598) Signed-off-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com> [TRTLLM-8946][feat] Improved heuristics to detect shardable regions (NVIDIA#9200) Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com> Signed-off-by: greg-kwasniewski1 <213329731+greg-kwasniewski1@users.noreply.github.com> Co-authored-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com> [NVIDIA#9632][feat] Support EXTRA_WHEEL_BUILD_ARGS during wheel build (NVIDIA#9633) Signed-off-by: Yu Chi Li <yuchil@nvidia.com> [None][chore] Waive test failing on pre-merge (NVIDIA#9638) Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com> [None][chore] Remove traceback dump for multimodal input processor (NVIDIA#9634) Signed-off-by: Chang Liu (Enterprise Products) <9713593+chang-l@users.noreply.github.com> [None][chore] Fix trtllm-eval and move GroupedGemmInputsHelper (NVIDIA#9612) Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com> [https://nvbugs/5698434][fix] Use separate weight mapper for draft (NVIDIA#9607) Signed-off-by: Anurag Mukkara 
<134339030+amukkara@users.noreply.github.com> [TRTLLM-7101][infra] Reuse passed tests (NVIDIA#6894) Signed-off-by: Yiqing Yan <yiqingy@nvidia.com> Co-authored-by: Yanchao Lu <yanchaol@nvidia.com> [None][test] Remove duplicate test cases (NVIDIA#9623) Signed-off-by: yufeiwu <230315618+yufeiwu-nv@users.noreply.github.com> [None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com> [None][feat] Add RocketKV usage doc and e2e accuracy test on LongBenchV2 (NVIDIA#9572) Signed-off-by: yuhangh <58161490+heyuhhh@users.noreply.github.com> [TRTLLM-9242][doc] Add examples showcasing openai compatible APIs (NVIDIA#9520) Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com> [None][chore] AutoDeploy update cuda stream manager for multi-device (NVIDIA#9575) Signed-off-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com> [TRTLLM-9391][chore] Automatically estimate required workspace. (NVIDIA#9535) Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com> [https://nvbugs/5708475][fix] Fix e2e eval accuracy for helix parallelism (NVIDIA#9647) Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com> [https://nvbugs/5561153][test] Fix log error for perf test (NVIDIA#9622) Signed-off-by: FredricZ-2007 <226039983+fredricz-20070104@users.noreply.github.com> [TRTLLM-8241][feat] Aliasing to comply to LlmArgs (NVIDIA#9586) Signed-off-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com> [None][chore] Add failed cases into waives.txt (NVIDIA#9593) Signed-off-by: Jie Li <lijie@nvidia.com> Co-authored-by: Jie Li <lijie@nvidia.com> [TRTLLM-6842][feat] Support Response API for general purpose (NVIDIA#9392) Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com> [None][test] Update Qwen3-next accuracy testing by setting the cuda … (NVIDIA#9613) Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com> [None][feat] 
update trtllm-gen nvfp4 kernels with better performance (NVIDIA#9510) Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com> [None][doc] Replace the tensorrt icon with torch icon on overview.md (NVIDIA#9644) Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com> [https://nvbugs/5705197][chore] Unwaive timeout disagg tests (NVIDIA#9637) Signed-off-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com> [https://nvbugs/5552132][fix] Enable LoRa for GPT OSS Torch (NVIDIA#8253) Signed-off-by: Michal Guzek <mguzek@nvidia.com> [None][fix] Fix wide ep MoE error (NVIDIA#9642) Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com> [https://nvbugs/5702795][fix] Remove the warning message for aten.log. (NVIDIA#9665) Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com> [https://nvbugs/5693853][fix] Fix error handling when querying machin… (NVIDIA#9483) Signed-off-by: Gal Hubara Agam <96368689+galagam@users.noreply.github.com> [OMNIML-2932] [feat] nvfp4 awq support (NVIDIA#8698) Signed-off-by: weimingc <17592131+meenchen@users.noreply.github.com> [NVIDIA#9643][fix] AutoDeploy: fix nano sharding config (NVIDIA#9668) Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com> [NVIDIA#9147][feat] AutoDeploy: Draft Target Speculative Decoding (NVIDIA#9275) Signed-off-by: Govind Ramnarayan <105831528+govind-ramnarayan@users.noreply.github.com> [None][feat] Update Qwen3CodeToolParser to align tool-calling parameters (NVIDIA#9540) Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com> [TRTLLM-7181][infra] Generate test results when pytest timeout happens (NVIDIA#9396) Signed-off-by: Yiqing Yan <yiqingy@nvidia.com> [None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com> [TRTLLM-9522][fix] restore `trtllm-serve mm_embedding_serve` (NVIDIA#9669) [TRTLLM-5093][infra] 
Write env variables to a file in the interactive debug session (NVIDIA#6792) Signed-off-by: Yiqing Yan <yiqingy@nvidia.com> [None][fix] fix error when processing batches containing both text and mm data (NVIDIA#8381) Signed-off-by: Nekofish-L <liuxiangyang@mail.ustc.edu.cn> [TRTLLM-7073][feat] Support torch compile for PP for Llama and DeepSeekV3 (NVIDIA#7838) Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com> [None][feat] Add weights initialization and context phase parser to layer-wise benchmarks (NVIDIA#9667) Signed-off-by: Tailing Yuan <yuantailing@gmail.com> [TRTLLM-8274][feat] Check if executor is shutdown in /health entrypoint (NVIDIA#9057) Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com> [NVIDIA#8733][feat] Add Llama4 MoE handling to AutoDeploy (NVIDIA#9556) Signed-off-by: Tal Cherckez <127761168+tcherckez-nvidia@users.noreply.github.com> Signed-off-by: tcherckez-nvidia <127761168+tcherckez-nvidia@users.noreply.github.com> Co-authored-by: Neta Zmora <nzmora@nvidia.com> [None][ci] unwaive tests (NVIDIA#9651) Signed-off-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com> [None][feat] Add NIXL-LIBFABRIC support (NVIDIA#9225) Signed-off-by: Yoray Zack <62789610+zackyoray@users.noreply.github.com> Signed-off-by: zackyoray <yorayz@nvidia.com> [None][test] rename wide ep and disagg metric name in perf test (NVIDIA#9704) Signed-off-by: Ruodi Lu <ruodil@users.noreply.github.com> Co-authored-by: Ruodi Lu <ruodil@users.noreply.github.com> [https://nvbugs/5467531][fix] Unwaive fused_moe all to all test with … (NVIDIA#9617) Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com> [None][fix] Recover TRTLLM MoE Perf for DEP (NVIDIA#9562) Signed-off-by: Anthony Chang <27950904+rosenrodt@users.noreply.github.com> [None][chore] Add failed cases into waives.txt (NVIDIA#9662) Signed-off-by: Xin He (SW-GPU) <200704525+xinhe-nv@users.noreply.github.com> Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com> 
Signed-off-by: Yanchao Lu <yanchaol@nvidia.com> Co-authored-by: Yanchao Lu <yanchaol@nvidia.com> [None][fix] Fix TLLM_SPEC_DECODE_FORCE_NUM_ACCEPTED_TOKENS for MTP/EAGLE (NVIDIA#9608) Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com> [None][infra] Add container notices and documentation (NVIDIA#9185) Signed-off-by: Parker Drake <pdrake@nvidia.com> [TRTLLM-5312][infra] Add triton trigger rules (NVIDIA#6440) Signed-off-by: Yiqing Yan <yiqingy@nvidia.com> [None][doc] Add feature docs for helix parallelism (NVIDIA#9684) Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com> [TRTLLM-9579][infra] Set mergeWaiveList stage UNSTABLE when there is any issue (NVIDIA#9692) Signed-off-by: Yiqing Yan <yiqingy@nvidia.com> [None][doc] Added line about partial reuse (NVIDIA#7846) Signed-off-by: thorjohnsen <41591019+thorjohnsen@users.noreply.github.com> [TRTLLM-8920][feat] decouple disagg service from fastapi (NVIDIA#8714) Signed-off-by: Lizhi Zhou <1432185+reasonsolo@users.noreply.github.com> [https://nvbugs/5633340][fix] start disagg workers and servers on free ports (NVIDIA#9694) Signed-off-by: Lizhi Zhou <1432185+reasonsolo@users.noreply.github.com> [TRTLLM-9562] [doc] Add Deployment Guide for Kimi K2 Thinking on TensorRT LLM - Blackwell (NVIDIA#9711) Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com> [NVIDIA#9602][feat] AutoDeploy: Support TRTLLM Sampler (NVIDIA#9641) Signed-off-by: Govind Ramnarayan <105831528+govind-ramnarayan@users.noreply.github.com> [None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com> [None] [tests] Unwaive EPLB tests (NVIDIA#9625) Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com> [https://nvbugs/5518713][test] Refactor core test lists by merging with llm_perf_cluster.yml (NVIDIA#9714) Signed-off-by: yufeiwu <230315618+yufeiwu-nv@users.noreply.github.com> [TRTLLM-7136][feat] 
Update load_weights method to include mapping parameter in checkpoint loaders (NVIDIA#9583) Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com> [None][refactor] Improve request processing function in sampler (NVIDIA#9671) Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com> [https://nvbugs/5670672][fix] Fix flaky KV connector tests (NVIDIA#9676) Signed-off-by: jthomson04 <jwillthomson19@gmail.com> [None][infra] Update allowed list 20251204 (NVIDIA#9718) Signed-off-by: Yuanjing Xue <197832395+yuanjingx87@users.noreply.github.com> [None][feat] AutoDeploy: Perf optimization for Attention and rmsnorm (NVIDIA#9719) Signed-off-by: Chenghao Zhang <211069071+nvchenghaoz@users.noreply.github.com> [None][chore] Waive flakey disagg tests (NVIDIA#9749) Signed-off-by: Mike Iovine <miovine@nvidia.com> [https://nvbugs/5601682][fix] Fix cacheTransceiver hang (NVIDIA#9311) Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [TRTLLM-9199][docs] KV Connector Docs (NVIDIA#9325) Signed-off-by: jthomson04 <jwillthomson19@gmail.com> Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [TRTLLM-9160][doc] add doc to llm_runtime.py (NVIDIA#9482) Signed-off-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [None][doc] VDR 1.0 trtllm-serve doc enhancement (NVIDIA#9443) Signed-off-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [TRTLLM-9086][doc] Clean up TODOs in documentation (NVIDIA#9292) 
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [TRTLLM-9157][doc] Guided decoding doc improvement (NVIDIA#9359) Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [None][infra] Updated Linux installation guide (NVIDIA#9485) Signed-off-by: Yiqing Yan <yiqingy@nvidia.com> Co-authored-by: Yanchao Lu <yanchaol@nvidia.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [TRTLLM-9075][doc] refine the slurm examples (NVIDIA#9548) Signed-off-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [TRTLLM-9093][doc] update hyper links in overview (NVIDIA#9568) Signed-off-by: junq <22017000+QiJune@users.noreply.github.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [TRTLLM-9092][doc] link to modelopt checkpoints in quick start guide (NVIDIA#9571) Signed-off-by: junq <22017000+QiJune@users.noreply.github.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com> [None][fix] Fix triton moe load_weight (NVIDIA#9649) Signed-off-by: shuyix <219646547+shuyixiong@users.noreply.github.com> [None][fix] fix a bug: deepseek_fp8_block_scales in TRTLLMGEN-MoE use 2D x_sf instead of 1D (NVIDIA#9658) Signed-off-by: xxi <xxi@nvidia.com> [TRTLLM-9372][feat] Enable CuteDSL MoE with Large EP (NVIDIA#9592) Signed-off-by: Enwei Zhu 
<21126786+syuoni@users.noreply.github.com> [TRTLLM-9522][chore] implement default `attach_multimodal_embeddings` (NVIDIA#9664) Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com> [TRTLLM-9660][feat] Convert cuteDSL GEMM to opt-in feature (NVIDIA#9682) Signed-off-by: Jonas Li <6110159+longlee0622@users.noreply.github.com> Co-authored-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com> [None][fix] enable hmac in RPC (NVIDIA#9745) Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com> [None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com> [https://nvbugs/5703953][fix] Preserving ip:port for trtllm-serve before initializing llm (NVIDIA#9646) Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com> [None][infra] Waive failed cases for main branch on 12/07 (NVIDIA#9769) Signed-off-by: qqiao <qqiao@nvidia.com> [None][fix] Several minor fixes to CI setting (NVIDIA#9765) Signed-off-by: Yanchao Lu <yanchaol@nvidia.com> [OMNIML-3036][doc] Re-branding TensorRT-Model-Optimizer as Nvidia Model-Optimizer (NVIDIA#9679) Signed-off-by: Chenjie Luo <chenjiel@nvidia.com> [None][feat] Enable NCCL_SYMMETRIC as default fallback for AllReduce (NVIDIA#9314) Signed-off-by: Ludwig Schneider <lschneider@nvidia.com> [TRTLLM-9000][feat] Add multi-node Perf Tests into CI (NVIDIA#8800) Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com> [None][test] add ntp tolerance in time metrics verification (NVIDIA#9741) Signed-off-by: zhengd-nv <200704041+zhengd-nv@users.noreply.github.com> [TRTLLM-9603][feat] Enable ConfigurableMoE test in the CI (NVIDIA#9645) [https://nvbugs/5422621][test] Add GB 200 WIDEEP test case for RCCA 5422621 (NVIDIA#9506) Signed-off-by: FredricZ-2007 <226039983+fredricz-20070104@users.noreply.github.com> [None][fix] Fix two tuning cache miss issues. 
(NVIDIA#9743) Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com> [None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com> [TRTLLM-9706] [doc] Update wide EP documents (NVIDIA#9724) Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com> [https://nvbugs/5666804][test] only adding sampler config for limited models (NVIDIA#9512) Signed-off-by: Ruodi Lu <ruodil@users.noreply.github.com> Co-authored-by: Ruodi Lu <ruodil@users.noreply.github.com> Co-authored-by: yufeiwu-nv <230315618+yufeiwu-nv@users.noreply.github.com> Co-authored-by: Larry Xu <197874197+LarryXFly@users.noreply.github.com> [None][infra] Waive failed cases for main on 12/08 (NVIDIA#9773) Signed-off-by: qqiao <qqiao@nvidia.com> [None][chore] Move the rocketkv e2e test to post-merge (NVIDIA#9768) Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com> [None][chore] Enable tvm_ffi for cute dsl nvfp4_gemm to reduce host overhead. 
(NVIDIA#9690) Signed-off-by: Mindy Li <11663212+limin2021@users.noreply.github.com> [TRTLLM-9431][perf] Enable multistream for Linear Attention in Qwen3-… (NVIDIA#9696) Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com> [None][chore] Remove closed bugs (NVIDIA#9770) Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com> [None][infra] update mooncake in docker images (NVIDIA#9584) Signed-off-by: zhengd-nv <200704041+zhengd-nv@users.noreply.github.com> Signed-off-by: Zheng Duan <200704041+zhengd-nv@users.noreply.github.com> [None][test] Add Kimi k2 WIDEEP perf and accuracy cases (NVIDIA#9686) Signed-off-by: FredricZ-2007 <226039983+fredricz-20070104@users.noreply.github.com> Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com> Co-authored-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com> [https://nvbugs/5527655][test] Add test case for RCCA 5527655 (NVIDIA#9511) Signed-off-by: FredricZ-2007 <226039983+fredricz-20070104@users.noreply.github.com> [http://nvbugs/5649010][fix] fix test_auto_scaling.py::test_worker_restart timeout (NVIDIA#9775) Signed-off-by: Lizhi Zhou <1432185+reasonsolo@users.noreply.github.com> [None][fix] Switch AutoDeploy's default allreduce strategy to NCCL (NVIDIA#9666) Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com> [TRTLLM-9506][fix] Fix AR for DeepSeek-R1 2 model path (NVIDIA#9661) Signed-off-by: qgai <qgai@nvidia.com> ray + updatew works trtllm works in async env trtllm works in sync and async env ray + updatew works rebase to the updated verl server mode still cherry pick still cherry pick still cherry pick integrated http interface hang at RyExecutor create workers ray.remote clean code use tensorrt_llm.rlhf_utils Signed-off-by: Liwei Ma <liweim@nvidia.com> placement, asyncllm, and basic tests Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com> connect sleep and wakeup; Add support to pass None to update_weights Signed-off-by: Erin Ho 
<14718778+hchings@users.noreply.github.com> Batching ctx for IFB scheduler Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com> accuracy WAR for TP>1: always use AllReduceStrategy.NCCL, refactored Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com> fix e2e integration Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com> update asyncllm, other nits Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com> fix init setup Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com> Fix TRTLLMSampler logprobs perf Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com> fix and cleanup Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com> fix server Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com> Revert "Batching ctx for IFB scheduler" This reverts commit b51aac0 Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com> update & address comments Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>
Signed-off-by: Venky Ganesh <23023424+venkywonka@users.noreply.github.com>
Description
Context for reviewers:
Today, deployment needs an `extra_llm_api_options` yaml file as well as the right env vars (like in this example in InferenceMax scripts); this PR allows specifying those env vars in the `extra_llm_api_options.yml` itself. The `--config` alias to `--extra_llm_api_options` aligns the UX with vLLM and makes intuitive sense.

Checklist
- Add a `--config` alias to `--extra_llm_api_options`. Both can now be used interchangeably across `trtllm-serve`, `trtllm-eval`, `trtllm-bench`.
- Add `env_overrides` to `LlmArgs` and appropriately update `api_stability/references/llm.yml`. The override happens at `BaseLLM.__init__` for the parent process, with a sanity update inside `worker_main` to make sure the spawned workers don't inherit the outdated env snapshot. This automatically gives the ability to specify overrides through the config/extra_llm_api_options yaml.
- Tests for the `--config` aliasing.
- Tests for `env_overrides` in the LLM API, with a real example propagating it into a worker process.
- `TRTLLM_ENABLE_PDL` was getting cached at import time. Change that to pull from the env on demand to appropriately respect the override.

Extended yaml example:
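The original yaml snippet did not survive extraction; as a sketch, an `extra_llm_api_options.yml` carrying env overrides could look like the following (`TRTLLM_ENABLE_PDL` is the variable discussed in this PR; the other keys and values are illustrative):

```yaml
# Illustrative extra_llm_api_options.yml.
# env_overrides is the field added by this PR; other keys are ordinary
# LLM API options shown for context.
env_overrides:
  TRTLLM_ENABLE_PDL: "1"
kv_cache_config:
  free_gpu_memory_fraction: 0.85
```

With the new alias, such a file can be passed as either `--extra_llm_api_options extra_llm_api_options.yml` or `--config extra_llm_api_options.yml` to `trtllm-serve`, `trtllm-eval`, or `trtllm-bench`.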
Known limitations / Future Work
The logger env var (`TLLM_LOG_LEVEL`) is a special case: it binds to a singleton class at import time, and one can override it by calling `logger.set_level()` once it has been set at import time.
Summary by CodeRabbit
New Features
`--config` CLI flag alias for easier configuration file specification across serve, eval, and benchmark commands.

Tests
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
`/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...`
Provide a user friendly way for developers to interact with a Jenkins server.
Run `/bot [-h|--help]` to print this help message.
See details below for each supported subcommand.
Details
`run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug (experimental)]`

Launch build/test pipelines. All previously running jobs will be killed.

- `--reuse-test (optional)pipeline-id` (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- `--disable-reuse-test` (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.
- `--disable-fail-fast` (OPTIONAL) : Disable fail fast on build/tests/infra failures.
- `--skip-test` (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- `--stage-list "A10-PyTorch-1, xxx"` (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- `--gpu-type "A30, H100_PCIe"` (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- `--test-backend "pytorch, cpp"` (OPTIONAL) : Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- `--only-multi-gpu-test` (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--disable-multi-gpu-test` (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--add-multi-gpu-test` (OPTIONAL) : Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
- `--post-merge` (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- `--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"` (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- `--detailed-log` (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- `--debug` (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the `stage-list` parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see `docs/source/reference/ci-overview.md` and the `scripts/test_to_stage_mapping.py` helper.

kill

`kill`
Kill all running builds associated with the pull request.

skip

`skip --comment COMMENT`
Skip testing for the latest commit on the pull request. `--comment "Reason for skipping build/test"` is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

`reuse-pipeline`
Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.