[None][test] Add Kimi k2 WIDEEP perf and accuracy cases #9686
Conversation
📝 Walkthrough
This pull request updates performance benchmarking configurations for disaggregated inference. Changes include disabling temporary-file cleanup in an executor, adding error logging to subprocess utilities, adding GPU resource requests (4 GPUs) and environment variables across 40+ YAML configuration files for different model variants, introducing new FP8 test configurations, adding test entries to benchmark lists, and updating the pytest configuration and log-path generation.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
Pre-merge checks: ❌ Failed checks (1 warning), ✅ Passed checks (2 passed)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 3
♻️ Duplicate comments (6)
tests/integration/defs/perf/disagg/test_configs/wideep/perf/deepseek-r1-fp4_8k1k_ctx2_gen1_dep32_bs128_eplb288_mtp3_ccb-DEFAULT.yaml (1)
18-18: Configuration additions mirror File 1; consistency confirmed. The GPU resource request and environment variable additions are identical to the first file and appropriately aligned with this configuration's `hardware.gpus_per_node: 4`. The speculative MTP configuration (lines 93–95, 115–117) is orthogonal to these changes.
Also applies to: 41-42
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_8k1k_ctx8_gen1_dep32_bs16_eplb0_mtp3_ccb-NIXL.yaml (1)
17-17: Configuration additions are consistent. Same pattern as other configs: GPU allocation matches the hardware setup, and environment variables provide consistent runtime controls.
Also applies to: 40-41
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP4_1k1k_ctx2_gen1_dep16_bs128_eplb0_mtp1_ccb-NIXL.yaml (1)
17-17: LGTM! Consistent configuration pattern applied correctly.
Also applies to: 40-41
tests/integration/defs/perf/disagg/test_configs/wideep/perf/deepseek-r1-fp4_1k1k_ctx2_gen1_dep16_bs128_eplb288_mtp3_ccb-NIXL.yaml (1)
18-18: LGTM! Consistent configuration pattern applied.
Also applies to: 41-42
tests/integration/defs/perf/disagg/test_configs/wideep/perf/Qwen3-235B-A22B-FP4_1k1k_ctx1_gen1_dep16_bs64_eplb288_mtp3_ccb-UCX.yaml (1)
18-18: LGTM! Configuration additions are consistent with the established pattern.
Also applies to: 41-42
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_8k1k_ctx6_gen1_dep16_bs64_eplb0_mtp0_ccb-NIXL.yaml (1)
17-17: LGTM! Configuration changes follow the established pattern across all test configs.
Also applies to: 40-41
🧹 Nitpick comments (2)
tests/integration/defs/perf/disagg/test_configs/wideep/perf/kimi-k2-thinking-fp4_8k1k_ctx8_gen1_dep32_bs256_eplb416_mtp0_ccb-UCX.yaml (1)
14-15: Verify placeholder values are populated before execution. The configuration contains multiple template placeholders (`<partition>`, `<account>`, `<container_mount>`, `<container_image>`, `<model_path>`, `<full_path_to_work_dir>`) that must be substituted with actual values at runtime. Ensure that the test runner or CI pipeline correctly injects these values before submitting the SLURM job or running the benchmark; a minimal substitution sketch follows.
Also applies to: 35-40
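For illustration only, here is a minimal substitution sketch; the helper name and injection mechanism are assumptions, not the repository's actual runner:

```python
from pathlib import Path
from typing import Dict


def render_config(template_path: str, values: Dict[str, str]) -> str:
    """Replace <placeholder> tokens in a YAML template with concrete values."""
    text = Path(template_path).read_text()
    for key, value in values.items():
        text = text.replace(f"<{key}>", value)
    return text


# Hypothetical values; the CI pipeline would supply real ones.
rendered = render_config(
    "config.yaml",
    {
        "partition": "batch",
        "account": "trtllm",
        "container_mount": "/mnt/data",
        "container_image": "trtllm.sqsh",
        "model_path": "/models/kimi-k2-thinking-fp4",
        "full_path_to_work_dir": "/work/run1",
    },
)
```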
tests/integration/defs/perf/disagg/execution/subprocess_utils.py (1)
60-63: Consider logging stderr only on command failure. The current implementation logs stderr for all commands, including successful ones. This could generate noise if commands write informational messages to stderr.
Consider this refinement to log stderr only when it's relevant:

```diff
     result = subprocess.run(
         *popenargs,
         stdout=subprocess.PIPE,
         stderr=subprocess.PIPE,
         timeout=timeout,
         check=True,
         **kwargs,
     )
-
-    # Log stderr if it exists
-    if result.stderr:
-        stderr_output = result.stderr.decode()
-        logger.error(f"Command stderr: {stderr_output}")
-
     return result.stdout.decode()
```

Then handle stderr in exception cases where it's caught in the caller (like in `executor.py` lines 246-249).
Alternatively, if stderr visibility is valuable for all commands, the current implementation is acceptable.
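One way to surface stderr only on failure is to rely on `check=True` raising `CalledProcessError`; a minimal sketch (assuming a module-level `logger`), not the repository's actual implementation:

```python
import logging
import subprocess

logger = logging.getLogger(__name__)


def run_and_capture(*popenargs, timeout=None, **kwargs) -> str:
    try:
        result = subprocess.run(
            *popenargs,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            timeout=timeout,
            check=True,
            **kwargs,
        )
    except subprocess.CalledProcessError as e:
        # stderr is logged only on failure, avoiding noise from commands
        # that write informational messages to stderr on success.
        logger.error("Command failed (rc=%s): %s", e.returncode,
                     e.stderr.decode() if e.stderr else "")
        raise
    return result.stdout.decode()
```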
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (52)
tests/integration/defs/perf/disagg/execution/executor.py (1 hunk)
tests/integration/defs/perf/disagg/execution/subprocess_utils.py (2 hunks)
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP4_1k1k_ctx1_gen1_dep16_bs64_eplb0_mtp3_ccb-NIXL.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP4_1k1k_ctx1_gen1_dep16_bs64_eplb0_mtp3_ccb-UCX.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP4_1k1k_ctx1_gen1_dep32_bs16_eplb0_mtp3_ccb-NIXL.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP4_1k1k_ctx1_gen1_dep32_bs16_eplb0_mtp3_ccb-UCX.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP4_1k1k_ctx1_gen4_tep8_bs32_eplb0_mtp0_ccb-NIXL.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP4_1k1k_ctx1_gen4_tep8_bs32_eplb0_mtp0_ccb-UCX.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP4_1k1k_ctx2_gen1_dep16_bs128_eplb0_mtp1_ccb-NIXL.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP4_1k1k_ctx2_gen1_dep16_bs128_eplb0_mtp1_ccb-UCX.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP8_1k1k_ctx1_gen1_tep8_bs32_eplb0_mtp0_ccb-NIXL.yaml (1 hunk)
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP8_1k1k_ctx1_gen1_tep8_bs32_eplb0_mtp0_ccb-UCX.yaml (1 hunk)
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_1k1k_ctx1_gen1_dep32_bs32_eplb0_mtp0_ccb-NIXL.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_1k1k_ctx1_gen1_dep32_bs32_eplb0_mtp0_ccb-UCX.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_1k1k_ctx1_gen4_tep8_bs32_eplb0_mtp0_ccb-NIXL.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_1k1k_ctx1_gen4_tep8_bs32_eplb0_mtp0_ccb-UCX.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_1k1k_ctx1_gen4_tep8_bs32_eplb0_mtp3_ccb-NIXL.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_1k1k_ctx1_gen4_tep8_bs32_eplb0_mtp3_ccb-UCX.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_1k1k_ctx2_gen1_dep16_bs128_eplb0_mtp3_ccb-NIXL.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_1k1k_ctx2_gen1_dep16_bs128_eplb0_mtp3_ccb-UCX.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_8k1k_ctx1_gen3_tep8_bs16_eplb0_mtp3_ccb-NIXL.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_8k1k_ctx1_gen3_tep8_bs16_eplb0_mtp3_ccb-UCX.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_8k1k_ctx1_gen3_tep8_bs32_eplb0_mtp0_ccb-NIXL.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_8k1k_ctx1_gen3_tep8_bs32_eplb0_mtp0_ccb-UCX.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_8k1k_ctx6_gen1_dep16_bs64_eplb0_mtp0_ccb-NIXL.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_8k1k_ctx6_gen1_dep16_bs64_eplb0_mtp0_ccb-UCX.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_8k1k_ctx8_gen1_dep32_bs16_eplb0_mtp3_ccb-NIXL.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_8k1k_ctx8_gen1_dep32_bs16_eplb0_mtp3_ccb-UCX.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/wideep/accuracy/deepseek-r1-fp4_1k1k_ctx2_gen1_dep16_bs128_eplb288_mtp3_ccb-NIXL.yaml (1 hunk)
tests/integration/defs/perf/disagg/test_configs/wideep/accuracy/kimi-k2-thinking-fp4_1k1k_ctx3_gen1_dep32_bs1024_eplb384_mtp0_ccb-UCX.yaml (1 hunk)
tests/integration/defs/perf/disagg/test_configs/wideep/perf/Qwen3-235B-A22B-FP4_1k1k_ctx1_gen1_dep16_bs64_eplb288_mtp3_ccb-NIXL.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/wideep/perf/Qwen3-235B-A22B-FP4_1k1k_ctx1_gen1_dep16_bs64_eplb288_mtp3_ccb-UCX.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/wideep/perf/Qwen3-235B-A22B-FP4_1k1k_ctx1_gen1_dep32_bs16_eplb288_mtp3_ccb-NIXL.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/wideep/perf/Qwen3-235B-A22B-FP4_1k1k_ctx1_gen1_dep32_bs16_eplb288_mtp3_ccb-UCX.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/wideep/perf/Qwen3-235B-A22B-FP4_1k1k_ctx2_gen1_dep16_bs128_eplb288_mtp1_ccb-NIXL.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/wideep/perf/Qwen3-235B-A22B-FP4_1k1k_ctx2_gen1_dep16_bs128_eplb288_mtp1_ccb-UCX.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/wideep/perf/deepseek-r1-fp4_1k1k_ctx1_gen1_dep32_bs32_eplb288_mtp0_ccb-NIXL.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/wideep/perf/deepseek-r1-fp4_1k1k_ctx1_gen1_dep32_bs32_eplb288_mtp0_ccb-UCX.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/wideep/perf/deepseek-r1-fp4_1k1k_ctx2_gen1_dep16_bs128_eplb288_mtp3_ccb-NIXL.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/wideep/perf/deepseek-r1-fp4_1k1k_ctx2_gen1_dep16_bs128_eplb288_mtp3_ccb-UCX.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/wideep/perf/deepseek-r1-fp4_1k1k_ctx2_gen1_dep48_bs16_eplb288_mtp3_ccb-DEFAULT.yaml (1 hunk)
tests/integration/defs/perf/disagg/test_configs/wideep/perf/deepseek-r1-fp4_8k1k_ctx2_gen1_dep32_bs128_eplb288_mtp3_ccb-DEFAULT.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/wideep/perf/deepseek-r1-fp4_8k1k_ctx6_gen1_dep16_bs64_eplb288_mtp0_ccb-NIXL.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/wideep/perf/deepseek-r1-fp4_8k1k_ctx6_gen1_dep16_bs64_eplb288_mtp0_ccb-UCX.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/wideep/perf/deepseek-r1-fp4_8k1k_ctx8_gen1_dep32_bs16_eplb288_mtp3_ccb-NIXL.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/wideep/perf/deepseek-r1-fp4_8k1k_ctx8_gen1_dep32_bs16_eplb288_mtp3_ccb-UCX.yaml (2 hunks)
tests/integration/defs/perf/disagg/test_configs/wideep/perf/kimi-k2-thinking-fp4_1k1k_ctx3_gen1_dep32_bs1024_eplb384_mtp0_ccb-UCX.yaml (1 hunk)
tests/integration/defs/perf/disagg/test_configs/wideep/perf/kimi-k2-thinking-fp4_8k1k_ctx8_gen1_dep32_bs256_eplb416_mtp0_ccb-UCX.yaml (1 hunk)
tests/integration/defs/perf/disagg/testlist/disagg.txt (1 hunk)
tests/integration/defs/perf/disagg/testlist/wideep.txt (1 hunk)
tests/integration/defs/perf/disagg/utils/common.py (2 hunks)
tests/integration/defs/pytest.ini (1 hunk)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces; do not use tabs
Always maintain the namespace when importing in Python, even if only one class or function from a module is used (e.g., use `from package.subpackage import foo` and then `foo.SomeClass()` instead of `from package.subpackage.foo import SomeClass`)
Python filenames should use snake_case (e.g., `some_file.py`)
Python class names should use PascalCase (e.g., `class SomeClass`)
Python function and method names should use snake_case (e.g., `def my_awesome_function():`)
Python local variable names should use snake_case, with prefix `k` for variable names that start with a number (e.g., `k_99th_percentile = ...`)
Python global variables should use upper snake_case with prefix `G` (e.g., `G_MY_GLOBAL = ...`)
Python constants should use upper snake_case (e.g., `MY_CONSTANT = ...`)
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Python comments should be reserved for code within a function, or interfaces that are local to a file
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with type and description (e.g., `self.x = 5` followed by `"""<type>: Description of 'x'"""`)
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except clause to the smallest set of specific errors possible instead of catching all exceptions
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible and use the else block to implement the logic
Files:
tests/integration/defs/perf/disagg/execution/executor.py
tests/integration/defs/perf/disagg/utils/common.py
tests/integration/defs/perf/disagg/execution/subprocess_utils.py
**/*.{cpp,h,cu,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
All TensorRT-LLM Open Source Software code files should contain an NVIDIA copyright header that includes the current year at the top
Files:
tests/integration/defs/perf/disagg/execution/executor.py
tests/integration/defs/perf/disagg/utils/common.py
tests/integration/defs/perf/disagg/execution/subprocess_utils.py
🧠 Learnings (7)
📚 Learning: 2025-08-26T09:49:04.956Z
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7192
File: tests/integration/test_lists/test-db/l0_dgx_b200.yml:56-72
Timestamp: 2025-08-26T09:49:04.956Z
Learning: In TensorRT-LLM test configuration files, the test scheduling system handles wildcard matching with special rules that prevent duplicate test execution even when the same tests appear in multiple yaml files with overlapping GPU wildcards (e.g., "*b200*" and "*gb200*").
Applied to files:
tests/integration/defs/perf/disagg/test_configs/wideep/perf/Qwen3-235B-A22B-FP4_1k1k_ctx1_gen1_dep16_bs64_eplb288_mtp3_ccb-NIXL.yaml
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP4_1k1k_ctx1_gen1_dep32_bs16_eplb0_mtp3_ccb-UCX.yaml
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP4_1k1k_ctx1_gen4_tep8_bs32_eplb0_mtp0_ccb-NIXL.yaml
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP4_1k1k_ctx1_gen1_dep16_bs64_eplb0_mtp3_ccb-NIXL.yaml
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP4_1k1k_ctx1_gen4_tep8_bs32_eplb0_mtp0_ccb-UCX.yaml
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_8k1k_ctx8_gen1_dep32_bs16_eplb0_mtp3_ccb-NIXL.yaml
tests/integration/defs/perf/disagg/test_configs/wideep/perf/Qwen3-235B-A22B-FP4_1k1k_ctx2_gen1_dep16_bs128_eplb288_mtp1_ccb-UCX.yaml
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP4_1k1k_ctx2_gen1_dep16_bs128_eplb0_mtp1_ccb-UCX.yaml
tests/integration/defs/perf/disagg/test_configs/wideep/perf/deepseek-r1-fp4_8k1k_ctx8_gen1_dep32_bs16_eplb288_mtp3_ccb-UCX.yaml
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP8_1k1k_ctx1_gen1_tep8_bs32_eplb0_mtp0_ccb-NIXL.yaml
tests/integration/defs/perf/disagg/test_configs/wideep/perf/deepseek-r1-fp4_8k1k_ctx6_gen1_dep16_bs64_eplb288_mtp0_ccb-NIXL.yaml
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_8k1k_ctx1_gen3_tep8_bs16_eplb0_mtp3_ccb-UCX.yaml
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_8k1k_ctx6_gen1_dep16_bs64_eplb0_mtp0_ccb-NIXL.yaml
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP8_1k1k_ctx1_gen1_tep8_bs32_eplb0_mtp0_ccb-UCX.yaml
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_1k1k_ctx1_gen1_dep32_bs32_eplb0_mtp0_ccb-UCX.yaml
tests/integration/defs/perf/disagg/test_configs/wideep/perf/deepseek-r1-fp4_1k1k_ctx2_gen1_dep48_bs16_eplb288_mtp3_ccb-DEFAULT.yaml
tests/integration/defs/perf/disagg/test_configs/wideep/perf/deepseek-r1-fp4_1k1k_ctx2_gen1_dep16_bs128_eplb288_mtp3_ccb-NIXL.yaml
tests/integration/defs/perf/disagg/test_configs/wideep/perf/deepseek-r1-fp4_8k1k_ctx6_gen1_dep16_bs64_eplb288_mtp0_ccb-UCX.yaml
tests/integration/defs/perf/disagg/test_configs/wideep/perf/Qwen3-235B-A22B-FP4_1k1k_ctx1_gen1_dep16_bs64_eplb288_mtp3_ccb-UCX.yaml
📚 Learning: 2025-08-29T14:07:45.863Z
Learnt from: EmmaQiaoCh
Repo: NVIDIA/TensorRT-LLM PR: 7370
File: tests/unittest/trt/model_api/test_model_quantization.py:24-27
Timestamp: 2025-08-29T14:07:45.863Z
Learning: In TensorRT-LLM's CI infrastructure, pytest skip markers (pytest.mark.skip) are properly honored even when test files have __main__ blocks that call test functions directly. The testing system correctly skips tests without requiring modifications to the __main__ block execution pattern.
Applied to files:
tests/integration/defs/pytest.ini
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.
Applied to files:
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP8_1k1k_ctx1_gen1_tep8_bs32_eplb0_mtp0_ccb-NIXL.yaml
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP8_1k1k_ctx1_gen1_tep8_bs32_eplb0_mtp0_ccb-UCX.yaml
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Applied to files:
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP8_1k1k_ctx1_gen1_tep8_bs32_eplb0_mtp0_ccb-NIXL.yaml
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP8_1k1k_ctx1_gen1_tep8_bs32_eplb0_mtp0_ccb-UCX.yaml
📚 Learning: 2025-08-20T07:43:36.447Z
Learnt from: ChristinaZ
Repo: NVIDIA/TensorRT-LLM PR: 7068
File: cpp/tensorrt_llm/kernels/moeTopKFuncs.cuh:169-172
Timestamp: 2025-08-20T07:43:36.447Z
Learning: In TensorRT-LLM MOE kernels, when processing up to 128 experts across 32 threads, each thread handles at most 4 experts (N < 5 constraint), where N represents candidates per thread rather than total system capacity.
Applied to files:
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP8_1k1k_ctx1_gen1_tep8_bs32_eplb0_mtp0_ccb-NIXL.yaml
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP8_1k1k_ctx1_gen1_tep8_bs32_eplb0_mtp0_ccb-UCX.yaml
📚 Learning: 2025-08-18T08:42:02.640Z
Learnt from: samuellees
Repo: NVIDIA/TensorRT-LLM PR: 6974
File: tensorrt_llm/serve/scripts/benchmark_dataset.py:558-566
Timestamp: 2025-08-18T08:42:02.640Z
Learning: In TensorRT-LLM's RandomDataset (tensorrt_llm/serve/scripts/benchmark_dataset.py), when using --random-token-ids option, sequence length accuracy is prioritized over semantic correctness for benchmarking purposes. The encode/decode operations should use skip_special_tokens=True and add_special_tokens=False to ensure exact target token lengths.
Applied to files:
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP8_1k1k_ctx1_gen1_tep8_bs32_eplb0_mtp0_ccb-NIXL.yaml
tests/integration/defs/perf/disagg/test_configs/wideep/perf/deepseek-r1-fp4_1k1k_ctx2_gen1_dep48_bs16_eplb288_mtp3_ccb-DEFAULT.yaml
📚 Learning: 2025-09-17T06:01:01.836Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7785
File: tests/integration/defs/perf/utils.py:321-333
Timestamp: 2025-09-17T06:01:01.836Z
Learning: In test infrastructure code for disaggregated serving tests, prefer logging errors and continuing execution rather than raising exceptions on timeout, to avoid disrupting test cleanup and causing cascading failures.
Applied to files:
tests/integration/defs/perf/disagg/execution/executor.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (58)
tests/integration/defs/perf/disagg/test_configs/wideep/perf/deepseek-r1-fp4_1k1k_ctx1_gen1_dep32_bs32_eplb288_mtp0_ccb-UCX.yaml (3)
18-18: GPU resource request aligns with hardware configuration. The SLURM `--gres=gpu:4` request (line 18) correctly matches the declared `gpus_per_node: 4` (line 31), ensuring resource consistency.
Also applies to: 31-31
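For reference, a minimal sketch of the pairing this comment describes (only these two fields; surrounding keys omitted):

```yaml
slurm:
  # Request four GPUs per node from SLURM...
  extra_args: "--gres=gpu:4"
hardware:
  # ...matching the per-node GPU count declared for the test.
  gpus_per_node: 4
```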
41-42: Verify consistency and safety of new environment variables across all modified files. These environment variables disable garbage collection and enable container device access, which is critical for performance benchmarks but potentially risky if not uniformly applied. The PR description mentions 40+ YAML files being modified with similar changes.
Ensure:
- All modified YAML files apply these same environment variable settings consistently.
- The `ENROOT_ALLOW_DEV=yes` setting is justified and does not introduce security gaps.
- Disabling GC does not negatively impact accuracy test reproducibility (though accuracy is currently disabled here).
A sketch of the shared environment block follows this list.
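As a reference point for that check, here is a minimal sketch of the shared environment block, using the variable names cited in this review (the exact layout in the repository may differ):

```yaml
environment:
  # Worker processes: verbose logging, GC disabled for stable timing, PDL on.
  worker_env_var: "TLLM_LOG_LEVEL=INFO TRTLLM_SERVER_DISABLE_GC=1 TRTLLM_WORKER_DISABLE_GC=1 TRTLLM_ENABLE_PDL=1"
  # Server process: GC disabled; ENROOT_ALLOW_DEV grants container device access.
  server_env_var: "TLLM_LOG_LEVEL=INFO TRTLLM_SERVER_DISABLE_GC=1 ENROOT_ALLOW_DEV=yes"
```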
1-110: Inconsistency detected in AI-generated summary. The enriched summary references a different file path (`eplb0`) than the file being reviewed (`eplb288`). Verify that the summary reflects the correct file and that all similar configuration updates are appropriately attributed.
tests/integration/defs/perf/disagg/test_configs/wideep/perf/Qwen3-235B-A22B-FP4_1k1k_ctx1_gen1_dep32_bs16_eplb288_mtp3_ccb-NIXL.yaml (2)
18-18: GPU resource request is consistent with hardware configuration. The `--gres=gpu:4` allocation aligns with `hardware.gpus_per_node: 4` on line 31. Configuration is correct.
41-42: Verify the intent of overlapping environment variables between worker and server configs. Line 41 includes `TRTLLM_SERVER_DISABLE_GC=1` in the worker environment, while line 42 sets it again in the server environment. This duplication suggests either:
- The worker_env_var should not include server-specific GC flags (only `TRTLLM_WORKER_DISABLE_GC=1`), or
- Both processes need both flags set in their respective environments for proper effect.
Clarify whether this is intentional for the disaggregated setup or if worker_env_var should be cleaned up to exclude server-scoped variables.
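Because these fields hold several KEY=VALUE pairs in one whitespace-separated string, whichever process consumes them has to split the string. A minimal illustration with a hypothetical helper (the actual harness may parse differently):

```python
from typing import Dict


def parse_env_var_string(env_str: str) -> Dict[str, str]:
    """Split a "KEY=VALUE KEY=VALUE ..." string into an environment mapping."""
    env = {}
    for pair in env_str.split():
        key, _, value = pair.partition("=")
        env[key] = value
    return env


print(parse_env_var_string(
    "TLLM_LOG_LEVEL=INFO TRTLLM_WORKER_DISABLE_GC=1 TRTLLM_ENABLE_PDL=1"))
# -> {'TLLM_LOG_LEVEL': 'INFO', 'TRTLLM_WORKER_DISABLE_GC': '1', 'TRTLLM_ENABLE_PDL': '1'}
```

Note that a plain whitespace split breaks if any value itself contains spaces.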
tests/integration/defs/perf/disagg/test_configs/wideep/accuracy/deepseek-r1-fp4_1k1k_ctx2_gen1_dep16_bs128_eplb288_mtp3_ccb-NIXL.yaml (2)
1-120: Inconsistency between AI summary and provided code segment. The AI-generated summary states that `environment.worker_env_var` and `environment.server_env_var` have been added with specific values (TLLM_LOG_LEVEL=INFO, TRTLLM_SERVER_DISABLE_GC=1, etc.), but these fields are not present in the provided code segment. The environment section (lines 40–46) contains only standard configuration fields.
Additionally, the filename referenced in the summary (`eplb0_mtp1_ccb`) differs from the actual file under review (`eplb288_mtp3_ccb`), which suggests the summary may apply to a different configuration file.
Please clarify: are the environment variables mentioned in the summary intended to be added to this file, or are they part of changes in other configuration files? If they should be added here, they should appear in the environment section starting at line 40.
24-24: GPU resource request via SLURM looks appropriate for disaggregated inference testing. The addition of `extra_args: "--gres=gpu:4"` aligns with the test configuration's hardware setup (4 GPUs per node per line 37) and is consistent with disaggregated inference requirements, where separate context and generation servers benefit from dedicated GPU allocation.
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_1k1k_ctx1_gen4_tep8_bs32_eplb0_mtp3_ccb-NIXL.yaml (2)
17-17: GPU resource request is appropriate. The `--gres=gpu:4` correctly aligns with the hardware configuration (num_gen_servers: 4 at line 32) and benchmark setup. ✓
40-41: Clarify intentional duplication of TRTLLM_SERVER_DISABLE_GC.
`TRTLLM_SERVER_DISABLE_GC=1` appears in both `worker_env_var` (line 40) and `server_env_var` (line 41). If both worker and server processes should receive this flag, the duplication is harmless but may indicate unclear intent.
- If all worker env vars should apply to both processes, consider documenting this.
- If this is unintentional, remove it from line 40 and keep only in line 41.
Otherwise, the environment variable configuration is appropriate for performance benchmarking (GC disabling ensures consistent results, PDL and logging flags support the benchmark harness).
tests/integration/defs/perf/disagg/test_configs/wideep/perf/Qwen3-235B-A22B-FP4_1k1k_ctx2_gen1_dep16_bs128_eplb288_mtp1_ccb-UCX.yaml (3)
18-18: GPU resource allocation is consistent with hardware configuration. Line 18 correctly specifies `--gres=gpu:4` for Slurm, which aligns with the `gpus_per_node: 4` declared in the hardware section (line 31). This ensures the job requests the correct GPU count.
41-42: Verify environment variable settings are correct for WIDEEP disaggregated inference. The addition of `worker_env_var` and `server_env_var` introduces garbage collection disabling, PDL enablement, and device access controls. While these appear aligned with disaggregated inference optimization patterns, please verify:
- These variable names and values are documented and correct for the TensorRT-LLM WIDEEP backend.
- The specific flags (e.g., `TRTLLM_ENABLE_PDL=1`, `ENROOT_ALLOW_DEV=yes`) are necessary for this benchmark configuration.
- There are no conflicting or redundant settings (e.g., `TRTLLM_SERVER_DISABLE_GC=1` appears in both lines 41 and 42).
86-86: Filename encoding may not reflect actual configuration parameter. The filename includes `eplb0` but line 86 sets `num_slots: 288`, suggesting the encoding key may need updating to reflect the actual load-balancer slot count (e.g., `eplb288` instead of `eplb0`). Verify the naming convention is intentional or update the filename for consistency.
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_8k1k_ctx1_gen3_tep8_bs32_eplb0_mtp0_ccb-NIXL.yaml (1)
17-17: Verify environment variable configuration and GC disabling strategy. The additions include Slurm GPU resource requests (line 17) and environment variables (lines 40–41) that disable garbage collection on both worker and server. While the YAML syntax is valid, several aspects need verification:
- GC disabling: Disabling `TRTLLM_SERVER_DISABLE_GC` and `TRTLLM_WORKER_DISABLE_GC` in production-like benchmarks may impact memory management and should be documented (why is GC disabled? expected memory growth?).
- ENROOT_ALLOW_DEV=yes: This grants device access to the container; clarify the security implications and whether this is necessary for benchmarking.
- Environment variable naming: Verify that `TRTLLM_*` and `TLLM_LOG_LEVEL` follow the project's official naming conventions.
Please search the codebase or documentation to confirm:
- Whether disabling GC during benchmarks is an intentional trade-off and what the expected behavior is.
- Whether `ENROOT_ALLOW_DEV=yes` is required for this configuration or if it's overly permissive.
- The official documentation for these environment variable flags.
Also applies to: 40-41
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_8k1k_ctx8_gen1_dep32_bs16_eplb0_mtp3_ccb-UCX.yaml (1)
17-17: Configuration additions are consistent with WIDEEP backend objectives. The environment and Slurm additions (lines 17, 40–41) mirror those in the NIXL configuration (File 1). This file includes `moe_config.backend: WIDEEP` (line 80), which aligns with the PR objective to add "Kimi k2 WIDEEP perf and accuracy cases." However, the same concerns regarding GC disabling and `ENROOT_ALLOW_DEV=yes` apply here.
Since the AI summary indicates 40+ YAML files were updated with these same additions, please verify that:
- These environment variable settings are intentional across all configurations.
- The configuration is documented or there are inline comments explaining the rationale for disabling GC globally.
Also applies to: 40-41
tests/integration/defs/perf/disagg/test_configs/wideep/perf/kimi-k2-thinking-fp4_8k1k_ctx8_gen1_dep32_bs256_eplb416_mtp0_ccb-UCX.yaml (1)
103-103: Clarify intent of disabling CUDA graphs for context server. The `ctx` section has `cuda_graph_config: null` while the `gen` section defines a detailed CUDA graph configuration. This asymmetry may be intentional (e.g., context encoding often doesn't benefit from fixed-graph optimization), but it warrants verification to ensure this is not an oversight.
Confirm whether disabling CUDA graphs for the context server is intentional, or if a similar configuration should be applied to both sections.
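For illustration, the asymmetry described above would look roughly like this; only `cuda_graph_config: null` on the ctx side is stated in this review, and the gen-side fields are assumptions:

```yaml
ctx:
  # Context server: CUDA graphs explicitly disabled.
  cuda_graph_config: null
gen:
  # Generation server: detailed CUDA graph settings (illustrative values only).
  cuda_graph_config:
    enable_padding: true
    batch_sizes: [1, 2, 4, 8, 16, 32, 64, 128, 256]
```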
tests/integration/defs/perf/disagg/test_configs/wideep/perf/deepseek-r1-fp4_8k1k_ctx6_gen1_dep16_bs64_eplb288_mtp0_ccb-NIXL.yaml (2)
18-18: GPU resource allocation aligns with hardware configuration. The Slurm `--gres=gpu:4` allocation on line 18 correctly corresponds to `gpus_per_node: 4` specified in the hardware section (line 31), ensuring proper GPU provisioning for this disaggregated inference benchmark.
41-42: Verify environment variable recognition and review filename encoding claim. The environment variables added (TRTLLM_SERVER_DISABLE_GC, TRTLLM_WORKER_DISABLE_GC, TRTLLM_ENABLE_PDL, ENROOT_ALLOW_DEV) should be confirmed as recognized by the TensorRT-LLM system. Additionally, the review's claim about filename encoding `eplb0` appears inconsistent with the actual filename, which encodes `eplb288` and matches the `num_slots: 288` setting; clarify whether this filename-to-config alignment is correct.
tests/integration/defs/perf/disagg/test_configs/wideep/perf/deepseek-r1-fp4_1k1k_ctx2_gen1_dep48_bs16_eplb288_mtp3_ccb-DEFAULT.yaml (5)
27-27: Verify the high concurrency value is intentional and tested. The `concurrency_list: '12288'` on line 27 is very high. Ensure this value has been:
- Validated in testing (not a typo or placeholder), and
- Confirmed to be achievable and meaningful for the DeepSeek-R1 model under these hardware constraints
1-12: Verify metadata consistency with actual configuration. Cross-reference the metadata fields (especially `config_index: 7`, `dataset_file`, and `model_dir_name`) against:
- The test harness configuration index mapping to ensure correctness, and
- The referenced dataset file path to confirm it exists and is properly versioned.
Also confirm that `supported_gpus: [GB200, GB300]` aligns with the test environment where this will run.
42-42: This environment variable is appropriate for its context; no action needed. `ENROOT_ALLOW_DEV=yes` is necessary for GPU access in containerized performance benchmarking. This setting is standard across the benchmark test configurations in the repository and aligns with the performance-tuning purpose of the other environment variables in this configuration (GC disabling, PDL enabling, etc.). Since this is a test configuration file rather than production code, the security implications are limited to the testing environment.
47-47: Verify whether accuracy-focused configurations exist in this PR. The `enable_accuracy_test: false` setting appears intentional given the "perf" designation throughout the file path. However, confirm that the PR includes corresponding accuracy test configurations (likely in a separate directory or config file) to ensure the PR objectives of adding "accuracy cases" are fully addressed. If only this performance configuration exists without accuracy counterparts, this warrants discussion.
54-55: Verify the tensor and MOE expert parallel sizes are appropriate for disaggregated MOE inference testing. The generation config specifies `tensor_parallel_size: 48` and `moe_expert_parallel_size: 48` with 4 GPUs allocated. For disaggregated MOE benchmarks using the WIDEEP backend, confirm whether these parameters represent logical expert/tensor distribution (appropriate for DeepSeek-R1's 48-expert architecture) or physical GPU requirements. The filename pattern `dep48` suggests 48-way distribution is intentional. If this is testing distributed MOE strategies across logical servers rather than requiring 48 physical GPUs, the configuration may be correct.
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP4_1k1k_ctx1_gen4_tep8_bs32_eplb0_mtp0_ccb-NIXL.yaml (2)
17-17: GPU resource request aligns with hardware configuration. Line 17 adds `--gres=gpu:4`, which correctly matches `gpus_per_node: 4` specified in the hardware section.
40-41: Environment variable format requires verification. The `worker_env_var` and `server_env_var` fields contain space-separated environment variables as a single string. Verify that the YAML parser and downstream SLURM/container runtime correctly split and apply these as separate environment variables, or confirm that this format is intentionally designed to be passed as a single string value.
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_1k1k_ctx2_gen1_dep16_bs128_eplb0_mtp3_ccb-UCX.yaml (1)
17-17: Consistent GPU and environment configuration across files. This file follows the same pattern as other configurations, with matching GPU resource request and environment variables. The consistency is appropriate for standardized benchmark setup.
Also applies to: 40-41
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP4_1k1k_ctx1_gen1_dep16_bs64_eplb0_mtp3_ccb-UCX.yaml (1)
17-17: Configuration follows standardized pattern across variant backends. GPU resource request and environment variables are consistent with other configurations, properly supporting both WIDEEP and traditional MOE backends.
Also applies to: 40-41
tests/integration/defs/perf/disagg/test_configs/wideep/perf/Qwen3-235B-A22B-FP4_1k1k_ctx2_gen1_dep16_bs128_eplb288_mtp1_ccb-NIXL.yaml (1)
18-18: WIDEEP-specific configuration properly integrated with resource/environment setup. The additional metadata field (dataset_file) shifts line numbers appropriately. Load balancer configuration with 288 slots aligns with the specialized WIDEEP benchmark case.
Also applies to: 41-42
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP4_1k1k_ctx1_gen1_dep32_bs16_eplb0_mtp3_ccb-NIXL.yaml (1)
17-17: Resource configuration appropriate for aggressive parallelism settings. With tensor_parallel_size and moe_expert_parallel_size at 32, the 4-GPU allocation and environment controls ensure proper distributed execution.
Also applies to: 40-41
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_1k1k_ctx1_gen4_tep8_bs32_eplb0_mtp0_ccb-NIXL.yaml (1)
17-17: GPU allocation for multi-server generation topology. With 4 generation servers and 4 GPUs total (gpus_per_node: 4), verify that the resource allocation aligns with the intended distributed generation strategy. The TRTLLM MOE backend with MNNVL allreduce may have specific GPU requirements.
Confirm that 4 GPUs are sufficient for the 4-server generation topology with TRTLLM backend. Check related documentation or test infrastructure for expected GPU-to-server ratios.
Also applies to: 40-41
tests/integration/defs/perf/disagg/test_configs/wideep/perf/deepseek-r1-fp4_1k1k_ctx1_gen1_dep32_bs32_eplb288_mtp0_ccb-NIXL.yaml (1)
18-18: WIDEEP-specialized configuration with load balancer properly initialized.The load balancer with 288 slots (matching filename pattern) is correctly configured alongside the GPU resource request and environment variables.
Also applies to: 41-42
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_8k1k_ctx1_gen3_tep8_bs16_eplb0_mtp3_ccb-UCX.yaml (1)
17-17: Resource configuration appropriate for extended context length.With 8k input length and correspondingly larger token buffers (8448), the GPU resource request and environment configuration remain consistent with shorter context variants.
Also applies to: 40-41
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP4_1k1k_ctx2_gen1_dep16_bs128_eplb0_mtp1_ccb-UCX.yaml (1)
17-17: LGTM! GPU resource allocation and environment configuration added consistently.The additions align GPU resource requests with the hardware configuration (gpus_per_node: 4) and standardize environment variables across test configurations. The environment flags enable logging, disable GC for performance benchmarking stability, enable PDL features, and grant device access needed for GPU workloads.
Note: This review applies to all YAML configuration files in this PR with identical changes (lines 17-18 for slurm.extra_args, lines 40-42 for environment variables).
Also applies to: 40-41
tests/integration/defs/perf/disagg/utils/common.py (1)
173-174: Verify that date prefix doesn't cause issues across midnight boundaries.The date prefix is computed via
datetime.now()at runtime. If this function is called multiple times during a test run or if log operations span midnight, tests could generate logs under different date directories. This could cause issues if test execution or analysis tools expect a consistent log directory structure within a single run.tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_1k1k_ctx1_gen1_dep32_bs32_eplb0_mtp0_ccb-UCX.yaml (1)
17-17: LGTM! Configuration additions align with hardware requirements. The GPU resource request (4 GPUs) matches the `hardware.gpus_per_node: 4` setting, and the environment variables provide runtime controls for logging, garbage collection, and device access. These changes are consistent across multiple test configurations in this PR.
Also applies to: 40-41
tests/integration/defs/perf/disagg/execution/executor.py (1)
252-253: Preserving temp config on failure aids debugging. The commented-out cleanup allows the temporary configuration file to persist when job submission fails, making it easier to diagnose issues. The `backup_logs` method will handle cleanup on successful job completion or cancellation.
Based on learnings, this aligns with the preference for preserving context on errors in test infrastructure.
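The general pattern being endorsed here, as a minimal sketch (names are hypothetical, not the repository's actual executor code):

```python
import os
import tempfile


def submit_job(config_text: str, submit_fn):
    """Write a rendered config to a temp file and submit it to the scheduler."""
    fd, tmp_path = tempfile.mkstemp(suffix=".yaml")
    with os.fdopen(fd, "w") as f:
        f.write(config_text)
    # Deliberately no cleanup here: if submit_fn raises, tmp_path persists
    # for debugging; a later step (e.g., a backup_logs pass) removes it
    # after a successful run or cancellation.
    job_id = submit_fn(tmp_path)
    return job_id, tmp_path
```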
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP4_1k1k_ctx1_gen4_tep8_bs32_eplb0_mtp0_ccb-UCX.yaml (1)
17-17: Configuration additions are consistent with established patterns. The GPU resource request and environment variables align with other benchmark configs in the PR and match the hardware configuration (4 GPUs requested, `gpus_per_node: 4`).
Also applies to: 40-41
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_8k1k_ctx1_gen3_tep8_bs16_eplb0_mtp3_ccb-NIXL.yaml (1)
17-17: Configuration additions follow the consistent pattern; NIXL backend properly configured. Environment variables and GPU resource request are identical to other configs and appropriate for this DeepSeek variant.
Also applies to: 40-41
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP4_1k1k_ctx1_gen1_dep32_bs16_eplb0_mtp3_ccb-UCX.yaml (1)
17-17: Configuration additions consistent; WIDEEP backend aligns with PR objective. This configuration uses higher parallelism (32×32 tensor/MOE) with a single generation server and the WIDEEP MOE backend, which matches the PR's focus on adding Kimi k2 WIDEEP perf cases.
Also applies to: 40-41
tests/integration/defs/perf/disagg/test_configs/wideep/perf/Qwen3-235B-A22B-FP4_1k1k_ctx1_gen1_dep32_bs16_eplb288_mtp3_ccb-UCX.yaml (1)
18-18: Primary WIDEEP configuration properly structured with advanced MOE features. This wideep/ subdirectory config includes the full set of WIDEEP-specific optimizations (num_slots: 288, low_precision_moe_combine, NVTX markers) alongside consistent GPU resource and environment variable configuration.
Also applies to: 41-42
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_8k1k_ctx1_gen3_tep8_bs32_eplb0_mtp0_ccb-UCX.yaml (1)
17-17: DeepSeek 8k configuration properly scaled for larger context. GPU resource and environment variable additions are consistent; the configuration appropriately scales buffer sizes and token limits for the 8k context length.
Also applies to: 40-41
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP8_1k1k_ctx1_gen1_tep8_bs32_eplb0_mtp0_ccb-NIXL.yaml (2)
1-4: Verify nvbugs comment format and link validity. Line 1 includes `# nvbugs: 5561153` as a YAML comment. Please confirm this format properly links to your bug tracking system and is the correct syntax for these benchmark configs.
1-95: New FP8 configuration structure is complete and properly scaled. The new FP8 variant appropriately reduces parallelism (4×4 vs 8×8 in FP4), enables block reuse for memory efficiency, and includes all required resource and environment variable specifications. Configuration structure mirrors FP4 variants with precision correctly specified throughout.
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_8k1k_ctx6_gen1_dep16_bs64_eplb0_mtp0_ccb-UCX.yaml (1)
17-17: WIDEEP configuration with scaled context server architecture. This DeepSeek configuration properly scales to 6 context servers with the WIDEEP MOE backend, supporting high-concurrency (1075) testing. All resource and environment additions are consistent with established patterns.
Also applies to: 40-41
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP8_1k1k_ctx1_gen1_tep8_bs32_eplb0_mtp0_ccb-UCX.yaml (2)
1-4: Verify config_index uniqueness and dual backend nvbugs reference. This new FP8-UCX config shares `config_index: 21` with the preceding FP8-NIXL config (File 6), and both reference `nvbugs: 5561153`. Please confirm: (1) whether config_index values must be unique across all config files, and (2) whether both UCX and NIXL backends are affected by the same issue or if these should reference different bugs.
1-95: New FP8-UCX configuration properly structured with appropriate backend selection. Configuration structure and all technical parameters (parallelism, batch sizes, block reuse, kv_cache settings) are consistent with the corresponding FP8-NIXL variant. The UCX backend selection appropriately mirrors the existing FP4 variant pattern (both FP4 and FP8 have UCX and NIXL options).
tests/integration/defs/perf/disagg/test_configs/wideep/accuracy/kimi-k2-thinking-fp4_1k1k_ctx3_gen1_dep32_bs1024_eplb384_mtp0_ccb-UCX.yaml (1)
1-118: Configuration structure and completeness look good overall. Apart from the duplicate batch size, the accuracy test configuration is properly structured with all required sections (metadata, slurm, benchmark, hardware, environment, worker_config). The accuracy test parameters (gsm8k task, expected value 0.9454) are clearly defined.
tests/integration/defs/perf/disagg/testlist/wideep.txt (2)
10-12: Verify test list entries reference existing configuration files. The new test entries follow the expected naming convention and correspond to the provided configuration files (File 1 for accuracy, File 8 for 1k1k perf). However, line 12 references a configuration for `kimi-k2-thinking-fp4_8k1k_ctx8_gen1_dep32_bs256_eplb416_mtp0_ccb-UCX` that was not provided for review.
Please confirm that the following configuration file exists in the repository:
tests/integration/defs/perf/disagg/test_configs/wideep/perf/kimi-k2-thinking-fp4_8k1k_ctx8_gen1_dep32_bs256_eplb416_mtp0_ccb-UCX.yaml
Also applies to: 20-20
1-20: Test list formatting and structure are correct. The new entries follow the established naming and formatting conventions, and the test identifiers are properly formatted.
tests/integration/defs/perf/disagg/test_configs/wideep/perf/deepseek-r1-fp4_1k1k_ctx2_gen1_dep16_bs128_eplb288_mtp3_ccb-UCX.yaml (1)
18-18: Configuration additions are appropriate for performance testing. The GPU resource request and environment variable settings are consistent with disaggregated inference performance benchmarking best practices. The environment variables enable optimized memory management and logging configurations.
Also applies to: 41-42
tests/integration/defs/pytest.ini (2)
9-9: pytest.ini norecursedirs update is correct. Adding `./perf/disagg` prevents pytest from recursively discovering tests in the disaggregated benchmark directory during normal test runs, which is appropriate for performance/benchmarking tests that require specialized infrastructure.
13-17: New pytest markers are well-motivated. The five new markers (skip_less_device_memory, skip_less_host_memory, support_fp8, skip_device_not_contain, timeout) align with the new test configurations and resource management requirements for disaggregated inference benchmarks. A sketch follows.
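A rough sketch of the pytest.ini shape these two comments describe; only the marker names and the `./perf/disagg` entry come from this review, and the marker descriptions and placement are assumptions:

```ini
[pytest]
# Existing norecursedirs entries elided; the new entry keeps the
# disaggregated benchmark directory out of normal collection.
norecursedirs = ... ./perf/disagg

markers =
    skip_less_device_memory(mb): skip when GPU memory is below a threshold
    skip_less_host_memory(mb): skip when host memory is below a threshold
    support_fp8: test requires FP8-capable hardware
    skip_device_not_contain(names): skip unless the device name matches
    timeout(seconds): per-test timeout
```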
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_1k1k_ctx1_gen4_tep8_bs32_eplb0_mtp0_ccb-UCX.yaml (1)
17-17: Configuration additions follow the established pattern. The GPU resource request and environment variable settings are consistent with other disaggregated inference configurations and support the performance benchmarking infrastructure.
Also applies to: 40-41
tests/integration/defs/perf/disagg/testlist/disagg.txt (2)
19-20: Verify FP8 Qwen3 configuration files exist. The new FP8 test entries reference configurations for Qwen3-235B-A22B in both NIXL and UCX variants that were not provided for review.
Please confirm that the following configuration files exist in the repository:
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP8_1k1k_ctx1_gen1_tep8_bs32_eplb0_mtp0_ccb-NIXL.yaml
tests/integration/defs/perf/disagg/test_configs/disagg/perf/Qwen3-235B-A22B-FP8_1k1k_ctx1_gen1_tep8_bs32_eplb0_mtp0_ccb-UCX.yaml
1-26: Test list structure and formatting are correct. The new entries follow established naming conventions and test path formatting. Existing entries remain unchanged.
tests/integration/defs/perf/disagg/test_configs/disagg/perf/deepseek-r1-fp4_1k1k_ctx1_gen4_tep8_bs32_eplb0_mtp3_ccb-UCX.yaml (1)
17-17: Configuration additions are consistent and appropriate. The GPU resource allocation and environment variable settings align with the disaggregated inference performance benchmarking infrastructure.
Also applies to: 40-41
tests/integration/defs/perf/disagg/test_configs/wideep/perf/kimi-k2-thinking-fp4_1k1k_ctx3_gen1_dep32_bs1024_eplb384_mtp0_ccb-UCX.yaml (2)
59-75: Batch sizes configuration is correct in this perf variant. Unlike the accuracy configuration (File 1), this perf configuration does not have the duplicate 1024 entry. The batch sizes array is properly structured.
1-112: WIDEEP perf configuration for Kimi K2 is properly structured. The new performance test configuration includes all required sections with appropriate settings for WIDEEP disaggregated inference benchmarking. The UCX cache transceiver backend and MOE configuration are properly specified. This configuration correctly pairs with the accuracy test configuration (File 1).
tests/integration/defs/perf/disagg/test_configs/wideep/perf/deepseek-r1-fp4_8k1k_ctx6_gen1_dep16_bs64_eplb288_mtp0_ccb-UCX.yaml (1)
18-18: GPU resource request is consistent with configuration. The `--gres=gpu:4` argument correctly aligns with the `gpus_per_node: 4` configuration on line 31, ensuring the SLURM job requests the appropriate GPU resources.
7bbda50 to e445c7b (Compare)
/bot run -skip-pipeline
/bot run --skip-test
PR_Github #27250 Bot args parsing error: usage: /bot [-h]
PR_Github #27253 [ run ] triggered by Bot. Commit: |
5b4778f to d2aa8ad (Compare)
PR_Github #27253 [ run ] completed with state
Signed-off-by: FredricZ-2007 <226039983+fredricz-20070104@users.noreply.github.com>
Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
5a6baf8 to 0848ca8 (Compare)
/bot --reuse-pipeline
GitHub Bot Help
Provide a user-friendly way for developers to interact with a Jenkins server. See details below for each supported subcommand.
run
Launch build/test pipelines. All previously running jobs will be killed.
kill
Kill all running builds associated with pull request.
skip
Skip testing for latest commit on pull request.
reuse-pipeline
Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
/bot reuse-pipeline
PR_Github #27275 [ reuse-pipeline ] triggered by Bot. Commit: |
PR_Github #27275 [ reuse-pipeline ] completed with state
…VIDIA#8779) The performance results of some kernels could be easily affected by the warm/cold L2 cache status. To achieve more precise profiling results, the L2 cache is cleared for every execution by the circular buffer method for better benchmarking during autotuning. Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
[None][infra] Waive failed cases for main branch on 11/25 (NVIDIA#9429) Signed-off-by: qqiao <qqiao@nvidia.com>
[NVIDIA#8391][chore] test_perf.py to lock clocks read from gpu_configs.yml instead of max freq (NVIDIA#9409) Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com>
[None][ci] Move more test stages to use OCI machines (NVIDIA#9395) Signed-off-by: Yanchao Lu <yanchaol@nvidia.com> Co-authored-by: Matt Lefebvre <matthewelefebvre@gmail.com>
[None][feat] Improve TRTLLM MoE in small hidden size throughput cases (NVIDIA#9377) Signed-off-by: Anthony Chang <27950904+rosenrodt@users.noreply.github.com>
[https://nvbugs/5537996][fix] Let KV cache manager block initialization be aware whether it is doing a dry run or not (NVIDIA#9093) Before this commit, the kv cache manager does the same regardless, which causes a mis-calculation in free memory available to allocate for the KV cache manager, hence causing a crash. This commit fixes this by letting KV cache manager initialization be aware whether it is doing the dry run or not. If it is a dry run, use the max_tokens setting that is already pre-calculated and filled into kv_cache_config.max_tokens. Signed-off-by: eopXD <yuehtingc@nvidia.com>
[https://nvbugs/5667922][fix] Update long context evaluation config (NVIDIA#9426) Signed-off-by: mni <125171826+baize97@users.noreply.github.com>
[None][fix] Mitigate test timeout issues (NVIDIA#9445) Signed-off-by: Shixiaowei02 <39303645+Shixiaowei02@users.noreply.github.com>
[None][chore] Fix trtllm-eval for PyTorchLLM (NVIDIA#9427) Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
[None][feat] Add a parser to layer-wise benchmarks (NVIDIA#9440) Signed-off-by: Tailing Yuan <yuantailing@gmail.com>
[None][feat] Support custom chat template for tool calling (NVIDIA#9297) Signed-off-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com>
[TRTLLM-8160][feat] Add draft token tree runtime on CDL (NVIDIA#8586) Signed-off-by: Yue Weng <25103990+yweng0828@users.noreply.github.com>
[None][ci] waive a test (NVIDIA#9458) Signed-off-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com>
[https://nvbugs/5680905][fix] Relax the MMLU accuracy requirement for DS-v3.2 (NVIDIA#9439) Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
[TRTLLM-8376][feat] top-p optimization (removes redundant softmax) (NVIDIA#9411) Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
[TRTLLM-9490][feat] use FlashInfer's top_k_sampling_from_probs (NVIDIA#9457) Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
[https://nvbugs/5647400] [fix] Enlarged the AllReduce workspace size to 64MB. Added AllReduce strategy to AD config. (NVIDIA#9145) Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com>
[TRTLLM-909][feat] Overlap context chunks in pipeline parallel mode (NVIDIA#9308) Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
[None][chore] AutoDeploy add multi stream moe pass to default.yaml (NVIDIA#9430) Signed-off-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>
[https://nvbugs/5685143][fix] avoid cudaFree overlap with cuda graph (NVIDIA#9438) Signed-off-by: Chuang Zhu <111838961+chuangz0@users.noreply.github.com>
[None][chore] Bump version to 1.2.0rc5 (NVIDIA#9455) Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
[TRTLLM-8936][test] Add disagg and wideep multi-node multi-gpu test cases (NVIDIA#9356) Signed-off-by: FredricZ-2007 <226039983+fredricz-20070104@users.noreply.github.com>
[None][ci] move some slow test cases of DGX-B200 to post merge (NVIDIA#9467) Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
[TRTLLM-9293][feat] Enable partial weight loading to support streaming update weights (NVIDIA#9224) Signed-off-by: shuyix <219646547+shuyixiong@users.noreply.github.com>
[None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>
[TRTLLM-9264][fix] Add accuracy/unit tests/doc for phi4mm (NVIDIA#9246) Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com>
[https://nvbugs/5580099][fix] Cherry pick IMA issue fix from release/1.1 (NVIDIA#9032) Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com>
[None][chore] Upgrade CuteDSL to 4.3.0 (NVIDIA#9444) Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
[None][feat] Support MLA chunked prefill for DeepSeek V3.2 model (NVIDIA#9376) Signed-off-by: Chang Liu (Enterprise Products) <9713593+chang-l@users.noreply.github.com>
[None][feat] Add environment variable to force spec-dec number of accepted tokens (NVIDIA#9371) Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com>
[None][infra] Update allowed list 2025.11.25 (NVIDIA#9468) Signed-off-by: Yuanjing Xue <197832395+yuanjingx87@users.noreply.github.com>
[None][infra] Fail the pipeline when slurm ssh dropped (NVIDIA#9157) Signed-off-by: Yuanjing Xue <197832395+yuanjingx87@users.noreply.github.com>
[None][feat] AutoDeploy: Remove redundant copies in mamba layers (NVIDIA#9461) Signed-off-by: Chenghao Zhang <211069071+nvchenghaoz@users.noreply.github.com> Co-authored-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>
[None][feat] AutoDeploy: Add A_log fusion for Mamba layers (NVIDIA#9422) Signed-off-by: Chenghao Zhang <211069071+nvchenghaoz@users.noreply.github.com>
[None][ci] Waive blackwell test on spec gate. (NVIDIA#9502) Signed-off-by: Zheyu Fu <zheyuf@NVIDIA.com>
[https://nvbugs/5608930][fix] Fix a typo (NVIDIA#9487) Signed-off-by: Shixiaowei02 <39303645+Shixiaowei02@users.noreply.github.com>
[NVIDIA#9463][feat] Add revision option to trtllm commands (NVIDIA#9498) Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com>
[TRTLLM-9085][doc] fix math formula rendering issues (NVIDIA#9481) Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
[None][chore] update comments in llm_args.py (NVIDIA#9472) Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
[None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>
[https://nvbugs/5680310][fix] Fix ctx only timed out test (NVIDIA#9410) Signed-off-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com>
[https://nvbugs/5547414][fix] enable case after using local cache model (NVIDIA#9473) Signed-off-by: Hui Gao <huig@nvidia.com>
[None][fix] Replace PYTORCH_CUDA_ALLOC_CONF with PYTORCH_ALLOC_CONF to fix deprecation warning (NVIDIA#9294) Signed-off-by: Jiagan Cheng <jiaganc@nvidia.com>
[https://nvbugs/5698581][fix] Init draft tokens for CUDA graph dummy request (NVIDIA#9505) Signed-off-by: ziyixiong-nv <219238287+ziyixiong-nv@users.noreply.github.com>
[None][infra] Waive failed case in pre-merge on 11/27 (NVIDIA#9507) Signed-off-by: qqiao <qqiao@nvidia.com>
[TRTLLM-9513][docs] Qwen3 deployment guide (NVIDIA#9488) Signed-off-by: Lanyu Liao <laliao@laliao-mlt.client.nvidia.com> Co-authored-by: Lanyu Liao <laliao@laliao-mlt.client.nvidia.com>
[None][chore] revert batch_size=1 to prevent timeout and lower accuracy reference by 0.12% as a WAR (NVIDIA#9447) Signed-off-by: Lizhi Zhou <1432185+reasonsolo@users.noreply.github.com> Co-authored-by: Shi Xiaowei <39303645+Shixiaowei02@users.noreply.github.com>
[TRTLLM-9279][infra] Use flexcache for gh200 nodes since they locate in Austin (NVIDIA#9405) Signed-off-by: qqiao <qqiao@nvidia.com> Signed-off-by: Emma Qiao <qqiao@nvidia.com> Co-authored-by: Yanchao Lu <yanchaol@nvidia.com>
[cherry-pick][https://nvbugs/5670793][fix] Solve trtllm-serve launch_disaggregated issue (NVIDIA#9346) Signed-off-by: xxi <xxi@nvidia.com>
[None][infra] Fix Slurm job script (NVIDIA#9508) Signed-off-by: Yuanjing Xue <197832395+yuanjingx87@users.noreply.github.com>
[None][fix] change allreduce workspace dtype to torch.int64 to avoid overflow (NVIDIA#9479) Signed-off-by: Zhenhuan Chen <zhenhuanc@nvidia.com>
[None][feat] add qwen3-next CI test of accuracy on BF16 and NVFP4 (NVIDIA#9330) Signed-off-by: jiant <107457950+JadoTu@users.noreply.github.com>
[None][fix] fix TP support for DeepSeek-V3.2 on hopper (NVIDIA#9484) Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
[TRTLLM-9389][chore] Refactor AlltoallMethodType. (NVIDIA#9388) Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com>
[https://nvbugs/5674665][chore] Add test coverage for https://nvbugspro.nvidia.com/bug/5674665 (NVIDIA#9518) Signed-off-by: eopXD <yuehtingc@nvidia.com>
[TRTLLM-7288][infra] Download merged waive list in slurm script (NVIDIA#8999) Signed-off-by: Yiqing Yan <yiqingy@nvidia.com> Signed-off-by: Yanchao Lu <yanchaol@nvidia.com> Co-authored-by: Yanchao Lu <yanchaol@nvidia.com>
[https://nvbugs/5687820][fix] Remove self.abort() in DetokenizedGenerationResult (NVIDIA#9449) Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
[NVIDIA#9150][feat] AutoDeploy Nemotron-Flash support (NVIDIA#9504) Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
[None] [chore] Update to cutlass 4.3 (NVIDIA#8637) Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
[https://nvbugs/5637037][chore] Update waive lists. (NVIDIA#9386) Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com> Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com> Co-authored-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
[None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>
[TRTLLM-8970][infra] Fix generate report when has isolation test result (NVIDIA#8861) Signed-off-by: qqiao <qqiao@nvidia.com> Signed-off-by: Emma Qiao <qqiao@nvidia.com>
[https://nvbugs/5685015][fix] Update invalid max_token test (NVIDIA#9435) Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com>
[None][fix] Fix on-disk cache and revise logger/statistics for AutoTuner. (NVIDIA#9211) Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
[https://nvbugs/5689658][test] Fix gpu lock issue running on cluster (NVIDIA#9441) Signed-off-by: yufeiwu <230315618+yufeiwu-nv@users.noreply.github.com>
[None][chore] add spec_decoding configs in perf benchmark scripts and fix typos (NVIDIA#9533) Signed-off-by: Lanyu Liao <lancelly@users.noreply.github.com> Co-authored-by: Lanyu Liao <lancelly@users.noreply.github.com>
[None][fix] Remove FP8 K/V buffer from TRTLLM sparse MLA attention kernel (NVIDIA#9529) Signed-off-by: Chang Liu (Enterprise Products) <9713593+chang-l@users.noreply.github.com>
[None] [chore] Enhancements and clean up to slurm scripts (NVIDIA#9493) Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
[None][chore] Revert "[None][fix] change allreduce workspace dtype to torch.int64 t… (NVIDIA#9538) Signed-off-by: Zhenhuan Chen <zhenhuanc@nvidia.com>
[None][infra] Waive failed cases for main branch on 11/28 (NVIDIA#9539) Signed-off-by: qqiao <qqiao@nvidia.com>
[None][fix] Pass checkpoint_format to create_input_processor (NVIDIA#9521) Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
[TRTLLM-9541][infra] Use artifactory mirror for download.pytorch.org (NVIDIA#9477) Signed-off-by: ZhanruiSunCh <184402041+ZhanruiSunCh@users.noreply.github.com> Signed-off-by: Zhanrui Sun <184402041+ZhanruiSunCh@users.noreply.github.com> Co-authored-by: Yanchao Lu <yanchaol@nvidia.com>
[TRTLLM-9488][feat] add 'disable_flashinfer_sampling' config option (NVIDIA#9454) Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
[None][infra] Waive failed case in pre-merge on 11/28 (NVIDIA#9537) Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>
[None][perf] Helix: improve all-to-all perf for large CP size (NVIDIA#9494) Signed-off-by: Matthias
Jouanneaux <mjoux@nvidia.com> Signed-off-by: Zheyu Fu <zheyuf@NVIDIA.com> Co-authored-by: Zheyu Fu <zheyuf@nvidia.com> [None][feat] support for more accurate AR calculation (NVIDIA#9323) Signed-off-by: binghanc <176802681+binghanc@users.noreply.github.com> [TRTLLM-9488][fix] llmapi references (NVIDIA#9547) Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com> [NVIDIA#8948][feat] Support custom sharding config (NVIDIA#9143) Signed-off-by: greg-kwasniewski1 <213329731+greg-kwasniewski1@users.noreply.github.com> [None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com> [None][chore] Weekly mass integration of release/1.1 -- rebase (NVIDIA#9522) Signed-off-by: yunruis <205571022+yunruis@users.noreply.github.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com> Signed-off-by: qgai <qgai@nvidia.com> Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com> Signed-off-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com> Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com> Signed-off-by: Simeng Liu <simengl@nvidia.com> Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com> Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com> Signed-off-by: Ivy Zhang <25222398+crazydemo@users.noreply.github.com> Signed-off-by: Vincent Zhang <vinczhang@nvidia.com> Signed-off-by: peaceh <103117813+peaceh-nv@users.noreply.github.com> Signed-off-by: Michal Guzek <mguzek@nvidia.com> Signed-off-by: Michal Guzek <moraxu@users.noreply.github.com> Signed-off-by: Chang Liu (Enterprise Products) <9713593+chang-l@users.noreply.github.com> Signed-off-by: leslie-fang25 <leslief@nvidia.com> Signed-off-by: Shunkang <182541032+Shunkangz@users.noreply.github.co> Signed-off-by: junq <22017000+QiJune@users.noreply.github.com> Co-authored-by: yunruis <205571022+yunruis@users.noreply.github.com> Co-authored-by: sunnyqgg <159101675+sunnyqgg@users.noreply.github.com> Co-authored-by: brb-nv <169953907+brb-nv@users.noreply.github.com> Co-authored-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com> Co-authored-by: JunyiXu-nv <219237550+JunyiXu-nv@users.noreply.github.com> Co-authored-by: Simeng Liu <109828133+SimengLiu-nv@users.noreply.github.com> Co-authored-by: Guoming Zhang <137257613+nv-guomingz@users.noreply.github.com> Co-authored-by: Jin Li <59594262+liji-nv@users.noreply.github.com> Co-authored-by: Ivy Zhang <25222398+crazydemo@users.noreply.github.com> Co-authored-by: Vincent Zhang <vcheungyi@163.com> Co-authored-by: peaceh-nv <103117813+peaceh-nv@users.noreply.github.com> Co-authored-by: Michal Guzek <moraxu@users.noreply.github.com> Co-authored-by: Chang Liu <9713593+chang-l@users.noreply.github.com> Co-authored-by: Leslie Fang <leslief@nvidia.com> Co-authored-by: Shunkangz <182541032+Shunkangz@users.noreply.github.com> Co-authored-by: Shunkang <182541032+Shunkangz@users.noreply.github.co> Co-authored-by: QI JUN <22017000+QiJune@users.noreply.github.com> [TRTLLM-5971][feat] Integrate helix parallelism (NVIDIA#9342) Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com> [None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com> [None][infra] - Request idle time exemption for OCI jobs 
(NVIDIA#9528) Signed-off-by: Yanchao Lu <yanchaol@nvidia.com> [None][infra] Wiave failed tests for main branch on 11/30 (NVIDIA#9555) Signed-off-by: qqiao <qqiao@nvidia.com> [None][fix] Fix port conflict in disagg tests (NVIDIA#9474) Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com> [None][ci] Split H100_PCIe-PyTorch-Post-Merge test stage (NVIDIA#9558) Signed-off-by: Yanchao Lu <yanchaol@nvidia.com> [None][ci] Split H100_PCIe-PyTorch-Post-Merge test stage (NVIDIA#9559) Signed-off-by: Yanchao Lu <yanchaol@nvidia.com> [TRTLLM-8958][feat] and [TRTLLM-8960]: create ConfigurableMoE and support TRTLLMGenFusedMoE as backend (NVIDIA#9486) [None] [feat] Optimize the algorithm part of RocketKV (NVIDIA#9333) Signed-off-by: yuhangh <58161490+heyuhhh@users.noreply.github.com> [https://nvbugs/5690172][fix] Fix Qwen3-235B ATP accuracy issue with PDL (NVIDIA#9530) Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com> [TRTLLM-6222][feat] Extend cute_dsl_nvfp4_gemm to sm103. (NVIDIA#9543) Signed-off-by: Mindy Li <11663212+limin2021@users.noreply.github.com> [None][fix] Correct virtual memory allocation alignment (NVIDIA#9491) Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com> [None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com> [https://nvbugs/5684703][fix] Unwaive disagg guided decoding test (NVIDIA#9466) Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com> [https://nvbugs/5503479][fix] Temporarily lower reference accuracy to stabilize CI (NVIDIA#9398) Signed-off-by: Pengbo Wang <221450789+pengbowang-nv@users.noreply.github.com> [None][chore] remove qwen3-next accuracy tests (NVIDIA#9534) Signed-off-by: jiant <107457950+JadoTu@users.noreply.github.com> [None][doc] fix mtp.py typo (NVIDIA#9307) Signed-off-by: liugaoji <757394026@qq.com> [None][feat] add chat template kwargs support to longbench-v2 (NVIDIA#9544) Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com> [NVIDIA#9496][fix] AutoDeploy: remove auto-tuner from nvfp4_gemm forward (NVIDIA#9497) Signed-off-by: Neta Zmora <96238833+nzmora-nvidia@users.noreply.github.com> [None][fix] Replace hash method with unique_id for cutedsl MoE runners. 
(NVIDIA#9569) Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com> [None][chore] refactor disaggregated scripts to use named arguments (NVIDIA#9581) Signed-off-by: Zhenhuan Chen <zhenhuanc@nvidia.com> [TRTLLM-6222][feat] Several perf opt for cuteDSL nvf4 gemm (NVIDIA#9428) Signed-off-by: Yuhan Li <51736452+liyuhannnnn@users.noreply.github.com> [None][chore] reduce the layers of the `devel` docker image (NVIDIA#9077) Signed-off-by: Martin Marciniszyn Mehringer <11665257+MartinMarciniszyn@users.noreply.github.com> [https://nvbugs/5651854][infra] Enable perf metrics during accuracy testing (NVIDIA#9140) [None][fix] Skip Allreduce init for Attention DP (NVIDIA#9542) Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com> [None][test] [None][test] Waive main branch test failures 12/1 (NVIDIA#9566) Signed-off-by: Yanchao Lu <yanchaol@nvidia.com> [None][ci] Minor change for Slurm scripts (NVIDIA#9561) Signed-off-by: Yanchao Lu <yanchaol@nvidia.com> [TRTLLM-6768][infra] Fix params for not updating github status (NVIDIA#6747) Signed-off-by: Yiqing Yan <yiqingy@nvidia.com> [None][infra] Update the pytest options after MI (NVIDIA#9579) Signed-off-by: qqiao <qqiao@nvidia.com> [TRTLLM-6756][feat] Add Beam Search to TorchSampler (NVIDIA#8509) Signed-off-by: Stefan Niebler <82932102+stnie@users.noreply.github.com> [None][chore] Defer exposing context parallel configs (NVIDIA#9552) Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com> [TRTC-1943][feat] Env vars override support in LLM API (NVIDIA#9104) Signed-off-by: Venky Ganesh <23023424+venkywonka@users.noreply.github.com> [None][feat] AutoDeploy: Use the router gemm op for nemotron MOE (NVIDIA#9500) Signed-off-by: Chenghao Zhang <211069071+nvchenghaoz@users.noreply.github.com> [NVIDIA#9198][feat] Refactor dist ops in AutoDeploy (NVIDIA#9301) Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com> [None][fix] Prevent YAML partial kv_cache_config from incorrectly overriding the complete kv_cache_config (NVIDIA#9262) Signed-off-by: Yuening Li <62227368+Yuening-wa@users.noreply.github.com> [TRTLLM-9085][doc] fix math formula rendering issues in github (NVIDIA#9605) Signed-off-by: junq <22017000+QiJune@users.noreply.github.com> [None][feat] Unify nvfp4 gemm backend (NVIDIA#8963) Signed-off-by: Shijie Wang <jaywan@nvidia.com> Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com> Signed-off-by: Shijie <jaywan@nvidia.com> Co-authored-by: Yukun He <23156053+hyukn@users.noreply.github.com> [None][feat] Add support for KVCache reuse for DSv32 (NVIDIA#9383) Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com> [None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com> [None][chroe] Polish qwen3-next modeling code. 
(NVIDIA#8902) Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com> [https://nvbugs/5703953][fix] Use random port for disagg tests (NVIDIA#9582) Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com> [None][fix] Waive gb200 (NVIDIA#9580) Signed-off-by: Xin He (SW-GPU) <200704525+xinhe-nv@users.noreply.github.com> [FMDL-1328][feat] Add support for nano-v3 and super-v3 with pytorch backend (NVIDIA#9261) Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com> [https://nvbugs/5582091][test] increase warmup times in testing for multi-gpu cases (NVIDIA#9578) Signed-off-by: Ruodi Lu <ruodil@users.noreply.github.com> Co-authored-by: Ruodi Lu <ruodil@users.noreply.github.com> [None][chore] Add failed cases into waives.txt (NVIDIA#9588) Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com> [https://nvbugs/5702793][fix] Fix uncontiguous tensor view (NVIDIA#9576) Signed-off-by: shuyix <219646547+shuyixiong@users.noreply.github.com> [None][infra] Waive failed cases for main branch (NVIDIA#9615) Signed-off-by: qqiao <qqiao@nvidia.com> [TRTLLM-9488][feat] use FlashInfer.sampling by default (NVIDIA#9545) Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com> [None][infra] Update allowlist 2025/12/01 (NVIDIA#9616) Signed-off-by: Yuanjing Xue <197832395+yuanjingx87@users.noreply.github.com> [None][infra] Remove an invalid test name in waives.txt (NVIDIA#9620) Signed-off-by: qqiao <qqiao@nvidia.com> Lock the gpu clocks in L0 perf tests (NVIDIA#9585) Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com> [TRTLLM-9466][test] Evaluate helix parallelism with DSV3 Lite (NVIDIA#9597) Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com> [None][fix] Extract GPU count from single-node stage names (NVIDIA#9599) Signed-off-by: Chang Liu (Enterprise Products) <9713593+chang-l@users.noreply.github.com> [https://nvbugs/5667774][fix] Refine Piecewise Cuda Graph Condition for DP (NVIDIA#9393) Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com> [TRTLLM-9144][fix] enhance RPC robustness (NVIDIA#8711) Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com> Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com> Signed-off-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com> Co-authored-by: Erin Ho <14718778+hchings@users.noreply.github.com> [https://nvbugs/5627710][fix] Fix synchronization bugs in KvCacheTransferManager that can cause corrupted blocks (NVIDIA#9056) Signed-off-by: thorjohnsen <41591019+thorjohnsen@users.noreply.github.com> Signed-off-by: Thor Johnsen <41591019+thorjohnsen@users.noreply.github.com> Co-authored-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com> Co-authored-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com> [TRTLLM-8980][test] Clean up spec dec tests in test_llm_api_pytorch (NVIDIA#8889) Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [NVIDIA#9150][feat] Add code for nano v3 to custom implementation in AD (NVIDIA#9465) * Why? We would like to show an alternative to monkey-patching in AutoDeploy. * What? This commit builds on the existing custom model implementation for NemotronH and adds the bits relevant for MoE layers. Part of NVIDIA#9150. 
Signed-off-by: William Zhang <133824995+2ez4bz@users.noreply.github.com> [NVIDIA#9150][feat] AutoDeploy: reviewer comments for NVIDIA#9150 (NVIDIA#9527) Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com> [https://nvbugs/5651854][fix] Fix dist-serving perf by clearing CPU affinity (NVIDIA#9549) Signed-off-by: Shixiaowei02 <39303645+Shixiaowei02@users.noreply.github.com> [NVIDIA#9550][feat] AutoDeploy: Add NVFP4 Cutlass MoE kernels (NVIDIA#9551) Signed-off-by: Neta Zmora <96238833+nzmora-nvidia@users.noreply.github.com> [https://nvbugs/5688388][fix] fix: Reducing num request in disagg test to speed up (NVIDIA#9598) Signed-off-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com> [TRTLLM-8946][feat] Improved heuristics to detect shardable regions (NVIDIA#9200) Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com> Signed-off-by: greg-kwasniewski1 <213329731+greg-kwasniewski1@users.noreply.github.com> Co-authored-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com> [NVIDIA#9632][feat] Support EXTRA_WHEEL_BUILD_ARGS during wheel build (NVIDIA#9633) Signed-off-by: Yu Chi Li <yuchil@nvidia.com> [None][chore] Waive test failing on pre-merge (NVIDIA#9638) Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com> [None][chore] Remove traceback dump for multimodal input processor (NVIDIA#9634) Signed-off-by: Chang Liu (Enterprise Products) <9713593+chang-l@users.noreply.github.com> [None][chore] Fix trtllm-eval and move GroupedGemmInputsHelper (NVIDIA#9612) Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com> [https://nvbugs/5698434][fix] Use separate weight mapper for draft (NVIDIA#9607) Signed-off-by: Anurag Mukkara <134339030+amukkara@users.noreply.github.com> [TRTLLM-7101][infra] Reuse passed tests (NVIDIA#6894) Signed-off-by: Yiqing Yan <yiqingy@nvidia.com> Co-authored-by: Yanchao Lu <yanchaol@nvidia.com> [None][test] Remove duplicate test cases (NVIDIA#9623) Signed-off-by: yufeiwu <230315618+yufeiwu-nv@users.noreply.github.com> [None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com> [None][feat] Add RocketKV usage doc and e2e accuracy test on LongBenchV2 (NVIDIA#9572) Signed-off-by: yuhangh <58161490+heyuhhh@users.noreply.github.com> [TRTLLM-9242][doc] Add examples showcasing openai compatible APIs (NVIDIA#9520) Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com> [None][chore] AutoDeploy update cuda stream manager for multi-device (NVIDIA#9575) Signed-off-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com> [TRTLLM-9391][chore] Automatically estimate required workspace. 
(NVIDIA#9535) Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com> [https://nvbugs/5708475][fix] Fix e2e eval accuracy for helix parallelism (NVIDIA#9647) Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com> [https://nvbugs/5561153][test] Fix log error for perf test (NVIDIA#9622) Signed-off-by: FredricZ-2007 <226039983+fredricz-20070104@users.noreply.github.com> [TRTLLM-8241][feat] Aliasing to comply to LlmArgs (NVIDIA#9586) Signed-off-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com> [None][chore] Add failed cases into waives.txt (NVIDIA#9593) Signed-off-by: Jie Li <lijie@nvidia.com> Co-authored-by: Jie Li <lijie@nvidia.com> [TRTLLM-6842][feat] Support Response API for general purpose (NVIDIA#9392) Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com> [None][test] Update Qwen3-next accuracy testing by setting the cuda … (NVIDIA#9613) Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com> [None][feat] update trtllm-gen nvfp4 kernels with better performance (NVIDIA#9510) Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com> [None][doc] Replace the tensorrt icon with torch icon on overview.md (NVIDIA#9644) Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com> [https://nvbugs/5705197][chore] Unwaive timeout disagg tests (NVIDIA#9637) Signed-off-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com> [https://nvbugs/5552132][fix] Enable LoRa for GPT OSS Torch (NVIDIA#8253) Signed-off-by: Michal Guzek <mguzek@nvidia.com> [None][fix] Fix wide ep MoE error (NVIDIA#9642) Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com> [https://nvbugs/5702795][fix] Remove the warning message for aten.log. (NVIDIA#9665) Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com> [https://nvbugs/5693853][fix] Fix error handling when querying machin… (NVIDIA#9483) Signed-off-by: Gal Hubara Agam <96368689+galagam@users.noreply.github.com> [OMNIML-2932] [feat] nvfp4 awq support (NVIDIA#8698) Signed-off-by: weimingc <17592131+meenchen@users.noreply.github.com> [NVIDIA#9643][fix] AutoDeploy: fix nano sharding config (NVIDIA#9668) Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com> [NVIDIA#9147][feat] AutoDeploy: Draft Target Speculative Decoding (NVIDIA#9275) Signed-off-by: Govind Ramnarayan <105831528+govind-ramnarayan@users.noreply.github.com> [None][feat] Update Qwen3CodeToolParser to align tool-calling parameters (NVIDIA#9540) Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com> [TRTLLM-7181][infra] Generate test results when pytest timeout happens (NVIDIA#9396) Signed-off-by: Yiqing Yan <yiqingy@nvidia.com> [None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com> [TRTLLM-9522][fix] restore `trtllm-serve mm_embedding_serve` (NVIDIA#9669) [TRTLLM-5093][infra] Write env variables to a file in the interactive debug session (NVIDIA#6792) Signed-off-by: Yiqing Yan <yiqingy@nvidia.com> [None][fix] fix error when processing batches containing both text and mm data (NVIDIA#8381) Signed-off-by: Nekofish-L <liuxiangyang@mail.ustc.edu.cn> [TRTLLM-7073][feat] Support torch compile for PP for Llama and DeepSeekV3 (NVIDIA#7838) Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com> [None][feat] Add weights initialization and context phase parser to layer-wise benchmarks (NVIDIA#9667) Signed-off-by: 
Tailing Yuan <yuantailing@gmail.com> [TRTLLM-8274][feat] Check if executor is shutdown in /health entrypoint (NVIDIA#9057) Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com> [NVIDIA#8733][feat] Add Llama4 MoE handling to AutoDeploy (NVIDIA#9556) Signed-off-by: Tal Cherckez <127761168+tcherckez-nvidia@users.noreply.github.com> Signed-off-by: tcherckez-nvidia <127761168+tcherckez-nvidia@users.noreply.github.com> Co-authored-by: Neta Zmora <nzmora@nvidia.com> [None][ci] unwaive tests (NVIDIA#9651) Signed-off-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com> [None][feat] Add NIXL-LIBFABRIC support (NVIDIA#9225) Signed-off-by: Yoray Zack <62789610+zackyoray@users.noreply.github.com> Signed-off-by: zackyoray <yorayz@nvidia.com> [None][test] rename wide ep and disagg metric name in perf test (NVIDIA#9704) Signed-off-by: Ruodi Lu <ruodil@users.noreply.github.com> Co-authored-by: Ruodi Lu <ruodil@users.noreply.github.com> [https://nvbugs/5467531][fix] Unwaive fused_moe all to all test with … (NVIDIA#9617) Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com> [None][fix] Recover TRTLLM MoE Perf for DEP (NVIDIA#9562) Signed-off-by: Anthony Chang <27950904+rosenrodt@users.noreply.github.com> [None][chore] Add failed cases into waives.txt (NVIDIA#9662) Signed-off-by: Xin He (SW-GPU) <200704525+xinhe-nv@users.noreply.github.com> Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com> Signed-off-by: Yanchao Lu <yanchaol@nvidia.com> Co-authored-by: Yanchao Lu <yanchaol@nvidia.com> [None][fix] Fix TLLM_SPEC_DECODE_FORCE_NUM_ACCEPTED_TOKENS for MTP/EAGLE (NVIDIA#9608) Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com> [None][infra] Add container notices and documentation (NVIDIA#9185) Signed-off-by: Parker Drake <pdrake@nvidia.com> [TRTLLM-5312][infra] Add triton trigger rules (NVIDIA#6440) Signed-off-by: Yiqing Yan <yiqingy@nvidia.com> [None][doc] Add feature docs for helix parallelism (NVIDIA#9684) Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com> [TRTLLM-9579][infra] Set mergeWaiveList stage UNSTABLE when there is any issue (NVIDIA#9692) Signed-off-by: Yiqing Yan <yiqingy@nvidia.com> [None][doc] Added line about partial reuse (NVIDIA#7846) Signed-off-by: thorjohnsen <41591019+thorjohnsen@users.noreply.github.com> [TRTLLM-8920][feat] decouple disagg service from fastapi (NVIDIA#8714) Signed-off-by: Lizhi Zhou <1432185+reasonsolo@users.noreply.github.com> [https://nvbugs/5633340][fix] start disagg workers and servers on free ports (NVIDIA#9694) Signed-off-by: Lizhi Zhou <1432185+reasonsolo@users.noreply.github.com> [TRTLLM-9562] [doc] Add Deployment Guide for Kimi K2 Thinking on TensorRT LLM - Blackwell (NVIDIA#9711) Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com> [NVIDIA#9602][feat] AutoDeploy: Support TRTLLM Sampler (NVIDIA#9641) Signed-off-by: Govind Ramnarayan <105831528+govind-ramnarayan@users.noreply.github.com> [None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com> [None] [tests] Unwaive EPLB tests (NVIDIA#9625) Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com> [https://nvbugs/5518713][test] Refactor core test lists by merging with llm_perf_cluster.yml (NVIDIA#9714) Signed-off-by: yufeiwu <230315618+yufeiwu-nv@users.noreply.github.com> [TRTLLM-7136][feat] Update load_weights method to include mapping parameter in checkpoint loaders (NVIDIA#9583) Signed-off-by: 
Robin Kobus <19427718+Funatiq@users.noreply.github.com> [None][refactor] Improve request processing function in sampler (NVIDIA#9671) Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com> [https://nvbugs/5670672][fix] Fix flaky KV connector tests (NVIDIA#9676) Signed-off-by: jthomson04 <jwillthomson19@gmail.com> [None][infra] Update allowed list 20251204 (NVIDIA#9718) Signed-off-by: Yuanjing Xue <197832395+yuanjingx87@users.noreply.github.com> [None][feat] AutoDeploy: Perf optimization for Attention and rmsnorm (NVIDIA#9719) Signed-off-by: Chenghao Zhang <211069071+nvchenghaoz@users.noreply.github.com> [None][chore] Waive flakey disagg tests (NVIDIA#9749) Signed-off-by: Mike Iovine <miovine@nvidia.com> [https://nvbugs/5601682][fix] Fix cacheTransceiver hang (NVIDIA#9311) Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [TRTLLM-9199][docs] KV Connector Docs (NVIDIA#9325) Signed-off-by: jthomson04 <jwillthomson19@gmail.com> Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [TRTLLM-9160][doc] add doc to llm_runtime.py (NVIDIA#9482) Signed-off-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [None][doc] VDR 1.0 trtllm-serve doc enhancement (NVIDIA#9443) Signed-off-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [TRTLLM-9086][doc] Clean up TODOs in documentation (NVIDIA#9292) Signed-off-by: junq <22017000+QiJune@users.noreply.github.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [TRTLLM-9157][doc] Guided decoding doc improvement (NVIDIA#9359) Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [None][infra] Updated Linux installation guide (NVIDIA#9485) Signed-off-by: Yiqing Yan <yiqingy@nvidia.com> Co-authored-by: Yanchao Lu <yanchaol@nvidia.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [TRTLLM-9075][doc] refine the slurm examples (NVIDIA#9548) Signed-off-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [TRTLLM-9093][doc] update hyper links in overview (NVIDIA#9568) Signed-off-by: junq <22017000+QiJune@users.noreply.github.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [TRTLLM-9092][doc] link to modelopt checkpoints in quick start guide (NVIDIA#9571) Signed-off-by: junq <22017000+QiJune@users.noreply.github.com> Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com> Signed-off-by: Mike Iovine <miovine@nvidia.com> [None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com> [None][fix] Fix triton moe load_weight 
(NVIDIA#9649) Signed-off-by: shuyix <219646547+shuyixiong@users.noreply.github.com> [None][fix] fix a bug: deepseek_fp8_block_scales in TRTLLMGEN-MoE use 2D x_sf instead of 1D (NVIDIA#9658) Signed-off-by: xxi <xxi@nvidia.com> [TRTLLM-9372][feat] Enable CuteDSL MoE with Large EP (NVIDIA#9592) Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com> [TRTLLM-9522][chore] implement default `attach_multimodal_embeddings` (NVIDIA#9664) Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com> [TRTLLM-9660][feat] Convert cuteDSL GEMM to opt-in feature (NVIDIA#9682) Signed-off-by: Jonas Li <6110159+longlee0622@users.noreply.github.com> Co-authored-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com> [None][fix] enable hmac in RPC (NVIDIA#9745) Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com> [None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com> [https://nvbugs/5703953][fix] Preserving ip:port for trtllm-serve before initializing llm (NVIDIA#9646) Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com> [None][infra] Waive failed cases for main branch on 12/07 (NVIDIA#9769) Signed-off-by: qqiao <qqiao@nvidia.com> [None][fix] Several minor fixes to CI setting (NVIDIA#9765) Signed-off-by: Yanchao Lu <yanchaol@nvidia.com> [OMNIML-3036][doc] Re-branding TensorRT-Model-Optimizer as Nvidia Model-Optimizer (NVIDIA#9679) Signed-off-by: Chenjie Luo <chenjiel@nvidia.com> [None][feat] Enable NCCL_SYMMETRIC as default fallback for AllReduce (NVIDIA#9314) Signed-off-by: Ludwig Schneider <lschneider@nvidia.com> [TRTLLM-9000][feat] Add multi-node Perf Tests into CI (NVIDIA#8800) Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com> [None][test] add ntp tolerance in time metrics verification (NVIDIA#9741) Signed-off-by: zhengd-nv <200704041+zhengd-nv@users.noreply.github.com> [TRTLLM-9603][feat] Enable ConfigurableMoE test in the CI (NVIDIA#9645) [https://nvbugs/5422621][test] Add GB 200 WIDEEP test case for RCCA 5422621 (NVIDIA#9506) Signed-off-by: FredricZ-2007 <226039983+fredricz-20070104@users.noreply.github.com> [None][fix] Fix two tuning cache miss issues. (NVIDIA#9743) Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com> [None][infra] Check in most recent lock file from nightly pipeline Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com> [TRTLLM-9706] [doc] Update wide EP documents (NVIDIA#9724) Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com> [https://nvbugs/5666804][test] only adding sampler config for limited models (NVIDIA#9512) Signed-off-by: Ruodi Lu <ruodil@users.noreply.github.com> Co-authored-by: Ruodi Lu <ruodil@users.noreply.github.com> Co-authored-by: yufeiwu-nv <230315618+yufeiwu-nv@users.noreply.github.com> Co-authored-by: Larry Xu <197874197+LarryXFly@users.noreply.github.com> [None][infra] Waive failed cases for main on 12/08 (NVIDIA#9773) Signed-off-by: qqiao <qqiao@nvidia.com> [None][chore] Move the rocketkv e2e test to post-merge (NVIDIA#9768) Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com> [None][chore] Enable tvm_ffi for cute dsl nvfp4_gemm to reduce host overhead. 
(NVIDIA#9690) Signed-off-by: Mindy Li <11663212+limin2021@users.noreply.github.com> [TRTLLM-9431][perf] Enable multistream for Linear Attention in Qwen3-… (NVIDIA#9696) Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com> [None][chore] Remove closed bugs (NVIDIA#9770) Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com> [None][infra] update mooncake in docker images (NVIDIA#9584) Signed-off-by: zhengd-nv <200704041+zhengd-nv@users.noreply.github.com> Signed-off-by: Zheng Duan <200704041+zhengd-nv@users.noreply.github.com> [None][test] Add Kimi k2 WIDEEP perf and accuracy cases (NVIDIA#9686) Signed-off-by: FredricZ-2007 <226039983+fredricz-20070104@users.noreply.github.com> Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com> Co-authored-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com> [https://nvbugs/5527655][test] Add test case for RCCA 5527655 (NVIDIA#9511) Signed-off-by: FredricZ-2007 <226039983+fredricz-20070104@users.noreply.github.com> [http://nvbugs/5649010][fix] fix test_auto_scaling.py::test_worker_restart timeout (NVIDIA#9775) Signed-off-by: Lizhi Zhou <1432185+reasonsolo@users.noreply.github.com> [None][fix] Switch AutoDeploy's default allreduce strategy to NCCL (NVIDIA#9666) Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com> [TRTLLM-9506][fix] Fix AR for DeepSeek-R1 2 model path (NVIDIA#9661) Signed-off-by: qgai <qgai@nvidia.com> ray + updatew works trtllm works in async env trtllm works in sync and async env ray + updatew works rebase to the updated verl server mode still cherry pick still cherry pick still cherry pick integrated http interface hang at RyExecutor create workers ray.remote clean code use tensorrt_llm.rlhf_utils Signed-off-by: Liwei Ma <liweim@nvidia.com> placement, asyncllm, and basic tests Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com> connect sleep and wakeup; Add support to pass None to update_weights Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com> Batching ctx for IFB scheduler Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com> accuracy WAR for TP>1: always use AllReduceStrategy.NCCL, refactored Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com> fix e2e integration Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com> update asyncllm, other nits Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com> fix init setup Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com> Fix TRTLLMSampler logprobs perf Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com> fix and cleanup Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com> fix server Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com> Revert "Batching ctx for IFB scheduler" This reverts commit b51aac0 Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com> update & address comments Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>
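The NVIDIA#8779 entry at the top of this list relies on a standard trick: defeat warm-L2 effects by rotating through a pool of input buffers whose combined footprint exceeds the L2 cache, so every timed run starts cold. A minimal sketch in plain PyTorch, with the pool size and kernel chosen for illustration (this is not the autotuner's actual code):

```python
import torch

POOL_SIZE = 8            # assumption: total pool footprint chosen to exceed L2
ELEMS = 16 << 20         # 16M fp32 elements = 64 MiB per buffer
pool = [torch.randn(ELEMS, device="cuda") for _ in range(POOL_SIZE)]

def time_kernel(kernel, iters=32):
    """Average kernel time in ms, with a cold L2 on every iteration."""
    start = torch.cuda.Event(enable_timing=True)
    stop = torch.cuda.Event(enable_timing=True)
    total_ms = 0.0
    for i in range(iters):
        x = pool[i % POOL_SIZE]   # rotate buffers so L2 never stays warm
        start.record()
        kernel(x)
        stop.record()
        torch.cuda.synchronize()
        total_ms += start.elapsed_time(stop)
    return total_ms / iters

print(time_kernel(torch.sin))    # any unary CUDA kernel works as a demo
```

By the time a buffer is reused, the intervening buffers have evicted its lines from L2, which is what makes the timings comparable across kernels.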
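The NVIDIA#9093 entry carries the reasoning: during a dry run, measuring free GPU memory double-counts the dry run's own allocations, so the pre-computed budget in kv_cache_config.max_tokens should be trusted instead of re-derived. A toy illustration of that branch; the KvCacheConfig dataclass and the per-token byte figure here are invented for the example, not the real TensorRT LLM types:

```python
from dataclasses import dataclass
from typing import Optional

import torch

@dataclass
class KvCacheConfig:
    max_tokens: Optional[int] = None  # pre-calculated budget, if any
    bytes_per_token: int = 2 * 1024   # toy accounting figure

def kv_cache_token_budget(cfg: KvCacheConfig, is_dry_run: bool) -> int:
    if is_dry_run and cfg.max_tokens is not None:
        # Dry run: trust the pre-filled budget rather than re-measuring
        # free memory, which the dry run itself has already skewed.
        return cfg.max_tokens
    free_bytes, _total = torch.cuda.mem_get_info()
    return free_bytes // cfg.bytes_per_token
```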
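The NVIDIA#9294 entry is a pure rename: recent PyTorch releases deprecate PYTORCH_CUDA_ALLOC_CONF in favor of PYTORCH_ALLOC_CONF, with the value syntax unchanged. A sketch of the swap (expandable_segments is just one example value; the variable must be set before the first CUDA allocation to take effect):

```python
import os

# Drop the deprecated name and set the new one before importing torch /
# touching CUDA, otherwise the allocator never sees the setting.
os.environ.pop("PYTORCH_CUDA_ALLOC_CONF", None)
os.environ.setdefault("PYTORCH_ALLOC_CONF", "expandable_segments:True")

import torch  # allocator reads PYTORCH_ALLOC_CONF at first CUDA use
```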
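The NVIDIA#9371 and NVIDIA#9608 entries add and then fix TLLM_SPEC_DECODE_FORCE_NUM_ACCEPTED_TOKENS. Assuming the variable does what its name suggests, pinning how many draft tokens count as accepted per step so speculative-decoding perf runs become deterministic, usage is a one-line override; the value 2 is purely illustrative:

```python
import os

# Assumption: forces every speculation step to report exactly this many
# accepted draft tokens, decoupling perf measurements from the model's
# actual acceptance rate. Set before constructing the LLM.
os.environ["TLLM_SPEC_DECODE_FORCE_NUM_ACCEPTED_TOKENS"] = "2"
```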