[TRTLLM-8994][infra] upgrade to DLFW 25.10 and pytorch 2.9.0 / triton 3.5.0 #8838
Conversation
/bot run --post-merge --disable-fail-fast
📝 Walkthrough

This PR upgrades the infrastructure stack from CUDA 12.9 to CUDA 13.0, updates PyTorch from 2.8.0 to 2.9.0, bumps container base images to 25.10, updates TensorRT and related CUDA libraries, removes CUDA 12.9-specific build configurations from Jenkins pipelines, and adjusts code compatibility for newer PyTorch versions.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Pre-merge checks and finishing touches: ❌ Failed checks (2 warnings), ✅ Passed checks (1 passed)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
docker/common/install_mpi4py.sh (1)
43-57: Simplify the rank detection logic. The rank detection logic has unnecessary complexity. Since line 46 uses `elif`, it's impossible for both `has_slurm_rank` and `has_ompi_rank` to be true simultaneously. The check on lines 51-53 for both being positive will never trigger.

Apply this diff to simplify the logic:

```diff
-if(os.getenv("SLURM_PROCID")):
-    slurm_rank = int(os.environ["SLURM_PROCID"])
-    has_slurm_rank=True
-elif(os.getenv("OMPI_COMM_WORLD_RANK")):
-    ompi_rank = int(os.environ["OMPI_COMM_WORLD_RANK"])
-    has_ompi_rank=True
-else:
+slurm_procid = os.getenv("SLURM_PROCID")
+ompi_rank_env = os.getenv("OMPI_COMM_WORLD_RANK")
+
+if slurm_procid:
+    rank = int(slurm_procid)
+elif ompi_rank_env:
+    rank = int(ompi_rank_env)
+else:
     raise RuntimeError("No SLURM_PROCID or OMPI_COMM_WORLD_RANK environment variable found When TRTLLM_USE_MPI_KVCACHE is set to 1")
-if(has_slurm_rank and has_ompi_rank):
-    if(slurm_rank>0 and ompi_rank>0):
-        raise RuntimeError("Only one of SLURM_PROCID or OMPI_COMM_WORLD_RANK should >0 when TRTLLM_USE_MPI_KVCACHE is set to 1")
-    else:
-        rank=slurm_rank if slurm_rank>0 else ompi_rank
-else:
-    rank = ompi_rank if has_ompi_rank else slurm_rank
```
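For reference, a minimal runnable sketch of the simplified logic extracted from the diff above (the function name and the `__main__` harness are illustrative, not part of the actual script):

```python
import os


def detect_rank() -> int:
    """Return the process rank from SLURM or Open MPI environment variables.

    Mirrors the simplified logic suggested above: the first matching
    environment variable wins, so the two sources can never conflict.
    """
    slurm_procid = os.getenv("SLURM_PROCID")
    ompi_rank_env = os.getenv("OMPI_COMM_WORLD_RANK")

    if slurm_procid:
        return int(slurm_procid)
    if ompi_rank_env:
        return int(ompi_rank_env)
    raise RuntimeError(
        "No SLURM_PROCID or OMPI_COMM_WORLD_RANK environment variable found "
        "when TRTLLM_USE_MPI_KVCACHE is set to 1")


if __name__ == "__main__":
    os.environ["SLURM_PROCID"] = "3"  # simulate a SLURM launch
    assert detect_rank() == 3
```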
🧹 Nitpick comments (1)
jenkins/L0_Test.groovy (1)
2887-2891: Clarify extra PyTorch CUDA 13.0 installation conditions. The code installs PyTorch 2.9.0 with CUDA 13.0 for "SBSA platform and Blackwell GPUs bare-metal environments" (lines 2887-2888). However:

- The condition `if (values[6])` at line 2888 applies to both Ubuntu 24.04 x86_64 (line 2765) and SBSA Ubuntu 24.04 (line 2777) configurations, not just SBSA.
- Consider making the comment more accurate to reflect that this applies to both x86_64 Blackwell (RTX5090, GB202) and SBSA (GH200) bare-metal environments with Ubuntu 24.04.

Apply this diff to improve the comment accuracy:

```diff
-// Extra PyTorch CUDA 13.0 install for SBSA platform and Blackwell GPUs bare-metal environments
+// Extra PyTorch CUDA 13.0 install for Ubuntu 24.04 bare-metal environments (Blackwell/GB202 GPUs and SBSA GH200)
 if (values[6]) {
     echo "###### Extra PyTorch CUDA 13.0 install Start ######"
     trtllm_utils.llmExecStepWithRetry(pipeline, script: "pip3 install torch==2.9.0 torchvision --index-url https://download.pytorch.org/whl/cu130")
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (20)
- `constraints.txt` (1 hunks)
- `docker/Dockerfile.multi` (3 hunks)
- `docker/Makefile` (1 hunks)
- `docker/common/install.sh` (0 hunks)
- `docker/common/install_cuda_toolkit.sh` (1 hunks)
- `docker/common/install_mpi4py.sh` (2 hunks)
- `docker/common/install_pytorch.sh` (2 hunks)
- `docker/common/install_tensorrt.sh` (1 hunks)
- `docs/source/installation/build-from-source-linux.md` (0 hunks)
- `docs/source/installation/linux.md` (1 hunks)
- `docs/source/legacy/reference/support-matrix.md` (1 hunks)
- `jenkins/Build.groovy` (1 hunks)
- `jenkins/L0_Test.groovy` (10 hunks)
- `jenkins/controlCCache.groovy` (1 hunks)
- `jenkins/current_image_tags.properties` (1 hunks)
- `requirements.txt` (3 hunks)
- `scripts/build_wheel.py` (0 hunks)
- `tensorrt_llm/_utils.py` (1 hunks)
- `tests/integration/test_lists/waives.txt` (1 hunks)
- `tests/unittest/_torch/modeling/test_modeling_nemotron_h.py` (1 hunks)
💤 Files with no reviewable changes (3)
- docs/source/installation/build-from-source-linux.md
- docker/common/install.sh
- scripts/build_wheel.py
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Use only spaces, no tabs; indent with 4 spaces.
Files:
- `tensorrt_llm/_utils.py`
- `tests/unittest/_torch/modeling/test_modeling_nemotron_h.py`
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.
Files:
- `tensorrt_llm/_utils.py`
- `tests/unittest/_torch/modeling/test_modeling_nemotron_h.py`
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).
Files:
- `tensorrt_llm/_utils.py`
- `tests/unittest/_torch/modeling/test_modeling_nemotron_h.py`
🧠 Learnings (16)
📚 Learning: 2025-08-21T00:16:56.457Z
Learnt from: farshadghodsian
Repo: NVIDIA/TensorRT-LLM PR: 7101
File: docs/source/blogs/tech_blog/blog9_Deploying_GPT_OSS_on_TRTLLM.md:36-36
Timestamp: 2025-08-21T00:16:56.457Z
Learning: TensorRT-LLM container release tags in documentation should only reference published NGC container images. The README badge version may be ahead of the actual published container versions.
Applied to files:
- `docs/source/legacy/reference/support-matrix.md`
- `docker/common/install_tensorrt.sh`
- `jenkins/current_image_tags.properties`
- `docs/source/installation/linux.md`
📚 Learning: 2025-09-17T02:48:52.732Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 7781
File: tests/integration/test_lists/waives.txt:313-313
Timestamp: 2025-09-17T02:48:52.732Z
Learning: In TensorRT-LLM, `tests/integration/test_lists/waives.txt` is specifically for waiving/skipping tests, while other test list files like those in `test-db/` and `qa/` directories are for different test execution contexts (pre-merge, post-merge, QA tests). The same test appearing in both waives.txt and execution list files is intentional - the test is part of test suites but will be skipped due to the waiver.
Applied to files:
tests/integration/test_lists/waives.txt
📚 Learning: 2025-08-29T14:07:45.863Z
Learnt from: EmmaQiaoCh
Repo: NVIDIA/TensorRT-LLM PR: 7370
File: tests/unittest/trt/model_api/test_model_quantization.py:24-27
Timestamp: 2025-08-29T14:07:45.863Z
Learning: In TensorRT-LLM's CI infrastructure, pytest skip markers (pytest.mark.skip) are properly honored even when test files have __main__ blocks that call test functions directly. The testing system correctly skips tests without requiring modifications to the __main__ block execution pattern.
Applied to files:
- `tests/integration/test_lists/waives.txt`
- `tests/unittest/_torch/modeling/test_modeling_nemotron_h.py`
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.
Applied to files:
- `tests/integration/test_lists/waives.txt`
- `jenkins/current_image_tags.properties`
- `jenkins/L0_Test.groovy`
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Applied to files:
- `tests/integration/test_lists/waives.txt`
- `jenkins/L0_Test.groovy`
📚 Learning: 2025-08-26T09:49:04.956Z
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7192
File: tests/integration/test_lists/test-db/l0_dgx_b200.yml:56-72
Timestamp: 2025-08-26T09:49:04.956Z
Learning: In TensorRT-LLM test configuration files, the test scheduling system handles wildcard matching with special rules that prevent duplicate test execution even when the same tests appear in multiple yaml files with overlapping GPU wildcards (e.g., "*b200*" and "*gb200*").
Applied to files:
tests/integration/test_lists/waives.txt
📚 Learning: 2025-08-06T13:58:07.506Z
Learnt from: galagam
Repo: NVIDIA/TensorRT-LLM PR: 6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.
Applied to files:
- `tests/integration/test_lists/waives.txt`
- `jenkins/L0_Test.groovy`
📚 Learning: 2025-08-18T08:42:02.640Z
Learnt from: samuellees
Repo: NVIDIA/TensorRT-LLM PR: 6974
File: tensorrt_llm/serve/scripts/benchmark_dataset.py:558-566
Timestamp: 2025-08-18T08:42:02.640Z
Learning: In TensorRT-LLM's RandomDataset (tensorrt_llm/serve/scripts/benchmark_dataset.py), when using --random-token-ids option, sequence length accuracy is prioritized over semantic correctness for benchmarking purposes. The encode/decode operations should use skip_special_tokens=True and add_special_tokens=False to ensure exact target token lengths.
Applied to files:
tests/integration/test_lists/waives.txt
📚 Learning: 2025-08-11T20:09:24.389Z
Learnt from: achartier
Repo: NVIDIA/TensorRT-LLM PR: 6763
File: tests/integration/defs/triton_server/conftest.py:16-22
Timestamp: 2025-08-11T20:09:24.389Z
Learning: In the TensorRT-LLM test infrastructure, the team prefers simple, direct solutions (like hard-coding directory traversal counts) over more complex but robust approaches when dealing with stable directory structures. They accept the maintenance cost of updating tests if the layout changes.
Applied to files:
- `tests/integration/test_lists/waives.txt`
- `jenkins/L0_Test.groovy`
📚 Learning: 2025-08-20T15:04:42.885Z
Learnt from: dbari
Repo: NVIDIA/TensorRT-LLM PR: 7095
File: docker/Dockerfile.multi:168-168
Timestamp: 2025-08-20T15:04:42.885Z
Learning: In docker/Dockerfile.multi, wildcard COPY for benchmarks (${CPP_BUILD_DIR}/benchmarks/*Benchmark) is intentionally used instead of directory copy because the benchmarks directory contains various other build artifacts during C++ builds, and only specific benchmark executables should be copied to the final image.
Applied to files:
docker/Dockerfile.multi
📚 Learning: 2025-08-18T09:08:07.687Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 6984
File: cpp/tensorrt_llm/CMakeLists.txt:297-299
Timestamp: 2025-08-18T09:08:07.687Z
Learning: In the TensorRT-LLM project, artifacts are manually copied rather than installed via `cmake --install`, so INSTALL_RPATH properties are not needed - only BUILD_RPATH affects the final artifacts.
Applied to files:
- `jenkins/Build.groovy`
- `jenkins/L0_Test.groovy`
- `docs/source/installation/linux.md`
📚 Learning: 2025-08-01T15:14:45.673Z
Learnt from: yibinl-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 6506
File: examples/models/core/mixtral/requirements.txt:3-3
Timestamp: 2025-08-01T15:14:45.673Z
Learning: In TensorRT-LLM, examples directory can have different dependency versions than the root requirements.txt file. Version conflicts between root and examples dependencies are acceptable because examples are designed to be standalone and self-contained.
Applied to files:
- `jenkins/current_image_tags.properties`
- `jenkins/L0_Test.groovy`
- `docs/source/installation/linux.md`
📚 Learning: 2025-08-27T14:23:55.566Z
Learnt from: ixlmar
Repo: NVIDIA/TensorRT-LLM PR: 7294
File: tensorrt_llm/_torch/modules/rms_norm.py:17-17
Timestamp: 2025-08-27T14:23:55.566Z
Learning: The TensorRT-LLM project requires Python 3.10+ as evidenced by the use of TypeAlias from typing module, match/case statements, and union type | syntax throughout the codebase, despite some documentation still mentioning Python 3.8+.
Applied to files:
docs/source/installation/linux.md
📚 Learning: 2025-10-17T13:21:31.724Z
Learnt from: ixlmar
Repo: NVIDIA/TensorRT-LLM PR: 8398
File: tensorrt_llm/_torch/pyexecutor/sampling_utils.py:237-272
Timestamp: 2025-10-17T13:21:31.724Z
Learning: The setup.py file in TensorRT-LLM explicitly requires Python 3.10+ via `python_requires=">=3.10, <4"`, making match/case statements and other Python 3.10+ features appropriate throughout the codebase.
Applied to files:
docs/source/installation/linux.md
📚 Learning: 2025-09-23T15:12:38.312Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/thop/allreduceOp.cpp:352-446
Timestamp: 2025-09-23T15:12:38.312Z
Learning: In TensorRT-LLM NCCL device implementation, NCCL version 2.28+ requirements are handled at runtime in the nccl_device/config layer rather than with compile-time guards. This allows the allreduceOp to remain version-agnostic and delegates version compatibility validation to the appropriate lower-level components that can gracefully handle unsupported configurations.
Applied to files:
docs/source/installation/linux.md
📚 Learning: 2025-09-16T09:30:09.716Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 7763
File: cpp/tensorrt_llm/CMakeLists.txt:297-301
Timestamp: 2025-09-16T09:30:09.716Z
Learning: In the TensorRT-LLM project, NCCL libraries are loaded earlier by PyTorch libraries or the bindings library, so the main shared library doesn't need NCCL paths in its RPATH - the libraries will already be available in the process address space when needed.
Applied to files:
docs/source/installation/linux.md
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (24)
jenkins/controlCCache.groovy (1)
4-4: LGTM! The Docker image tag update to 25.10 aligns with the infrastructure upgrade objectives.
docker/Makefile (1)
195-204: LGTM! The BASE_TAG updates from 13.0.0 to 13.0.1 are consistent across all three target definitions and align with the broader CUDA 13.x stack upgrade in this PR.
docker/common/install_tensorrt.sh (1)
5-18: LGTM! The version updates for TensorRT (10.13.3.9), cuDNN (9.14.0.64), cuBLAS (13.1.0.3), NVRTC (13.0.88), CUDA Runtime (13.0.96), and CUDA Driver (580.95.05) are consistent with the DLFW 25.10 upgrade and align with the rel-25-10 release notes reference.
docker/common/install_mpi4py.sh (1)
77-80: LGTM! The architecture-specific Cython installation for aarch64 is a good practice to ensure compatibility.
docker/common/install_cuda_toolkit.sh (1)
8-8: LGTM! The CUDA version update to 13.0.2_580.95.05 is consistent with the CUDA driver version in `install_tensorrt.sh` and aligns with the broader CUDA 13.x stack upgrade.

docker/common/install_pytorch.sh (2)
7-8: LGTM! The PyTorch version update to 2.9.0 and the documentation reference to rel-25-10 align with the PR objectives.
72-73: LGTM! The changes are consistent:
- Removed torchaudio from both uninstall and install operations
- Updated wheel index URL from cu128 to cu130, matching the CUDA 13.0 upgrade
tensorrt_llm/_utils.py (1)
1180-1185: Confirm the accuracy of the PyTorch 2.9.0+ pybind11 ABI string format with the literal "1xxx".

PyTorch 2.9.0 deprecated `_get_pybind11_abi_build_flags()`, indicating a shift in how the pybind11 ABI is handled in newer versions. The code generates ABI strings in the format `"system_libstdcpp_gxx_abi_1xxx_use_cxx11_abi_{0 or 1}"`. The literal `"1xxx"` substring does not follow standard version numbering patterns and warrants confirmation:

- Is this the documented ABI string format expected by PyTorch 2.9.0+?
- Should `"1xxx"` be replaced with an actual GCC ABI version number?

Verify the PyTorch 2.9.0+ source code or release notes to confirm this format is correct.
constraints.txt (1)
1-2: LGTM! Base image tag update aligns with PR objectives. The update from `pytorch:25.06-py3` to `pytorch:25.10-py3` is consistent with the infrastructure upgrade described in the PR title and summary.
402-416: Verify that test skips are related to the infrastructure upgrade.The addition of 15 new test skips covers various test categories (triton_server, unittest, accuracy, examples, disaggregated). While the skip format and bug references look correct, please confirm that these test failures are specifically caused by the DLFW 25.10/PyTorch 2.9.0/CUDA 13.0 upgrade and are being actively tracked for resolution.
Based on learnings.
tests/unittest/_torch/modeling/test_modeling_nemotron_h.py (1)
346-346: LGTM! Skip decorator properly applied. The skip decorator with bug reference is correctly applied to the test function. Based on learnings, TensorRT-LLM's CI infrastructure properly honors pytest skip markers.
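For context, the pattern under review is the standard pytest skip marker; the test name and bug URL below are placeholders, not the actual test:

```python
import pytest


@pytest.mark.skip(reason="https://nvbugs/<bug-id>")  # placeholder bug reference
def test_nemotron_h_generation():  # hypothetical test name
    ...


if __name__ == "__main__":
    # Per the learning above, the skip marker is honored even when the
    # file is executed directly rather than collected by pytest.
    pytest.main([__file__])
```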
docker/Dockerfile.multi (2)
4-6: Base image tags updated appropriately with TODO for Triton. The base image tag has been updated to 25.10-py3, aligning with the PR objectives. The TODO comment at line 5 correctly notes that Triton 25.10 is not yet available, so 25.09 is used as an interim solution. Please ensure this TODO is tracked and Triton is updated to 25.10 when available.
44-46: LGTM! Installation flow restructured for better clarity. The changes improve the Dockerfile structure by:
- Adding explicit COPY commands for UCX, NIXL, and etcd installation scripts
- Replacing consolidated installation calls with individual RUN commands for each component
- This improves layer caching and makes the build process more transparent
Also applies to: 78-87
requirements.txt (3)
1-1: LGTM! CUDA stack consistently upgraded to 13.x. The requirements file has been systematically updated from CUDA 12.x (cu12) to CUDA 13.x (cu13):

- PyTorch wheel index: `cu129` → `cu130`
- `cuda-python>=13` (was `>=12`)
- `nvidia-ml-py>=13` (was `>=12`)
- `nvidia-nccl-cu13` (was `nvidia-nccl-cu12`)

This is consistent with the infrastructure upgrade described in the PR.
Also applies to: 6-6, 16-16, 28-28
23-24: Verify compatibility of major version upgrades. Several key dependencies have been upgraded to match the new stack:

- `torch>=2.9.0a0,<=2.9.0` (from 2.8.0)
- `nvidia-modelopt[torch]~=0.37.0` (version bump)
- `triton==3.5.0` (from 3.1.0)

Please confirm that these version combinations have been tested together and are compatible with TensorRT 10.13.0 and CUDA 13.x.
Also applies to: 26-26, 67-67
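A quick, hedged way to sanity-check a local environment against these pins (specifiers copied from the review; `packaging` is assumed to be installed, as pip itself depends on it):

```python
from importlib.metadata import PackageNotFoundError, version

from packaging.specifiers import SpecifierSet

# Pins taken from the review comment above.
PINS = {
    "torch": ">=2.9.0a0,<=2.9.0",
    "nvidia-modelopt": "~=0.37.0",
    "triton": "==3.5.0",
}

for name, spec in PINS.items():
    try:
        installed = version(name)
    except PackageNotFoundError:
        print(f"{name}: not installed")
        continue
    # prereleases=True so the 2.9.0a0 lower bound is evaluated as intended.
    ok = installed in SpecifierSet(spec, prereleases=True)
    print(f"{name} {installed} satisfies '{spec}': {ok}")
```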
36-36: Verify unpinning pillow and pinning datasets. Two packaging changes:

- `pillow` has been unpinned (was `pillow==10.3.0`)
- `datasets==3.1.0` has been newly pinned

The comment at line 39 explains the datasets pin, but please confirm:
- Unpinning pillow won't introduce compatibility issues with the new PyTorch/CUDA stack
- The datasets pin at 3.1.0 is the intended stable version (the comment mentions datasets>3.1.0 is not stable)
Also applies to: 40-40
jenkins/Build.groovy (1)
457-457: Fixed Triton tag aligns with Dockerfile but reduces flexibility. The `tritonShortTag` has been changed from conditional logic to a fixed value of `"r25.09"`. This aligns with the `TRITON_BASE_TAG=25.09-py3` setting in `docker/Dockerfile.multi` (line 6) and simplifies the build logic. However, this hard-coded value means any future changes to the Triton version will require updates in multiple locations.

Consider whether this fixed value is appropriate for all build configurations or whether the TODO comment in Dockerfile.multi (line 5) about updating to Triton 25.10 should trigger a corresponding update here.
docs/source/installation/linux.md (1)
16-16: Confirm intentionality of torchaudio removal from installation command. PyTorch 2.9.0 cu130 wheels are confirmed available at download.pytorch.org. However, verify that removing `torchaudio` from the installation line is intentional: no project dependencies list (requirements.txt) was found, and torchaudio references in the codebase appear only in a FairSeq workaround context, not as a core dependency. Clarify this change to avoid confusion.

jenkins/L0_Test.groovy (5)
41-41: LGTM: DLFW image updated to 25.10. The DLFW base image has been correctly updated to version 25.10, which aligns with the PR objective.
2618-2618: LGTM: Test configurations updated to use standard image variables. The test job configurations have been correctly updated to use `LLM_DOCKER_IMAGE` directly, removing CUDA 12.9-specific variants. This simplifies the configuration and aligns with the CUDA 13.0 upgrade.
2879-2886: LGTM: CUDA 13.0 toolkit installation for Ubuntu base images. The conditional installation of CUDA 13.0 toolkit for non-DLFW Ubuntu base images is appropriate. DLFW images already include the CUDA toolkit, so this correctly handles the Ubuntu 22.04/24.04 bare-metal environments.
2326-2326: No issues found; all call sites are correctly updated. The `is_cu12` parameter has been successfully removed from the `runLLMBuild` method signature at line 2326. Both call sites in the codebase (lines 2845 and 3161 in L0_Test.groovy) correctly pass the expected parameters, and no orphaned `is_cu12` references remain.
1496-1496: The current Triton release is version 2.61.0, which corresponds to the 25.09 container release on NVIDIA GPU Cloud (NGC), and this release was published on October 7, 2025. The version 25.09 specified in the code aligns with the latest published container image and is the correct choice. Following the principle that container tags should reference published NGC images, using 25.09 is appropriate and requires no changes.
jenkins/current_image_tags.properties (1)
16-19: Staging image tags are intentional; no action required. The `jenkins/current_image_tags.properties` file is designed to hold docker image tags generated by this PR's CI pipeline. Per the file header comment and dev-containers documentation, the user branch identifier (user_zhanruis_1024_upgrade_dlfw_2510_main-1075) is the expected pattern for PR-based CI artifacts.

Developers accessing the internal registry will use these staging images; others automatically fall back to published NGC Development container images. This is the normal workflow: staging tags are not meant for promotion before merge; they represent the current development state of the DLFW 25.10 upgrade on this branch.
PR_Github #23149 [ run ] triggered by Bot. Commit:

/bot run --post-merge --disable-fail-fast

PR_Github #23181 [ run ] triggered by Bot. Commit:

PR_Github #23149 [ run ] completed with state
Force-pushed from 1493595 to 1b17880
/bot run --post-merge --disable-fail-fast

PR_Github #23203 [ run ] triggered by Bot. Commit:

PR_Github #23181 [ run ] completed with state

PR_Github #23275 [ run ] triggered by Bot. Commit:

PR_Github #23267 [ run ] completed with state

PR_Github #23275 [ run ] completed with state

/bot run --stage-list "DGX_B200-4_GPUs-PyTorch-Post-Merge-1, DGX_B300-4_GPUs-PyTorch-Post-Merge-1"

PR_Github #23348 [ run ] triggered by Bot. Commit:

PR_Github #23348 [ run ] completed with state
Force-pushed from ee47aad to 56b0630
/bot run --post-merge --disable-fail-fast

PR_Github #23389 [ run ] triggered by Bot. Commit:

/bot run --post-merge --disable-fail-fast

PR_Github #23407 [ run ] triggered by Bot. Commit:

PR_Github #23407 [ run ] completed with state

/bot run --post-merge --disable-fail-fast

/LLM/main/L0_MergeRequest pipeline #29388
… 3.5.0

Signed-off-by: ZhanruiSunCh <184402041+ZhanruiSunCh@users.noreply.github.com>
Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>
Force-pushed from 07aba70 to 747cf5e
/bot skip --comments "/LLM/main/L0_MergeRequest_PR pipeline #17628 and /LLM/main/L0_MergeRequest pipeline #29388 run a full post merge pipeline for commit

/bot skip --comment "/LLM/main/L0_MergeRequest_PR pipeline #17628 and /LLM/main/L0_MergeRequest pipeline #29388 run a full post merge pipeline for commit 07aba70. Two failed tests are known issue on main branch, which are waived by #8897"

PR_Github #23500 Bot args parsing error: usage: /bot skip --comment COMMENT

PR_Github #23501 [ skip ] triggered by Bot. Commit:

PR_Github #23501 [ skip ] completed with state
… 3.5.0 (NVIDIA#8838)

Signed-off-by: ZhanruiSunCh <184402041+ZhanruiSunCh@users.noreply.github.com>
Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>
Co-authored-by: Yanchao Lu <yanchaol@nvidia.com>
Signed-off-by: FredricZ-2007 <226039983+fredricz-20070104@users.noreply.github.com>

…/ triton 3.5.0 (NVIDIA#8838)"

This reverts commit 4de31be.

Signed-off-by: ZhanruiSunCh <184402041+ZhanruiSunCh@users.noreply.github.com>
Summary by CodeRabbit
New Features
Documentation
Chores
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
`/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...`

Provide a user friendly way for developers to interact with a Jenkins server.
Run `/bot [-h|--help]` to print this help message. See details below for each supported subcommand.
Details
`run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]`

Launch build/test pipelines. All previously running jobs will be killed.

- `--reuse-test (optional)pipeline-id` (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- `--disable-reuse-test` (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.
- `--disable-fail-fast` (OPTIONAL) : Disable fail fast on build/tests/infra failures.
- `--skip-test` (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- `--stage-list "A10-PyTorch-1, xxx"` (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- `--gpu-type "A30, H100_PCIe"` (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- `--test-backend "pytorch, cpp"` (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- `--only-multi-gpu-test` (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--disable-multi-gpu-test` (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--add-multi-gpu-test` (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.
- `--post-merge` (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- `--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"` (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- `--detailed-log` (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- `--debug` (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the `stage-list` parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.
docs/source/reference/ci-overview.mdand the
scripts/test_to_stage_mapping.pyhelper.kill
killKill all running builds associated with pull request.
skip

`skip --comment COMMENT`

Skip testing for latest commit on pull request. `--comment "Reason for skipping build/test"` is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline
`reuse-pipeline`

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.