
Conversation

@hyukn
Collaborator

@hyukn hyukn commented Dec 2, 2025

Summary by CodeRabbit

  • New Features

    • Added distributed tuning support with configurable strategies (BROADCAST, INDEPENDENT, MERGE) for multi-rank environments.
    • Implemented rank-aware cache synchronization to coordinate autotuning across distributed ranks.
  • Tests

    • Added comprehensive distributed autotuning test coverage with MPI support.


Description

This PR introduces the first version of a distributed autotuning system. We've chosen a distributed tuning approach over standalone tuning for the following reasons:

  • Reduced Tuning Time: When tunable operations exhibit consistent patterns across different ranks, we can distribute tuning tasks across ranks in parallel, each exploring different optimization tactics. This dramatically reduces overall tuning time compared to sequential, standalone tuning.
  • Guaranteed Symmetric Tactics: Standalone tuning can lead to inconsistencies when different ranks independently select different tactics during the warm-up phase, due to minor variations in dummy tensor creation or profiling timing. This can result in imbalanced GPU execution times during inference. The distributed approach ensures all ranks converge on the same tactics, eliminating these performance bottlenecks.
  • Communication Op Alignment: The distributed autotuning system ensures all ranks can align on communication operation implementations, providing better compatibility with the AutoTuner framework.

We have implemented four distinct strategies (BROADCAST, INDEPENDENT, MERGE, and PARALLEL) to accommodate different distributed tuning scenarios; a configuration sketch follows the list below:

  • Distributed tuning is disabled by default, with the INDEPENDENT strategy as the fallback. This conservative approach prevents unexpected behavior in standard use cases.
  • Only operations with significant tuning time overhead have been assigned the PARALLEL strategy, which allows the same tensor parallelism (TP) rank to tune tactics concurrently across different ranks. This targeted approach balances performance gains with stability.
  • Operations with nested tuning structures, such as NVFP4GemmUnifiedRunner, currently support only the INDEPENDENT strategy. This restriction exists because the synchronization mechanism is optimized for leaf operations and doesn't yet handle nested hierarchies.
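
For illustration, here is a minimal configuration sketch. The TuningConfig field and DistributedTuningStrategy enum come from this PR; the variable names are hypothetical and the exact constructor arguments may differ.

```python
from tensorrt_llm._torch.autotuner import DistributedTuningStrategy, TuningConfig

# Most ops keep the conservative default (INDEPENDENT, no cross-rank coordination).
default_config = TuningConfig(
    distributed_tuning_strategy=DistributedTuningStrategy.INDEPENDENT)

# An op that must converge on identical tactics across ranks could instead
# request BROADCAST, where rank 0 tunes and shares its results.
broadcast_config = TuningConfig(
    distributed_tuning_strategy=DistributedTuningStrategy.BROADCAST)
```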

Tuning perf experiments:
TODO here

Future enhancements include:

  • Extend parallelization across both pipeline parallelism (PP) and tensor parallelism (TP) ranks to eliminate redundant serial tuning cycles and further accelerate the overall tuning process.
  • Enable the BROADCAST strategy by default across all operations, removing the current exception for nested operations and providing more uniform tuning behavior.

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since a lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since a lack of user care and validation can cause the top of tree to break.

@hyukn hyukn requested a review from a team as a code owner December 2, 2025 08:28
@coderabbitai
Contributor

coderabbitai bot commented Dec 2, 2025

📝 Walkthrough

Walkthrough

Distributed tuning support is introduced to TensorRT-LLM's autotuner module, including a new tuning strategy enum, distributed state initialization, rank-aware synchronization, and cache coordination mechanisms. Integration updates are made to MoERunner, and comprehensive distributed test coverage is added.
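
As a rough orientation before the per-file changes, here is a hedged sketch of how the new distributed initialization might be driven from user code. It is assembled from the walkthrough and the review thread near the end of this page (which notes that setup_distributed_state may also take a dist argument), so treat the signatures and Mapping arguments as approximate.

```python
from tensorrt_llm._torch.autotuner import AutoTuner, autotune
from tensorrt_llm.mapping import Mapping

rank = 0  # current process rank; in practice this comes from MPI / the runtime
mapping = Mapping(world_size=2, tp_size=2, pp_size=1, rank=rank)

tuner = AutoTuner.get()
tuner.clear_cache()
tuner.setup_distributed_state(mapping)

# During warm-up, choose_one() profiles tactics; when the autotune context
# exits, caches are synchronized across ranks according to each op's strategy.
with autotune():
    pass  # model warm-up / tuning passes go here
```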

Changes

Cohort / File(s) Summary
Distributed tuning infrastructure
tensorrt_llm/_torch/autotuner.py
Added DistributedTuningStrategy enum with BROADCAST, INDEPENDENT, MERGE strategies; extended TuningConfig with distributed_tuning_strategy field; introduced AutoTuner.set_mapping() and AutoTuner.setup_distributed_state() for distributed initialization; added internal synchronization helpers for rank coordination and cache broadcast/merge; enhanced autotune context manager with rank-aware logging and cache synchronization points.
MoERunner distributed integration
tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
Imported DistributedTuningStrategy and added to public exports; configured MoERunner.tuning_config with distributed_tuning_strategy=DistributedTuningStrategy.INDEPENDENT; modified MoERunner.unique_id() to exclude tp_rank and ep_rank from identity tuple for cross-rank cache reuse.
Distributed autotuning tests
tests/unittest/_torch/misc/test_autotuner.py
Added MPI-based distributed tuning test infrastructure; introduced DistributedGemmRunner with multi-tactic support; added _distributed_worker_function for per-rank MPI simulation; implemented tests validating BROADCAST, INDEPENDENT, and MERGE strategy behavior and rank synchronization patterns.

Sequence Diagram

sequenceDiagram
    participant Rank0 as Rank 0
    participant Rank1 as Rank 1
    participant RankN as Rank N
    participant AutoTuner
    participant Cache as ProfilingCache
    participant Strategy

    Rank0->>AutoTuner: setup_distributed_state(mapping)
    Rank1->>AutoTuner: setup_distributed_state(mapping)
    RankN->>AutoTuner: setup_distributed_state(mapping)

    Note over Rank0,RankN: Tuning Phase Start
    Rank0->>AutoTuner: autotune context enter
    Rank1->>AutoTuner: autotune context enter
    RankN->>AutoTuner: autotune context enter

    par Distributed Profiling
        Rank0->>Strategy: _should_current_rank_tune()
        Rank1->>Strategy: _should_current_rank_tune()
        RankN->>Strategy: _should_current_rank_tune()
        
        Rank0->>Cache: profile_kernel()
        Rank1->>Cache: profile_kernel()
        RankN->>Cache: profile_kernel()
    end

    Note over Rank0,RankN: Cache Synchronization
    alt Strategy == BROADCAST
        Rank0->>Cache: get_cache_data()
        Cache-->>Rank0: cache_data
        Rank0->>AutoTuner: _broadcast_cache_data()
        AutoTuner->>Rank1: cache_data
        AutoTuner->>RankN: cache_data
    else Strategy == MERGE
        Rank0->>Cache: get_cache_data()
        Rank1->>Cache: get_cache_data()
        RankN->>Cache: get_cache_data()
        AutoTuner->>AutoTuner: _merge_cache_data()
        AutoTuner->>Rank0: merged_cache_data
        AutoTuner->>Rank1: merged_cache_data
        AutoTuner->>RankN: merged_cache_data
    end

    Note over Rank0,RankN: Synchronization Barrier
    Rank0->>AutoTuner: _synchronize_ranks()
    Rank1->>AutoTuner: _synchronize_ranks()
    RankN->>AutoTuner: _synchronize_ranks()

    Rank0->>AutoTuner: autotune context exit
    Rank1->>AutoTuner: autotune context exit
    RankN->>AutoTuner: autotune context exit

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

  • Distributed synchronization logic: New rank coordination, cache broadcast/merge mechanisms, and strategy-based conditional logic require careful verification of correctness across multiple ranks.
  • API surface changes: Multiple new public methods (set_mapping, setup_distributed_state, merge_cache_data) and enum introduction (DistributedTuningStrategy) need review for consistency and backward compatibility.
  • MoERunner identity change: Removal of tp_rank and ep_rank from unique_id() has implications for cache key specificity and cross-rank reuse patterns; interactions with existing caching logic need validation.
  • Test infrastructure: Distributed MPI test framework with per-rank worker functions and strategy validation requires understanding of multi-process test execution and synchronization patterns.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 37.93% which is insufficient. The required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
Description check ❓ Inconclusive PR description provides context about distributed autotuning system and rationale, but Test Coverage and several checklist items remain incomplete. Complete the 'Test Coverage' section with specific test case details, and review the PR checklist items that require completion before merging.
✅ Passed checks (1 passed)
Check name Status Explanation
Title check ✅ Passed The title accurately describes the main objective of the changeset: implementing distributed tuning system support with new strategies, distributed-aware APIs, and synchronization mechanisms.
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (4)
tests/unittest/_torch/misc/test_autotuner.py (2)

634-651: Minor cleanup: unused variable and assert False.

  1. selected_runner is unpacked but never used - prefix with underscore
  2. assert False should be raise AssertionError() for robustness with python -O

Apply this diff:

-    selected_runner, best_tactic = tuner.choose_one(
+    _selected_runner, best_tactic = tuner.choose_one(
         custom_op=f"test_distributed_{strategy}",
         runners=[runner],
         tuning_config=config,
         inputs=inputs)
 
     if strategy == DistributedTuningStrategy.BROADCAST:
         # All ranks should select tactic 0
         assert best_tactic == 0
     elif strategy == DistributedTuningStrategy.INDEPENDENT:
         # Each rank should select the tactic it prefers
         assert best_tactic == rank
     elif strategy == DistributedTuningStrategy.MERGE:
         # Because tactic 0 is slower, two ranks should always select tactic 1
         assert best_tactic == 1
     else:
-        assert False, f"Unknown strategy: {strategy}"
+        raise AssertionError(f"Unknown strategy: {strategy}")

655-678: Test name doesn't match scope - tests all strategies, not just broadcast.

The test test_distributed_broadcast_strategy is parametrized across all three strategies (BROADCAST, INDEPENDENT, MERGE), but the name implies it only tests the broadcast strategy. Consider renaming to test_distributed_tuning_strategies for clarity.

-def test_distributed_broadcast_strategy(strategy, mpi_pool_executor):
-    """Test broadcast strategy with real MPI processes."""
+def test_distributed_tuning_strategies(strategy, mpi_pool_executor):
+    """Test distributed tuning strategies with real MPI processes."""
tensorrt_llm/_torch/autotuner.py (2)

806-821: Unused variable min_time from unpacking.

The min_time variable is unpacked but never used. Prefix with underscore to indicate intentional discard.

-                    best_runner_id, best_tactic, min_time, has_tuning_failure_occured = self._profile_runners(
+                    best_runner_id, best_tactic, _min_time, has_tuning_failure_occured = self._profile_runners(

1440-1459: Minor: Remove unnecessary f-string prefix.

Line 1445 has an f-string with no placeholders.

-            logger.warning(
-                f"[AutoTuner] Not in distributed environment, skipping synchronization"
-            )
+            logger.warning(
+                "[AutoTuner] Not in distributed environment, skipping synchronization"
+            )
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3e4f238 and 705161e.

📒 Files selected for processing (3)
  • tensorrt_llm/_torch/autotuner.py (11 hunks)
  • tensorrt_llm/_torch/custom_ops/torch_custom_ops.py (2 hunks)
  • tests/unittest/_torch/misc/test_autotuner.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces; do not use tabs
Always maintain the namespace when importing in Python, even if only one class or function from a module is used (e.g., use from package.subpackage import foo and then foo.SomeClass() instead of from package.subpackage.foo import SomeClass)
Python filenames should use snake_case (e.g., some_file.py)
Python class names should use PascalCase (e.g., class SomeClass)
Python function and method names should use snake_case (e.g., def my_awesome_function():)
Python local variable names should use snake_case, with prefix k for variable names that start with a number (e.g., k_99th_percentile = ...)
Python global variables should use upper snake_case with prefix G (e.g., G_MY_GLOBAL = ...)
Python constants should use upper snake_case (e.g., MY_CONSTANT = ...)
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Python comments should be reserved for code within a function, or interfaces that are local to a file
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with type and description (e.g., self.x = 5 followed by """<type>: Description of 'x'""" )
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except clause to the smallest set of specific errors possible instead of catching all exceptions
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible and use the else block to implement the logic

Files:

  • tests/unittest/_torch/misc/test_autotuner.py
  • tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
  • tensorrt_llm/_torch/autotuner.py
**/*.{cpp,h,cu,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code files should contain an NVIDIA copyright header that includes the current year at the top

Files:

  • tests/unittest/_torch/misc/test_autotuner.py
  • tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
  • tensorrt_llm/_torch/autotuner.py
🧠 Learnings (9)
📓 Common learnings
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.
Learnt from: achartier
Repo: NVIDIA/TensorRT-LLM PR: 6763
File: tests/integration/defs/triton_server/conftest.py:16-22
Timestamp: 2025-08-11T20:09:24.389Z
Learning: In the TensorRT-LLM test infrastructure, the team prefers simple, direct solutions (like hard-coding directory traversal counts) over more complex but robust approaches when dealing with stable directory structures. They accept the maintenance cost of updating tests if the layout changes.
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7192
File: tests/integration/test_lists/test-db/l0_dgx_b200.yml:56-72
Timestamp: 2025-08-26T09:49:04.956Z
Learning: In TensorRT-LLM test configuration files, the test scheduling system handles wildcard matching with special rules that prevent duplicate test execution even when the same tests appear in multiple yaml files with overlapping GPU wildcards (e.g., "*b200*" and "*gb200*").
Learnt from: ChristinaZ
Repo: NVIDIA/TensorRT-LLM PR: 7068
File: cpp/tensorrt_llm/kernels/moeTopKFuncs.cuh:169-172
Timestamp: 2025-08-20T07:43:36.447Z
Learning: In TensorRT-LLM MOE kernels, when processing up to 128 experts across 32 threads, each thread handles at most 4 experts (N < 5 constraint), where N represents candidates per thread rather than total system capacity.
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/thop/allreduceOp.cpp:352-446
Timestamp: 2025-09-23T15:12:38.312Z
Learning: In TensorRT-LLM NCCL device implementation, NCCL version 2.28+ requirements are handled at runtime in the nccl_device/config layer rather than with compile-time guards. This allows the allreduceOp to remain version-agnostic and delegates version compatibility validation to the appropriate lower-level components that can gracefully handle unsupported configurations.
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 7520
File: tensorrt_llm/_torch/pyexecutor/resource_manager.py:605-613
Timestamp: 2025-09-24T03:31:28.908Z
Learning: In TensorRT-LLM Ray orchestrator mode, ProcessGroups are initialized with both Gloo and NCCL backends (e.g., "cuda:nccl,cpu:gloo"), allowing PyTorch distributed to automatically route CPU tensors through Gloo and GPU tensors through NCCL. This eliminates the need for manual device placement when performing allreduce operations on base types.
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/config.cu:42-49
Timestamp: 2025-09-23T14:58:05.372Z
Learning: In TensorRT-LLM NCCL device kernels (cpp/tensorrt_llm/kernels/nccl_device/), the token partitioning intentionally uses ceil-like distribution (same token_per_rank for all ranks) to ensure all ranks launch the same number of blocks. This is required for optimal NCCL device API barrier performance, even though it may launch extra blocks for non-existent tokens on later ranks. Runtime bounds checking in the kernel (blockID validation) handles the overshoot cases.
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Learnt from: yibinl-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 6506
File: examples/models/core/mixtral/requirements.txt:3-3
Timestamp: 2025-08-01T15:14:45.673Z
Learning: In TensorRT-LLM, examples directory can have different dependency versions than the root requirements.txt file. Version conflicts between root and examples dependencies are acceptable because examples are designed to be standalone and self-contained.
Learnt from: amitz-nv
Repo: NVIDIA/TensorRT-LLM PR: 7033
File: tensorrt_llm/_torch/pyexecutor/model_engine.py:0-0
Timestamp: 2025-08-19T12:45:11.997Z
Learning: In tensorrt_llm/_torch/pyexecutor/model_engine.py, DoRA (Delta Orthogonal Rank Adaptation) functionality was removed from the PyTorch flow to eliminate issues with inverted DoRA detection logic. The original is_dora condition was checking if scaling_vec_pointer == 0, which was potentially incorrect.
📚 Learning: 2025-08-29T14:07:45.863Z
Learnt from: EmmaQiaoCh
Repo: NVIDIA/TensorRT-LLM PR: 7370
File: tests/unittest/trt/model_api/test_model_quantization.py:24-27
Timestamp: 2025-08-29T14:07:45.863Z
Learning: In TensorRT-LLM's CI infrastructure, pytest skip markers (pytest.mark.skip) are properly honored even when test files have __main__ blocks that call test functions directly. The testing system correctly skips tests without requiring modifications to the __main__ block execution pattern.

Applied to files:

  • tests/unittest/_torch/misc/test_autotuner.py
📚 Learning: 2025-08-01T15:14:45.673Z
Learnt from: yibinl-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 6506
File: examples/models/core/mixtral/requirements.txt:3-3
Timestamp: 2025-08-01T15:14:45.673Z
Learning: In TensorRT-LLM, examples directory can have different dependency versions than the root requirements.txt file. Version conflicts between root and examples dependencies are acceptable because examples are designed to be standalone and self-contained.

Applied to files:

  • tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
📚 Learning: 2025-08-27T14:23:55.566Z
Learnt from: ixlmar
Repo: NVIDIA/TensorRT-LLM PR: 7294
File: tensorrt_llm/_torch/modules/rms_norm.py:17-17
Timestamp: 2025-08-27T14:23:55.566Z
Learning: The TensorRT-LLM project requires Python 3.10+ as evidenced by the use of TypeAlias from typing module, match/case statements, and union type | syntax throughout the codebase, despite some documentation still mentioning Python 3.8+.

Applied to files:

  • tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which can contain default `cuda_graph_config` values, so `llm_args` may already have this config before the extra options processing.

Applied to files:

  • tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.

Applied to files:

  • tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM's bench configuration, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which is a Dict[str, Any] that can contain default values including `cuda_graph_config`, making the fallback `llm_args["cuda_graph_config"]` safe to use.

Applied to files:

  • tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
📚 Learning: 2025-08-14T15:38:01.771Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: cpp/tensorrt_llm/pybind/thop/bindings.cpp:55-57
Timestamp: 2025-08-14T15:38:01.771Z
Learning: In TensorRT-LLM Python bindings, tensor parameter collections like mla_tensor_params and spec_decoding_tensor_params are kept as required parameters without defaults to maintain API consistency, even when it might affect backward compatibility.

Applied to files:

  • tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
🧬 Code graph analysis (2)
tensorrt_llm/_torch/custom_ops/torch_custom_ops.py (1)
tensorrt_llm/_torch/autotuner.py (6)
  • ConstraintSpec (53-63)
  • DistributedTuningStrategy (29-33)
  • DynamicTensorSpec (37-49)
  • OptimizationProfile (152-167)
  • TunableRunner (178-242)
  • TuningConfig (67-126)
tensorrt_llm/_torch/autotuner.py (4)
tensorrt_llm/_utils.py (4)
  • mpi_allgather (591-592)
  • mpi_barrier (577-579)
  • mpi_broadcast (587-588)
  • mpi_disabled (522-524)
tensorrt_llm/mapping.py (1)
  • Mapping (351-530)
tensorrt_llm/llmapi/utils.py (2)
  • get (415-445)
  • get (498-515)
tensorrt_llm/llmapi/mpi_session.py (1)
  • is_initialized (57-58)
🪛 Ruff (0.14.7)
tests/unittest/_torch/misc/test_autotuner.py

592-592: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)


595-595: Unused method argument: inputs

(ARG002)


595-595: Unused method argument: profile

(ARG002)


595-595: Unused method argument: kwargs

(ARG002)


599-599: Unused method argument: kwargs

(ARG002)


634-634: Unpacked variable selected_runner is never used

Prefix it with an underscore or any other dummy variable pattern

(RUF059)


650-650: Do not assert False (python -O removes these calls), raise AssertionError()

Replace assert False

(B011)


672-675: zip() without an explicit strict= parameter

Add explicit value for parameter strict=

(B905)

tensorrt_llm/_torch/autotuner.py

816-816: Unpacked variable min_time is never used

Prefix it with an underscore or any other dummy variable pattern

(RUF059)


1429-1429: Do not catch blind exception: Exception

(BLE001)


1445-1445: f-string without any placeholders

Remove extraneous f prefix

(F541)


1515-1515: Do not catch blind exception: Exception

(BLE001)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (12)
tensorrt_llm/_torch/custom_ops/torch_custom_ops.py (3)

13-15: LGTM: Import addition for distributed tuning strategy.

The import of DistributedTuningStrategy from the autotuner module aligns with the new distributed tuning infrastructure being introduced.


34-40: LGTM: INDEPENDENT strategy is appropriate for MoE tuning.

Using DistributedTuningStrategy.INDEPENDENT is reasonable for MoE operations since each rank may have different expert assignments and workload distributions.


101-120: Verify removal of tp_rank and ep_rank from unique_id() is intentional.

The unique_id() method no longer includes self.tp_rank and self.ep_rank, though these attributes are still stored in the instance. This changes the cache key behavior - tuning results will now be shared across ranks with the same configuration.

With the INDEPENDENT strategy, this may be intentional to enable cache reuse. However, please verify this won't cause incorrect cache lookups if ranks have different weight shapes or expert distributions that depend on tp_rank/ep_rank.
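
To make the identity change concrete, a toy stand-in is sketched below; everything except the tp_rank/ep_rank handling is hypothetical and does not reflect the real MoERunner fields.

```python
class ToyMoERunner:
    """Hypothetical stand-in for MoERunner, used only to illustrate the cache-key change."""

    def __init__(self, hidden_size: int, dtype: str, tp_rank: int, ep_rank: int):
        self.hidden_size = hidden_size
        self.dtype = dtype
        self.tp_rank = tp_rank  # still stored on the instance...
        self.ep_rank = ep_rank  # ...but no longer folded into the identity tuple

    def unique_id(self) -> tuple:
        # Rank indices are intentionally excluded so equivalent configs on
        # different ranks hit the same tuning-cache entry.
        return (self.hidden_size, self.dtype)
```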

tests/unittest/_torch/misc/test_autotuner.py (1)

25-34: Module-level MPI/cloudpickle setup is appropriate for distributed tests.

The setup for MPI pickling with cloudpickle and the thread leak marker for pytest are reasonable for distributed testing infrastructure. The comment on line 33 explaining the thread leak marker is helpful.

tensorrt_llm/_torch/autotuner.py (8)

29-33: LGTM: Clear enum definition for distributed tuning strategies.

The three strategies (BROADCAST, INDEPENDENT, MERGE) are well-defined and cover the common distributed tuning patterns.


115-126: LGTM: TuningConfig extended with distributed strategy field.

The distributed_tuning_strategy field with INDEPENDENT as default is a safe choice that requires no coordination between ranks, making it backward compatible.
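
A simplified sketch consistent with this comment; the enum member values and the elided fields are placeholders, not the actual definitions.

```python
import enum
from dataclasses import dataclass


class DistributedTuningStrategy(enum.Enum):
    BROADCAST = "broadcast"      # rank 0 tunes, results are broadcast to all ranks
    INDEPENDENT = "independent"  # every rank tunes on its own; no coordination needed
    MERGE = "merge"              # every rank tunes, then caches are merged and shared


@dataclass
class TuningConfig:
    # ... existing tuning fields elided ...
    distributed_tuning_strategy: DistributedTuningStrategy = (
        DistributedTuningStrategy.INDEPENDENT)
```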


246-299: LGTM: Context manager updated with distributed-aware logging.

The autotune context manager now provides rank-aware logging for distributed environments, which aids debugging without changing core behavior.


590-611: LGTM: AutoTuner initialization with distributed state placeholder.

The addition of mapping attribute and configurable logging level via environment variable is well-structured.


1432-1434: LGTM: Distributed check logic is correct.

The three-condition check ensures distributed mode is only enabled when properly configured.


1486-1503: LGTM: Broadcast implementation is correct.

The broadcast logic properly uses torch.distributed or MPI based on the configured backend, and merges the received data into the local cache.
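
A hedged sketch of the MPI path only, assuming the mpi_broadcast helper listed in the code-graph section takes an object and a root rank; the cache is modeled as a plain dict here rather than the real ProfilingCache.

```python
from tensorrt_llm._utils import mpi_broadcast


def broadcast_cache_sketch(local_cache: dict, rank: int, root: int = 0) -> dict:
    """Rank `root` publishes its tuning cache; other ranks merge the received entries."""
    received = mpi_broadcast(local_cache if rank == root else None, root=root)
    if rank != root and received:
        local_cache.update(received)
    return local_cache
```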


1505-1516: LGTM: Rank synchronization with error handling.

The barrier implementation correctly dispatches to the appropriate backend. The broad exception catch with logging is acceptable for robustness in distributed environments where various failures can occur.
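
A small sketch of what such a backend-dispatching barrier could look like; the backend flag and logging are assumptions, and only the calls into mpi_barrier / torch.distributed.barrier mirror the description.

```python
import torch.distributed as dist

from tensorrt_llm._utils import mpi_barrier


def synchronize_ranks_sketch(use_torch_dist: bool) -> None:
    """Best-effort barrier so every rank finishes profiling before caches are read."""
    try:
        if use_torch_dist and dist.is_available() and dist.is_initialized():
            dist.barrier()
        else:
            mpi_barrier()
    except Exception as exc:  # broad catch mirrors the robustness trade-off noted above
        print(f"[AutoTuner] skipping barrier: {exc}")
```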


1518-1537: LGTM: Strategy-based tuning decision logic is correct.

The method correctly implements the tuning decision for each strategy:

  • BROADCAST: only rank 0
  • INDEPENDENT/MERGE: all ranks
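
The same decision, written out as a compact sketch (the strategy-to-rank mapping matches the review; the surrounding details are assumed):

```python
from tensorrt_llm._torch.autotuner import DistributedTuningStrategy


def should_current_rank_tune(strategy: DistributedTuningStrategy, rank: int) -> bool:
    # BROADCAST: only rank 0 profiles; its results are broadcast afterwards.
    if strategy == DistributedTuningStrategy.BROADCAST:
        return rank == 0
    # INDEPENDENT and MERGE: every rank profiles (MERGE reconciles caches later).
    return True
```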

@hyukn
Collaborator Author

hyukn commented Dec 2, 2025

/bot run --disable-fail-fast --only-multi-gpu-test

@hyukn hyukn force-pushed the feat/autotuner_distribute_tuning branch from 705161e to 526dddf Compare December 2, 2025 08:51
@hyukn
Collaborator Author

hyukn commented Dec 2, 2025

/bot run --disable-fail-fast --only-multi-gpu-test

@tensorrt-cicd
Collaborator

PR_Github #26576 [ run ] triggered by Bot. Commit: 526dddf

@tensorrt-cicd
Collaborator

PR_Github #26579 [ run ] triggered by Bot. Commit: 526dddf

@tensorrt-cicd
Collaborator

PR_Github #26576 [ run ] completed with state ABORTED. Commit: 526dddf

@tensorrt-cicd
Collaborator

PR_Github #26579 [ run ] completed with state SUCCESS. Commit: 526dddf
/LLM/main/L0_MergeRequest_PR pipeline #20210 (Partly Tested) completed with status: 'FAILURE'

@hyukn hyukn requested a review from djns99 December 2, 2025 12:44
@hyukn
Collaborator Author

hyukn commented Dec 2, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #26620 [ run ] triggered by Bot. Commit: 6e13c9d

@tensorrt-cicd
Collaborator

PR_Github #26620 [ run ] completed with state SUCCESS. Commit: 6e13c9d
/LLM/main/L0_MergeRequest_PR pipeline #20246 completed with status: 'FAILURE'

Collaborator

@djns99 djns99 left a comment


Excellent work! This seems like a good approach to the problem. I think the default should become BROADCAST or MERGE once we have validated that these give good results and the approach scales well

@hyukn hyukn force-pushed the feat/autotuner_distribute_tuning branch from 6e13c9d to 2985724 Compare December 3, 2025 02:14
@hyukn
Collaborator Author

hyukn commented Dec 3, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #26685 [ run ] triggered by Bot. Commit: 2985724

rosenrodt added a commit to rosenrodt/TensorRT-LLM that referenced this pull request Dec 3, 2025
…IDIA#9621

This reverts commit 1fe1974.

Signed-off-by: Anthony Chang <27950904+rosenrodt@users.noreply.github.com>
@tensorrt-cicd
Collaborator

PR_Github #26685 [ run ] completed with state SUCCESS. Commit: 2985724
/LLM/main/L0_MergeRequest_PR pipeline #20307 completed with status: 'FAILURE'

@hyukn hyukn force-pushed the feat/autotuner_distribute_tuning branch from 2985724 to 0672bd0 Compare December 4, 2025 03:11
@hyukn hyukn requested a review from a team as a code owner December 4, 2025 03:11
@hyukn
Collaborator Author

hyukn commented Dec 4, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #26871 [ run ] triggered by Bot. Commit: 0672bd0

@hyukn
Collaborator Author

hyukn commented Dec 4, 2025

/bot kill

@limin2021
Collaborator

good work!

@hyukn hyukn force-pushed the feat/autotuner_distribute_tuning branch from 564161e to 73efe0c Compare December 12, 2025 03:36
Enable distributed tuning for CUTLASS MoE for prototype validation.
Only apply parallel tuning to several ops at this moment to avoid unexpected hangs and cache-miss issues.

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
@hyukn hyukn force-pushed the feat/autotuner_distribute_tuning branch from 73efe0c to a465736 Compare December 12, 2025 03:36
@hyukn
Collaborator Author

hyukn commented Dec 12, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #27971 [ run ] triggered by Bot. Commit: a465736

@hyukn hyukn changed the title [TRTLLM-9615][feat] Implement a distributed tuning system. [TRTLLM-9615][feat] Implement a distributed tuning system - Part 1. Dec 12, 2025
@hyukn hyukn changed the title [TRTLLM-9615][feat] Implement a distributed tuning system - Part 1. [TRTLLM-9615][feat] Implement a distributed tuning system - Part 1 Dec 12, 2025
@hyukn hyukn changed the title [TRTLLM-9615][feat] Implement a distributed tuning system - Part 1 [TRTLLM-9615][feat] Implement a distributed tuning system Dec 12, 2025
@tensorrt-cicd
Collaborator

PR_Github #27971 [ run ] completed with state SUCCESS. Commit: a465736
/LLM/main/L0_MergeRequest_PR pipeline #21358 completed with status: 'FAILURE'

@hyukn
Collaborator Author

hyukn commented Dec 13, 2025

/bot run --disable-fail-fast

1 similar comment
@hyukn
Collaborator Author

hyukn commented Dec 14, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #28195 [ run ] triggered by Bot. Commit: a465736

@tensorrt-cicd
Collaborator

PR_Github #28195 [ run ] completed with state SUCCESS. Commit: a465736
/LLM/main/L0_MergeRequest_PR pipeline #21549 completed with status: 'FAILURE'

@hyukn
Collaborator Author

hyukn commented Dec 14, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #28243 [ run ] triggered by Bot. Commit: 44e580e

@tensorrt-cicd
Collaborator

PR_Github #28243 [ run ] completed with state SUCCESS. Commit: 44e580e
/LLM/main/L0_MergeRequest_PR pipeline #21596 completed with status: 'FAILURE'

@hyukn
Collaborator Author

hyukn commented Dec 15, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Copy link
Collaborator

PR_Github #28280 [ run ] triggered by Bot. Commit: 44e580e

@tensorrt-cicd
Collaborator

PR_Github #28280 [ run ] completed with state SUCCESS. Commit: 44e580e
/LLM/main/L0_MergeRequest_PR pipeline #21631 completed with status: 'SUCCESS'

@hyukn hyukn merged commit 9e7182b into NVIDIA:main Dec 15, 2025
5 checks passed
pp_size=1)
tuner = AutoTuner.get()
tuner.clear_cache()
tuner.setup_distributed_state(mapping)
Collaborator


@hyukn This line appears to be missing one arg, dist, in the call tuner.setup_distributed_state(mapping, dist).

sherry-1001 pushed a commit to sherry-1001/TensorRT-LLM that referenced this pull request Dec 16, 2025
Four distinct strategies are implemented to accommodate different distributed tuning scenarios, including BROADCAST, INDEPENDENT, MERGE, PARALLEL.

* Distributed tuning is disabled by default, with the INDEPENDENT strategy as the fallback. This conservative approach prevents unexpected behavior in standard use cases.
* Only operations with significant tuning time overhead have been assigned the PARALLEL strategy, which allows the same tensor parallelism (TP) rank to tune tactics concurrently across different ranks. This targeted approach balances performance gains with stability.
* Operations with nested tuning structures, such as NVFP4GemmUnifiedRunner, currently support only the INDEPENDENT strategy. This restriction exists because the synchronization mechanism is optimized only for leaf operations and doesn't yet handle nested hierarchies.

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
codego7250 pushed a commit to codego7250/TensorRT-LLM that referenced this pull request Dec 19, 2025
Four distinct strategies are implemented to accommodate different distributed tuning scenarios, including BROADCAST, INDEPENDENT, MERGE, PARALLEL.

* Distributed tuning is disabled by default, with the INDEPENDENT strategy as the fallback. This conservative approach prevents unexpected behavior in standard use cases.
* Only operations with significant tuning time overhead have been assigned the PARALLEL strategy, which allows the same tensor parallelism (TP) rank to tune tactics concurrently across different ranks. This targeted approach balances performance gains with stability.
* Operations with nested tuning structures, such as NVFP4GemmUnifiedRunner, currently support only the INDEPENDENT strategy. This restriction exists because the synchronization mechanism is optimized only for leaf operations and doesn't yet handle nested hierarchies.

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>