[TRTLLM-9615][feat] Implement a distributed tuning system #9621
Conversation
📝 Walkthrough

Distributed tuning support is introduced to TensorRT-LLM's autotuner module, including a new tuning strategy enum, distributed state initialization, rank-aware synchronization, and cache coordination mechanisms. Integration updates are made to MoERunner, and comprehensive distributed test coverage is added.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Rank0 as Rank 0
    participant Rank1 as Rank 1
    participant RankN as Rank N
    participant AutoTuner
    participant Cache as ProfilingCache
    participant Strategy
    Rank0->>AutoTuner: setup_distributed_state(mapping)
    Rank1->>AutoTuner: setup_distributed_state(mapping)
    RankN->>AutoTuner: setup_distributed_state(mapping)
    Note over Rank0,RankN: Tuning Phase Start
    Rank0->>AutoTuner: autotune context enter
    Rank1->>AutoTuner: autotune context enter
    RankN->>AutoTuner: autotune context enter
    par Distributed Profiling
        Rank0->>Strategy: _should_current_rank_tune()
        Rank1->>Strategy: _should_current_rank_tune()
        RankN->>Strategy: _should_current_rank_tune()
        Rank0->>Cache: profile_kernel()
        Rank1->>Cache: profile_kernel()
        RankN->>Cache: profile_kernel()
    end
    Note over Rank0,RankN: Cache Synchronization
    alt Strategy == BROADCAST
        Rank0->>Cache: get_cache_data()
        Cache-->>Rank0: cache_data
        Rank0->>AutoTuner: _broadcast_cache_data()
        AutoTuner->>Rank1: cache_data
        AutoTuner->>RankN: cache_data
    else Strategy == MERGE
        Rank0->>Cache: get_cache_data()
        Rank1->>Cache: get_cache_data()
        RankN->>Cache: get_cache_data()
        AutoTuner->>AutoTuner: _merge_cache_data()
        AutoTuner->>Rank0: merged_cache_data
        AutoTuner->>Rank1: merged_cache_data
        AutoTuner->>RankN: merged_cache_data
    end
    Note over Rank0,RankN: Synchronization Barrier
    Rank0->>AutoTuner: _synchronize_ranks()
    Rank1->>AutoTuner: _synchronize_ranks()
    RankN->>AutoTuner: _synchronize_ranks()
    Rank0->>AutoTuner: autotune context exit
    Rank1->>AutoTuner: autotune context exit
    RankN->>AutoTuner: autotune context exit
```
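For the MERGE branch in the diagram, one plausible reduction is to allgather every rank's cache entries and keep the fastest measurement per key. This is a sketch only: it assumes entries map a profiling key to a `(runner_id, tactic, time)` tuple, which may not match the real `ProfilingCache` layout, though `mpi_allgather` is a helper this PR actually references.

```python
from tensorrt_llm._utils import mpi_allgather


def merge_cache_data_sketch(local_cache: dict) -> dict:
    """Combine per-rank tuning caches, keeping the lowest measured time per key."""
    merged = {}
    for rank_cache in mpi_allgather(local_cache):  # one cache dict per rank
        for key, (runner_id, tactic, time_ms) in rank_cache.items():
            if key not in merged or time_ms < merged[key][2]:
                merged[key] = (runner_id, tactic, time_ms)
    return merged
```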
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Actionable comments posted: 3
🧹 Nitpick comments (4)
tests/unittest/_torch/misc/test_autotuner.py (2)
634-651: Minor cleanup: unused variable and `assert False`.

- `selected_runner` is unpacked but never used; prefix it with an underscore.
- `assert False` should be `raise AssertionError()` for robustness with `python -O`.

Apply this diff:

```diff
-        selected_runner, best_tactic = tuner.choose_one(
+        _selected_runner, best_tactic = tuner.choose_one(
             custom_op=f"test_distributed_{strategy}",
             runners=[runner],
             tuning_config=config,
             inputs=inputs)
         if strategy == DistributedTuningStrategy.BROADCAST:
             # All ranks should select tactic 0
             assert best_tactic == 0
         elif strategy == DistributedTuningStrategy.INDEPENDENT:
             # Each rank should select the tactic it prefers
             assert best_tactic == rank
         elif strategy == DistributedTuningStrategy.MERGE:
             # Because tactic 0 is slower, two ranks should always select tactic 1
             assert best_tactic == 1
         else:
-            assert False, f"Unknown strategy: {strategy}"
+            raise AssertionError(f"Unknown strategy: {strategy}")
```
655-678: Test name doesn't match scope: it tests all strategies, not just broadcast.

The test `test_distributed_broadcast_strategy` is parametrized across all three strategies (BROADCAST, INDEPENDENT, MERGE), but the name implies it only tests the broadcast strategy. Consider renaming to `test_distributed_tuning_strategies` for clarity.

```diff
-def test_distributed_broadcast_strategy(strategy, mpi_pool_executor):
-    """Test broadcast strategy with real MPI processes."""
+def test_distributed_tuning_strategies(strategy, mpi_pool_executor):
+    """Test distributed tuning strategies with real MPI processes."""
```

tensorrt_llm/_torch/autotuner.py (2)
806-821: Unused variable `min_time` from unpacking.

The `min_time` variable is unpacked but never used. Prefix it with an underscore to indicate an intentional discard.

```diff
-        best_runner_id, best_tactic, min_time, has_tuning_failure_occured = self._profile_runners(
+        best_runner_id, best_tactic, _min_time, has_tuning_failure_occured = self._profile_runners(
```
1440-1459: Minor: Remove unnecessary f-string prefix.

Line 1445 has an f-string with no placeholders.

```diff
-        logger.warning(
-            f"[AutoTuner] Not in distributed environment, skipping synchronization"
-        )
+        logger.warning(
+            "[AutoTuner] Not in distributed environment, skipping synchronization"
+        )
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- tensorrt_llm/_torch/autotuner.py (11 hunks)
- tensorrt_llm/_torch/custom_ops/torch_custom_ops.py (2 hunks)
- tests/unittest/_torch/misc/test_autotuner.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces; do not use tabs
Always maintain the namespace when importing in Python, even if only one class or function from a module is used (e.g., use `from package.subpackage import foo` and then `foo.SomeClass()` instead of `from package.subpackage.foo import SomeClass`)
Python filenames should use snake_case (e.g., `some_file.py`)
Python class names should use PascalCase (e.g., `class SomeClass`)
Python function and method names should use snake_case (e.g., `def my_awesome_function():`)
Python local variable names should use snake_case, with prefix `k` for variable names that start with a number (e.g., `k_99th_percentile = ...`)
Python global variables should use upper snake_case with prefix `G` (e.g., `G_MY_GLOBAL = ...`)
Python constants should use upper snake_case (e.g., `MY_CONSTANT = ...`)
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Python comments should be reserved for code within a function, or interfaces that are local to a file
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with type and description (e.g., `self.x = 5` followed by `"""<type>: Description of 'x'"""`)
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except clause to the smallest set of specific errors possible instead of catching all exceptions
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible and use the else block to implement the logic
Files:
- tests/unittest/_torch/misc/test_autotuner.py
- tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
- tensorrt_llm/_torch/autotuner.py
**/*.{cpp,h,cu,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
All TensorRT-LLM Open Source Software code files should contain an NVIDIA copyright header that includes the current year at the top
Files:
- tests/unittest/_torch/misc/test_autotuner.py
- tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
- tensorrt_llm/_torch/autotuner.py
🧠 Learnings (9)
📓 Common learnings
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.
Learnt from: achartier
Repo: NVIDIA/TensorRT-LLM PR: 6763
File: tests/integration/defs/triton_server/conftest.py:16-22
Timestamp: 2025-08-11T20:09:24.389Z
Learning: In the TensorRT-LLM test infrastructure, the team prefers simple, direct solutions (like hard-coding directory traversal counts) over more complex but robust approaches when dealing with stable directory structures. They accept the maintenance cost of updating tests if the layout changes.
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7192
File: tests/integration/test_lists/test-db/l0_dgx_b200.yml:56-72
Timestamp: 2025-08-26T09:49:04.956Z
Learning: In TensorRT-LLM test configuration files, the test scheduling system handles wildcard matching with special rules that prevent duplicate test execution even when the same tests appear in multiple yaml files with overlapping GPU wildcards (e.g., "*b200*" and "*gb200*").
Learnt from: ChristinaZ
Repo: NVIDIA/TensorRT-LLM PR: 7068
File: cpp/tensorrt_llm/kernels/moeTopKFuncs.cuh:169-172
Timestamp: 2025-08-20T07:43:36.447Z
Learning: In TensorRT-LLM MOE kernels, when processing up to 128 experts across 32 threads, each thread handles at most 4 experts (N < 5 constraint), where N represents candidates per thread rather than total system capacity.
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/thop/allreduceOp.cpp:352-446
Timestamp: 2025-09-23T15:12:38.312Z
Learning: In TensorRT-LLM NCCL device implementation, NCCL version 2.28+ requirements are handled at runtime in the nccl_device/config layer rather than with compile-time guards. This allows the allreduceOp to remain version-agnostic and delegates version compatibility validation to the appropriate lower-level components that can gracefully handle unsupported configurations.
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 7520
File: tensorrt_llm/_torch/pyexecutor/resource_manager.py:605-613
Timestamp: 2025-09-24T03:31:28.908Z
Learning: In TensorRT-LLM Ray orchestrator mode, ProcessGroups are initialized with both Gloo and NCCL backends (e.g., "cuda:nccl,cpu:gloo"), allowing PyTorch distributed to automatically route CPU tensors through Gloo and GPU tensors through NCCL. This eliminates the need for manual device placement when performing allreduce operations on base types.
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/config.cu:42-49
Timestamp: 2025-09-23T14:58:05.372Z
Learning: In TensorRT-LLM NCCL device kernels (cpp/tensorrt_llm/kernels/nccl_device/), the token partitioning intentionally uses ceil-like distribution (same token_per_rank for all ranks) to ensure all ranks launch the same number of blocks. This is required for optimal NCCL device API barrier performance, even though it may launch extra blocks for non-existent tokens on later ranks. Runtime bounds checking in the kernel (blockID validation) handles the overshoot cases.
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Learnt from: yibinl-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 6506
File: examples/models/core/mixtral/requirements.txt:3-3
Timestamp: 2025-08-01T15:14:45.673Z
Learning: In TensorRT-LLM, examples directory can have different dependency versions than the root requirements.txt file. Version conflicts between root and examples dependencies are acceptable because examples are designed to be standalone and self-contained.
Learnt from: amitz-nv
Repo: NVIDIA/TensorRT-LLM PR: 7033
File: tensorrt_llm/_torch/pyexecutor/model_engine.py:0-0
Timestamp: 2025-08-19T12:45:11.997Z
Learning: In tensorrt_llm/_torch/pyexecutor/model_engine.py, DoRA (Delta Orthogonal Rank Adaptation) functionality was removed from the PyTorch flow to eliminate issues with inverted DoRA detection logic. The original is_dora condition was checking if scaling_vec_pointer == 0, which was potentially incorrect.
📚 Learning: 2025-08-29T14:07:45.863Z
Learnt from: EmmaQiaoCh
Repo: NVIDIA/TensorRT-LLM PR: 7370
File: tests/unittest/trt/model_api/test_model_quantization.py:24-27
Timestamp: 2025-08-29T14:07:45.863Z
Learning: In TensorRT-LLM's CI infrastructure, pytest skip markers (pytest.mark.skip) are properly honored even when test files have __main__ blocks that call test functions directly. The testing system correctly skips tests without requiring modifications to the __main__ block execution pattern.
Applied to files:
tests/unittest/_torch/misc/test_autotuner.py
📚 Learning: 2025-08-01T15:14:45.673Z
Learnt from: yibinl-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 6506
File: examples/models/core/mixtral/requirements.txt:3-3
Timestamp: 2025-08-01T15:14:45.673Z
Learning: In TensorRT-LLM, examples directory can have different dependency versions than the root requirements.txt file. Version conflicts between root and examples dependencies are acceptable because examples are designed to be standalone and self-contained.
Applied to files:
tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
📚 Learning: 2025-08-27T14:23:55.566Z
Learnt from: ixlmar
Repo: NVIDIA/TensorRT-LLM PR: 7294
File: tensorrt_llm/_torch/modules/rms_norm.py:17-17
Timestamp: 2025-08-27T14:23:55.566Z
Learning: The TensorRT-LLM project requires Python 3.10+ as evidenced by the use of TypeAlias from typing module, match/case statements, and union type | syntax throughout the codebase, despite some documentation still mentioning Python 3.8+.
Applied to files:
tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which can contain default `cuda_graph_config` values, so `llm_args` may already have this config before the extra options processing.
Applied to files:
tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.
Applied to files:
tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM's bench configuration, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which is a Dict[str, Any] that can contain default values including `cuda_graph_config`, making the fallback `llm_args["cuda_graph_config"]` safe to use.
Applied to files:
tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
📚 Learning: 2025-08-14T15:38:01.771Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: cpp/tensorrt_llm/pybind/thop/bindings.cpp:55-57
Timestamp: 2025-08-14T15:38:01.771Z
Learning: In TensorRT-LLM Python bindings, tensor parameter collections like mla_tensor_params and spec_decoding_tensor_params are kept as required parameters without defaults to maintain API consistency, even when it might affect backward compatibility.
Applied to files:
tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Applied to files:
tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
🧬 Code graph analysis (2)
tensorrt_llm/_torch/custom_ops/torch_custom_ops.py (1)
tensorrt_llm/_torch/autotuner.py (6)
- ConstraintSpec (53-63)
- DistributedTuningStrategy (29-33)
- DynamicTensorSpec (37-49)
- OptimizationProfile (152-167)
- TunableRunner (178-242)
- TuningConfig (67-126)
tensorrt_llm/_torch/autotuner.py (4)
tensorrt_llm/_utils.py (4)
- mpi_allgather (591-592)
- mpi_barrier (577-579)
- mpi_broadcast (587-588)
- mpi_disabled (522-524)

tensorrt_llm/mapping.py (1)
- Mapping (351-530)

tensorrt_llm/llmapi/utils.py (2)
- get (415-445)
- get (498-515)

tensorrt_llm/llmapi/mpi_session.py (1)
- is_initialized (57-58)
🪛 Ruff (0.14.7)
tests/unittest/_torch/misc/test_autotuner.py
592-592: Do not use mutable data structures for argument defaults
Replace with None; initialize within function
(B006)
595-595: Unused method argument: inputs
(ARG002)
595-595: Unused method argument: profile
(ARG002)
595-595: Unused method argument: kwargs
(ARG002)
599-599: Unused method argument: kwargs
(ARG002)
634-634: Unpacked variable selected_runner is never used
Prefix it with an underscore or any other dummy variable pattern
(RUF059)
650-650: Do not assert False (python -O removes these calls), raise AssertionError()
Replace assert False
(B011)
672-675: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
tensorrt_llm/_torch/autotuner.py
816-816: Unpacked variable min_time is never used
Prefix it with an underscore or any other dummy variable pattern
(RUF059)
1429-1429: Do not catch blind exception: Exception
(BLE001)
1445-1445: f-string without any placeholders
Remove extraneous f prefix
(F541)
1515-1515: Do not catch blind exception: Exception
(BLE001)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (12)
tensorrt_llm/_torch/custom_ops/torch_custom_ops.py (3)
13-15: LGTM: Import addition for distributed tuning strategy.

The import of `DistributedTuningStrategy` from the autotuner module aligns with the new distributed tuning infrastructure being introduced.
34-40: LGTM: INDEPENDENT strategy is appropriate for MoE tuning.

Using `DistributedTuningStrategy.INDEPENDENT` is reasonable for MoE operations since each rank may have different expert assignments and workload distributions.
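Illustratively, opting an op into a strategy is just a config field. A hedged sketch (the strategy field is from this PR; the surrounding MoERunner wiring and other `TuningConfig` arguments are omitted):

```python
from tensorrt_llm._torch.autotuner import (DistributedTuningStrategy,
                                           TuningConfig)

# Hypothetical: pin MoE ops to per-rank tuning, since expert assignments differ.
moe_tuning_config = TuningConfig(
    distributed_tuning_strategy=DistributedTuningStrategy.INDEPENDENT)
```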
101-120: Verify removal of `tp_rank` and `ep_rank` from `unique_id()` is intentional.

The `unique_id()` method no longer includes `self.tp_rank` and `self.ep_rank`, though these attributes are still stored in the instance. This changes the cache key behavior: tuning results will now be shared across ranks with the same configuration.

With the INDEPENDENT strategy, this may be intentional to enable cache reuse. However, please verify this won't cause incorrect cache lookups if ranks have different weight shapes or expert distributions that depend on `tp_rank`/`ep_rank`.

tests/unittest/_torch/misc/test_autotuner.py (1)
25-34: Module-level MPI/cloudpickle setup is appropriate for distributed tests.The setup for MPI pickling with cloudpickle and the thread leak marker for pytest are reasonable for distributed testing infrastructure. The comment on line 33 explaining the thread leak marker is helpful.
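That setup typically amounts to swapping MPI's pickler for cloudpickle at import time; a sketch of the usual pattern (assumed, not copied from the test file):

```python
import cloudpickle
from mpi4py import MPI

# Let MPI serialize closures and locally defined test classes across ranks.
MPI.pickle.__init__(cloudpickle.dumps, cloudpickle.loads)
```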
tensorrt_llm/_torch/autotuner.py (8)
29-33: LGTM: Clear enum definition for distributed tuning strategies.

The three strategies (BROADCAST, INDEPENDENT, MERGE) are well-defined and cover the common distributed tuning patterns.
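As a point of reference, a minimal sketch of such an enum (member names from the review; the actual definition at lines 29-33 of autotuner.py may differ in values and docstrings):

```python
import enum


class DistributedTuningStrategy(enum.Enum):
    """How tuning results are coordinated across ranks (illustrative sketch)."""
    BROADCAST = "broadcast"      # rank 0 tunes and broadcasts its cache
    INDEPENDENT = "independent"  # every rank tunes on its own, no coordination
    MERGE = "merge"              # all ranks tune, caches merged by best time
```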
115-126: LGTM: TuningConfig extended with distributed strategy field.

The `distributed_tuning_strategy` field with INDEPENDENT as default is a safe choice that requires no coordination between ranks, making it backward compatible.
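A hedged sketch of how the new field might sit on the config (the real `TuningConfig` spans lines 67-126 and carries tensor specs and constraints as well):

```python
import dataclasses

from tensorrt_llm._torch.autotuner import DistributedTuningStrategy


@dataclasses.dataclass
class TuningConfigSketch:
    """Illustrative subset of TuningConfig; only the new field is shown."""
    distributed_tuning_strategy: DistributedTuningStrategy = (
        DistributedTuningStrategy.INDEPENDENT)  # coordination-free default
```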
246-299: LGTM: Context manager updated with distributed-aware logging.

The `autotune` context manager now provides rank-aware logging for distributed environments, which aids debugging without changing core behavior.
590-611: LGTM: AutoTuner initialization with distributed state placeholder.

The addition of the `mapping` attribute and configurable logging level via environment variable is well-structured.
1432-1434: LGTM: Distributed check logic is correct.

The three-condition check ensures distributed mode is only enabled when properly configured.
1486-1503: LGTM: Broadcast implementation is correct.

The broadcast logic properly uses torch.distributed or MPI based on the configured backend, and merges the received data into the local cache.
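A rough sketch of that dispatch, assuming the `mpi_broadcast` helper from `tensorrt_llm._utils` and `torch.distributed.broadcast_object_list` for the torch backend (the backend flag and function name here are illustrative, not the PR's actual signatures):

```python
import torch.distributed as dist

from tensorrt_llm._utils import mpi_broadcast


def broadcast_cache_data_sketch(cache_data, use_torch_backend: bool, src: int = 0):
    """Send rank 0's profiling cache to every rank and return it (illustrative)."""
    if use_torch_backend and dist.is_initialized():
        obj = [cache_data if dist.get_rank() == src else None]
        dist.broadcast_object_list(obj, src=src)
        return obj[0]
    # MPI path: the helper pickles the object on the root and returns it everywhere.
    return mpi_broadcast(cache_data, root=src)
```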
1505-1516: LGTM: Rank synchronization with error handling.

The barrier implementation correctly dispatches to the appropriate backend. The broad exception catch with logging is acceptable for robustness in distributed environments where various failures can occur.
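The barrier is a thin dispatch in the same spirit; a sketch (narrowed here to `RuntimeError`, whereas the PR catches `Exception` broadly, as the Ruff BLE001 hint notes):

```python
import torch.distributed as dist

from tensorrt_llm._utils import mpi_barrier
from tensorrt_llm.logger import logger


def synchronize_ranks_sketch(use_torch_backend: bool) -> None:
    """Block until every rank reaches this point (illustrative)."""
    try:
        if use_torch_backend and dist.is_initialized():
            dist.barrier()
        else:
            mpi_barrier()
    except RuntimeError as e:
        logger.warning(f"[AutoTuner] barrier failed, continuing without sync: {e}")
```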
1518-1537: LGTM: Strategy-based tuning decision logic is correct.

The method correctly implements the tuning decision for each strategy:
- BROADCAST: only rank 0
- INDEPENDENT/MERGE: all ranks
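In sketch form (the rank is assumed to come from the stored `mapping`):

```python
from tensorrt_llm._torch.autotuner import DistributedTuningStrategy


def should_current_rank_tune_sketch(strategy: DistributedTuningStrategy,
                                    rank: int) -> bool:
    """BROADCAST: only rank 0 profiles; INDEPENDENT/MERGE: every rank profiles."""
    if strategy == DistributedTuningStrategy.BROADCAST:
        return rank == 0
    return True
```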
/bot run --disable-fail-fast --only-multi-gpu-test

Force-pushed from 705161e to 526dddf (Compare)

/bot run --disable-fail-fast --only-multi-gpu-test

PR_Github #26576 [ run ] triggered by Bot. Commit:
PR_Github #26579 [ run ] triggered by Bot. Commit:
PR_Github #26576 [ run ] completed with state
PR_Github #26579 [ run ] completed with state

/bot run --disable-fail-fast

PR_Github #26620 [ run ] triggered by Bot. Commit:
PR_Github #26620 [ run ] completed with state
djns99 left a comment:
Excellent work! This seems like a good approach to the problem. I think the default should become BROADCAST or MERGE once we have validated that these give good results and the approach scales well.
Force-pushed from 6e13c9d to 2985724 (Compare)

/bot run --disable-fail-fast

PR_Github #26685 [ run ] triggered by Bot. Commit:
…IDIA#9621 This reverts commit 1fe1974. Signed-off-by: Anthony Chang <27950904+rosenrodt@users.noreply.github.com>
PR_Github #26685 [ run ] completed with state

Force-pushed from 2985724 to 0672bd0 (Compare)

/bot run --disable-fail-fast

PR_Github #26871 [ run ] triggered by Bot. Commit:

/bot kill
good work!
Force-pushed from 564161e to 73efe0c (Compare)
Enable distributed tuning for cutlass moe for prototype validation. Only apply parallel tuning to several ops at this moment to avoid unexpected hanging and cache miss issues. Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
Force-pushed from 73efe0c to a465736 (Compare)
/bot run --disable-fail-fast

PR_Github #27971 [ run ] triggered by Bot. Commit:
PR_Github #27971 [ run ] completed with state

/bot run --disable-fail-fast

1 similar comment

/bot run --disable-fail-fast

PR_Github #28195 [ run ] triggered by Bot. Commit:
PR_Github #28195 [ run ] completed with state

/bot run --disable-fail-fast

PR_Github #28243 [ run ] triggered by Bot. Commit:
PR_Github #28243 [ run ] completed with state

/bot run --disable-fail-fast

PR_Github #28280 [ run ] triggered by Bot. Commit:
PR_Github #28280 [ run ] completed with state
```python
pp_size=1)
tuner = AutoTuner.get()
tuner.clear_cache()
tuner.setup_distributed_state(mapping)
```
@hyukn This line appears to be missing one arg `dist` in the call `tuner.setup_distributed_state(mapping, dist)`.
Four distinct strategies are implemented to accommodate different distributed tuning scenarios: BROADCAST, INDEPENDENT, MERGE, and PARALLEL.
* Distributed tuning is disabled by default, with the INDEPENDENT strategy as the fallback. This conservative approach prevents unexpected behavior in standard use cases.
* Only operations with significant tuning time overhead have been assigned the PARALLEL strategy, which allows the same tensor parallelism (TP) rank to tune tactics concurrently across different ranks. This targeted approach balances performance gains with stability.
* Operations with nested tuning structures, such as NVFP4GemmUnifiedRunner, currently support only the INDEPENDENT strategy. This restriction exists because the synchronization mechanism is optimized only for leaf operations and doesn't yet handle nested hierarchies.

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
Summary by CodeRabbit
New Features
Tests
Description
This PR introduces the first version of a distributed autotuning system. We've chosen a distributed tuning approach over standalone tuning for the following reasons:
We have implemented four distinct strategies (BROADCAST, INDEPENDENT, MERGE, PARALLEL) to accommodate different distributed tuning scenarios:
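The per-strategy details are elided above, but end to end the intended flow on each rank looks roughly as follows. This sketch is based on the test snippet reviewed earlier; the `Mapping` sizes are illustrative, and the single-argument `setup_distributed_state` call follows that snippet even though a reviewer notes a `dist` argument may also be required:

```python
from tensorrt_llm._torch.autotuner import AutoTuner, autotune
from tensorrt_llm.mapping import Mapping

mapping = Mapping(world_size=2, rank=0, tp_size=2, pp_size=1)  # per-process rank
tuner = AutoTuner.get()
tuner.clear_cache()
tuner.setup_distributed_state(mapping)

with autotune():
    # Run the ops to be tuned: choose_one() profiles tactics and fills the cache,
    # then caches are coordinated across ranks per each op's configured strategy.
    ...
```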
Tuning perf experiments:
TODO here
Future enhancements include:
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
`/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...`

Provide a user-friendly way for developers to interact with a Jenkins server.
Run `/bot [-h|--help]` to print this help message. See details below for each supported subcommand.
Details
`run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug (experimental)]`

Launch build/test pipelines. All previously running jobs will be killed.
- `--reuse-test (optional)pipeline-id` (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- `--disable-reuse-test` (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.
- `--disable-fail-fast` (OPTIONAL): Disable fail fast on build/tests/infra failures.
- `--skip-test` (OPTIONAL): Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- `--stage-list "A10-PyTorch-1, xxx"` (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- `--gpu-type "A30, H100_PCIe"` (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- `--test-backend "pytorch, cpp"` (OPTIONAL): Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- `--only-multi-gpu-test` (OPTIONAL): Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--disable-multi-gpu-test` (OPTIONAL): Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--add-multi-gpu-test` (OPTIONAL): Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
- `--post-merge` (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- `--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"` (OPTIONAL): Run the ordinary L0 pre-merge pipeline and the specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- `--detailed-log` (OPTIONAL): Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- `--debug` (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the `stage-list` parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.

kill

`kill`: Kill all running builds associated with pull request.
skip

`skip --comment COMMENT`: Skip testing for latest commit on pull request.
--comment "Reason for skipping build/test"is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.reuse-pipeline
`reuse-pipeline`: Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.