
Conversation

@jinyangyuan-nvidia (Collaborator) commented Aug 5, 2025

Summary by CodeRabbit

  • New Features

    • Enhanced support for multiple auxiliary CUDA streams in MoE models, enabling finer-grained stream management and improved parallelism.
    • Added a new auxiliary stream type for MoE load balancing.
    • Introduced structured asynchronous synchronization and distributed statistic aggregation in load balancers.
  • Refactor

    • Unified auxiliary stream handling by switching from single streams to dictionaries across all MoE modules.
    • Improved load balancer logic with clearer dynamic routing controls, explicit start/done synchronization calls, and streamlined statistic updates.
    • Simplified alltoall preparation by removing unused statistic gathering parameters.
    • Removed deprecated auxiliary stream parameters from class and function docstrings.
  • Bug Fixes

    • Improved synchronization and correctness in GPU stream and load balancer operations.
  • Documentation

    • Updated and clarified docstrings to reflect changes in auxiliary stream handling and load balancer synchronization.
  • Tests

    • Expanded and adapted test cases to cover new stream and synchronization logic, including explicit CUDA stream and allreduce configurations.

Description

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-reuse-test --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline, ensuring that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
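
For example, `/bot run --disable-fail-fast --stage-list "A10-PyTorch-1"` launches a pipeline that runs only the listed stage and does not stop early when a stage fails; both flags are described above.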

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

@jinyangyuan-nvidia self-assigned this Aug 5, 2025
@jinyangyuan-nvidia requested a review from a team as a code owner August 5, 2025 08:34
@jinyangyuan-nvidia requested a review from hlu1 August 5, 2025 08:34
coderabbitai bot (Contributor) commented Aug 5, 2025

📝 Walkthrough

This update refactors auxiliary CUDA stream handling for Mixture of Experts (MoE) components throughout the codebase. It replaces single-stream parameters with dictionaries keyed by stream type, introduces new stream types, and restructures load balancer synchronization and statistic-update logic for MoE modules. Test suites and documentation are updated to match the new interfaces and methods.

Changes

  • MoE Model Stream Refactor (tensorrt_llm/_torch/models/modeling_deepseekv3.py, tensorrt_llm/_torch/models/modeling_mixtral.py, tensorrt_llm/_torch/models/modeling_qwen3_moe.py, tensorrt_llm/_torch/models/modeling_qwen_moe.py): MoE model classes now pass a dictionary of auxiliary CUDA streams (aux_stream_dict) to MoE creation functions instead of a single stream. Additional stream types (e.g., MoeBalancer) are supported.
  • MoE Creation and Backend Update (tensorrt_llm/_torch/modules/fused_moe/create_moe.py): The create_moe function now accepts an aux_stream_dict instead of a single stream, passing either specific streams or the full dictionary to backend constructors as needed. Function signature updated.
  • WideEPMoE Load Balancer Refactor (tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py): WideEPMoE now uses an auxiliary stream dictionary, supports dynamic routing detection, and simplifies load balancer statistic gathering and synchronization. Method and constructor signatures updated.
  • MoE Load Balancer Synchronization Overhaul (tensorrt_llm/_torch/modules/fused_moe/moe_load_balancer.py): Introduces explicit auxiliary stream and allreduce support for load balancer layers, event-based synchronization, and separate start/done pairs for the GPU/CPU stage methods. Statistic update methods are split into local/global variants.
  • Auxiliary Stream Type Extension (tensorrt_llm/_torch/utils.py): Adds 'MoeBalancer' to the AuxStreamType enum and the associated stream name list.
  • MoE Documentation Update (tensorrt_llm/_torch/modules/fused_moe/interface.py, tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py): Removes outdated aux_stream parameter documentation from the MoE and TRTLLMGenFusedMoE class docstrings.
  • MoE Load Balancer Test Updates (tests/unittest/_torch/modules/test_moe_load_balancer.py): Tests updated to use the new auxiliary stream/allreduce parameters and the new synchronization and statistic-update method names, reflecting the refactored load balancer API.
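
Taken together, the stream-handling rows above describe one pattern. The following minimal sketch, assuming only the two stream types named in this PR (the real enum has more members), illustrates how a model might build and pass aux_stream_dict:

```python
import torch

from tensorrt_llm._torch.utils import AuxStreamType

# One CUDA stream per auxiliary stream type, replacing the old single
# aux_stream argument. Only the two types named in this PR are shown.
aux_stream_dict = {
    AuxStreamType.MoeChunkingOverlap: torch.cuda.Stream(),
    AuxStreamType.MoeBalancer: torch.cuda.Stream(),
}

# Per the change summary: CutlassFusedMoE/CuteDslFusedMoE pick the single
# stream they need, while WideEPMoE receives the whole dictionary.
chunking_stream = aux_stream_dict[AuxStreamType.MoeChunkingOverlap]
```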

Sequence Diagram(s)

sequenceDiagram
    participant Model
    participant MoE
    participant LoadBalancer
    participant AllReduce

    Model->>MoE: create_moe(aux_stream_dict)
    MoE->>LoadBalancer: initialize(aux_stream=aux_stream_dict[MoeBalancer], allreduce=AllReduce)
    Model->>MoE: forward(input)
    MoE->>LoadBalancer: start_wait_gpu_stage()
    LoadBalancer->>LoadBalancer: synchronize via aux_stream and events
    MoE->>LoadBalancer: done_wait_gpu_stage()
    MoE->>LoadBalancer: update_statistic_with_local_ids()
    LoadBalancer->>AllReduce: allreduce(local_stats)
    LoadBalancer->>LoadBalancer: update global stats
    MoE->>LoadBalancer: start_set_cpu_stage()
    LoadBalancer->>LoadBalancer: synchronize via aux_stream and events
    MoE->>LoadBalancer: done_set_cpu_stage()
    MoE->>LoadBalancer: route(tokens)
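As a rough Python rendering of the diagram (the method names are from this PR; the function shape and argument names are illustrative assumptions, and the real methods take extra flags such as is_first_call/is_last_call):

```python
def balanced_moe_forward(balancer, tokens):
    # Wait for the load balancer's GPU stage on the MoeBalancer aux stream.
    balancer.start_wait_gpu_stage()
    balancer.done_wait_gpu_stage()
    # Record local expert usage; the balancer aggregates across ranks via
    # its internal AllReduce and then updates the global statistics.
    balancer.update_statistic_with_local_ids(tokens)
    # Hand control back to the CPU stage and reset per-iteration state.
    balancer.start_set_cpu_stage()
    balancer.done_set_cpu_stage()
    # Route only after the GPU wait has completed.
    return balancer.route(tokens)
```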

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers

  • pcastonguay
  • Shixiaowei02
  • nv-guomingz
  • HuiGao-NV

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Explain this complex logic.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai generate unit tests to generate unit tests for this PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai or @coderabbitai title anywhere in the PR title to generate the title automatically.

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

@jinyangyuan-nvidia (Collaborator, Author):

/bot run

@tensorrt-cicd (Collaborator):

PR_Github #14122 [ run ] triggered by Bot

coderabbitai bot (Contributor) left a comment:

Actionable comments posted: 0

🔭 Outside diff range comments (1)
tensorrt_llm/_torch/modules/fused_moe/create_moe.py (1)

155-155: Fix undefined variable reference.

Line 155 references aux_stream which is no longer a parameter. This should extract the appropriate stream from aux_stream_dict.

Apply this fix:

-            aux_stream=aux_stream,
+            aux_stream=aux_stream_dict[AuxStreamType.MoeChunkingOverlap],
🧹 Nitpick comments (1)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (1)

852-852: Fix line length to comply with project standards.

Line 852 exceeds the 120-character limit (currently 128 characters). Please reformat for better readability.

-            alltoall_info, token_selected_slots, token_final_scales, _ = MnnvlMoe.mnnvl_moe_alltoallv_prepare_without_allgather(
+            alltoall_info, token_selected_slots, token_final_scales, _ = \
+                MnnvlMoe.mnnvl_moe_alltoallv_prepare_without_allgather(
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7cbe30e and 366b12b.

📒 Files selected for processing (11)
  • tensorrt_llm/_torch/models/modeling_deepseekv3.py (2 hunks)
  • tensorrt_llm/_torch/models/modeling_mixtral.py (2 hunks)
  • tensorrt_llm/_torch/models/modeling_qwen3_moe.py (2 hunks)
  • tensorrt_llm/_torch/models/modeling_qwen_moe.py (2 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/create_moe.py (5 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py (0 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (16 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/interface.py (0 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/moe_load_balancer.py (9 hunks)
  • tensorrt_llm/_torch/utils.py (1 hunks)
  • tests/unittest/_torch/modules/test_moe_load_balancer.py (10 hunks)
💤 Files with no reviewable changes (2)
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py
  • tensorrt_llm/_torch/modules/fused_moe/interface.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+.
Indent Python code with 4 spaces. Do not use tabs.
Always maintain the namespace when importing in Python, even if only one class or function from a module is used.
Python filenames should use snake_case (e.g., some_file.py).
Python classes should use PascalCase (e.g., class SomeClass).
Python functions and methods should use snake_case (e.g., def my_awesome_function():).
Python local variables should use snake_case. Prefix k for variable names that start with a number (e.g., k_99th_percentile).
Python global variables should use upper snake_case and prefix G (e.g., G_MY_GLOBAL).
Python constants should use upper snake_case (e.g., MY_CONSTANT).
Avoid shadowing variables declared in an outer scope in Python.
Initialize all externally visible members of a Python class in the constructor.
For interfaces that may be used outside a file, prefer docstrings over comments in Python.
Comments in Python should be reserved for code within a function, or interfaces that are local to a file.
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx.
Attributes and variables in Python can be documented inline; attribute docstrings will be rendered under the docstring for the class.
Avoid using reflection in Python when functionality can be easily achieved without reflection.
When using try-except blocks in Python, limit the except to the smallest set of errors possible.
When using try-except blocks to handle multiple possible variable types in Python, keep the body of the try as small as possible, using the else block to implement the logic.

Files:

  • tensorrt_llm/_torch/models/modeling_deepseekv3.py
  • tensorrt_llm/_torch/modules/fused_moe/moe_load_balancer.py
  • tensorrt_llm/_torch/models/modeling_mixtral.py
  • tensorrt_llm/_torch/utils.py
  • tensorrt_llm/_torch/models/modeling_qwen3_moe.py
  • tensorrt_llm/_torch/models/modeling_qwen_moe.py
  • tests/unittest/_torch/modules/test_moe_load_balancer.py
  • tensorrt_llm/_torch/modules/fused_moe/create_moe.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py
**/*.{cpp,h,cu,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code should contain an NVIDIA copyright header that includes the current year. This includes .cpp, .h, .cu, .py, and any other source files which are compiled or interpreted.

Files:

  • tensorrt_llm/_torch/models/modeling_deepseekv3.py
  • tensorrt_llm/_torch/modules/fused_moe/moe_load_balancer.py
  • tensorrt_llm/_torch/models/modeling_mixtral.py
  • tensorrt_llm/_torch/utils.py
  • tensorrt_llm/_torch/models/modeling_qwen3_moe.py
  • tensorrt_llm/_torch/models/modeling_qwen_moe.py
  • tests/unittest/_torch/modules/test_moe_load_balancer.py
  • tensorrt_llm/_torch/modules/fused_moe/create_moe.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py
🧠 Learnings (1)
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • tests/unittest/_torch/modules/test_moe_load_balancer.py
🧬 Code Graph Analysis (1)
tensorrt_llm/_torch/modules/fused_moe/moe_load_balancer.py (2)
tensorrt_llm/_torch/distributed/ops.py (1)
  • AllReduce (362-491)
tensorrt_llm/_torch/pyexecutor/cuda_graph_runner.py (1)
  • is_graph_capturing (24-25)
🪛 Ruff (0.12.2)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py

852-852: Line too long (128 > 120)

(E501)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (32)
tensorrt_llm/_torch/utils.py (1)

15-20: LGTM! Clean extension of auxiliary stream types.

The addition of 'MoeBalancer' to the auxiliary stream name list properly extends the AuxStreamType enum to support load balancer operations in MoE components. This follows the existing pattern and integrates seamlessly with the enum creation logic.
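
As a hypothetical sketch of the pattern this comment describes (a functional Enum built from a name list; only the two members named in this PR are shown, and the real list has more entries):

```python
from enum import Enum

# Extending the stream-name list is the whole change: the Enum picks it up.
aux_stream_name_list = [
    "MoeChunkingOverlap",  # pre-existing stream type for chunking overlap
    "MoeBalancer",         # new in this PR: stream for MoE load balancing
]
AuxStreamType = Enum("AuxStreamType", aux_stream_name_list)

assert AuxStreamType.MoeBalancer.name == "MoeBalancer"
```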

tensorrt_llm/_torch/models/modeling_mixtral.py (2)

18-18: LGTM! Proper import for auxiliary stream types.

The import of AuxStreamType is necessary to support the refactored auxiliary stream management.


53-53: LGTM! Correct implementation of auxiliary stream dictionary pattern.

The change from a single aux_stream parameter to aux_stream_dict keyed by AuxStreamType.MoeChunkingOverlap properly aligns with the broader refactor to support multiple auxiliary CUDA streams for MoE components. This maintains the same functionality while enabling more granular stream management.

tensorrt_llm/_torch/models/modeling_qwen3_moe.py (2)

25-25: LGTM! Consistent import for auxiliary stream types.

The import of AuxStreamType aligns with the auxiliary stream dictionary refactor.


111-111: LGTM! Consistent auxiliary stream dictionary implementation.

The conversion from aux_stream to aux_stream_dict with AuxStreamType.MoeChunkingOverlap key maintains consistency with the broader refactor pattern across MoE model implementations.

tensorrt_llm/_torch/models/modeling_qwen_moe.py (2)

20-20: LGTM! Necessary import for auxiliary stream refactor.

The import of AuxStreamType enables the use of auxiliary stream dictionaries in the MoE implementation.


57-57: LGTM! Proper auxiliary stream dictionary usage.

The change to aux_stream_dict with AuxStreamType.MoeChunkingOverlap key follows the established pattern and maintains functionality while enabling multiple stream support.

tensorrt_llm/_torch/modules/fused_moe/create_moe.py (6)

1-1: LGTM! Necessary import for auxiliary stream dictionary support.

The addition of Dict to the typing imports enables the new auxiliary stream dictionary parameter type.


9-9: LGTM! Import required for auxiliary stream type enumeration.

The import of AuxStreamType is necessary for the dictionary keys in the refactored auxiliary stream management.


64-64: LGTM! Function signature properly updated for auxiliary stream dictionary.

The replacement of aux_stream parameter with aux_stream_dict enables support for multiple auxiliary CUDA streams keyed by AuxStreamType.


99-99: LGTM! Correct stream extraction for CutlassFusedMoE.

The extraction of AuxStreamType.MoeChunkingOverlap stream from the dictionary is appropriate for this MoE implementation.


113-113: LGTM! WideEPMoE receives full auxiliary stream dictionary.

Passing the complete aux_stream_dict to WideEPMoE is correct as this implementation needs access to multiple stream types including the new MoeBalancer stream.


141-141: LGTM! Correct stream extraction for CuteDslFusedMoE.

The extraction of AuxStreamType.MoeChunkingOverlap stream follows the same pattern as CutlassFusedMoE.

tensorrt_llm/_torch/models/modeling_deepseekv3.py (2)

1053-1058: LGTM! Auxiliary stream expansion aligns with performance optimization goals.

The addition of a third auxiliary CUDA stream and the new MoeBalancer stream type is consistent with the PR's objective of improving online EPLB performance through better overlapping. The implementation correctly maps stream types to their respective CUDA streams.


457-457: Consistent refactoring to pass auxiliary stream dictionary.

The change from passing a single aux_stream to passing the entire aux_stream_dict to create_moe is consistent with the broader refactoring to support multiple auxiliary CUDA streams for improved overlapping capabilities.

tests/unittest/_torch/modules/test_moe_load_balancer.py (3)

68-74: Consistent test updates for new load balancer API.

The test methods have been correctly updated to include the new aux_stream and allreduce parameters required by the refactored load balancer API. The use of AllReduceStrategy.NCCL with a default Mapping() is appropriate for GPU-based distributed operations.

Also applies to: 149-155, 210-214, 321-327, 390-396


243-244: Improved synchronization pattern with explicit start/done methods.

The refactoring from single-stage synchronization methods to explicit start/done pairs (start_wait_gpu_stage/done_wait_gpu_stage and start_set_cpu_stage/done_set_cpu_stage) provides better control over asynchronous operations. This pattern aligns well with CUDA event-based synchronization and enables more efficient overlapping of operations.

Also applies to: 266-270, 352-353, 360-361, 422-423, 429-430


254-254: Clear and descriptive method naming for statistics updates.

The rename from statistic() to update_statistic_with_global_ids() better describes the method's purpose. The addition of is_first_stage and is_last_stage boolean parameters enables proper handling of multi-stage operations.

Also applies to: 356-357
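
A hedged sketch of the renamed call as a test might issue it (the receiver and tensor argument are stand-ins, not the actual test fixtures):

```python
layer.update_statistic_with_global_ids(
    gathered_global_ids,  # placeholder tensor of global expert ids
    is_first_stage=True,  # single-stage update in this illustration
    is_last_stage=True,
)
```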

tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (6)

47-48: Consistent refactoring to support multiple auxiliary streams.

The change from a single optional aux_stream to aux_stream_dict (dictionary of streams keyed by AuxStreamType) aligns with the broader architectural change to support multiple auxiliary CUDA streams for improved overlapping capabilities.

Also applies to: 68-69


111-117: Proper initialization of load balancer with distributed support.

The load balancer is correctly initialized with an AllReduce instance using NCCL strategy for distributed statistics aggregation. The auxiliary stream extraction for MoeBalancer is properly handled with a conditional check on the aux_stream_dict.


141-142: Good refactoring: Simplified dynamic routing checks.

The introduction of the is_dynamic_routing boolean attribute is a clean refactoring that replaces repeated conditional checks throughout the code, improving readability and maintainability.


157-160: Robust auxiliary stream handling with proper fallback.

The code correctly extracts the MoeChunkingOverlap stream from the dictionary when available, with a proper fallback to create a new stream if aux_stream_dict is None. This ensures backward compatibility.


386-387: Improved load balancer synchronization with explicit stage control.

The refactoring to use explicit start/done synchronization methods (start_wait_gpu_stage/done_wait_gpu_stage, start_set_cpu_stage/done_set_cpu_stage) with proper stage control (is_first_call, is_last_call) enables better overlap of GPU operations and aligns with CUDA event-based synchronization patterns.

Also applies to: 415-421, 653-654, 684-685


456-456: Simplified interface with removal of redundant statistic gathering.

The removal of local_statistic_tensor from alltoall_prepare_maybe_dispatch and related allgather calls simplifies the interface. Statistics are now handled through the load balancer's integrated AllReduce mechanism, providing a cleaner separation of concerns.

Also applies to: 541-541, 844-857, 870-871, 896-896

tensorrt_llm/_torch/modules/fused_moe/moe_load_balancer.py (8)

16-17: LGTM! Necessary imports for new functionality.

The imports for AllReduce and EventType are appropriate for implementing the auxiliary stream synchronization and distributed operations.


277-339: Well-structured initialization with proper stream and event management.

The changes improve the design by:

  1. Supporting custom auxiliary streams for better overlap control
  2. Using typed events (EventType) for better code clarity
  3. Enforcing allreduce requirement when updates are enabled
  4. Adding call count tracking for API usage validation

The initialization properly handles both static routing and dynamic update scenarios.


469-497: Excellent refactoring of GPU synchronization into start/done pairs.

The changes improve synchronization control by:

  1. Splitting into explicit start/done phases for better overlap opportunities
  2. Adding call count assertions to enforce correct API usage
  3. Properly handling CUDA graph capture with event-based synchronization
  4. Using auxiliary streams to overlap GPU operations with other work

The event synchronization pattern (Main→MoeBalancer) ensures correct ordering during graph capture.


498-528: Well-designed CPU stage synchronization with proper state management.

The implementation ensures:

  1. Correct sequencing by requiring GPU wait completion before CPU stage
  2. Clean state reset for next iteration by clearing all call counts
  3. Prevention of stale data by nullifying statistic_flag_tensor
  4. Consistent event-based synchronization for CUDA graph scenarios

The design maintains clear lifecycle boundaries for each synchronization phase.


529-603: Excellent addition of hierarchical statistics with local/global ID support.

The refactoring provides:

  1. Clear distinction between global and local expert ID processing
  2. Efficient hierarchical updates using local aggregation followed by allreduce
  3. Proper mutual exclusion between the two update methods
  4. Memory-efficient lazy allocation of local statistic tensor
  5. Correct event-based synchronization for auxiliary stream operations

The design enables better scalability for distributed MoE scenarios.


617-619: Good addition of routing precondition check.

The assertion ensures GPU synchronization completes before routing operations, preventing potential race conditions. The call count tracking aids in debugging API usage patterns.


713-752: Correct propagation of auxiliary stream and allreduce parameters.

The method signature and implementation properly forward the new parameters to maintain consistency throughout the layer hierarchy.


979-1003: Consistent API update for global helper function.

The function signature properly matches the updated add_layer method, maintaining API consistency across the module.

@jinyangyuan-nvidia (Collaborator, Author):

/bot kill

@jinyangyuan-nvidia (Collaborator, Author):

/bot run

@tensorrt-cicd (Collaborator):

PR_Github #14129 [ kill ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #14122 [ run ] completed with state ABORTED

@tensorrt-cicd (Collaborator):

PR_Github #14130 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #14129 [ kill ] completed with state ABORTED

coderabbitai bot (Contributor) left a comment:

Actionable comments posted: 0

🧹 Nitpick comments (1)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (1)

852-856: Fix line length violation.

Line 852 exceeds the 120 character limit as flagged by static analysis.

Apply this diff to fix the line length:

-            alltoall_info, token_selected_slots, token_final_scales, _ = MnnvlMoe.mnnvl_moe_alltoallv_prepare_without_allgather(
-                token_selected_slots, token_final_scales, None,
-                self.alltoall_prepare_workspace, all_rank_max_num_tokens,
-                self.ep_rank, self.ep_size, self.num_experts, self.num_slots,
-                top_k)
+            result = MnnvlMoe.mnnvl_moe_alltoallv_prepare_without_allgather(
+                token_selected_slots, token_final_scales, None,
+                self.alltoall_prepare_workspace, all_rank_max_num_tokens,
+                self.ep_rank, self.ep_size, self.num_experts, self.num_slots,
+                top_k)
+            alltoall_info, token_selected_slots, token_final_scales, _ = result
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 366b12b and cc78f57.

📒 Files selected for processing (11)
  • tensorrt_llm/_torch/models/modeling_deepseekv3.py (2 hunks)
  • tensorrt_llm/_torch/models/modeling_mixtral.py (2 hunks)
  • tensorrt_llm/_torch/models/modeling_qwen3_moe.py (2 hunks)
  • tensorrt_llm/_torch/models/modeling_qwen_moe.py (2 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/create_moe.py (6 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py (0 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (16 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/interface.py (0 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/moe_load_balancer.py (9 hunks)
  • tensorrt_llm/_torch/utils.py (1 hunks)
  • tests/unittest/_torch/modules/test_moe_load_balancer.py (10 hunks)
💤 Files with no reviewable changes (2)
  • tensorrt_llm/_torch/modules/fused_moe/interface.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py
🚧 Files skipped from review as they are similar to previous changes (8)
  • tensorrt_llm/_torch/models/modeling_qwen3_moe.py
  • tensorrt_llm/_torch/models/modeling_mixtral.py
  • tensorrt_llm/_torch/models/modeling_qwen_moe.py
  • tensorrt_llm/_torch/utils.py
  • tensorrt_llm/_torch/modules/fused_moe/create_moe.py
  • tests/unittest/_torch/modules/test_moe_load_balancer.py
  • tensorrt_llm/_torch/modules/fused_moe/moe_load_balancer.py
  • tensorrt_llm/_torch/models/modeling_deepseekv3.py
🧰 Additional context used
📓 Path-based instructions (2)

**/*.py and **/*.{cpp,h,cu,py}: same CODING_GUIDELINES.md rules as quoted in the first review above.

Files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py
🪛 Ruff (0.12.2)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py

852-852: Line too long (128 > 120)

(E501)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (8)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (8)

9-9: LGTM! Import additions support new functionality.

The new imports for AllReduceStrategy, AllReduce, and AuxStreamType are correctly added to support the auxiliary stream dictionary pattern and distributed allreduce functionality.

Also applies to: 12-12, 15-15


47-47: LGTM! Proper conversion to auxiliary stream dictionary.

The parameter change from single aux_stream to aux_stream_dict with proper type annotation enables support for multiple auxiliary streams, which aligns with the PR's performance improvement objectives.

Also applies to: 68-69


111-117: LGTM! Proper load balancer initialization with new stream handling.

The changes correctly:

  • Extract the MoeBalancer auxiliary stream from the dictionary with proper null checking
  • Initialize AllReduce with NCCL strategy for distributed statistic aggregation
  • Maintain backward compatibility when aux_stream_dict is None

141-142: LGTM! Good refactoring to simplify dynamic routing checks.

The is_dynamic_routing boolean attribute precomputes the routing status, improving code readability and avoiding repeated conditional evaluations throughout the class.


157-160: LGTM! Consistent auxiliary stream extraction for chunking overlap.

The change properly extracts the MoeChunkingOverlap stream from the dictionary with appropriate fallback to create a new stream when the dictionary is not provided.


386-387: LGTM! Improved load balancer synchronization with explicit staging.

The changes implement better synchronization control through:

  • Use of the is_dynamic_routing flag for cleaner conditionals
  • Explicit CUDA event coordination with start/done staging pairs
  • Simplified statistic updates with update_statistic_with_local_ids
  • Proper timing of GPU and CPU stage synchronization

This aligns with the PR's goal of improving performance through better overlapping.

Also applies to: 415-421, 653-654, 684-685


844-848: LGTM! Method signature simplified appropriately.

The removal of the local_statistic_tensor parameter from alltoall_prepare_maybe_dispatch aligns with the new load balancer API that handles statistics differently.

Also applies to: 896-896


433-434: LGTM! Good formatting improvements.

The added blank lines and consistent formatting enhance code readability.

Also applies to: 541-541, 712-712, 870-871

@tensorrt-cicd (Collaborator):

PR_Github #14130 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #10659 completed with status: 'FAILURE'

@jinyangyuan-nvidia force-pushed the dev/perf_online_eplb branch 3 times, most recently from dad0c9f to 4de7894, on August 5, 2025 12:17
@jinyangyuan-nvidia changed the title from "[None][perf] Improve the performance of online EPLB by better overlapping" to "[None][perf] Improve the performance of online EPLB on Hopper by better overlapping" on Aug 5, 2025
@jinyangyuan-nvidia (Collaborator, Author):

/bot run

coderabbitai bot (Contributor) left a comment:

Actionable comments posted: 0

🧹 Nitpick comments (1)
tensorrt_llm/_torch/modules/fused_moe/moe_load_balancer.py (1)

588-588: Minor: Line exceeds 120 characters.

Consider breaking this line for better readability.

-            gathered_local_statistic_tensor: gathered local statistics info, should have shape (world_size, self.expert_count)
+            gathered_local_statistic_tensor: gathered local statistics info, should have shape 
+                (world_size, self.expert_count)
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cc78f57 and 2af45be.

📒 Files selected for processing (11)
  • tensorrt_llm/_torch/models/modeling_deepseekv3.py (2 hunks)
  • tensorrt_llm/_torch/models/modeling_mixtral.py (2 hunks)
  • tensorrt_llm/_torch/models/modeling_qwen3_moe.py (2 hunks)
  • tensorrt_llm/_torch/models/modeling_qwen_moe.py (2 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/create_moe.py (6 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py (0 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (12 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/interface.py (0 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/moe_load_balancer.py (10 hunks)
  • tensorrt_llm/_torch/utils.py (1 hunks)
  • tests/unittest/_torch/modules/test_moe_load_balancer.py (4 hunks)
💤 Files with no reviewable changes (2)
  • tensorrt_llm/_torch/modules/fused_moe/interface.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py
✅ Files skipped from review due to trivial changes (1)
  • tests/unittest/_torch/modules/test_moe_load_balancer.py
🚧 Files skipped from review as they are similar to previous changes (6)
  • tensorrt_llm/_torch/models/modeling_qwen_moe.py
  • tensorrt_llm/_torch/models/modeling_qwen3_moe.py
  • tensorrt_llm/_torch/utils.py
  • tensorrt_llm/_torch/models/modeling_mixtral.py
  • tensorrt_llm/_torch/models/modeling_deepseekv3.py
  • tensorrt_llm/_torch/modules/fused_moe/create_moe.py
🧰 Additional context used
📓 Path-based instructions (2)

**/*.py and **/*.{cpp,h,cu,py}: same CODING_GUIDELINES.md rules as quoted in the first review above.

Files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py
  • tensorrt_llm/_torch/modules/fused_moe/moe_load_balancer.py
🪛 Ruff (0.12.2)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py

471-471: Line too long (122 > 120)

(E501)

tensorrt_llm/_torch/modules/fused_moe/moe_load_balancer.py

588-588: Line too long (126 > 120)

(E501)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (15)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (7)

47-48: LGTM! Clean transition to dictionary-based auxiliary streams.

The parameter change from aux_stream to aux_stream_dict with proper type annotations improves flexibility by supporting multiple auxiliary stream types.

Also applies to: 68-69


111-115: Good defensive programming with None checks.

The conditional extraction of the MoeBalancer stream from the dictionary handles the case where aux_stream_dict might be None.


139-145: Proper initialization of AllReduce for dynamic routing.

The conditional creation of AllReduce instance based on dynamic routing status is appropriate. Using NCCL strategy aligns with distributed training requirements.


389-390: Well-structured GPU synchronization pattern.

The start/done pattern for GPU stage waiting provides clear synchronization boundaries for dynamic routing scenarios.

Also applies to: 420-421


418-431: Improved statistic update logic with proper branching.

The three distinct paths for statistic updates (MNNVL alltoall vs. other methods) with appropriate use of AllReduce for aggregation is well-designed. The logic correctly handles different scenarios based on the alltoall method type.


466-470: Optimization: Conditional statistic tensor passing.

Good optimization to only pass the local statistic tensor on the last call, reducing unnecessary data transfer in intermediate calls.


478-482: Correct handling of gathered statistics.

The reshaping of gathered statistics to (moe_ep_size, num_experts) and subsequent update via load balancer is properly implemented.

tensorrt_llm/_torch/modules/fused_moe/moe_load_balancer.py (8)

277-278: Well-designed conditional initialization of synchronization primitives.

The auxiliary stream and event dictionary are only created when updates are enabled, avoiding unnecessary resource allocation in static routing scenarios.

Also applies to: 311-320


324-338: Excellent call order enforcement mechanism.

The func_called_count dictionary provides a robust way to enforce the correct sequence of method calls, which is crucial for proper synchronization in concurrent scenarios.


468-485: Robust GPU stage synchronization with CUDA graph support.

The start/done pattern for GPU stage waiting with proper event recording and synchronization is well-implemented. The assertions ensure methods are called in the correct order.

Also applies to: 486-496


516-527: Proper cleanup and reset in done_set_cpu_stage.

Good practice to reset all counters and clear the statistic flag tensor after completing the CPU stage, preparing for the next iteration.


528-565: Well-structured local statistic update with proper synchronization.

The method correctly initializes the local statistic tensor on first use and handles both CUDA graph and regular execution paths.


614-651: Excellent integration of AllReduce for distributed statistics.

The method properly aggregates local statistics across ranks using AllReduce when in the last stage, enabling efficient distributed statistic collection.


698-699: Proper sequencing enforcement in route method.

The assertion ensures that GPU stage waiting is complete before routing, maintaining correct synchronization order.


794-800: Consistent aux_stream propagation throughout the API.

The auxiliary stream parameter is properly propagated through all relevant methods, maintaining API consistency.

Also applies to: 1057-1062

@tensorrt-cicd (Collaborator):

PR_Github #14149 [ run ] triggered by Bot

@dongxuy04 (Collaborator) left a comment:

LGTM, thanks.

@tensorrt-cicd (Collaborator):

PR_Github #14149 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10676 completed with status: 'FAILURE'

@jinyangyuan-nvidia requested review from a team as code owners August 6, 2025 02:25
@tensorrt-cicd (Collaborator):

PR_Github #14411 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10893 completed with status: 'FAILURE'

@jinyangyuan-nvidia (Collaborator, Author):

/bot run

@tensorrt-cicd (Collaborator):

PR_Github #14525 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #14525 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10973 completed with status: 'SUCCESS'
Pipeline passed with automatically retried tests. Check the rerun report for details.

@jinyangyuan-nvidia (Collaborator, Author):

/bot run

1 similar comment
@jinyangyuan-nvidia (Collaborator, Author):

/bot run

@jinyangyuan-nvidia (Collaborator, Author):

/bot run

@tensorrt-cicd (Collaborator):

PR_Github #14604 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #14604 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #11032 completed with status: 'FAILURE'

@jinyangyuan-nvidia (Collaborator, Author):

/bot run

@tensorrt-cicd (Collaborator):

PR_Github #14622 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #14622 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11046 completed with status: 'SUCCESS'

@QiJune (Collaborator) left a comment:

LGTM

@hlu1 (Collaborator) left a comment:

The changes in the DeepSeek model look good to me.

@jinyangyuan-nvidia merged commit ead89a0 into NVIDIA:main Aug 12, 2025
4 checks passed
@jinyangyuan-nvidia deleted the dev/perf_online_eplb branch August 12, 2025 01:25
MartinMarciniszyn added a commit to MartinMarciniszyn/TensorRT-LLM that referenced this pull request Aug 12, 2025
