
Conversation

@shaharmor98
Collaborator

@shaharmor98 shaharmor98 commented Aug 4, 2025

Summary by CodeRabbit

  • Documentation

    • Added a comprehensive guide on the PyTorch backend checkpoint loading system, including architecture overview, usage instructions, and customization examples for checkpoint loaders, config loaders, weight loaders, and weight mappers.
  • Bug Fixes

    • Corrected a typo in the method name from get_initilized_weight_mapper to get_initialized_weight_mapper to ensure consistency and prevent potential errors.

Description

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. This ensures that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
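
For example, here are a few illustrative invocations composed only from the flags documented above (the stage and GPU names are the sample values shown in the help text):

/bot run
/bot run --disable-fail-fast --stage-list "A10-PyTorch-1"
/bot run --gpu-type "A30, H100_PCIe" --extra-stage "H100_PCIe-TensorRT-Post-Merge-1"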

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.
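
An illustrative invocation (the comment text here is hypothetical):

/bot skip --comment "Docs-only change, no functional code modified"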

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

Signed-off-by: Shahar Mor <17088876+shaharmor98@users.noreply.github.com>
@shaharmor98 shaharmor98 requested review from a team as code owners August 4, 2025 05:32
@coderabbitai
Contributor

coderabbitai bot commented Aug 4, 2025

📝 Walkthrough


A new documentation file was added describing the modular checkpoint loading system for PyTorch in TRTLLM, including usage and customization. Additionally, a typo in the method name get_initilized_weight_mapper was corrected to get_initialized_weight_mapper in both its definition and usage, with no logic changes.

Changes

Cohort / File(s) and change summary:

  • Documentation: Checkpoint Loading Guide (docs/source/torch/features/checkpoint_loading.md)
    Added a comprehensive guide detailing the architecture, usage, and customization of the modular checkpoint loading system for PyTorch in TRTLLM, including code templates for custom components.
  • Typo Fix: Method Name in Base Loader (tensorrt_llm/_torch/models/checkpoints/base_checkpoint_loader.py)
    Renamed method get_initilized_weight_mapper to get_initialized_weight_mapper in the BaseCheckpointLoader class; logic remains unchanged.
  • Typo Fix: Method Call in Model Engine (tensorrt_llm/_torch/pyexecutor/model_engine.py)
    Updated the method call from get_initilized_weight_mapper to get_initialized_weight_mapper in the _load_model method of PyTorchModelEngine.
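
Illustratively, the rename boils down to the following (signatures and surrounding code are elided; this is a sketch, not the exact source):

-    def get_initilized_weight_mapper(self, ...):
+    def get_initialized_weight_mapper(self, ...):

with the single call site in PyTorchModelEngine._load_model updated to match.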

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant LLM
    participant CheckpointLoader
    participant ConfigLoader
    participant WeightLoader
    participant WeightMapper

    User->>LLM: load_model(checkpoint_dir)
    LLM->>CheckpointLoader: load(checkpoint_dir)
    CheckpointLoader->>ConfigLoader: load(checkpoint_dir)
    ConfigLoader-->>CheckpointLoader: ModelConfig
    CheckpointLoader->>WeightLoader: load_weights(checkpoint_dir)
    WeightLoader-->>CheckpointLoader: weights_dict
    CheckpointLoader->>WeightMapper: map_weights(model, weights_dict)
    WeightMapper-->>CheckpointLoader: mapped_weights
    CheckpointLoader-->>LLM: loaded_model
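
As a rough usage sketch of the flow above: the guide's customization example (quoted in the review comments below) passes a custom weight loader into the HF checkpoint loader when constructing the LLM. The import locations, the MyCustomWeightLoader class, and the exact LLM keyword are assumptions based on this PR's file list and the quoted snippet, not text taken verbatim from the new doc:

from typing import Any

from tensorrt_llm import LLM
# Assumed import locations; only base_weight_loader.py is named explicitly in this PR.
from tensorrt_llm._torch.models.checkpoints import HfCheckpointLoader
from tensorrt_llm._torch.models.checkpoints.base_weight_loader import BaseWeightLoader


class MyCustomWeightLoader(BaseWeightLoader):
    """Hypothetical loader that reads weights from a custom on-disk layout."""

    def load_weights(self, checkpoint_dir: str) -> dict[str, Any]:
        # Return a mapping of parameter names to weight tensors read from checkpoint_dir.
        ...


llm = LLM(
    model="path/to/checkpoint_dir",  # placeholder path
    checkpoint_loader=HfCheckpointLoader(weight_loader=MyCustomWeightLoader()),
)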

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~7 minutes

Suggested labels

Documentation

Suggested reviewers

  • nv-guomingz
  • yilin-void
  • chzblych

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Explain this complex logic.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai generate unit tests to generate unit tests for this PR.
  • @coderabbitai resolve resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai or @coderabbitai title anywhere in the PR title to generate the title automatically.

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (2)
docs/source/torch/features/checkpoint_loading.md (2)

116-116: Minor wording tweak for clarity

“…provide any specific subcomponent upon need” → “…provide any specific subcomponent when needed”.

           checkpoint_loader=HfCheckpointLoader(weight_loader=MyCustomWeightLoader()))
-... they can provide any specific subcomponent upon need
+... they can provide any specific subcomponent when needed

332-332: Spelling typo: “asscoiated” → “associated”

-By setting the model name, the registered mapper will be asscoiated with the specific model.
+By setting the model name, the registered mapper will be associated with the specific model.
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4763e94 and 5a63684.

📒 Files selected for processing (3)
  • docs/source/torch/features/checkpoint_loading.md (1 hunks)
  • tensorrt_llm/_torch/models/checkpoints/base_checkpoint_loader.py (1 hunks)
  • tensorrt_llm/_torch/pyexecutor/model_engine.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+.
Indent Python code with 4 spaces. Do not use tabs.
Always maintain the namespace when importing in Python, even if only one class or function from a module is used.
Python filenames should use snake_case (e.g., some_file.py).
Python classes should use PascalCase (e.g., class SomeClass).
Python functions and methods should use snake_case (e.g., def my_awesome_function():).
Python local variables should use snake_case. Prefix k for variable names that start with a number (e.g., k_99th_percentile = ...).
Python global variables should use upper snake_case and prefix G (e.g., G_MY_GLOBAL = ...).
Python constants should use upper snake_case (e.g., MY_CONSTANT = ...).
Avoid shadowing variables declared in an outer scope in Python.
Initialize all externally visible members of a Python class in the constructor.
For interfaces that may be used outside a file, prefer docstrings over comments in Python.
Comments in Python should be reserved for code within a function, or interfaces that are local to a file.
Use Google style docstrings for classes and functions in Python, which can be parsed by Sphinx.
Attributes and variables in Python can be documented inline; attribute docstrings will be rendered under the docstring for the class.
Avoid using reflection in Python when functionality can be easily achieved without it.
When using try-except blocks in Python, limit the except to the smallest set of errors possible.
When using try-except blocks to handle multiple possible variable types in Python, keep the body of the try as small as possible, using the else block to implement the logic.

Files:

  • tensorrt_llm/_torch/models/checkpoints/base_checkpoint_loader.py
  • tensorrt_llm/_torch/pyexecutor/model_engine.py
**/*.{cpp,h,hpp,cc,cxx,cu,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code should contain an NVIDIA copyright header that includes the current year. This includes .cpp, .h, .cu, .py, and any other source files which are compiled or interpreted.

Files:

  • tensorrt_llm/_torch/models/checkpoints/base_checkpoint_loader.py
  • tensorrt_llm/_torch/pyexecutor/model_engine.py
🧠 Learnings (2)
📓 Common learnings
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
📚 Learning: in tensorrt-llm testing, it's common to have both cli flow tests (test_cli_flow.py) and pytorch api ...
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • docs/source/torch/features/checkpoint_loading.md
🪛 LanguageTool
docs/source/torch/features/checkpoint_loading.md

[style] ~146-~146: Consider using a different adverb to strengthen your wording.
Context: ...ent all four components when supporting a completely new checkpoint format - **Custom Weight...

(COMPLETELY_ENTIRELY)


[grammar] ~332-~332: Ensure spelling is correct
Context: ...del name, the registered mapper will be asscoiated with the specific model.

(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)

🔇 Additional comments (3)
tensorrt_llm/_torch/models/checkpoints/base_checkpoint_loader.py (1)

70-71: LGTM! Typo correction improves code quality.

The method name correction from get_initilized_weight_mapper to get_initialized_weight_mapper fixes the misspelling and improves code readability. The method signature and functionality remain unchanged.

tensorrt_llm/_torch/pyexecutor/model_engine.py (1)

1061-1061: LGTM! Method call updated to match corrected method name.

The method call correction from get_initilized_weight_mapper to get_initialized_weight_mapper maintains consistency with the updated method name in BaseCheckpointLoader. This ensures the code continues to function correctly after the typo fix.

docs/source/torch/features/checkpoint_loading.md (1)

35-36: All references to the old misspelling have been removed

I ran rg -n "get_initilized_weight_mapper" across the repository and found no matches—there are no remaining stale references.

Collaborator

@nv-guomingz nv-guomingz left a comment

LGTM

@nv-guomingz nv-guomingz force-pushed the docs/add-checkpoint-docs branch from 5a63684 to c0ca8f0 on August 7, 2025 09:37
@nv-guomingz nv-guomingz requested review from a team as code owners August 7, 2025 09:37
@nv-guomingz
Collaborator

/bot run

@tensorrt-cicd
Collaborator

PR_Github #14444 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #14444 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10917 completed with status: 'FAILURE'

Collaborator

@chenopis chenopis left a comment

Some grammar and spelling nits. LGTM overall.

@shaharmor98 shaharmor98 enabled auto-merge (squash) August 10, 2025 10:31
@shaharmor98
Collaborator Author

/bot run

@shaharmor98 shaharmor98 disabled auto-merge August 10, 2025 10:32
Signed-off-by: Shahar Mor <17088876+shaharmor98@users.noreply.github.com>
@shaharmor98 shaharmor98 force-pushed the docs/add-checkpoint-docs branch from c0ca8f0 to d3aa1f9 on August 10, 2025 10:32
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (2)
docs/source/torch/features/checkpoint_loading.md (2)

321-323: Missing colon makes the example invalid Python

Add the trailing colon.

 @register_mapper("CUSTOM_FORMAT")
-class CustomWeightMapper(BaseWeightMapper)
+class CustomWeightMapper(BaseWeightMapper):

328-330: Missing colon in model-specific mapper example

Add the trailing colon.

 @register_mapper("CUSTOM_FORMAT", "Gemma3ForCausalLM")
-class CustomWeightMapper(BaseWeightMapper)
+class CustomWeightMapper(BaseWeightMapper):
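
Applying the two colon fixes above, a self-contained sketch of the mapper registration from the doc would read roughly as follows. The @register_mapper decorator, format string, and class name come from the diffs above; the import locations are assumptions for illustration only:

# Assumed import locations, mirroring base_checkpoint_loader.py / base_weight_loader.py from this PR.
from tensorrt_llm._torch.models.checkpoints import register_mapper
from tensorrt_llm._torch.models.checkpoints.base_weight_mapper import BaseWeightMapper


@register_mapper("CUSTOM_FORMAT")  # or @register_mapper("CUSTOM_FORMAT", "Gemma3ForCausalLM") for a model-specific mapper
class CustomWeightMapper(BaseWeightMapper):
    # Hypothetical body: implement the HF-to-TRTLLM weight name/layout conversion here.
    ...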
🧹 Nitpick comments (4)
docs/source/torch/features/checkpoint_loading.md (4)

82-85: Normalize phrasing and casing in the HF features bullets

Make the three bullets parallel and consistently phrased.

- - **Weights loading** (`.safetensors, .bin, .pth`): Load HF-compatible weights from disk
- - **Configuration parser** - Parse configuration information stored by HF into a TRTLLM `ModelConfig` object
- - **Weights Mapping** - Convert HF weights into a TRTLLM-compatible representation
+ - **Weight loading** (`.safetensors`, `.bin`, `.pth`): Load HF-compatible weights from disk.
+ - **Configuration parsing**: Parse configuration information stored by HF into a TRTLLM `ModelConfig` object.
+ - **Weight mapping**: Convert HF weights into a TRTLLM-compatible representation.

90-90: Add missing period

End the sentence with a period.

-There are two main approaches for using checkpoint loading objects
+There are two main approaches for using checkpoint loading objects.

63-65: Missing typing import for Any

The signature uses dict[str, Any] but Any isn’t imported in this snippet.

+from typing import Any
 from tensorrt_llm._torch.models.checkpoints.base_weight_loader import BaseWeightLoader

146-146: Grammar: “Completely New Format”

Use the adverb form.

-- **Complete New Format**: Implement all four components to support a new checkpoint format
+- **Completely New Format**: Implement all four components to support a new checkpoint format
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5a63684 and d3aa1f9.

📒 Files selected for processing (1)
  • docs/source/torch/features/checkpoint_loading.md (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check

@tensorrt-cicd
Collaborator

PR_Github #14705 [ run ] triggered by Bot

@shaharmor98
Collaborator Author

/bot run --disable-fail-fast

@shaharmor98 shaharmor98 enabled auto-merge (squash) August 10, 2025 10:41
@tensorrt-cicd
Collaborator

PR_Github #14706 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #14705 [ run ] completed with state ABORTED

@tensorrt-cicd
Collaborator

PR_Github #14706 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11100 completed with status: 'FAILURE'

@shaharmor98
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #14715 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #14715 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11106 completed with status: 'SUCCESS'

@shaharmor98 shaharmor98 merged commit b6baa9e into NVIDIA:main Aug 10, 2025
4 checks passed