@Wanli-Jiang Wanli-Jiang commented Oct 31, 2025

Feature:

  • Update doc for nano-v2-vl.
  • Update chat_template util for nano-v2-vl.
  • Enable e2e and metrics sanity tests.
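For reference, the newly enabled accuracy check presumably maps to tests/integration/defs/accuracy/test_llm_api_pytorch.py::TestNemotron_Nano_12B_V2_VL::test_auto_dtype (see the test list updates below); the exact local invocation may differ by environment.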

Summary by CodeRabbit

  • New Features

    • Added support for NVIDIA-Nemotron-Nano-12B-v2-VL model with BF16 and FP8 variants, including multimodal capabilities for images and videos.
    • Enhanced multimodal placeholder handling for improved image and video processing.
    • Added comprehensive documentation for Nemotron-Nano-v2-VL model series with inference examples.
  • Bug Fixes

    • Fixed prompt formatting for video and image frame sequences.
    • Improved chat template role assignment for better conversation handling.
  • Tests

    • Expanded test coverage for new Nemotron Nano model variants across multiple modalities.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions).

  • Any new dependencies have been scanned for license and vulnerabilities.

  • CODEOWNERS updated if ownership changes.

  • Documentation updated as needed.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. This ensures that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
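
For instance, a run limited to a single test stage with fail-fast disabled would look like: /bot run --stage-list "A10-PyTorch-1" --disable-fail-fast (the stage name here is taken from the examples above).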

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.


coderabbitai bot commented Oct 31, 2025

📝 Walkthrough

Walkthrough

This pull request introduces support for the NVIDIA-Nemotron-Nano-12B-v2-VL model series. Changes include new documentation, updated model import paths, modifications to prompt token generation, dynamic role assignment in multimodal evaluation, placeholder handling logic for the new model, and expanded test coverage with reference accuracy updates.

Changes

Cohort / File(s) Summary
Documentation
examples/models/core/nemotron/README_nano-v2-vl.md, examples/models/core/nemotron/README_nemotron-3.md
Added new model documentation for Nemotron-nano-v2-VL with inference examples and known issues. Updated title in README_nemotron-3.md from "Nemotron" to "Nemotron-3".
Model Import & Implementation
tensorrt_llm/_torch/models/__init__.py, tensorrt_llm/_torch/models/modeling_nemotron_nano.py
Updated import path for NemotronH_Nano_VL_V2 from modeling_nanov2vlm to modeling_nemotron_nano. Removed trailing newline token from video/image frame prompt sequences.
Evaluation & Input Handling
tensorrt_llm/evaluate/lm_eval.py, tensorrt_llm/inputs/utils.py
Updated MultimodalLmEvalWrapper to use the dynamic role from content["role"]. Added NemotronH_Nano_VL_V2 to PLACEHOLDER_EXCEPTIONS and implemented content-combining logic for image/video placeholders. Extended apply_chat_template to accept mm_placeholder_counts as a dict or a list of dicts (see the sketch after this table).
Test Configuration & References
tests/integration/defs/accuracy/references/mmmu.yaml, tests/integration/defs/accuracy/test_llm_api_pytorch.py, tests/integration/defs/test_e2e.py, tests/integration/test_lists/qa/llm_function_core.txt
Replaced Nano-v2-VLM model references with NVIDIA-Nemotron-Nano-12B-v2-VL-BF16; updated accuracy reference from 43.78 to 26.67 with reasoning notes. Renamed test class from TestNano_V2_VLM to TestNemotron_Nano_12B_V2_VL and added EXTRA_EVALUATOR_KWARGS. Extended sampling parameters and multimodal test coverage. Added FP8 model variant tests.
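
As a rough illustration of the apply_chat_template extension noted in the Evaluation & Input Handling row above, the sketch below shows one way mm_placeholder_counts could be normalized and combined with per-turn text. The function names, placeholder strings, and joining strategy are assumptions for illustration only, not the actual TensorRT-LLM implementation.

# Hypothetical sketch only; not the real tensorrt_llm/inputs/utils.py code.
from typing import Dict, List, Union

def normalize_placeholder_counts(
        mm_placeholder_counts: Union[Dict[str, int], List[Dict[str, int]]]
) -> List[Dict[str, int]]:
    # Accept either a single counts dict or one dict per conversation turn.
    if isinstance(mm_placeholder_counts, dict):
        return [mm_placeholder_counts]
    return list(mm_placeholder_counts)

def combine_content(text: str, counts: Dict[str, int]) -> str:
    # Prepend repeated media placeholders (e.g. {"<image>": 2}) to the turn's text.
    pieces: List[str] = []
    for placeholder, count in counts.items():
        pieces.extend([placeholder] * count)
    pieces.append(text)
    return "\n".join(pieces)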

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~30 minutes

  • tensorrt_llm/inputs/utils.py: Verify the new placeholder handling logic for NemotronH_Nano_VL_V2, particularly the content sequencing for image/video placeholders and type coercion of mm_placeholder_counts between dict and list.
  • tensorrt_llm/_torch/models/modeling_nemotron_nano.py: Confirm that removing trailing newlines from prompt tokens does not introduce unintended behavioral changes in model inference.
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py: Verify test parametrization, EXTRA_EVALUATOR_KWARGS usage, and reasoning system prompt ("/no_think") application in MMMU evaluation.
  • tests/integration/defs/test_e2e.py: Cross-check multimodal test expectations (keywords, prompts) for both BF16 and FP8 variants; confirm model-specific conditionals are correctly applied.

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
Check name Status Explanation Resolution
Description Check ⚠️ Warning The PR description is incomplete. While it begins with a brief "Feature" section listing three bullet points (doc update, chat_template util update, and e2e/metrics tests), the required template sections are not properly filled out. The "Description" section that should explain the issue and solution in detail contains only the placeholder comment, and the "Test Coverage" section that should list relevant tests is also empty. Although a checklist item is marked, the core descriptive and test coverage requirements from the template are missing.
Docstring Coverage ⚠️ Warning Docstring coverage is 25.00% which is insufficient. The required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (1 passed)
Check name Status Explanation
Title Check ✅ Passed The PR title "[TRTLLM-8119][feat] Update doc/tests/chat_template for nano-v2-vlm" directly aligns with the changeset, which includes documentation additions for the Nemotron-nano-v2-VL model, updates to chat template utilities in tensorrt_llm/inputs/utils.py and tensorrt_llm/evaluate/lm_eval.py, and test/reference updates to support this model. The title is specific, concise, and clearly conveys the primary purpose of the changes without being vague or misleading.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tensorrt_llm/inputs/utils.py (1)

568-593: Use Python 3.8-compatible type annotations.

Repository guidelines target Python 3.8+. Replace PEP 604 unions and PEP 585 generics in the public signature.

 def apply_chat_template(
     *,
     model_type: str,
     tokenizer: Union[TransformersTokenizer, TokenizerBase],
     processor: ProcessorMixin,
     conversation: list[ConversationMessage],
     add_generation_prompt: bool,
-    mm_placeholder_counts: dict[str, int] | list[dict[str, int]],
+    mm_placeholder_counts: Union[Dict[str, int], List[Dict[str, int]]],
     tools: Optional[list[dict[str, Any]]] = None,
     documents: Optional[list[dict[str, str]]] = None,
     chat_template: Optional[str] = None,
     chat_template_kwargs: Optional[dict[str, Any]] = None,
-) -> (str | List[str]):
+) -> Union[str, List[str]]:

And, if desired for full 3.8 compliance, switch other local annotations (e.g., list[...] / dict[...]) to List[...] / Dict[...]. As per coding guidelines.

🧹 Nitpick comments (6)
tensorrt_llm/_torch/models/modeling_nemotron_nano.py (1)

519-526: Harden frame-loop alignment and confirm newline intent.

  • Use zip(..., strict=True) to fail fast when frame_separators and num_tokens per frame diverge.
  • You reintroduce "This is a video:\n". If the model expects no trailing newline after the last frame (per PR intent), please confirm this line doesn't inadvertently add an extra newline to the final chunk.
-            for frame_sep, num_tokens in zip(frame_separators,
-                                             num_tokens_per_frame):
+            for frame_sep, num_tokens in zip(frame_separators,
+                                             num_tokens_per_frame, strict=True):
@@
-                for frame_sep in frame_separators:
+                for frame_sep in frame_separators:  # OK as-is
+                    # If you later pair with lengths, prefer: zip(frame_separators, strict=True)

Also applies to: 532-539
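
For orientation, here is a hypothetical sketch of how frame separators and per-frame tokens might be interleaved with no trailing newline after the last frame, per the PR intent. The names frame_separators, num_tokens_per_frame, and the image token string mirror the comment above and are assumptions, not the real modeling_nemotron_nano.py code.

from typing import List

def build_video_prompt(frame_separators: List[str],
                       num_tokens_per_frame: List[int],
                       image_token: str = "<image>") -> str:
    # Hypothetical illustration only.
    if len(frame_separators) != len(num_tokens_per_frame):
        # An explicit length check fails fast instead of silently truncating,
        # similar in spirit to zip(..., strict=True).
        raise ValueError("frame_separators and num_tokens_per_frame must align")
    parts = ["This is a video:\n"]
    for frame_sep, num_tokens in zip(frame_separators, num_tokens_per_frame):
        parts.append(image_token * num_tokens)
        parts.append(frame_sep)
    # No extra newline is appended after the final frame separator.
    return "".join(parts)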

tensorrt_llm/inputs/utils.py (1)

536-556: Make zip strict to catch misaligned placeholder counts.

Add strict=True so content and placeholder counts must match lengths; avoids silent truncation.

-        for conv, mm_placeholder_count in zip(conversation,
-                                              mm_placeholder_counts):
+        for conv, mm_placeholder_count in zip(conversation,
+                                              mm_placeholder_counts, strict=True):

Based on static analysis hints.

examples/models/core/nemotron/README_nano-v2-vl.md (1)

81-84: Tiny grammar nit.

Remove the stray space before the period.

- * Prefix-caching is not supported for Nemotron-nano-v2-VL yet .
+ * Prefix-caching is not supported for Nemotron-nano-v2-VL yet.

Optional: if markdownlint is enforced in CI, consider converting bare URLs to text and normalizing list indentation. Based on learnings.

tests/integration/defs/test_e2e.py (1)

2422-2424: Nemotron-Nano V2 VL BF16 coverage looks good; factor model-specific args.

Additions and keywords are reasonable. To reduce duplication across tests, consider a small helper that returns extra CLI args for a given model/modality (e.g., max_batch_size, kv-cache flags).

+# Pseudo helper
+def _extra_args_for(model_name: str, modality: str) -> list[str]:
+    args = []
+    if model_name == "NVIDIA-Nemotron-Nano-12B-v2-VL-BF16":
+        args += ["--max_batch_size=128", "--disable_kv_cache_reuse"]
+        if modality == "video":
+            args += ["--max_num_tokens=20480"]
+    return args

Then append cmd.extend(_extra_args_for(model_name, modality)) where used.

Also applies to: 2478-2501, 2598-2605

tests/integration/defs/accuracy/test_llm_api_pytorch.py (2)

3831-3845: Annotate mutable class attributes with ClassVar (fixes RUF012) and tighten max_tokens.

  • EXTRA_EVALUATOR_KWARGS, sampling_params, and kv_cache_config are mutable class attributes; annotate with typing.ClassVar.
  • Also align MAX_NUM_TOKENS with MMMU.MAX_OUTPUT_LEN to avoid runaway generations on MMMU.

Apply these diffs:

+from typing import ClassVar
-class TestNemotron_Nano_12B_V2_VL(LlmapiAccuracyTestHarness):
-    MODEL_NAME = "nvidia/NVIDIA-Nemotron-Nano-12B-v2-VL-BF16"
-    MODEL_PATH = f"{llm_models_root()}/NVIDIA-Nemotron-Nano-12B-v2-VL-BF16"
-    MAX_NUM_TOKENS = 25600
-    EXTRA_EVALUATOR_KWARGS = dict(
-        apply_chat_template=True,
-        system_prompt="/no_think",
-    )
+class TestNemotron_Nano_12B_V2_VL(LlmapiAccuracyTestHarness):
+    MODEL_NAME: ClassVar[str] = "nvidia/NVIDIA-Nemotron-Nano-12B-v2-VL-BF16"
+    MODEL_PATH: ClassVar[str] = f"{llm_models_root()}/NVIDIA-Nemotron-Nano-12B-v2-VL-BF16"
+    MAX_NUM_TOKENS: ClassVar[int] = MMMU.MAX_OUTPUT_LEN
+    EXTRA_EVALUATOR_KWARGS: ClassVar[dict] = dict(
+        # apply_chat_template is already True in MMMU.EVALUATOR_KWARGS
+        system_prompt="/no_think",
+    )
-    sampling_params = SamplingParams(max_tokens=MAX_NUM_TOKENS,
-                                     truncate_prompt_tokens=MMMU.MAX_INPUT_LEN,
-                                     temperature=0.0,
-                                     top_k=1,
-                                     stop="<|endoftext|>")
+    sampling_params: ClassVar[SamplingParams] = SamplingParams(
+        max_tokens=MAX_NUM_TOKENS,
+        truncate_prompt_tokens=MMMU.MAX_INPUT_LEN,
+        temperature=0.0,
+        top_k=1,
+        stop="<|endoftext|>",
+    )
-    kv_cache_config = KvCacheConfig(free_gpu_memory_fraction=0.8,
-                                    enable_block_reuse=False)
+    kv_cache_config: ClassVar[KvCacheConfig] = KvCacheConfig(
+        free_gpu_memory_fraction=0.8,
+        enable_block_reuse=False,
+    )

3831-3834: Minor: drop redundant apply_chat_template override.

MMMU.EVALUATOR_KWARGS already sets apply_chat_template=True; overriding it again is redundant. Keep only system_prompt (as above).

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 98453d2 and 3c46fc6.

📒 Files selected for processing (10)
  • examples/models/core/nemotron/README_nano-v2-vl.md (1 hunks)
  • examples/models/core/nemotron/README_nemotron-3.md (1 hunks)
  • tensorrt_llm/_torch/models/__init__.py (1 hunks)
  • tensorrt_llm/_torch/models/modeling_nemotron_nano.py (2 hunks)
  • tensorrt_llm/evaluate/lm_eval.py (1 hunks)
  • tensorrt_llm/inputs/utils.py (4 hunks)
  • tests/integration/defs/accuracy/references/mmmu.yaml (1 hunks)
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py (1 hunks)
  • tests/integration/defs/test_e2e.py (7 hunks)
  • tests/integration/test_lists/qa/llm_function_core.txt (3 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tensorrt_llm/evaluate/lm_eval.py
  • tests/integration/defs/test_e2e.py
  • tensorrt_llm/_torch/models/modeling_nemotron_nano.py
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
  • tensorrt_llm/_torch/models/__init__.py
  • tensorrt_llm/inputs/utils.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • tensorrt_llm/evaluate/lm_eval.py
  • tests/integration/defs/test_e2e.py
  • tensorrt_llm/_torch/models/modeling_nemotron_nano.py
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
  • tensorrt_llm/_torch/models/__init__.py
  • tensorrt_llm/inputs/utils.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tensorrt_llm/evaluate/lm_eval.py
  • tests/integration/defs/test_e2e.py
  • tensorrt_llm/_torch/models/modeling_nemotron_nano.py
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
  • tensorrt_llm/_torch/models/__init__.py
  • tensorrt_llm/inputs/utils.py
🧠 Learnings (15)
📓 Common learnings
Learnt from: venkywonka
Repo: NVIDIA/TensorRT-LLM PR: 6029
File: .github/pull_request_template.md:45-53
Timestamp: 2025-08-27T17:50:13.264Z
Learning: For PR templates in TensorRT-LLM, avoid suggesting changes that would increase developer overhead, such as converting plain bullets to mandatory checkboxes. The team prefers guidance-style bullets that don't require explicit interaction to reduce friction in the PR creation process.
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • examples/models/core/nemotron/README_nano-v2-vl.md
  • tests/integration/defs/test_e2e.py
  • examples/models/core/nemotron/README_nemotron-3.md
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
  • tests/integration/test_lists/qa/llm_function_core.txt
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.

Applied to files:

  • tests/integration/defs/test_e2e.py
  • examples/models/core/nemotron/README_nemotron-3.md
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
  • tests/integration/test_lists/qa/llm_function_core.txt
📚 Learning: 2025-08-26T09:49:04.956Z
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7192
File: tests/integration/test_lists/test-db/l0_dgx_b200.yml:56-72
Timestamp: 2025-08-26T09:49:04.956Z
Learning: In TensorRT-LLM test configuration files, the test scheduling system handles wildcard matching with special rules that prevent duplicate test execution even when the same tests appear in multiple yaml files with overlapping GPU wildcards (e.g., "*b200*" and "*gb200*").

Applied to files:

  • tests/integration/defs/test_e2e.py
  • examples/models/core/nemotron/README_nemotron-3.md
  • tests/integration/test_lists/qa/llm_function_core.txt
📚 Learning: 2025-08-29T14:07:45.863Z
Learnt from: EmmaQiaoCh
Repo: NVIDIA/TensorRT-LLM PR: 7370
File: tests/unittest/trt/model_api/test_model_quantization.py:24-27
Timestamp: 2025-08-29T14:07:45.863Z
Learning: In TensorRT-LLM's CI infrastructure, pytest skip markers (pytest.mark.skip) are properly honored even when test files have __main__ blocks that call test functions directly. The testing system correctly skips tests without requiring modifications to the __main__ block execution pattern.

Applied to files:

  • tests/integration/defs/test_e2e.py
📚 Learning: 2025-08-06T13:58:07.506Z
Learnt from: galagam
Repo: NVIDIA/TensorRT-LLM PR: 6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.

Applied to files:

  • tests/integration/defs/test_e2e.py
  • examples/models/core/nemotron/README_nemotron-3.md
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
  • tests/integration/test_lists/qa/llm_function_core.txt
📚 Learning: 2025-08-06T03:47:16.802Z
Learnt from: venkywonka
Repo: NVIDIA/TensorRT-LLM PR: 6650
File: tests/integration/test_lists/qa/llm_perf_cluster.yml:33-37
Timestamp: 2025-08-06T03:47:16.802Z
Learning: Ministral is a valid model name from Mistral AI, distinct from the regular Mistral models. In TensorRT-LLM test configurations, "ministral_8b" and "ministral_8b_fp8" are correct model identifiers and should not be changed to "mistral_8b".

Applied to files:

  • tests/integration/defs/test_e2e.py
📚 Learning: 2025-08-18T08:42:02.640Z
Learnt from: samuellees
Repo: NVIDIA/TensorRT-LLM PR: 6974
File: tensorrt_llm/serve/scripts/benchmark_dataset.py:558-566
Timestamp: 2025-08-18T08:42:02.640Z
Learning: In TensorRT-LLM's RandomDataset (tensorrt_llm/serve/scripts/benchmark_dataset.py), when using --random-token-ids option, sequence length accuracy is prioritized over semantic correctness for benchmarking purposes. The encode/decode operations should use skip_special_tokens=True and add_special_tokens=False to ensure exact target token lengths.

Applied to files:

  • tensorrt_llm/_torch/models/modeling_nemotron_nano.py
📚 Learning: 2025-09-23T15:12:38.312Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/thop/allreduceOp.cpp:352-446
Timestamp: 2025-09-23T15:12:38.312Z
Learning: In TensorRT-LLM NCCL device implementation, NCCL version 2.28+ requirements are handled at runtime in the nccl_device/config layer rather than with compile-time guards. This allows the allreduceOp to remain version-agnostic and delegates version compatibility validation to the appropriate lower-level components that can gracefully handle unsupported configurations.

Applied to files:

  • examples/models/core/nemotron/README_nemotron-3.md
📚 Learning: 2025-08-01T15:14:45.673Z
Learnt from: yibinl-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 6506
File: examples/models/core/mixtral/requirements.txt:3-3
Timestamp: 2025-08-01T15:14:45.673Z
Learning: In TensorRT-LLM, examples directory can have different dependency versions than the root requirements.txt file. Version conflicts between root and examples dependencies are acceptable because examples are designed to be standalone and self-contained.

Applied to files:

  • examples/models/core/nemotron/README_nemotron-3.md
📚 Learning: 2025-08-20T07:43:36.447Z
Learnt from: ChristinaZ
Repo: NVIDIA/TensorRT-LLM PR: 7068
File: cpp/tensorrt_llm/kernels/moeTopKFuncs.cuh:169-172
Timestamp: 2025-08-20T07:43:36.447Z
Learning: In TensorRT-LLM MOE kernels, when processing up to 128 experts across 32 threads, each thread handles at most 4 experts (N < 5 constraint), where N represents candidates per thread rather than total system capacity.

Applied to files:

  • examples/models/core/nemotron/README_nemotron-3.md
📚 Learning: 2025-08-21T00:16:56.457Z
Learnt from: farshadghodsian
Repo: NVIDIA/TensorRT-LLM PR: 7101
File: docs/source/blogs/tech_blog/blog9_Deploying_GPT_OSS_on_TRTLLM.md:36-36
Timestamp: 2025-08-21T00:16:56.457Z
Learning: TensorRT-LLM container release tags in documentation should only reference published NGC container images. The README badge version may be ahead of the actual published container versions.

Applied to files:

  • examples/models/core/nemotron/README_nemotron-3.md
📚 Learning: 2025-08-11T20:09:24.389Z
Learnt from: achartier
Repo: NVIDIA/TensorRT-LLM PR: 6763
File: tests/integration/defs/triton_server/conftest.py:16-22
Timestamp: 2025-08-11T20:09:24.389Z
Learning: In the TensorRT-LLM test infrastructure, the team prefers simple, direct solutions (like hard-coding directory traversal counts) over more complex but robust approaches when dealing with stable directory structures. They accept the maintenance cost of updating tests if the layout changes.

Applied to files:

  • examples/models/core/nemotron/README_nemotron-3.md
📚 Learning: 2025-10-20T16:54:09.824Z
Learnt from: nvchenghaoz
Repo: NVIDIA/TensorRT-LLM PR: 8469
File: tensorrt_llm/_torch/auto_deploy/custom_ops/rms_norm.py:6-6
Timestamp: 2025-10-20T16:54:09.824Z
Learning: In tensorrt_llm/_torch/auto_deploy/custom_ops/rms_norm.py, the import `from ...modules.mamba.layernorm_gated import _layer_norm_fwd` is correct and should not be changed to modules.fla.layernorm_gated. The _layer_norm_fwd function exists in both modules/mamba/layernorm_gated.py and modules/fla/layernorm_gated.py, but the mamba version is the intended implementation for this use case.

Applied to files:

  • tensorrt_llm/_torch/models/__init__.py
📚 Learning: 2025-09-17T02:48:52.732Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 7781
File: tests/integration/test_lists/waives.txt:313-313
Timestamp: 2025-09-17T02:48:52.732Z
Learning: In TensorRT-LLM, `tests/integration/test_lists/waives.txt` is specifically for waiving/skipping tests, while other test list files like those in `test-db/` and `qa/` directories are for different test execution contexts (pre-merge, post-merge, QA tests). The same test appearing in both waives.txt and execution list files is intentional - the test is part of test suites but will be skipped due to the waiver.

Applied to files:

  • tests/integration/test_lists/qa/llm_function_core.txt
🧬 Code graph analysis (3)
tensorrt_llm/evaluate/lm_eval.py (1)
tensorrt_llm/inputs/utils.py (1)
  • ConversationMessage (427-435)
tests/integration/defs/accuracy/test_llm_api_pytorch.py (4)
tests/integration/defs/accuracy/accuracy_core.py (3)
  • MMMU (386-403)
  • evaluate (184-247)
  • evaluate (766-776)
tests/integration/defs/conftest.py (1)
  • llm_models_root (80-94)
tensorrt_llm/evaluate/lm_eval.py (4)
  • apply_chat_template (66-78)
  • apply_chat_template (197-249)
  • MMMU (662-715)
  • evaluate (394-429)
tensorrt_llm/llmapi/llm_args.py (1)
  • KvCacheConfig (1265-1409)
tensorrt_llm/_torch/models/__init__.py (1)
tensorrt_llm/_torch/models/modeling_nemotron_nano.py (1)
  • NemotronH_Nano_VL_V2 (682-855)
🪛 markdownlint-cli2 (0.18.1)
examples/models/core/nemotron/README_nano-v2-vl.md

MD007 (ul-indent) — unordered list indentation, expected 0:
  • Actual 1: lines 4, 5, 6, 18, 20, 26, 32, 38, 46, 82, 83, 84
  • Actual 2: lines 9, 10, 11, 12, 13, 14

MD034 (no-bare-urls) — bare URL used: lines 4, 5, 6

🪛 Ruff (0.14.2)
tests/integration/defs/accuracy/test_llm_api_pytorch.py

3831-3834: Mutable class attributes should be annotated with typing.ClassVar

(RUF012)

tensorrt_llm/inputs/utils.py

537-538: zip() without an explicit strict= parameter

Add explicit value for parameter strict=

(B905)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (8)
tensorrt_llm/inputs/utils.py (1)

378-378: LGTM: model added to placeholder exceptions.

Including "NemotronH_Nano_VL_V2" here aligns with the new conversion path and custom content assembly.

examples/models/core/nemotron/README_nemotron-3.md (1)

1-1: LGTM.

Title update is consistent with file scope.

tests/integration/defs/accuracy/references/mmmu.yaml (1)

3-7: Record provenance for the new MMMU number.

Please add the eval date, commit/tag of the model, and harness version flags used so we can reproduce 26.67 later.

tests/integration/defs/test_e2e.py (1)

2841-2845: FP8 path: skip and flags look appropriate.

  • Skipping video for FP8 chunked prefill is explicitly handled; good.
  • Disabling KV cache reuse and bounding max_batch_size are consistent with BF16.

If FP8 enables chunked prefill video later, please drop the skip and tune max_num_tokens accordingly.

Also applies to: 2863-2869, 3005-3010

tensorrt_llm/_torch/models/__init__.py (1)

20-20: No stale imports detected—migration is clean.

The grep search confirms that the old module path modeling_nanov2vlm has been completely removed. All remaining references to NemotronH_Nano_VL_V2 are consistent: string-based model type checks in utils.py, the correct new import in __init__.py, class definition in modeling_nemotron_nano.py, and documentation. No split sources or stale imports remain.

tests/integration/test_lists/qa/llm_function_core.txt (3)

612-612: Symbol rename alignment looks good.

Updated to TestNemotron_Nano_12B_V2_VL::test_auto_dtype; matches the new class.


656-661: New Nemotron Nano 12B V2 VL multimodal e2e entries — OK.

The new image/video/mixture_text_image cases look consistent with existing patterns.


690-690: FP8 chunked prefill entry for Nemotron Nano 12B V2 VL — OK.

Name/path format matches the existing suite.

@Wanli-Jiang (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #23177 [ run ] triggered by Bot. Commit: 3c46fc6

@tensorrt-cicd (Collaborator)

PR_Github #23177 [ run ] completed with state SUCCESS. Commit: 3c46fc6
/LLM/main/L0_MergeRequest_PR pipeline #17472 completed with status: 'FAILURE'

@Wanli-Jiang Wanli-Jiang force-pushed the user/williamj/update-nanov2vlm branch 3 times, most recently from 102c720 to cee6a3c on November 5, 2025 09:49
@Wanli-Jiang Wanli-Jiang requested a review from a team as a code owner on November 5, 2025 09:49
@Wanli-Jiang (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #23643 [ run ] triggered by Bot. Commit: cee6a3c

@tensorrt-cicd (Collaborator)

PR_Github #23643 [ run ] completed with state SUCCESS. Commit: cee6a3c
/LLM/main/L0_MergeRequest_PR pipeline #17788 completed with status: 'FAILURE'

@Wanli-Jiang (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #23694 [ run ] triggered by Bot. Commit: cee6a3c

@tensorrt-cicd (Collaborator)

PR_Github #23694 [ run ] completed with state SUCCESS. Commit: cee6a3c
/LLM/main/L0_MergeRequest_PR pipeline #17827 completed with status: 'SUCCESS'

@Wanli-Jiang Wanli-Jiang force-pushed the user/williamj/update-nanov2vlm branch 2 times, most recently from c199033 to 1233001 on November 7, 2025 03:28
@Wanli-Jiang (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #23800 [ run ] triggered by Bot. Commit: 1233001

@tensorrt-cicd (Collaborator)

PR_Github #23800 [ run ] completed with state SUCCESS. Commit: 1233001
/LLM/main/L0_MergeRequest_PR pipeline #17916 completed with status: 'SUCCESS'


@yechank-nvidia yechank-nvidia left a comment

Thanks for adding the unit test. Left a comment about the deletion of the e2e test.

@Wanli-Jiang Wanli-Jiang force-pushed the user/williamj/update-nanov2vlm branch from 1233001 to 2ded563 on November 10, 2025 12:06
Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com>
@Wanli-Jiang Wanli-Jiang force-pushed the user/williamj/update-nanov2vlm branch from 2ded563 to 5fe0107 on November 10, 2025 12:14
@Wanli-Jiang (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #24022 [ run ] triggered by Bot. Commit: 5fe0107

@tensorrt-cicd (Collaborator)

PR_Github #24022 [ run ] completed with state SUCCESS. Commit: 5fe0107
/LLM/main/L0_MergeRequest_PR pipeline #18097 completed with status: 'FAILURE'


@yechank-nvidia yechank-nvidia left a comment

LGTM

@Wanli-Jiang (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #24088 [ run ] triggered by Bot. Commit: 5fe0107

@Wanli-Jiang Wanli-Jiang enabled auto-merge (squash) November 11, 2025 03:51
@tensorrt-cicd (Collaborator)

PR_Github #24088 [ run ] completed with state SUCCESS. Commit: 5fe0107
/LLM/main/L0_MergeRequest_PR pipeline #18153 completed with status: 'FAILURE'

@Wanli-Jiang (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #24139 [ run ] triggered by Bot. Commit: 5fe0107

@tensorrt-cicd (Collaborator)

PR_Github #24139 [ run ] completed with state SUCCESS. Commit: 5fe0107
/LLM/main/L0_MergeRequest_PR pipeline #18201 completed with status: 'FAILURE'

@Wanli-Jiang (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #24188 [ run ] triggered by Bot. Commit: 5fe0107

@tensorrt-cicd (Collaborator)

PR_Github #24188 [ run ] completed with state SUCCESS. Commit: 5fe0107
/LLM/main/L0_MergeRequest_PR pipeline #18238 completed with status: 'SUCCESS'

@Wanli-Jiang Wanli-Jiang merged commit ebdd1cc into NVIDIA:main Nov 11, 2025
5 checks passed
suyoggupta pushed a commit to nv-auto-deploy/TensorRT-LLM that referenced this pull request Nov 12, 2025
…VIDIA#8840)

Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com>