
Conversation

@stnie
Collaborator

@stnie stnie commented Oct 23, 2025

  • added copyright header
  • Introduced a DummyModel and associated configurations to facilitate deterministic outputs for beam search tests.
  • Updated test cases to validate generation logits, logprobs, and cache indirection.
  • Refactored input prompts and expected outputs to align with the new model structure.
  • Added checks for context logits and overall output validation to ensure correctness in beam search functionality.

Summary by CodeRabbit

  • Tests
    • Enhanced beam search test infrastructure with improved scaffolding and deterministic utilities for more robust and comprehensive test coverage.

Description

Updated beam search tests to produce deterministic output, so that the choice of hardware cannot affect the results.

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
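
For example, a hypothetical invocation combining flags documented above (the stage name is the placeholder from the help text, not a real stage in this PR) might be:

/bot run --disable-fail-fast --stage-list "A10-PyTorch-1"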

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause the top of tree to break.

@stnie stnie requested review from Funatiq, dcampora and ixlmar October 23, 2025 10:16
@stnie
Collaborator Author

stnie commented Oct 23, 2025

/bot run

@stnie stnie force-pushed the develop/tests/beam_search_test_robustness branch from 0a4c810 to 86520ad on October 23, 2025 16:10
@stnie
Collaborator Author

stnie commented Oct 23, 2025

/bot run

@tensorrt-cicd
Collaborator

PR_Github #22307 [ run ] triggered by Bot. Commit: 86520ad

@tensorrt-cicd
Collaborator

PR_Github #22307 [ run ] completed with state SUCCESS. Commit: 86520ad
/LLM/main/L0_MergeRequest_PR pipeline #16817 completed with status: 'SUCCESS'

stnie added 2 commits October 27, 2025 13:50
…odel and validation checks to provide deterministic output

- added copyright header
- Introduced a DummyModel and associated configurations to facilitate deterministic outputs for beam search tests.
- Updated test cases to validate generation logits, logprobs, and cache indirection.
- Refactored input prompts and expected outputs to align with the new model structure.
- Added checks for context logits and overall output validation to ensure correctness in beam search functionality.

Signed-off-by: Stefan Niebler <82932102+stnie@users.noreply.github.com>
…or clarity and correctness

- Removed reliance on additional output from overlap scheduler
- Removed unused parameters from `get_expected_outputs` and `validate_output` functions to simplify the interface.
- Updated comments for better clarity on the logic flow within the beam search output validation.
- Adjusted cache indirection checks to ensure accuracy in the last token generation.
- Enhanced readability by restructuring code and comments in the test cases.

Signed-off-by: Stefan Niebler <82932102+stnie@users.noreply.github.com>
@stnie stnie force-pushed the develop/tests/beam_search_test_robustness branch from 86520ad to 44df581 on October 27, 2025 12:50
@stnie stnie marked this pull request as ready for review October 27, 2025 12:50
@coderabbitai
Contributor

coderabbitai bot commented Oct 27, 2025

📝 Walkthrough

Walkthrough

A single test file undergoes comprehensive refactoring to replace external model dependencies with deterministic dummy implementations. New classes for dummy models, configuration and weight loaders, and utility functions enable reproducible, isolated beam search testing without external model references.

Changes

Dummy Model Infrastructure — tests/unittest/_torch/sampler/test_beam_search.py
Added DummyConfig (PretrainedConfig subclass), DummyModel (minimal forward pass with model registration), DummyWeightLoader (registered checkpoint loader returning an empty dict), and DummyConfigLoader (registered config loader wrapping DummyConfig). A sketch of the deterministic-model idea follows this table.

Test Output & Validation Utilities — tests/unittest/_torch/sampler/test_beam_search.py
Added a BeamSearchTestOutput data class holding outputs and cache_indirection; added a get_expected_outputs() function for deterministic 2-beam expected outputs; added validation helpers: check_generation_logits, check_logprobs, check_cache_indirection, validate_output_beam, check_context_logits, validate_output, and validate_outputs.

Test Fixture & Parameterization Updates — tests/unittest/_torch/sampler/test_beam_search.py
Refactored the llm fixture to use dummy loaders instead of HfCheckpointLoader; replaced the expected_outputs fixture with programmatic get_expected_outputs(); introduced fixed_params and llm_cuda_graph fixtures with beam width constraints; updated test_beam_search_output_shapes and test_beam_search_output_shapes_cuda_graph_and_overlap to use the new utilities and numeric prompts.
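
As a rough illustration of the deterministic-model idea only (this is not the code in the test file; the actual DummyModel, DummyConfig, and loaders are registered via @register_auto_model("DummyModel"), @register_checkpoint_weight_loader("DUMMY_FORMAT"), and @register_config_loader("DUMMY_FORMAT"), as summarized above), a minimal sketch could look like:

import torch


class DeterministicDummyModel(torch.nn.Module):
    """Toy model whose logits depend only on the input token ids.

    With no learned weights and no randomness, argmax and beam ordering are
    identical on every device, which is the property the refactored tests
    rely on.
    """

    def __init__(self, vocab_size: int = 16):
        super().__init__()
        self.vocab_size = vocab_size

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        # Peak the logit row for token id t at (t + 1) % vocab_size, so the
        # most likely next token is a pure function of the current token.
        targets = (input_ids.to(torch.float32) + 1.0) % self.vocab_size  # [seq]
        vocab = torch.arange(self.vocab_size, dtype=torch.float32)       # [vocab]
        return -torch.abs(vocab.unsqueeze(0) - targets.unsqueeze(-1))    # [seq, vocab]


if __name__ == "__main__":
    model = DeterministicDummyModel()
    print(model(torch.tensor([3, 7, 1])).argmax(dim=-1))  # tensor([4, 8, 2]) on any hardware

In the actual test, a PretrainedConfig subclass and no-op weight/config loaders are wired in as well, so the llm fixture can build the model without loading a real checkpoint.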

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Areas requiring attention:
    • Correctness of DummyModel.forward() logic and alignment with actual model interface expectations for beam search
    • Deterministic output generation in get_expected_outputs() and correctness of expected values for 2-beam setup
    • Validation helper implementations to ensure they correctly assert output properties
    • Fixture wiring and parameter constraints (beam width fixed to 2, max_tokens adjustments) applied consistently across tests

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
Description Check — ⚠️ Warning: The PR description provides a brief but clear explanation of the changes and their purpose (making beam search tests deterministic to prevent hardware-dependent variations), and the PR Checklist is completed with all boxes marked. However, the Test Coverage section, which is a required component of the repository's PR template, is completely empty with only the comment placeholder remaining. The description does not list any specific tests that validate the changes or safeguard the new functionality, representing a significant gap in the required information that the template explicitly requests. Resolution: Fill in the Test Coverage section by explicitly listing which test cases validate the changes, such as specific test methods in the modified test file that verify the deterministic dummy model behavior, generation logits, cache indirection, and context logits functionality. This section should clearly identify how the changes are tested and ensure sufficient test coverage per the PR template requirements.

Docstring Coverage — ⚠️ Warning: Docstring coverage is 41.67%, which is insufficient; the required threshold is 80.00%. Resolution: You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (1 passed)
Title Check — ✅ Passed: The PR title "[https://nvbugs/5593199][test] Enhance beam search tests deterministic dummy model" is specific, concise, and directly related to the primary change in the changeset. It follows the required template format with the NVBugs ticket ID and proper type designation [test], and clearly communicates the core enhancement: adding a deterministic dummy model to beam search tests. The title accurately reflects the introduction of DummyConfig, DummyModel, and associated loaders, as well as the refactoring of test cases to use deterministic outputs rather than external model references.


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (5)
tests/unittest/_torch/sampler/test_beam_search.py (5)

39-53: Add a docstring to the public class.

The DummyConfig class is missing a docstring. According to coding guidelines, classes should have Google-style docstrings describing their purpose.

Example:

 class DummyConfig(PretrainedConfig):
+    """Dummy configuration for deterministic beam search testing.
+    
+    Provides minimal configuration attributes needed to instantiate
+    a DummyModel for reproducible test outputs.
+    """

56-130: Add a docstring to the public class.

The DummyModel class is missing a docstring. According to coding guidelines, classes should have Google-style docstrings.

Example:

 @register_auto_model("DummyModel")
 class DummyModel(torch.nn.Module):
+    """Deterministic dummy model for beam search testing.
+    
+    Produces predictable logits based on input token IDs to enable
+    reproducible validation of beam search behavior without hardware
+    dependencies.
+    """

133-146: Add a class-level docstring.

While the method has a good docstring, the class itself should have a docstring describing its purpose.

Example:

 @register_checkpoint_weight_loader("DUMMY_FORMAT")
 class DummyWeightLoader(BaseWeightLoader):
+    """Weight loader for dummy test models that returns no weights."""

149-160: Add a class-level docstring.

While the method has a good docstring, the class itself should have a docstring describing its purpose.

Example:

 @register_config_loader("DUMMY_FORMAT")
 class DummyConfigLoader(BaseConfigLoader):
+    """Config loader for dummy test models that returns DummyConfig."""

167-172: Add a docstring to the public class.

The BeamSearchTestOutput class should have a docstring describing its purpose and attributes.

Example:

 class BeamSearchTestOutput:
+    """Container for expected beam search test outputs.
+    
+    Attributes:
+        outputs: Expected token IDs for each beam.
+        cache_indirection: Expected cache indirection indices for each beam.
+    """
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0019d99 and 44df581.

📒 Files selected for processing (1)
  • tests/unittest/_torch/sampler/test_beam_search.py (7 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tests/unittest/_torch/sampler/test_beam_search.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • tests/unittest/_torch/sampler/test_beam_search.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tests/unittest/_torch/sampler/test_beam_search.py
🧬 Code graph analysis (1)
tests/unittest/_torch/sampler/test_beam_search.py (8)
tensorrt_llm/models/modeling_utils.py (1)
  • PretrainedConfig (369-570)
tensorrt_llm/_torch/attention_backend/interface.py (2)
  • AttentionMetadata (43-350)
  • seq_lens_cuda (219-220)
tensorrt_llm/_torch/models/checkpoints/hf/checkpoint_loader.py (1)
  • HfCheckpointLoader (19-75)
tensorrt_llm/_torch/models/checkpoints/base_config_loader.py (1)
  • BaseConfigLoader (6-13)
tensorrt_llm/_torch/models/checkpoints/base_weight_loader.py (1)
  • BaseWeightLoader (5-20)
tensorrt_llm/_torch/models/checkpoints/base_weight_mapper.py (2)
  • BaseWeightMapper (10-165)
  • skip_modules (133-134)
tensorrt_llm/executor/result.py (3)
  • CompletionOutput (96-160)
  • GenerationResult (764-939)
  • prompt_token_ids (817-818)
tensorrt_llm/_torch/pyexecutor/llm_request.py (2)
  • generation_logits (317-326)
  • additional_generation_outputs (353-362)
🪛 Ruff (0.14.1)
tests/unittest/_torch/sampler/test_beam_search.py

71-71: Unused method argument: args

(ARG002)


73-73: Unused method argument: position_ids

(ARG002)


76-76: Unused method argument: kwargs

(ARG002)


128-128: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)


135-135: Unused method argument: checkpoint_dir

(ARG002)


135-135: Unused method argument: kwargs

(ARG002)


151-151: Unused method argument: checkpoint_dir

(ARG002)


151-151: Unused method argument: kwargs

(ARG002)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (8)
tests/unittest/_torch/sampler/test_beam_search.py (8)

1-14: LGTM!

The copyright header is correctly formatted and includes the current year (2025) as required by the coding guidelines.


15-35: LGTM!

Import statements follow coding guidelines by maintaining module namespace and are all relevant to the test functionality.


174-257: LGTM!

The get_expected_outputs function has excellent documentation and implements deterministic beam search logic correctly. The algorithm properly tracks beam scores, handles beam swapping, and manages cache indirection updates.


260-280: LGTM!

The fixture configuration properly sets up the LLM with the dummy model loaders and appropriate constraints for deterministic testing.


283-299: LGTM!

The CUDA graph fixture is properly configured with overlap scheduler enabled and appropriate batch sizes for graph capture.


302-410: LGTM!

The validation helper functions are well-structured with clear docstrings and implement comprehensive checks for generation logits, logprobs, cache indirection, context logits, and overall output correctness. The decomposition into focused helper functions promotes readability and maintainability.


413-473: LGTM!

The test functions are comprehensively parametrized to cover various combinations of return_log_probs, gather_generation_logits, gather_context_logits, num_output_beams, and num_prompts. The tests properly configure sampling parameters with end_id=-1 for deterministic behavior and additional_model_outputs=["cache_indirection"] to validate cache indirection correctness.


476-477: LGTM!

Standard pytest main guard for direct script execution.

@stnie
Collaborator Author

stnie commented Oct 27, 2025

/bot run

@tensorrt-cicd
Collaborator

PR_Github #22650 [ run ] triggered by Bot. Commit: 44df581

@tensorrt-cicd
Collaborator

PR_Github #22650 [ run ] completed with state SUCCESS. Commit: 44df581
/LLM/main/L0_MergeRequest_PR pipeline #17074 completed with status: 'FAILURE'

@stnie
Collaborator Author

stnie commented Oct 28, 2025

/bot run

@tensorrt-cicd
Collaborator

PR_Github #22757 [ run ] triggered by Bot. Commit: 44df581

@tensorrt-cicd
Collaborator

PR_Github #22757 [ run ] completed with state SUCCESS. Commit: 44df581
/LLM/main/L0_MergeRequest_PR pipeline #17159 completed with status: 'SUCCESS'

@dcampora dcampora enabled auto-merge (squash) October 29, 2025 05:11
@dcampora dcampora merged commit 19ca7b1 into NVIDIA:main Oct 29, 2025
7 checks passed
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Nov 1, 2025
…c dummy model (NVIDIA#8625)

Signed-off-by: Stefan Niebler <82932102+stnie@users.noreply.github.com>
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Nov 3, 2025
…c dummy model (NVIDIA#8625)

Signed-off-by: Stefan Niebler <82932102+stnie@users.noreply.github.com>
@stnie stnie deleted the develop/tests/beam_search_test_robustness branch December 18, 2025 07:47
