Conversation

ixlmar (Collaborator) commented Oct 16, 2025

Description

Ensure that unsupported request parameters are handled gracefully.

Test Coverage

Relevant tests added.

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.
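As an illustration (the stage name and comment text below are made up, not taken from this PR), the subcommands are posted as GitHub PR comments; echo is used here only so the block runs as a script:

```shell
# /bot subcommands are posted as GitHub PR comment bodies, not shell commands;
# echo makes the examples runnable as written.
echo '/bot run --disable-fail-fast --stage-list "A10-PyTorch-1"'
echo '/bot kill'
echo '/bot skip --comment "docs-only change, CI unaffected"'
echo '/bot reuse-pipeline'
```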

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allows the new pipeline to reuse build artifacts and skip successful test stages from the specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevents the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline, ensuring that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

Summary by CodeRabbit

  • Bug Fixes

    • Enhanced beam width parameter validation to catch configuration mismatches and raise appropriate errors.
    • Improved error handling for unsupported beam search configurations.
  • Tests

    • Added comprehensive parameter validation tests for beam search, including edge cases for beam width and greedy decoding scenarios.

ixlmar commented Oct 16, 2025

/bot run --stage-list "L40S-PyTorch-1,L40S-PyTorch-2,A30-PyTorch-1,A30-PyTorch-2"

tensorrt-cicd:

PR_Github #21589 [ run ] triggered by Bot

tensorrt-cicd:

PR_Github #21589 [ run ] completed with state DISABLED
L0 testing is limited to prioritized users. User ixlmar is not in the prioritized list. L0 testing cannot be triggered.

chzblych (Collaborator):

/bot run --stage-list "L40S-PyTorch-1,L40S-PyTorch-2,A30-PyTorch-1,A30-PyTorch-2"

tensorrt-cicd:

PR_Github #21601 [ run ] triggered by Bot

tensorrt-cicd:

PR_Github #21601 [ run ] completed with state FAILURE

chzblych (Collaborator):

/bot run --stage-list "L40S-PyTorch-1,L40S-PyTorch-2,A30-PyTorch-1,A30-PyTorch-2"

tensorrt-cicd:

PR_Github #21608 [ run ] triggered by Bot

tensorrt-cicd:

PR_Github #21608 [ run ] completed with state FAILURE

ixlmar commented Oct 17, 2025

/bot run --stage-list "L40S-PyTorch-1,L40S-PyTorch-2,A30-PyTorch-1,A30-PyTorch-2"

tensorrt-cicd:

PR_Github #21666 [ run ] triggered by Bot

tensorrt-cicd:

PR_Github #21666 [ run ] completed with state SUCCESS
/LLM/release-1.1/L0_MergeRequest_PR pipeline #174 (Partly Tested) completed with status: 'FAILURE'

ixlmar commented Oct 17, 2025

/bot run --disable-fail-fast --stage-list "L40S-PyTorch-1,L40S-PyTorch-2,A30-PyTorch-1,A30-PyTorch-2"

ixlmar commented Oct 17, 2025

/bot run --stage-list "L40S-PyTorch-1,L40S-PyTorch-2,A30-PyTorch-1,A30-PyTorch-2" --detailed-log

tensorrt-cicd:

PR_Github #21680 [ run ] triggered by Bot

tensorrt-cicd:

PR_Github #21681 [ run ] triggered by Bot

tensorrt-cicd:

PR_Github #21680 [ run ] completed with state ABORTED

tensorrt-cicd:

PR_Github #21681 [ run ] completed with state SUCCESS
/LLM/release-1.1/L0_MergeRequest_PR pipeline #177 (Partly Tested) completed with status: 'FAILURE'

@ixlmar ixlmar force-pushed the fix/beam-search-request-validation branch from 5e1177f to c42fd3f Compare October 17, 2025 14:33
ixlmar commented Oct 17, 2025

/bot run --stage-list "L40S-PyTorch-1,L40S-PyTorch-2,A30-PyTorch-1,A30-PyTorch-2" --detailed-log

tensorrt-cicd:

PR_Github #21712 [ run ] triggered by Bot. Commit: c42fd3f

@ixlmar ixlmar requested a review from Funatiq October 17, 2025 15:07
tensorrt-cicd:

PR_Github #21712 [ run ] completed with state SUCCESS. Commit: c42fd3f
/LLM/release-1.1/L0_MergeRequest_PR pipeline #179 (Partly Tested) completed with status: 'FAILURE'

@ixlmar ixlmar force-pushed the fix/beam-search-request-validation branch 2 times, most recently from 7ef2ed8 to ddbcecf Compare October 20, 2025 07:33
ixlmar commented Oct 20, 2025

/bot run --stage-list "L40S-PyTorch-1,L40S-PyTorch-2,A30-PyTorch-1,A30-PyTorch-2" --detailed-log

4 similar comments (the same /bot run command, reposted by ixlmar on Oct 20, 2025)
@ixlmar ixlmar force-pushed the fix/beam-search-request-validation branch from ddbcecf to 0b006f5 Compare October 20, 2025 10:50
ixlmar commented Oct 20, 2025

/bot run

tensorrt-cicd:

PR_Github #21906 [ run ] triggered by Bot. Commit: 0b006f5

@ixlmar ixlmar marked this pull request as ready for review October 20, 2025 12:15
@ixlmar ixlmar requested a review from a team as a code owner October 20, 2025 12:15
ixlmar commented Oct 20, 2025

@MartinMarciniszyn Could you review this on behalf of trt-llm-release-branch-approval, please?

coderabbitai bot commented Oct 20, 2025

📝 Walkthrough

This PR refactors beam-width validation from the request queue's filtering stage to the py_executor's validation layer. The request queue's special item handler now removes beam-width checks and focuses on shutdown and cancellation signals. Concurrently, comprehensive parameter validation tests are added with hardware-aware test behavior for beam search.
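A minimal sketch of the relocated check, assuming hypothetical names modeled on the walkthrough (the actual py_executor code may differ):

```python
# Hypothetical sketch: names mirror the walkthrough, not necessarily the repo.
from dataclasses import dataclass


@dataclass
class SamplingConfig:
    beam_width: int


@dataclass
class Request:
    sampling_config: SamplingConfig


class PyExecutorSketch:
    def __init__(self, max_beam_width: int):
        self.max_beam_width = max_beam_width

    def _validate_request(self, request: Request) -> None:
        # The beam-width check now lives in the validation layer and raises
        # ValueError instead of silently filtering in the request queue.
        beam_width = request.sampling_config.beam_width
        if beam_width != self.max_beam_width:
            raise ValueError(
                f"Request beam width {beam_width} "
                f"is not equal to max_beam_width {self.max_beam_width}"
            )
```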

Changes

  • Production code refactoring — tensorrt_llm/_torch/pyexecutor/executor_request_queue.py, tensorrt_llm/_torch/pyexecutor/py_executor.py: Moves beam-width validation from _validate_and_filter_requests (renamed to _handle_special_queue_items) to _validate_request in py_executor. The queue handler now only processes shutdown and cancellation signals; the beam-width check is enforced at the request validation stage via ValueError.

  • Request queue test updates — tests/unittest/_torch/executor/test_executor_request_queue.py: Renames the test method test_validate_and_filter_requests to test_handle_special_queue_items to align with the production code refactoring; updates assertions to reflect the new filtering behavior.

  • Beam search test suite expansion — tests/unittest/_torch/sampler/test_beam_search.py: Introduces hardware-aware test logic (SM version 89 detection), a centralized FIXED_PARAMS constant, fuzzy beam-ordering matching, and a new TestParameterValidation class validating beam_search and beam_width parameters with error handling.

  • Test configuration — tests/integration/test_lists/test-db/l0_l40s.yml: Adds unittest/_torch/sampler/test_beam_search.py to the PyTorch test list.

Sequence Diagram

sequenceDiagram
    participant Client
    participant Executor as py_executor
    participant Queue as executor_queue
    
    rect rgb(220, 240, 255)
    note over Client,Queue: Old flow (beam-width in queue)
    Client->>Executor: _validate_request()
    Executor->>Executor: token range check
    Executor-->>Client: validation pass
    Client->>Queue: _validate_and_filter_requests()
    Queue->>Queue: beam-width check (removed)
    Queue-->>Client: filtered requests
    end
    
    rect rgb(240, 220, 255)
    note over Client,Queue: New flow (beam-width in validator)
    Client->>Executor: _validate_request()
    Executor->>Executor: beam-width check<br/>(sampling_config.beam_width<br/>vs max_beam_width)
    Executor->>Executor: token range check
    Executor-->>Client: validation pass/ValueError
    Client->>Queue: _handle_special_queue_items()
    Queue->>Queue: shutdown/cancellation only
    Queue-->>Client: accepted requests
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

The refactoring introduces cohesive changes across production validation layers, but the substantial test additions in test_beam_search.py (hardware-specific branching, fuzzy matching logic, and a new parameter validation test class with multiple assertion patterns) require careful review across heterogeneous test scenarios and dependencies.

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)

  • Description Check ⚠️ Warning: The pull request description is largely incomplete and lacks the detail required by the template. The Description section provides only a single vague sentence ("Ensure that unsupported request parameters are handled gracefully") without explaining what issue is being addressed, why it matters, or what the solution entails. The Test Coverage section is similarly thin, stating only "Relevant tests added" without listing specific test files or methods. Resolution: expand the Description to explain the problem being solved (why beam search request validation was needed), describe the solution and how the validation logic was moved or refactored, and list the specific tests added (e.g., TestParameterValidation.test_smaller_beam_width, test_use_beam_search_false) with a brief note on what each covers.

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 35.29%, below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.

✅ Passed checks (1 passed)

  • Title Check ✅ Passed: The title "[TRTLLM-8650][fix] beam search request validation" directly and specifically relates to the primary changes: adding validation logic for beam search request parameters, moving beam-width validation to py_executor.py, removing it from executor_request_queue.py, and adding comprehensive tests. The title is clear, concise, and meaningful for scanning repository history.

coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (5)
tensorrt_llm/_torch/pyexecutor/py_executor.py (2)

1320-1323: Shorten exception message (TRY003) without changing semantics

Trim the message to satisfy linters yet keep tests matching.

-                raise ValueError(
-                    f"Request beam width {sampling_config.beam_width} "
-                    f"is not equal to max_beam_width {self.max_beam_width}. This is not supported!"
-                )
+                raise ValueError(
+                    f"Request beam width {sampling_config.beam_width} "
+                    f"is not equal to max_beam_width {self.max_beam_width}"
+                )

1325-1336: Defensive access to optional multimodal flag

Avoid potential AttributeError if py_multimodal_data is missing.

-                has_mm = bool(request.py_multimodal_data)
+                has_mm = bool(getattr(request, "py_multimodal_data", None))
tests/unittest/_torch/executor/test_executor_request_queue.py (1)

328-347: Remove stale comment referencing beam validation

Beam validation moved to PyExecutor; this unit tests only queue filtering.

-    # Create a mock request without sampling_config to avoid beam validation
     mock_request = Mock()
-    delattr(mock_request, 'sampling_config') if hasattr(
-        mock_request, 'sampling_config') else None
tests/unittest/_torch/sampler/test_beam_search.py (2)

248-267: Use raw regex strings in pytest.raises(match=...) (RUF043)

Prefix patterns with r'' to avoid unintended escapes and silence linters.

-        with pytest.raises(
-                ValueError,
-                match=
-                ".*Greedy decoding in the LLM API does not allow multiple returns.*"
-        ):
+        with pytest.raises(
+                ValueError,
+                match=r".*Greedy decoding in the LLM API does not allow multiple returns.*",
+        ):
...
-        with pytest.raises(
-                ValueError,
-                match=
-                ".*Greedy decoding in the LLM API does not allow multiple returns.*"
-        ):
+        with pytest.raises(
+                ValueError,
+                match=r".*Greedy decoding in the LLM API does not allow multiple returns.*",
+        ):
...
-        with pytest.raises(
-                RequestError,
-                match=".*Request beam width 2 is not equal to max_beam_width 4*"
-        ):
+        with pytest.raises(
+                RequestError,
+                match=r".*Request beam width 2 is not equal to max_beam_width 4.*",
+        ):

Also applies to: 270-289, 298-309
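The rationale behind RUF043 is easy to demonstrate: in a non-raw string literal, an escape such as \b becomes a backspace character before the regex engine ever sees it, so the pattern silently fails to match.

```python
import re

# In a normal string literal, "\b" is the backspace character (\x08), so this
# pattern searches for literal backspaces around "width" and never matches.
assert re.search("\bwidth\b", "beam width 2") is None

# A raw string preserves the backslash, so \b is a word-boundary assertion.
assert re.search(r"\bwidth\b", "beam width 2") is not None
```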


271-289: Fix test name typo

Rename “ommitted” -> “omitted” for clarity.

-def test_use_beam_search_ommitted(
+def test_use_beam_search_omitted(
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8ce2dc5 and 0b006f5.

📒 Files selected for processing (5)
  • tensorrt_llm/_torch/pyexecutor/executor_request_queue.py (2 hunks)
  • tensorrt_llm/_torch/pyexecutor/py_executor.py (1 hunks)
  • tests/integration/test_lists/test-db/l0_l40s.yml (1 hunks)
  • tests/unittest/_torch/executor/test_executor_request_queue.py (2 hunks)
  • tests/unittest/_torch/sampler/test_beam_search.py (5 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tensorrt_llm/_torch/pyexecutor/executor_request_queue.py
  • tests/unittest/_torch/executor/test_executor_request_queue.py
  • tensorrt_llm/_torch/pyexecutor/py_executor.py
  • tests/unittest/_torch/sampler/test_beam_search.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • tensorrt_llm/_torch/pyexecutor/executor_request_queue.py
  • tests/unittest/_torch/executor/test_executor_request_queue.py
  • tensorrt_llm/_torch/pyexecutor/py_executor.py
  • tests/unittest/_torch/sampler/test_beam_search.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tensorrt_llm/_torch/pyexecutor/executor_request_queue.py
  • tests/unittest/_torch/executor/test_executor_request_queue.py
  • tensorrt_llm/_torch/pyexecutor/py_executor.py
  • tests/unittest/_torch/sampler/test_beam_search.py
🧠 Learnings (1)
📓 Common learnings
Learnt from: dcampora
PR: NVIDIA/TensorRT-LLM#6867
File: tensorrt_llm/_torch/pyexecutor/sampler.py:67-72
Timestamp: 2025-08-13T16:20:37.987Z
Learning: In TensorRT-LLM sampler code, performance is prioritized over additional validation checks. The beam_width helper method intentionally returns the first request's beam_width without validating consistency across all requests to avoid performance overhead from iterating through the entire batch.
🧬 Code graph analysis (4)
tensorrt_llm/_torch/pyexecutor/executor_request_queue.py (2)
tensorrt_llm/executor/executor.py (1)
  • is_shutdown (289-290)
tensorrt_llm/_torch/pyexecutor/llm_request.py (2)
  • append (79-98)
  • append (125-142)
tests/unittest/_torch/executor/test_executor_request_queue.py (1)
tensorrt_llm/_torch/pyexecutor/executor_request_queue.py (1)
  • _handle_special_queue_items (453-467)
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)
tensorrt_llm/_torch/pyexecutor/sampler.py (1)
  • beam_width (78-81)
tests/unittest/_torch/sampler/test_beam_search.py (1)
tensorrt_llm/executor/utils.py (1)
  • RequestError (76-77)
🪛 Ruff (0.14.0)
tensorrt_llm/_torch/pyexecutor/py_executor.py

1320-1323: Avoid specifying long messages outside the exception class

(TRY003)

tests/unittest/_torch/sampler/test_beam_search.py

258-258: Pattern passed to match= contains metacharacters but is neither escaped nor raw

(RUF043)


280-280: Pattern passed to match= contains metacharacters but is neither escaped nor raw

(RUF043)


300-300: Pattern passed to match= contains metacharacters but is neither escaped nor raw

(RUF043)

🔇 Additional comments (8)
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)

1316-1324: Beam-width guard is correct; keep it per-request

This enforces uniform beam width before scheduling and complements Sampler.beam_width’s “first-request only” behavior. Good placement. Based on learnings
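To see why the per-request guard matters, consider a sketch of the "first-request only" helper described in the learnings note (names and shapes are illustrative, not the repo's API):

```python
from dataclasses import dataclass


@dataclass
class Req:
    beam_width: int


def batch_beam_width(requests):
    # Mirrors the performance-oriented helper: it trusts the batch to be
    # uniform and reads only the first request, with no consistency check.
    return requests[0].beam_width


def validate_batch(requests, max_beam_width):
    # The upstream per-request guard establishes the uniformity invariant
    # that batch_beam_width() relies on.
    for r in requests:
        if r.beam_width != max_beam_width:
            raise ValueError(f"beam width {r.beam_width} != {max_beam_width}")
```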

tensorrt_llm/_torch/pyexecutor/executor_request_queue.py (1)

453-468: Special-item filtering looks good

Shutdown/cancel handling is clear and minimal; filtering normal items is correct.

tests/unittest/_torch/sampler/test_beam_search.py (5)

5-9: Imports updated appropriately

Adding getSMVersion and RequestError aligns tests with hardware gating and API error type.


20-23: Hardware helper LGTM

is_l40s() is simple and localized.


27-61: Hardware-aware expectations are fine for now

Given known nvbugs, the branching expected_outputs is acceptable.


63-69: Centralized FIXED_PARAMS is good

Reduces duplication across tests.


1-232: No action required; @force_ampere correctly enables tests on L40S

The @force_ampere decorator skips tests only when SM < 80 or SM > 89. Since L40S is SM 89 (within the Ampere range), the decorator does not skip this suite on L40S nodes. The test properly handles L40S-specific behavior via is_l40s() and the expected_outputs fixture already provides hardware-specific assertions for both L40S and other Ampere variants.

tests/integration/test_lists/test-db/l0_l40s.yml (1)

17-17: Clarify test duplication across configs

The test file exists and is correctly referenced, but appears in both l0_l40s.yml (line 17) and l0_a30.yml (line 23). Verify this duplication is intentional (e.g., testing across different hardware SKUs) and not an accidental duplicate that would cause redundant CI runs.

@ixlmar ixlmar enabled auto-merge (squash) October 20, 2025 15:52
@ixlmar ixlmar disabled auto-merge October 20, 2025 15:53
@ixlmar ixlmar enabled auto-merge (squash) October 20, 2025 15:53
tensorrt-cicd:

PR_Github #21906 [ run ] completed with state SUCCESS. Commit: 0b006f5
/LLM/release-1.1/L0_MergeRequest_PR pipeline #199 completed with status: 'FAILURE'

@ixlmar
Copy link
Collaborator Author

ixlmar commented Oct 20, 2025

/bot run

tensorrt-cicd:

PR_Github #21937 [ run ] triggered by Bot. Commit: 0b006f5

tensorrt-cicd:

PR_Github #21937 [ run ] completed with state SUCCESS. Commit: 0b006f5
/LLM/release-1.1/L0_MergeRequest_PR pipeline #203 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

@ixlmar ixlmar merged commit f256eb9 into NVIDIA:release/1.1 Oct 21, 2025
7 checks passed
@ixlmar ixlmar deleted the fix/beam-search-request-validation branch October 21, 2025 08:50
ixlmar added a commit to ixlmar/TensorRT-LLM that referenced this pull request Oct 21, 2025
Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
ixlmar added a commit to ixlmar/TensorRT-LLM that referenced this pull request Nov 17, 2025
Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
ixlmar added a commit to ixlmar/TensorRT-LLM that referenced this pull request Nov 17, 2025
Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
ixlmar added a commit to ixlmar/TensorRT-LLM that referenced this pull request Nov 18, 2025
Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
ixlmar added a commit to ixlmar/TensorRT-LLM that referenced this pull request Nov 18, 2025
Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
ixlmar added a commit to ixlmar/TensorRT-LLM that referenced this pull request Nov 18, 2025
Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
ixlmar added a commit that referenced this pull request Nov 21, 2025
Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>