
Conversation

@eopXD (Collaborator) commented Oct 9, 2025

Description

For the VSWA scheme, we do not want `kv_cache_config.max_tokens` to control and cap the maximum memory of a block pool, because block pool sizes are not identical across different window sizes. This PR omits the effect of `kv_cache_config.max_tokens` in `kvCacheManager.cpp`, so that block pool sizing relies on the per-window-size share ratio and the total GPU memory analyzed and fed to the KV cache manager.
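To make the intended sizing behavior concrete, here is a minimal illustrative sketch of splitting a KV cache memory budget by per-window share ratio; the function and variable names are hypothetical, not taken from `kvCacheManager.cpp`:

```python
from typing import Dict


def split_pool_memory(total_pool_bytes: int,
                      window_share_ratio: Dict[int, float]) -> Dict[int, int]:
    """Split the analyzed KV-cache memory budget across window sizes.

    window_share_ratio maps each attention window size to its fraction
    of the total pool memory (fractions are assumed to sum to 1.0).
    """
    return {
        window: int(total_pool_bytes * share)
        for window, share in window_share_ratio.items()
    }


# Hypothetical VSWA model with a 512-token sliding window and a
# 4096-token full window, given an 8 GiB KV-cache budget.
pools = split_pool_memory(8 * 1024**3, {512: 0.2, 4096: 0.8})
# -> {512: 1717986918, 4096: 6871947673}; no single max_tokens cap applies.
```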

Test Coverage

This change only skips the cap for the VSWA scheme; no extra test coverage was added.

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
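For example, the invocation used repeatedly later in this thread disables fail-fast so that all stages run to completion even when one fails:

/bot run --disable-fail-fast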

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

Summary by CodeRabbit

  • New Features

    • No user-facing changes.
  • Refactor

    • Simplified KV cache estimation flow: preparation now only determines whether estimation should run, removing obsolete estimation logic for a clearer, more predictable behavior.
  • Documentation

    • Updated inline documentation to align with the simplified KV cache estimation process.
  • Impact

    • No functional changes expected for end-users; behavior remains consistent while reducing internal complexity.

@eopXD eopXD requested a review from a team as a code owner October 9, 2025 07:31
@eopXD eopXD requested a review from achartier October 9, 2025 07:31
@eopXD (Collaborator, Author) commented Oct 9, 2025

/bot run

@coderabbitai bot (Contributor) commented Oct 9, 2025

📝 Walkthrough

Refactors KvCacheCreator by removing the private token estimation method and simplifying try_prepare_estimation to only return a boolean based on cp_config, without mutating kv_cache_config or performing estimation. Docstrings updated accordingly.

Changes

Cohort / File(s): KV cache estimation refactor — tensorrt_llm/_torch/pyexecutor/_util.py
Summary of Changes: Removed the private method _get_token_num_for_estimation(self) -> int. Simplified try_prepare_estimation(self) -> bool to only return whether estimation should occur, based on the presence of cp_type in mapping.cp_config, and removed the side effects that updated kv_cache_config. Updated the docstring to reflect the new behavior.
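Based on this summary, the simplified method plausibly reduces to something like the following sketch (the `self._mapping` attribute name and docstring wording are assumptions for illustration, not the actual source):

```python
def try_prepare_estimation(self) -> bool:
    """Return True if KV cache size estimation should run.

    Per the walkthrough, the decision is based purely on whether
    ``cp_type`` is absent from ``mapping.cp_config``; no
    ``kv_cache_config`` fields are mutated here anymore.
    """
    # NOTE: the attribute name `self._mapping` is assumed for illustration.
    return "cp_type" not in self._mapping.cp_config
```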

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Caller
  participant KvCacheCreator
  Note over Caller,KvCacheCreator: Old flow (before change)
  Caller->>KvCacheCreator: try_prepare_estimation()
  activate KvCacheCreator
  KvCacheCreator->>KvCacheCreator: check mapping.cp_config
  alt estimation needed
    KvCacheCreator->>KvCacheCreator: _get_token_num_for_estimation()
    KvCacheCreator->>KvCacheCreator: mutate kv_cache_config (estimation)
    KvCacheCreator-->>Caller: true
  else not needed
    KvCacheCreator-->>Caller: false
  end
  deactivate KvCacheCreator
sequenceDiagram
  autonumber
  participant Caller
  participant KvCacheCreator
  Note over Caller,KvCacheCreator: New flow (after change)
  Caller->>KvCacheCreator: try_prepare_estimation()
  activate KvCacheCreator
  KvCacheCreator->>KvCacheCreator: return bool based on absence of cp_type in mapping.cp_config
  KvCacheCreator-->>Caller: true/false
  deactivate KvCacheCreator
  Note over Caller: No kv_cache_config mutations or token estimation performed

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)

  • Docstring Coverage — ✅ Passed: No functions found in the changes; the docstring coverage check was skipped.

  • Title Check — ✅ Passed: The pull request title "[None][fix] Avoid overwrite of kv_cache_config.max_tokens for VSWA scheme for the KVCacheManager" accurately describes the main change in the changeset. The code modifications remove the private method _get_token_num_for_estimation and eliminate the logic in try_prepare_estimation that was updating kv_cache_config, which directly aligns with the title's intent. The title is specific and concise, identifying both what is being fixed (the overwrite behavior) and the relevant context (VSWA scheme and KVCacheManager), making it informative for someone scanning the project history.

  • Description Check — ✅ Passed: The pull request description follows the required template, with a Description section explaining the VSWA context and the rationale for removing the kv_cache_config.max_tokens overwrite, a Test Coverage section acknowledging that no additional coverage was added, and a completed PR Checklist. However, there appears to be a discrepancy between the description (which focuses on kvCacheManager.cpp and the VSWA scheme) and the raw file summary (which shows modifications to tensorrt_llm/_torch/pyexecutor/_util.py), suggesting the description may not capture the full scope of the change.


@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 27677a3 and 74dd0f3.

📒 Files selected for processing (1)
  • tensorrt_llm/_torch/pyexecutor/_util.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tensorrt_llm/_torch/pyexecutor/_util.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • tensorrt_llm/_torch/pyexecutor/_util.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tensorrt_llm/_torch/pyexecutor/_util.py
🧬 Code graph analysis (1)
tensorrt_llm/_torch/pyexecutor/_util.py (1)
tensorrt_llm/_torch/distributed/communicator.py (1)
  • cp_config (107-108)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (1)
tensorrt_llm/_torch/pyexecutor/_util.py (1)

191-196: LGTM! Simplification aligns with PR objective.

The refactoring correctly simplifies try_prepare_estimation to only return a boolean indicator. The original max_tokens value is properly preserved via self._max_kv_tokens_in (line 69) and used in configure_kv_cache_capacity (lines 294-304), which fixes the bug where the configured value was previously overwritten.
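For context, the preserve-then-apply pattern the reviewer describes might look roughly like this sketch; everything other than the quoted names `_max_kv_tokens_in` and `configure_kv_cache_capacity` is assumed and may differ from the actual code:

```python
class KvCacheCreator:
    def __init__(self, kv_cache_config, mapping):
        self._kv_cache_config = kv_cache_config
        self._mapping = mapping
        # Capture the user-provided max_tokens up front so that the
        # estimation flow can no longer overwrite it.
        self._max_kv_tokens_in = kv_cache_config.max_tokens

    def configure_kv_cache_capacity(self, estimated_max_tokens: int) -> None:
        # Sketch: honor an explicit user setting when present, otherwise
        # fall back to the estimate (the real logic may differ).
        if self._max_kv_tokens_in is not None:
            self._kv_cache_config.max_tokens = min(
                self._max_kv_tokens_in, estimated_max_tokens)
        else:
            self._kv_cache_config.max_tokens = estimated_max_tokens
```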

@eopXD eopXD changed the title [None][fix] Remove overwrite of kv_cache_config.max_tokens [None][fix] Remove overwrite of kv_cache_config.max_tokens to allow the KVCacheManager to receive it Oct 9, 2025
@tensorrt-cicd (Collaborator)

PR_Github #20865 [ run ] triggered by Bot

@eopXD eopXD force-pushed the fix-max-token-configuration branch from 74dd0f3 to 76c92ed Compare October 9, 2025 07:46
@tensorrt-cicd (Collaborator)

PR_Github #20865 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #15782 completed with status: 'FAILURE'

@eopXD eopXD force-pushed the fix-max-token-configuration branch from 76c92ed to 1f5a7b1 Compare October 10, 2025 07:11
@eopXD (Collaborator, Author) commented Oct 10, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #21000 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #21000 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15878 completed with status: 'FAILURE'

@eopXD eopXD force-pushed the fix-max-token-configuration branch from 1f5a7b1 to 67b03f3 Compare October 14, 2025 00:38
@eopXD (Collaborator, Author) commented Oct 14, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #21265 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #21265 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #16055 completed with status: 'FAILURE'

@eopXD eopXD force-pushed the fix-max-token-configuration branch from 67b03f3 to 1463820 Compare October 14, 2025 06:19
@eopXD (Collaborator, Author) commented Oct 14, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #21318 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #21318 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #16090 completed with status: 'FAILURE'

@eopXD eopXD changed the title [None][fix] Remove overwrite of kv_cache_config.max_tokens to allow the KVCacheManager to receive it [None][fix] Avoid overwrite of kv_cache_config.max_tokens when specified to allow the KVCacheManager to receive it Oct 15, 2025
@eopXD eopXD force-pushed the fix-max-token-configuration branch from 1463820 to 2347e46 Compare October 15, 2025 02:07
@eopXD (Collaborator, Author) commented Oct 15, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #21404 [ run ] triggered by Bot

@eopXD eopXD force-pushed the fix-max-token-configuration branch from 2347e46 to dd9e4de Compare October 15, 2025 02:25
@tensorrt-cicd (Collaborator)

PR_Github #21410 [ run ] completed with state ABORTED
LLM/main/L0_MergeRequest_PR #16169 (Blue Ocean) completed with status: ABORTED

@eopXD eopXD force-pushed the fix-max-token-configuration branch from 6272229 to 11f9bae Compare October 15, 2025 02:49
@eopXD (Collaborator, Author) commented Oct 15, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #21438 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #21415 [ run ] completed with state ABORTED
LLM/main/L0_MergeRequest_PR #16173 (Blue Ocean) completed with status: ABORTED

@tensorrt-cicd (Collaborator)

PR_Github #21438 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #16190 completed with status: 'FAILURE'

@eopXD eopXD force-pushed the fix-max-token-configuration branch 2 times, most recently from a109570 to ea9adc1 Compare October 16, 2025 01:34
@eopXD (Collaborator, Author) commented Oct 16, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #21510 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #21510 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #16239 completed with status: 'FAILURE'

@eopXD eopXD force-pushed the fix-max-token-configuration branch from ea9adc1 to be4f673 Compare October 17, 2025 07:01
…A scheme

Also a small fix in debug log logic.

Signed-off-by: eopXD <yuehtingc@nvidia.com>
@eopXD eopXD force-pushed the fix-max-token-configuration branch from be4f673 to 1731251 Compare October 17, 2025 07:02
@eopXD (Collaborator, Author) commented Oct 17, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #21676 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #21676 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #16334 completed with status: 'SUCCESS'

@eopXD eopXD requested a review from achartier October 20, 2025 01:09
@eopXD eopXD changed the title [None][fix] Avoid overwrite of kv_cache_config.max_tokens when specified to allow the KVCacheManager to receive it [None][fix] Avoid overwrite of kv_cache_config.max_tokens for VSWA scheme for the KVCacheManager Oct 20, 2025
@eopXD eopXD changed the title [None][fix] Avoid overwrite of kv_cache_config.max_tokens for VSWA scheme for the KVCacheManager [None][fix] Avoid overwrite of kv_cache_config.max_tokens for VSWA scheme for the KVCacheManager Oct 20, 2025
@eopXD eopXD enabled auto-merge (squash) October 20, 2025 01:31
@eopXD eopXD requested a review from jaedeokk October 20, 2025 01:31
@jaedeok-nvidia (Collaborator) left a comment


Thanks @eopXD for the updates. The VSWA feature was not well aligned with the other logic of the KV cache manager (or its cache calculations) because of the multiple cache pools. Avoiding the max_tokens overwrite makes the Python logic concrete, without a WAR. Thanks!

@eopXD eopXD merged commit 128a351 into NVIDIA:main Oct 20, 2025
11 checks passed
govind-ramnarayan pushed a commit to nv-auto-deploy/TensorRT-LLM that referenced this pull request Oct 21, 2025
…scheme for the KVCacheManager (NVIDIA#8219)
yufeiwu-nv pushed a commit to yufeiwu-nv/TensorRT-LLM that referenced this pull request Oct 24, 2025
…scheme for the KVCacheManager (NVIDIA#8219)
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Nov 1, 2025
…scheme for the KVCacheManager (NVIDIA#8219)

dominicshanshan pushed the same commit three more times on Nov 3, 2025
…scheme for the KVCacheManager (NVIDIA#8219)