
Conversation

@byshiue (Collaborator) commented Aug 7, 2025

When I run the forward pass with 5 identical requests (input length 4096, output length 10), the original iteration logs look like this:

[08/07/2025-02:28:51] [TRT-LLM] [I] iter = 5, global_rank = 0, rank = 0, currank_total_requests = 0/0, elapsed_time = 0.0823206901550293s, timestamp = 2025-08-07 02:28:51, num_scheduled_requests: 1, states = {'num_ctx_requests': 1, 'num_ctx_tokens': 4096, 'num_generation_tokens': 0}
[08/07/2025-02:28:52] [TRT-LLM] [I] iter = 6, global_rank = 0, rank = 0, currank_total_requests = 0/0, elapsed_time = 0.09875988960266113s, timestamp = 2025-08-07 02:28:52, num_scheduled_requests: 2, states = {'num_ctx_requests': 1, 'num_ctx_tokens': 4096, 'num_generation_tokens': 1}
[08/07/2025-02:28:52] [TRT-LLM] [I] iter = 7, global_rank = 0, rank = 0, currank_total_requests = 0/0, elapsed_time = 0.0669400691986084s, timestamp = 2025-08-07 02:28:52, num_scheduled_requests: 3, states = {'num_ctx_requests': 1, 'num_ctx_tokens': 4096, 'num_generation_tokens': 2}
[08/07/2025-02:28:52] [TRT-LLM] [I] iter = 8, global_rank = 0, rank = 0, currank_total_requests = 0/0, elapsed_time = 0.09499597549438477s, timestamp = 2025-08-07 02:28:52, num_scheduled_requests: 4, states = {'num_ctx_requests': 1, 'num_ctx_tokens': 4096, 'num_generation_tokens': 3}
[08/07/2025-02:28:52] [TRT-LLM] [I] iter = 9, global_rank = 0, rank = 0, currank_total_requests = 0/0, elapsed_time = 0.09572505950927734s, timestamp = 2025-08-07 02:28:52, num_scheduled_requests: 5, states = {'num_ctx_requests': 1, 'num_ctx_tokens': 4096, 'num_generation_tokens': 4}

I observe that the last four requests cannot reuse the KV cache of the first request. So, I update the KV cache of the context request after the forward pass, and the new iteration logs become:

[08/07/2025-02:42:29] [TRT-LLM] [I] iter = 4, global_rank = 0, rank = 0, currank_total_requests = 0/0, elapsed_time = 0.3997073173522949s, timestamp = 2025-08-07 02:42:29, num_scheduled_requests: 1, states = {'num_ctx_requests': 1, 'num_ctx_tokens': 4096, 'num_generation_tokens': 0}
[08/07/2025-02:42:29] [TRT-LLM] [I] iter = 5, global_rank = 0, rank = 0, currank_total_requests = 0/0, elapsed_time = 0.048545122146606445s, timestamp = 2025-08-07 02:42:29, num_scheduled_requests: 2, states = {'num_ctx_requests': 1, 'num_ctx_tokens': 1, 'num_generation_tokens': 1}
[08/07/2025-02:42:29] [TRT-LLM] [I] iter = 6, global_rank = 0, rank = 0, currank_total_requests = 0/0, elapsed_time = 0.0420839786529541s, timestamp = 2025-08-07 02:42:29, num_scheduled_requests: 3, states = {'num_ctx_requests': 1, 'num_ctx_tokens': 1, 'num_generation_tokens': 2}
[08/07/2025-02:42:29] [TRT-LLM] [I] iter = 7, global_rank = 0, rank = 0, currank_total_requests = 0/0, elapsed_time = 0.041689395904541016s, timestamp = 2025-08-07 02:42:29, num_scheduled_requests: 4, states = {'num_ctx_requests': 1, 'num_ctx_tokens': 1, 'num_generation_tokens': 3}
[08/07/2025-02:42:29] [TRT-LLM] [I] iter = 8, global_rank = 0, rank = 0, currank_total_requests = 0/0, elapsed_time = 0.04293417930603027s, timestamp = 2025-08-07 02:42:29, num_scheduled_requests: 5, states = {'num_ctx_requests': 1, 'num_ctx_tokens': 1, 'num_generation_tokens': 4}

which shows the later requests reusing the KV cache of the first request, as expected (num_ctx_tokens drops from 4096 to 1).
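
To make the fix concrete, here is a minimal sketch of the Python-side change (update_resources, scheduled_batch.context_requests, and store_context_blocks come from the walkthrough below; the generation-request helper is a hypothetical stand-in, not the repository's exact code):

    def update_resources(self, scheduled_batch):
        # Existing behavior: update KV cache state for generation requests.
        for generation_request in scheduled_batch.generation_requests:
            self._update_generation_request(generation_request)  # hypothetical helper

        # New behavior: also store the context blocks of each context request,
        # so that a later identical prompt can reuse them instead of
        # recomputing the full context (num_ctx_tokens drops from 4096 to 1
        # in the logs above).
        for context_request in scheduled_batch.context_requests:
            self.store_context_blocks(context_request)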

Summary by CodeRabbit

  • Bug Fixes
    • Improved handling to prevent accessing non-existent sequences when storing context blocks, enhancing stability.
  • New Features
    • Extended resource updates to process context requests, ensuring context blocks are stored for each relevant request.

@byshiue byshiue requested review from a team as code owners August 7, 2025 02:43
@coderabbitai bot (Contributor) commented Aug 7, 2025

📝 Walkthrough

The changes update the logic for storing context blocks in the C++ KVCacheManager to avoid accessing non-existent sequences. In the Python resource manager, the update_resources method now iterates over context requests and calls the context block storage method for each, after handling generation requests.

Changes

  • C++ KVCacheManager sequence check (cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp): storeContextBlocks now checks that a sequence exists in mSequences before accessing it, preventing invalid access.
  • Python resource manager context block storage (tensorrt_llm/_torch/pyexecutor/resource_manager.py): update_resources now iterates over scheduled_batch.context_requests and calls store_context_blocks for each request, after handling generation requests.
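
The C++ change amounts to "look up the sequence before touching it". A Python-flavored sketch of the same guard, for illustration only (the real fix is in C++; the dict-based sequences map and the store_blocks_for callback here are hypothetical):

    def store_context_blocks_safely(request_id, sequences, store_blocks_for):
        # Mirror of the C++ guard: storeContextBlocks may be invoked for a
        # request whose sequence was never added or was already removed, so
        # check membership (the mSequences lookup) before dereferencing.
        sequence = sequences.get(request_id)
        if sequence is not None:
            store_blocks_for(sequence)
        # If the sequence does not exist, silently skip instead of crashing.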

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Suggested reviewers

  • mikeiovine
  • pcastonguay


📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ca9ad89 and d27335a.

📒 Files selected for processing (2)
  • cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp (1 hunks)
  • tensorrt_llm/_torch/pyexecutor/resource_manager.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • tensorrt_llm/_torch/pyexecutor/resource_manager.py
  • cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp


@byshiue (Collaborator, Author) commented Aug 7, 2025

/bot run

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
cpp/tensorrt_llm/pybind/batch_manager/kvCacheManager.cpp (1)

105-110: Consider adjusting log level for production use.

The implementation correctly follows the established pattern for pure virtual overrides. However, the INFO log level might be too verbose for production environments, especially since this method could be called frequently during KV cache operations.

Consider changing to DEBUG level:

-        TLLM_LOG_INFO("%s start", __PRETTY_FUNCTION__);
+        TLLM_LOG_DEBUG("%s start", __PRETTY_FUNCTION__);
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 157ea77 and 4c5d112.

📒 Files selected for processing (3)
  • cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp (1 hunks)
  • cpp/tensorrt_llm/pybind/batch_manager/kvCacheManager.cpp (2 hunks)
  • tensorrt_llm/_torch/pyexecutor/resource_manager.py (1 hunks)
🔇 Additional comments (2)
tensorrt_llm/_torch/pyexecutor/resource_manager.py (1)

453-456: LGTM! Clean implementation following established patterns.

The addition of the context request block storage loop is well-implemented:

  • Follows the same pattern as the generation request handling above
  • Clear, descriptive comment explaining the purpose
  • Proper integration with the existing resource update workflow

This change aligns perfectly with the PR objective of improving KV cache reuse for repeated context requests.

cpp/tensorrt_llm/pybind/batch_manager/kvCacheManager.cpp (1)

351-351: LGTM! Python binding correctly exposes the C++ method.

The binding follows standard pybind11 patterns and enables the Python-side KVCacheManager to call the C++ storeNewBlock implementation. The method name appropriately uses snake_case convention for Python.

@tensorrt-cicd (Collaborator)

PR_Github #14363 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #14363 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #10855 completed with status: 'FAILURE'

@byshiue (Collaborator, Author) commented Aug 7, 2025

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #14410 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #14410 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #10892 completed with status: 'FAILURE'

@byshiue (Collaborator, Author) commented Aug 7, 2025

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #14419 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #14419 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #10899 completed with status: 'FAILURE'

@Funatiq Funatiq requested a review from thorjohnsen August 7, 2025 10:55
Signed-off-by: bhsueh <11360707+byshiue@users.noreply.github.com>
@byshiue (Collaborator, Author) commented Aug 8, 2025

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #14583 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #14583 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11016 completed with status: 'FAILURE'

@byshiue (Collaborator, Author) commented Aug 9, 2025

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #14661 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #14661 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11065 completed with status: 'SUCCESS'

@byshiue byshiue changed the title [TRTLLM-5532][KV Cache][feat] store the block of context request into kv cache [TRTLLM-5532][feat] store the block of context request into kv cache Aug 10, 2025
@byshiue byshiue merged commit 83dbc6c into NVIDIA:main Aug 11, 2025
4 of 6 checks passed
QiJune added a commit to QiJune/TensorRT-LLM that referenced this pull request Aug 12, 2025
@coderabbitai coderabbitai bot mentioned this pull request Aug 12, 2025
MartinMarciniszyn added a commit to MartinMarciniszyn/TensorRT-LLM that referenced this pull request Aug 12, 2025
@Nekofish-L (Contributor) commented:

During our stress testing, we identified a critical issue that this PR introduces under high-concurrency scenarios. The bug manifests specifically when capacity_scheduler_policy is set to MAX_UTILIZATION. Here is the extra_llm_api_options configuration:

cuda_graph_config:
  enable_padding: true
  batch_sizes: [1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 14, 16, 20, 24, 28, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 128, 160, 192, 224, 256]

print_iter_log: true
return_perf_metrics: true
enable_chunked_prefill: true
# enable_attention_dp: true

kv_cache_config:
  enable_block_reuse: true
  dtype: auto

scheduler_config:
  capacity_scheduler_policy: MAX_UTILIZATION

We have been able to consistently reproduce the problem on our Qwen3-32B-FP8 (H20, TP1) setup. The error occurs reliably during an un-throttled concurrent load test with a batch size exceeding 128.

The stress test consistently results in an engine crash; screenshots of the error were attached for both torch_sampler and trtllm_sampler (images not preserved here).
