[TRTLLM-8214][feat] Support Qwen3 tool parser #8216
Conversation
📝 Walkthrough

Adds a pluggable tool parsing capability. Introduces a tool parser framework, a Qwen3 parser, and utilities. Wires a new …

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant U as User (CLI)
    participant S as Serve CLI
    participant L as LLM Args
    participant O as OpenAI Server
    participant P as Postprocessor
    participant F as ToolParserFactory
    participant T as Tool Parser
    U->>S: serve ... --tool_parser qwen3
    S->>L: get_llm_args(..., tool_parser)
    L-->>S: llm_args{..., tool_parser}
    S->>O: start server with llm.args.tool_parser
    O->>P: build ChatPostprocArgs(tool_parser)
    P->>F: create_tool_parser(tool_parser)
    F-->>P: parser instance (e.g., Qwen3ToolParser)
    O->>P: post-process model output (stream/non-stream)
    P->>T: detect_and_parse / parse_streaming_increment
    T-->>P: StreamingParseResult(text, tool_calls[])
    P-->>O: messages with tool_calls (IDs via make_tool_call_id)
    O-->>U: OpenAI-compatible response/stream
```
```mermaid
sequenceDiagram
    autonumber
    participant M as Model Stream
    participant P as Postprocessor
    participant T as Tool Parser
    participant ID as make_tool_call_id
    loop token chunks
        M-->>P: new_text
        P->>T: parse_streaming_increment(new_text, tools)
        T-->>P: {normal_text, calls[]}
        alt first chunk for a call
            P->>ID: generate ID (type=random|kimi_k2)
            ID-->>P: tool_call_id
            P-->>M: Delta with tool name + id
        else subsequent arg chunks
            P-->>M: Delta with arguments (id omitted)
        end
    end
```
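To make the first-chunk/later-chunk split concrete, here is a minimal sketch of the delta construction the diagram describes. The imports follow the module paths listed in the code graph later in this review, but the helper's name and body (and the assumption that `make_tool_call_id` takes no arguments) are illustrative, not the PR's actual code:

```python
from tensorrt_llm.serve.chat_utils import make_tool_call_id
from tensorrt_llm.serve.openai_protocol import DeltaFunctionCall, DeltaToolCall


def to_tool_call_delta(call_item, is_first_chunk: bool) -> DeltaToolCall:
    # Only the first chunk of a call carries an id and the function name;
    # subsequent chunks stream argument fragments with the id omitted.
    tool_call_id = make_tool_call_id() if is_first_chunk else None
    return DeltaToolCall(
        id=tool_call_id,
        index=call_item.tool_index,
        function=DeltaFunctionCall(
            name=call_item.name if is_first_chunk else None,
            arguments=call_item.parameters,
        ),
    )
```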
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes

Pre-merge checks and finishing touches
❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 4
🧹 Nitpick comments (5)
requirements.txt (1)
78-78: Pin `partial_json_parser` via constraints.

Please add `partial_json_parser` to `constraints.txt` (e.g., `partial_json_parser==0.2.1.1.post6`) so installs stay reproducible and immune to upstream releases drifting under us. Based on learnings.

tensorrt_llm/serve/tool_parser/tool_parser_factory.py (1)
8-10: Annotate mutable class attribute with `ClassVar`.

The `parsers` dictionary is a class-level attribute that should be annotated with `typing.ClassVar` to clearly indicate it's not an instance attribute.

Apply this diff:

```diff
-from typing import Type
+from typing import ClassVar, Type

 from .base_tool_parser import BaseToolParser
 from .qwen3_tool_parser import Qwen3ToolParser


 class ToolParserFactory:
-    parsers: dict[str, Type[BaseToolParser]] = {
+    parsers: ClassVar[dict[str, Type[BaseToolParser]]] = {
         "qwen3": Qwen3ToolParser,
     }
```

tensorrt_llm/serve/tool_parser/core_types.py (1)
29-34: Consider using a comment-style docstring for type aliases.

The triple-quoted string after the type alias is not a standard Python docstring location. For type aliases, documentation is typically provided as a comment above the definition.

Apply this diff:

```diff
-"""
-Helper alias of function
-Usually it is a function that takes a name string and returns a StructureInfo object,
-which can be used to construct a structural_tag object
-"""
-_GetInfoFunc = Callable[[str], StructureInfo]
+# Helper alias for a function that takes a name string and returns a StructureInfo object,
+# which can be used to construct a structural_tag object
+_GetInfoFunc = Callable[[str], StructureInfo]
```

tensorrt_llm/serve/postprocess_handlers.py (1)
218-246: Emit type="function" on first tool_call chunk.

Add `type` only when `id` is present (first chunk) to align with OpenAI stream deltas.

```diff
-                tool_calls.append(
-                    DeltaToolCall(
-                        id=tool_call_id,
-                        index=call_item.tool_index,
-                        function=DeltaFunctionCall(
-                            name=function_name,
-                            arguments=call_item.parameters,
-                        ),
-                    ))
+                tool_calls.append(
+                    DeltaToolCall(
+                        id=tool_call_id,
+                        type="function" if tool_call_id else None,
+                        index=call_item.tool_index,
+                        function=DeltaFunctionCall(
+                            name=function_name,
+                            arguments=call_item.parameters,
+                        ),
+                    ))
```

tensorrt_llm/serve/tool_parser/base_tool_parser.py (1)
297-299: Avoid broad `except Exception`.

Limit to known parse errors to prevent swallowing bugs; log and re-raise otherwise.

```diff
-        except Exception as e:
+        except (MalformedJSON, json.JSONDecodeError, TypeError, ValueError, KeyError) as e:
             logger.error(f"Error in parse_streaming_increment: {e}")
             return StreamingParseResult()
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
- requirements.txt (1 hunks)
- tensorrt_llm/commands/serve.py (6 hunks)
- tensorrt_llm/llmapi/llm_args.py (1 hunks)
- tensorrt_llm/serve/chat_utils.py (2 hunks)
- tensorrt_llm/serve/openai_server.py (1 hunks)
- tensorrt_llm/serve/postprocess_handlers.py (6 hunks)
- tensorrt_llm/serve/tool_parser/__init__.py (1 hunks)
- tensorrt_llm/serve/tool_parser/base_tool_parser.py (1 hunks)
- tensorrt_llm/serve/tool_parser/core_types.py (1 hunks)
- tensorrt_llm/serve/tool_parser/qwen3_tool_parser.py (1 hunks)
- tensorrt_llm/serve/tool_parser/tool_parser_factory.py (1 hunks)
- tensorrt_llm/serve/tool_parser/utils.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Use only spaces, no tabs; indent with 4 spaces.
Files:
- tensorrt_llm/serve/chat_utils.py
- tensorrt_llm/serve/openai_server.py
- tensorrt_llm/llmapi/llm_args.py
- tensorrt_llm/serve/tool_parser/__init__.py
- tensorrt_llm/serve/tool_parser/utils.py
- tensorrt_llm/serve/tool_parser/tool_parser_factory.py
- tensorrt_llm/serve/tool_parser/core_types.py
- tensorrt_llm/serve/postprocess_handlers.py
- tensorrt_llm/serve/tool_parser/qwen3_tool_parser.py
- tensorrt_llm/commands/serve.py
- tensorrt_llm/serve/tool_parser/base_tool_parser.py
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.
Files:
- tensorrt_llm/serve/chat_utils.py
- tensorrt_llm/serve/openai_server.py
- tensorrt_llm/llmapi/llm_args.py
- tensorrt_llm/serve/tool_parser/__init__.py
- tensorrt_llm/serve/tool_parser/utils.py
- tensorrt_llm/serve/tool_parser/tool_parser_factory.py
- tensorrt_llm/serve/tool_parser/core_types.py
- tensorrt_llm/serve/postprocess_handlers.py
- tensorrt_llm/serve/tool_parser/qwen3_tool_parser.py
- tensorrt_llm/commands/serve.py
- tensorrt_llm/serve/tool_parser/base_tool_parser.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).
Files:
- tensorrt_llm/serve/chat_utils.py
- tensorrt_llm/serve/openai_server.py
- tensorrt_llm/llmapi/llm_args.py
- tensorrt_llm/serve/tool_parser/__init__.py
- tensorrt_llm/serve/tool_parser/utils.py
- tensorrt_llm/serve/tool_parser/tool_parser_factory.py
- tensorrt_llm/serve/tool_parser/core_types.py
- tensorrt_llm/serve/postprocess_handlers.py
- tensorrt_llm/serve/tool_parser/qwen3_tool_parser.py
- tensorrt_llm/commands/serve.py
- tensorrt_llm/serve/tool_parser/base_tool_parser.py
🧬 Code graph analysis (7)

tensorrt_llm/llmapi/llm_args.py (1)
- tensorrt_llm/builder.py (1): default (50-58)

tensorrt_llm/serve/tool_parser/__init__.py (1)
- tensorrt_llm/serve/tool_parser/tool_parser_factory.py (1): ToolParserFactory (7-21)

tensorrt_llm/serve/tool_parser/tool_parser_factory.py (2)
- tensorrt_llm/serve/tool_parser/base_tool_parser.py (1): BaseToolParser (16-351)
- tensorrt_llm/serve/tool_parser/qwen3_tool_parser.py (1): Qwen3ToolParser (13-114)

tensorrt_llm/serve/postprocess_handlers.py (6)
- tensorrt_llm/serve/chat_utils.py (1): make_tool_call_id (209-214)
- tensorrt_llm/serve/openai_protocol.py (9): CompletionStreamResponse (169-175), DeltaFunctionCall (341-343), DeltaMessage (445-451), DeltaToolCall (353-357), FunctionCall (336-338), ToolCall (346-350), UsageInfo (71-74), to_disaggregated_params (923-933), ChatMessage (360-365)
- tensorrt_llm/serve/tool_parser/base_tool_parser.py (3): BaseToolParser (16-351), detect_and_parse (89-96), parse_streaming_increment (111-299)
- tensorrt_llm/serve/tool_parser/core_types.py (1): ToolCallItem (7-12)
- tensorrt_llm/serve/tool_parser/tool_parser_factory.py (2): ToolParserFactory (7-21), create_tool_parser (13-21)
- tensorrt_llm/serve/tool_parser/qwen3_tool_parser.py (2): detect_and_parse (43-70), parse_streaming_increment (72-107)

tensorrt_llm/serve/tool_parser/qwen3_tool_parser.py (2)
- tensorrt_llm/serve/openai_protocol.py (1): ChatCompletionToolsParam (478-480)
- tensorrt_llm/serve/tool_parser/base_tool_parser.py (7): BaseToolParser (16-351), has_tool_call (302-306), detect_and_parse (89-96), parse_base_json (62-86), parse_streaming_increment (111-299), _ends_with_partial_token (98-109), structure_info (313-324)

tensorrt_llm/commands/serve.py (1)
- tensorrt_llm/serve/tool_parser/tool_parser_factory.py (1): ToolParserFactory (7-21)

tensorrt_llm/serve/tool_parser/base_tool_parser.py (5)
- tensorrt_llm/serve/openai_protocol.py (1): ChatCompletionToolsParam (478-480)
- tensorrt_llm/serve/tool_parser/core_types.py (2): StreamingParseResult (15-19), ToolCallItem (7-12)
- tensorrt_llm/serve/tool_parser/utils.py (3): find_common_prefix (10-18), is_complete_json (50-55), partial_json_loads (21-47)
- tensorrt_llm/logger.py (2): warning (132-133), error (126-127)
- tensorrt_llm/serve/tool_parser/qwen3_tool_parser.py (4): detect_and_parse (43-70), parse_streaming_increment (72-107), has_tool_call (39-41), structure_info (109-114)
🪛 Ruff (0.13.3)
tensorrt_llm/serve/chat_utils.py
214-214: Use explicit conversion flag
Replace with conversion flag
(RUF010)
tensorrt_llm/serve/tool_parser/utils.py
53-53: Consider moving this statement to an else block
(TRY300)
tensorrt_llm/serve/tool_parser/tool_parser_factory.py
8-10: Mutable class attributes should be annotated with typing.ClassVar
(RUF012)
18-21: Avoid specifying long messages outside the exception class
(TRY003)
tensorrt_llm/serve/tool_parser/qwen3_tool_parser.py
34-34: Possible hardcoded password assigned to: "bot_token"
(S105)
35-35: Possible hardcoded password assigned to: "eot_token"
(S105)
67-67: Use explicit conversion flag
Replace with conversion flag
(RUF010)
tensorrt_llm/serve/tool_parser/base_tool_parser.py
295-295: Consider moving this statement to an else block
(TRY300)
297-297: Do not catch blind exception: Exception
(BLE001)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (18)
tensorrt_llm/serve/tool_parser/__init__.py (1)
1-3: LGTM! The public API re-export is clean and follows standard Python module patterns.
tensorrt_llm/llmapi/llm_args.py (1)
1475-1478: LGTM! The new `tool_parser` field is properly defined with appropriate type hints, default value, and documentation. It follows the same pattern as the existing `reasoning_parser` field above it.

tensorrt_llm/serve/openai_server.py (1)
458-458: LGTM! The assignment correctly propagates the `tool_parser` configuration to post-processing arguments, following the same pattern as the `reasoning_parser` on line 457.

tensorrt_llm/serve/tool_parser/tool_parser_factory.py (1)
13-21: LGTM! The factory method correctly implements the factory pattern with proper error handling. The error message includes helpful information about supported parsers, which aids debugging.
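For illustration, the registry-plus-lookup shape being approved here looks roughly like this — a sketch reconstructed from the review's diffs and cross-references, not the file's verbatim contents:

```python
from typing import ClassVar, Dict, Type

from tensorrt_llm.serve.tool_parser.base_tool_parser import BaseToolParser
from tensorrt_llm.serve.tool_parser.qwen3_tool_parser import Qwen3ToolParser


class ToolParserFactory:
    # Registry of supported parsers; new formats register here.
    parsers: ClassVar[Dict[str, Type[BaseToolParser]]] = {
        "qwen3": Qwen3ToolParser,
    }

    @classmethod
    def create_tool_parser(cls, name: str) -> BaseToolParser:
        try:
            return cls.parsers[name]()
        except KeyError:
            # Surface the supported names to aid debugging, as noted above.
            raise ValueError(f"Unsupported tool parser: {name!r}. "
                             f"Supported parsers: {sorted(cls.parsers)}")
```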
tensorrt_llm/serve/tool_parser/utils.py (3)
10-18: LGTM! The `find_common_prefix` function is straightforward and correct. The implementation is clear and handles edge cases properly.
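A function with this contract can be sketched as follows (illustrative, not the repository's exact code). In streaming, this is what lets the parser emit only the newly confirmed portion of a partially parsed argument string:

```python
def find_common_prefix(s1: str, s2: str) -> str:
    """Return the longest common prefix of s1 and s2."""
    prefix_chars = []
    for c1, c2 in zip(s1, s2):
        if c1 != c2:
            break
        prefix_chars.append(c1)
    return "".join(prefix_chars)
```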
21-47: LGTM! The `partial_json_loads` function is well-documented and handles partial JSON parsing robustly. The error recovery logic for "Extra data" and "pop from empty list" messages adds resilience for streaming scenarios.
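As a rough sketch of how such a utility typically wraps the `partial_json_parser` package — the import paths and the exact error-recovery branches here are assumptions, not confirmed from the file:

```python
import json
from typing import Any, Tuple

from partial_json_parser import loads
# Import path per common usage in similar projects; may vary by package version.
from partial_json_parser.core.options import Allow


def partial_json_loads(input_str: str, flags: Allow) -> Tuple[Any, int]:
    """Parse a possibly incomplete JSON string.

    Returns the parsed object and the number of characters consumed.
    """
    try:
        return loads(input_str, flags), len(input_str)
    except json.JSONDecodeError as e:
        if "Extra data" in e.msg:
            # The "Extra data" case the review mentions: a complete JSON
            # document followed by trailing text. Decode just the leading
            # document and report how far the decoder got.
            return json.JSONDecoder().raw_decode(input_str)
        raise
```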
50-55: LGTM! The `is_complete_json` function correctly validates JSON completeness. The try/except pattern is idiomatic Python despite the static analysis suggestion to use an else block.
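The contract is simple enough to sketch in full (illustrative; it mirrors the return-inside-try shape the TRY300 hint flags above):

```python
import json


def is_complete_json(input_str: str) -> bool:
    """Return True only if input_str parses as a complete JSON document."""
    try:
        json.loads(input_str)
        return True
    except json.JSONDecodeError:
        return False
```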
tensorrt_llm/serve/tool_parser/core_types.py (2)

7-19: LGTM! The `ToolCallItem` and `StreamingParseResult` Pydantic models are well-defined with proper type hints, default values, and docstrings. They follow Pydantic v2 patterns correctly.
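Reconstructed from how the review's diffs use these types (field names like `tool_index` and `parameters` appear above; the defaults are assumptions), the shape is roughly:

```python
from typing import List, Optional

from pydantic import BaseModel


class ToolCallItem(BaseModel):
    """One parsed tool call (or argument fragment, when streaming)."""
    tool_index: int = -1        # position of the call within the response
    name: Optional[str] = None  # function name; omitted on later chunks
    parameters: str = ""        # JSON-encoded arguments, or a fragment thereof


class StreamingParseResult(BaseModel):
    """Output of one parsing step: plain text plus any tool-call deltas."""
    normal_text: str = ""
    calls: List[ToolCallItem] = []
```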
22-26: LGTM! The `StructureInfo` dataclass is clean and appropriate for a simple data container.

tensorrt_llm/serve/tool_parser/qwen3_tool_parser.py (4)
13-41: LGTM! The class initialization and basic methods are well-implemented. The docstring clearly explains the format structure with references to the source model.
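For context, Qwen3 models emit Hermes-style tool calls: a JSON object with `name` and `arguments` wrapped in `<tool_call>`/`</tool_call>` tokens. The values below are illustrative:

```
<tool_call>
{"name": "get_weather", "arguments": {"city": "Paris"}}
</tool_call>
```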
43-70: LGTM! The `detect_and_parse` method correctly uses regex to extract tool call blocks and handles JSON parsing errors gracefully by logging warnings and continuing to process remaining calls.
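The non-streaming path can be sketched as a regex-plus-`json.loads` loop. The pattern and helper name below are illustrative; the real method also reports leftover normal text and validates names against the supplied tools:

```python
import json
import re
from typing import List

# Matches each <tool_call>...</tool_call> block, tolerating surrounding newlines.
TOOL_CALL_PATTERN = re.compile(r"<tool_call>\n?(.*?)\n?</tool_call>", re.DOTALL)


def extract_tool_calls(text: str) -> List[dict]:
    calls = []
    for block in TOOL_CALL_PATTERN.findall(text):
        try:
            calls.append(json.loads(block))
        except json.JSONDecodeError:
            # Mirror the parser's behavior: warn and keep processing the rest.
            continue
    return calls
```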
72-107: LGTM! The streaming parser correctly handles partial end tokens through buffering. The logic is well-commented and handles edge cases where the end token arrives character by character.
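The buffering relies on a check like the base class's `_ends_with_partial_token`; a sketch of that check (illustrative):

```python
def ends_with_partial_token(buffer: str, token: str) -> int:
    """Length of the longest suffix of `buffer` that is a proper prefix of `token`.

    A non-zero result means the buffer may end in the first characters of an
    end token such as "</tool_call>", so that tail must be held back rather
    than emitted as normal text.
    """
    for length in range(min(len(buffer), len(token) - 1), 0, -1):
        if token.startswith(buffer[-length:]):
            return length
    return 0
```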
109-114: LGTM! The `structure_info` method returns appropriate metadata for constrained generation with correct begin/end patterns and trigger token.
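Per the base-parser cross-references, `structure_info` is a callable from a function name to a `StructureInfo(begin, end, trigger)` record; for Qwen3 it plausibly produces something like the following (field values are illustrative, not taken from the file):

```python
StructureInfo(
    begin='<tool_call>\n{"name": "get_weather", "arguments": ',
    end='}\n</tool_call>',
    trigger='<tool_call>',
)
```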
tensorrt_llm/serve/postprocess_handlers.py (2)

127-148: apply_tool_parser integration looks good. The per-output parser lifecycle, with the factory plus the streaming vs. non-streaming split, is clean.
310-319: Non-streaming tool_calls mapping LGTM. Converts parsed calls into ToolCall(FunctionCall) as expected; ignores tool_index appropriately.
tensorrt_llm/commands/serve.py (2)
328-333: CLI and arg propagation for tool_parser look correct
- click.Choice over available parsers
- Signature and get_llm_args propagation are consistent (see the sketch after this list).
Also applies to: 359-386
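A minimal sketch of the approved wiring, assuming the option name from the PR and eliding the serve command's other options; the stub body is illustrative:

```python
import click

from tensorrt_llm.serve.tool_parser import ToolParserFactory


@click.command()
@click.option(
    "--tool_parser",
    type=click.Choice(list(ToolParserFactory.parsers.keys())),
    default=None,
    help="Tool-call parser applied to model output (e.g. qwen3).",
)
def serve(tool_parser):
    # In the real command this value flows through get_llm_args(...) into
    # llm_args; this stub only demonstrates the Choice-based validation.
    click.echo(f"tool_parser={tool_parser}")
```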
32-32: No change needed for the ToolParserFactory import: it's re-exported in `tensorrt_llm/serve/tool_parser/__init__.py`, so the current import is valid.

Likely an incorrect or invalid review comment.
tensorrt_llm/serve/tool_parser/base_tool_parser.py (1)
152-170: Flag selection is sensible. Using Allow.ALL until the name is sent, then disallowing STR to stabilize the name parse, is reasonable.
Force-pushed: 1dc11ee to 39c06e8
JunyiXu-nv left a comment

LGTM
Force-pushed: 8001ac2 to 26d83cc
Force-pushed: 26d83cc to 98cda6a
/bot run

PR_Github #22038 [ run ] triggered by Bot. Commit: …

PR_Github #22038 [ run ] completed with state …

/bot run

PR_Github #22058 [ run ] triggered by Bot. Commit: …

PR_Github #22058 [ run ] completed with state …

/bot run

PR_Github #22062 [ run ] triggered by Bot. Commit: …

PR_Github #22062 [ run ] completed with state …

/bot run

PR_Github #22147 [ run ] triggered by Bot. Commit: …

PR_Github #22147 [ run ] completed with state …
Force-pushed: 50797a9 to e198eb1
/bot run

PR_Github #22160 [ run ] triggered by Bot. Commit: …

PR_Github #22160 [ run ] completed with state …
Signed-off-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com>
Force-pushed: e198eb1 to 64196e9
/bot run

PR_Github #22751 [ run ] triggered by Bot. Commit: …

PR_Github #22751 [ run ] completed with state …
juney-nvidia left a comment

On behalf of the OSS compliance review.
Signed-off-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com>
Signed-off-by: FredricZ-2007 <226039983+fredricz-20070104@users.noreply.github.com>
Summary by CodeRabbit
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
`/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...`

Provide a user-friendly way for developers to interact with a Jenkins server.
Run `/bot [-h|--help]` to print this help message. See details below for each supported subcommand.
Details
run

`run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug (experimental)]`

Launch build/test pipelines. All previously running jobs will be killed.
- `--reuse-test (optional)pipeline-id` (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- `--disable-reuse-test` (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.
- `--disable-fail-fast` (OPTIONAL): Disable fail fast on build/tests/infra failures.
- `--skip-test` (OPTIONAL): Skip all test stages, but still run build stages, package stages and sanity check stages. Note: does NOT update GitHub check status.
- `--stage-list "A10-PyTorch-1, xxx"` (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: does NOT update GitHub check status.
- `--gpu-type "A30, H100_PCIe"` (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: does NOT update GitHub check status.
- `--test-backend "pytorch, cpp"` (OPTIONAL): Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: does NOT update GitHub pipeline status.
- `--only-multi-gpu-test` (OPTIONAL): Only run the multi-GPU tests. Note: does NOT update GitHub check status.
- `--disable-multi-gpu-test` (OPTIONAL): Disable the multi-GPU tests. Note: does NOT update GitHub check status.
- `--add-multi-gpu-test` (OPTIONAL): Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
- `--post-merge` (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- `--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"` (OPTIONAL): Run the ordinary L0 pre-merge pipeline and the specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- `--detailed-log` (OPTIONAL): Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- `--debug` (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purposes. Note: specify exactly one stage in the `stage-list` parameter to access the appropriate container environment. Note: does NOT update GitHub check status.
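For example, combining the flags above (the values are illustrative):

```
/bot run --disable-fail-fast --stage-list "A10-PyTorch-1"
/bot run --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp"
/bot run --post-merge
```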
docs/source/reference/ci-overview.mdand the
scripts/test_to_stage_mapping.pyhelper.kill
killKill all running builds associated with pull request.
skip
`skip --comment COMMENT`

Skip testing for the latest commit on the pull request. `--comment "Reason for skipping build/test"` is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

`reuse-pipeline`

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.