Added Tool Call and Tool Result to GetPrompt for in-context learning … #188
Conversation
schema/2025-03-26/schema.ts (Outdated)
 * The ID of the tool call this result is for.
 * This must match the ID of a previous tool call.
 */
toolCallId: string;
While the underlying model certainly uses tool call IDs today, it would be simpler for a protocol to have a `result?: ToolResult` on the tool call. Is there a case where a client would actually need the literal tool call ID?
The ID is to make sure the Request and Response are correlated - this is NOT expected to be an LLM API ID.
Oh, right, had it flipped around in my head. So the MCP server makes a sampling call to the client, which goes to an LM and can result in tool calls. Those get returned to the MCP server, which is expected to serve those tool calls and may make a subsequent sampling request with the ToolResults.
How does the client know what tools to provide to the LM in the sampling case? It could give it the requesting MCP server's own tools, which it knows how to serve, but if a client includes more tools (e.g. includeContext: allServers), how does server A deal with the tool calls of server B?
If we say it's up to clients to automagically call server B's tools and pass A's through, I am curious what value this provides, since server A's tools could be served in the same way.
But this becomes valuable if a CreateMessageRequest can come with a custom toolset from the server...
Hi @connor4312 -- this isn't intended for Sampling (the change affects PromptMessage).
This change has two main use cases:
1. Small model users (<8B) -- providing examples of tool usage massively increases their predictability and suitability for using MCP servers for tasks.
2. Esoteric tool definitions -- the ability to "show" the model how to use tools and arguments via previous user/assistant conversation pairs gives much tighter adherence than attempting to prompt-engineer.
Prompts are already usable for in-context learning, and this is the missing piece to improve Tool Usage behaviour by allowing the Host to translate to their Provider API.
The correlation ID mentioned above exists because a single assistant message can contain multiple tool calls, which are then answered by a sequence of tool results -- hence the need for correlation. The host would typically add these to the LLM API conversation history (probably using the same identifier, but not necessarily -- how the correlations are shown to the LLM is up to the host, and may not even be part of the provider API!)
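To sketch the correlation described above (the content-block shapes below are assumptions based on this PR's draft, not the final schema), a single assistant message might carry two tool calls whose results arrive later, in any order, and are rejoined by `toolCallId`:

```typescript
// Hypothetical content-block shapes for this PR; final field names may differ.
interface ToolCallContent {
  type: "toolCall";
  toolCallId: string; // correlation ID -- NOT an LLM API ID
  name: string;
  arguments: Record<string, unknown>;
}

interface ToolResultContent {
  type: "toolResult";
  toolCallId: string; // must match the ID of a previous tool call
  content: string;
}

// One assistant message issuing two tool calls...
const assistantCalls: ToolCallContent[] = [
  { type: "toolCall", toolCallId: "call-1", name: "get_weather", arguments: { city: "Paris" } },
  { type: "toolCall", toolCallId: "call-2", name: "get_weather", arguments: { city: "Oslo" } },
];

// ...followed by tool results, possibly out of order, correlated by toolCallId.
const userResults: ToolResultContent[] = [
  { type: "toolResult", toolCallId: "call-2", content: "3C, snow" },
  { type: "toolResult", toolCallId: "call-1", content: "12C, rain" },
];

// The host can rejoin each result with its originating call:
const resultFor = (id: string): string | undefined =>
  userResults.find((r) => r.toolCallId === id)?.content;
```

The IDs here (`call-1`, `call-2`) are illustrative; as noted above, how the host surfaces the correlation to the LLM (same identifier or not) is its own choice.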
Another use case I forgot to mention is using the MCP prompt format as a portable way of saving conversation history. This (plus the multipart content blocks) makes it very convenient to do so, since tool calls are important semantic information for the conversation. (Reasoning, citation blocks etc. are a separate conversation.)
Ahh, sorry, I was totally barking up the wrong tree. Tool calls in sampling requests would be cool though :)
Updated to draft; changed naming. Documentation needs coordination with
Hey @evalstate, just going through some of the old PRs. Are you still planning to pursue this proposal? Wondering if it should be a SEP.
A new SEP will be raised to bring this up to date.
…of tool usage
Addition of ToolCall and ToolResult blocks to PromptMessage to allow in-context learning of Tool Usage patterns and error handling.
Submitted as draft for review before completing/adding documentation etc.
Motivation and Context
To allow Tool developers to provide specific usage examples of how and when the Assistant should use tool calls and handle results and errors.
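To make the motivation concrete, here is a hedged sketch of a prompt that teaches a small model a tool-usage pattern via an example user/assistant exchange. The `PromptMessage` shape and the `convert_currency` tool are illustrative assumptions, not the final schema:

```typescript
// Hypothetical PromptMessage shape for this PR; the real schema may differ.
type Role = "user" | "assistant";

interface PromptMessage {
  role: Role;
  content:
    | { type: "text"; text: string }
    | { type: "toolCall"; toolCallId: string; name: string; arguments: Record<string, unknown> }
    | { type: "toolResult"; toolCallId: string; content: string };
}

// An in-context example: the prompt "shows" the model when to call the
// (hypothetical) tool, how to fill its arguments, and how to use the result.
const exampleMessages: PromptMessage[] = [
  { role: "user", content: { type: "text", text: "Convert 10 USD to EUR." } },
  {
    role: "assistant",
    content: {
      type: "toolCall",
      toolCallId: "ex-1",
      name: "convert_currency",
      arguments: { amount: 10, from: "USD", to: "EUR" },
    },
  },
  { role: "user", content: { type: "toolResult", toolCallId: "ex-1", content: "9.20" } },
  { role: "assistant", content: { type: "text", text: "10 USD is about 9.20 EUR." } },
];
```

A host receiving this from prompts/get would translate each block into its provider API's native tool-call and tool-result message types before sending the real conversation.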
How Has This Been Tested?
Breaking Changes
None.
Types of changes
Checklist
Additional context