Conversation

@evalstate evalstate commented Mar 4, 2025

…of tool usage

Addition of ToolCall and ToolResult blocks to PromptMessage to allow in-context learning of Tool Usage patterns and error handling.

Submitted as draft for review before completing/adding documentation etc.

Motivation and Context

To allow Tool developers to provide specific usage examples of how and when the Assistant should use tool calls and handle results and errors.
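
For discussion, a rough TypeScript sketch of how such blocks could compose into a PromptMessage. All names here (`toolCall`, `toolResult`, `toolCallId`) are illustrative assumptions, not the final schema:

```typescript
// Illustrative sketch only: the type names and fields below are assumptions
// for discussion, not the final MCP schema.

interface TextContent {
  type: "text";
  text: string;
}

// An assistant-issued tool invocation embedded in a PromptMessage.
interface ToolCallContent {
  type: "toolCall";
  toolCallId: string;                  // correlates with a later ToolResult
  name: string;                        // tool name as advertised by the server
  arguments: Record<string, unknown>;
}

// The matching result, including the error case.
interface ToolResultContent {
  type: "toolResult";
  toolCallId: string;                  // must match a previous ToolCall
  content: TextContent[];
  isError?: boolean;
}

type PromptContent = TextContent | ToolCallContent | ToolResultContent;

interface PromptMessage {
  role: "user" | "assistant";
  content: PromptContent;
}

// A prompt can then "show" the model a worked tool-use example:
const example: PromptMessage[] = [
  { role: "user", content: { type: "text", text: "What's the weather in Oslo?" } },
  { role: "assistant", content: {
      type: "toolCall", toolCallId: "call-1",
      name: "get_weather", arguments: { city: "Oslo" } } },
  { role: "user", content: {
      type: "toolResult", toolCallId: "call-1",
      content: [{ type: "text", text: "4°C, light rain" }] } },
  { role: "assistant", content: {
      type: "text", text: "It's 4°C with light rain in Oslo." } },
];
```

The tool result is carried on a "user" turn here because PromptMessage only has user/assistant roles; how the Host maps that onto a particular provider API is left to the Host.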

How Has This Been Tested?

Breaking Changes

None.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation update

Checklist

  • I have read the MCP Documentation
  • My code follows the repository's style guidelines
  • New and existing tests pass locally
  • I have added appropriate error handling
  • I have added or updated documentation as needed

Additional context

jspahrsummers pushed a commit that referenced this pull request Apr 7, 2025
@evalstate evalstate marked this pull request as ready for review April 10, 2025 07:02
@dsp-ant dsp-ant moved this to Consulting in Standards Track Jun 6, 2025
@dsp-ant dsp-ant added this to the DRAFT 2025-06-XX milestone Jun 6, 2025
@dsp-ant dsp-ant moved this from Consulting to In Review in Standards Track Jun 10, 2025
/**
 * The ID of the tool call this result is for.
 * This must match the ID of a previous tool call.
 */
toolCallId: string;
@connor4312 (Contributor) commented Jun 10, 2025

While the underlying model certainly uses tool call IDs today, it would be simpler for a protocol to have a result?: ToolResult on the tool call. Is there a case where a client would actually need the literal tool call ID?

@evalstate (Member Author):


The ID is to make sure the Request and Response are correlated - this is NOT expected to be an LLM API ID.

@connor4312 (Contributor) commented Jun 10, 2025

Oh, right, had it flipped around in my head. So the MCP server makes a sampling call to the client, which goes to an LM and can result in tool calls. Those get returned to the MCP server, which is expected to serve those tool calls and may make a subsequent sampling request with the ToolResults.

How does the client know what tools to provide to the LM in the sampling case? It could give it the requesting MCP server's own tools, which it knows how to serve, but if a client includes more tools (e.g. includeContext: allServers), how does server A deal with the tool calls of server B?

If we say it's up to clients to automagically call server B's tools and pass A's through, I am curious what value this provides since server A's tools could be served in the same way.

But this becomes valuable if a CreateMessageRequest can come with a custom toolset from the server...

@evalstate (Member Author) commented Jun 11, 2025

Hi @connor4312 -- this isn't intended for Sampling (the change affects PromptMessage).

This change has two main use cases:

1. Small Model Users (<8B) -- providing examples of Tool Usage massively increases their predictability and suitability for using MCP Servers for tasks.
2. Esoteric Tool Definitions -- the ability to "show" the model how to use tools and arguments via previous User/Assistant conversation pairs gives much tighter adherence than attempting to Prompt Engineer.

Prompts are already usable for in-context learning, and this is the missing piece to improve Tool Usage behaviour by allowing the Host to translate to their Provider API.

The Correlation ID mentioned above is because it is possible for a single Assistant message to contain multiple Tool Calls, which would then be responded to with a sequence of Tool Results -- hence the need for Correlation. The Host would typically add this to the LLM API conversation history (probably using the same identifier, but not necessarily - how they show the LLM the correlations is up to them and maybe not even in the Provider API!)
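
The correlation rule described above can be sketched as follows. The field names are assumed for illustration; the check simply enforces that every result references a previously issued call:

```typescript
// Sketch of the correlation rule (field names are assumptions): a single
// assistant message may carry multiple tool calls, and each later tool
// result must reference exactly one of those calls by toolCallId.

interface ToolCall { toolCallId: string; name: string }
interface ToolResult { toolCallId: string; text: string }

function correlate(calls: ToolCall[], results: ToolResult[]): Map<string, ToolResult> {
  const ids = new Set(calls.map((c) => c.toolCallId));
  const matched = new Map<string, ToolResult>();
  for (const r of results) {
    if (!ids.has(r.toolCallId)) {
      throw new Error(`ToolResult references unknown toolCallId: ${r.toolCallId}`);
    }
    matched.set(r.toolCallId, r);
  }
  return matched;
}

// Two calls in one assistant turn, answered by two results in any order.
const calls: ToolCall[] = [
  { toolCallId: "a1", name: "search" },
  { toolCallId: "a2", name: "fetch" },
];
const results: ToolResult[] = [
  { toolCallId: "a2", text: "<html>…</html>" },
  { toolCallId: "a1", text: "3 hits" },
];
const matched = correlate(calls, results);
```

Because results are matched by ID rather than position, the Host is free to reorder or interleave them when translating to a Provider API.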

@evalstate (Member Author):

Another use-case I forgot to mention is using the MCP Prompt format as a portable way of saving conversation history. This (plus the multipart content blocks) makes it very convenient to do so, as tool calls carry important semantic information for the conversation. (Reasoning, Citation blocks etc. are a separate conversation.)

@connor4312 (Contributor):

Ahh, sorry, I was totally barking up the wrong tree. Tool calls in sampling requests would be cool though :)

@evalstate (Member Author):

Updated to draft; changed naming. Documentation needs coordination with ContentBlock change.

@dsp-ant dsp-ant modified the milestones: DRAFT 2025-06-XX, DRAFT-XX-XX Jun 11, 2025
@dsp-ant dsp-ant moved this from In Review to Consulting in Standards Track Jun 11, 2025
@pja-ant (Contributor) commented Sep 23, 2025

Hey @evalstate, just going through some of the old PRs. Are you still planning to pursue this proposal? Wondering if it should be a SEP.

@evalstate (Member Author):

Hi @pja-ant - yes, I consider this extremely important, and nearly have a full reference implementation I can demonstrate. It adjusts the approach slightly (so I plan to update this), but I consider it the same proposal. There is still the assumption that #198 is also added.

@evalstate (Member Author):

A new SEP will be raised to bring this up to date.

@evalstate evalstate closed this Nov 20, 2025