
Conversation

@jonathanhefner (Member)

This adds a blog post explaining SEP-1577, which added tool calling support to sampling, enabling servers to drive agentic workflows.

🤖 Generated with Claude Code

@jonathanhefner force-pushed the blog-sampling-with-tool-support branch from 30c4159 to d12570f on December 6, 2025 00:08
@He-Pin (Contributor) commented Dec 6, 2025

Thanks, better with a UML diagram.


[SEP-1577](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1577) has changed that. MCP sampling now supports tool calling, which means tools themselves can drive agentic workflows.

## The gap in the architecture
Contributor

I feel like this section is mostly just repeating part of the intro, but more verbosely. I think you could almost get rid of it by just adding an extra sentence to the intro?

Member Author

To me, this section has most of the rhetorical power. I've trimmed it down (about 50%), but I would prefer to keep the remainder.

tags = ['sampling', 'tools', 'agentic']
+++

Tool use transformed LLMs from sophisticated text generators into agents capable of taking action in the world. Before tool use, you could ask an LLM about the weather and get a plausible-sounding guess. With tool use, it can actually check.
Contributor

I feel this is burying the lede: the opening sentence should encourage me to read on, i.e. tell us immediately why we should continue reading, something like "A recent addition to the MCP spec has unlocked powerful agentic capabilities in MCP servers enabling entire agents to be distributed as easily as MCP servers without having to set up API keys or provide separate inference." -- maybe more catchy though :)

@jonathanhefner (Member Author) commented Dec 8, 2025

Rather than change the intro, I've added a punchy description, which the Hugo template will render before the post text. I've also added CSS to make it stand out:

```css
.above-the-fold {
  /* ... */
}
```

## What this enables
Contributor

Big ask, but would be cool to show a demo video of one of these :)


The pattern extends naturally. Any tool that would benefit from reasoning, iteration, or multi-step workflows can now implement them directly.

## Capability negotiation
Contributor

I feel this section and the "A note on includeContext" section could maybe be dropped? I don't think this post needs to cover every aspect of SEP-1577 and instead should just motivate people to use it. Details can be left to the SEP/spec itself.

@jonathanhefner (Member Author) commented Dec 8, 2025

I've trimmed those sections down, but I would prefer to keep parts of them. Since there is currently no client support for this feature, I think we should mention the `sampling.tools` capability. Also, I remember my initial reaction to the feature was to question it vs. `includeContext`, so I would like to address that as well, if briefly.

@localden added the blog label Dec 9, 2025

## The asymmetry in the architecture

Today's tool calls follow a simple pattern: an LLM reasons, invokes a tool, gets a result, and continues reasoning. This works well when tools are simple functions. But what if a tool needs to be smart enough to reason, make decisions, or coordinate multiple steps?
Contributor

Suggested change:
- Today's tool calls follow a simple pattern: an LLM reasons, invokes a tool, gets a result, and continues reasoning. This works well when tools are simple functions. But what if a tool needs to be smart enough to reason, make decisions, or coordinate multiple steps?
+ Today's tool calls follow a simple pattern: an LLM reasons, invokes a tool, gets a result, and continues. This works well when tools are simple functions. But what if a tool needs to reason _as well_, make decisions, or coordinate multiple steps on its own?

- **TypeScript SDK**: Version 1.23.0+ ([PR #1101](https://github.com/modelcontextprotocol/typescript-sdk/pull/1101))
- **Python SDK**: Version 1.23.0+ ([PR #1594](https://github.com/modelcontextprotocol/python-sdk/pull/1594))

To experiment with agentic sampling, update to an SDK version that includes these changes and ensure you're connecting to a client that advertises the `sampling.tools` capability.
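
As a rough starting point, a server-side tool that drives a sampling request might look something like the sketch below, using the Python SDK. The `check_client_capability` and `create_message` calls are existing SDK APIs; the `tools` argument and the tool-definition shape are assumptions based on SEP-1577 and may differ from the final SDK surface.

```python
# Minimal sketch of a FastMCP tool that issues a sampling request with tools.
# Assumes an SDK version with SEP-1577 support; the `tools=` argument and the
# tool-definition shape below are illustrative, not a confirmed API.
from mcp.server.fastmcp import Context, FastMCP
from mcp.types import ClientCapabilities, SamplingCapability, SamplingMessage, TextContent

mcp = FastMCP("agentic-example")


@mcp.tool()
async def research(topic: str, ctx: Context) -> str:
    """Ask the client's LLM to research a topic, offering it a search tool."""
    # Bail out early if the connected client does not advertise sampling support.
    if not ctx.session.check_client_capability(
        ClientCapabilities(sampling=SamplingCapability())
    ):
        return "Client does not support sampling."

    result = await ctx.session.create_message(
        messages=[
            SamplingMessage(
                role="user",
                content=TextContent(type="text", text=f"Research this topic: {topic}"),
            )
        ],
        max_tokens=500,
        # Hypothetical: tool definitions offered to the sampled LLM (SEP-1577).
        tools=[
            {
                "name": "web_search",
                "description": "Search the web for a query.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            }
        ],
    )
    # If the model stopped to call a tool, the server would execute it and loop,
    # appending the tool result; for brevity this sketch just returns the text.
    return result.content.text if isinstance(result.content, TextContent) else str(result.content)
```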
Contributor

Do we have examples of clients that support this today?

Member Author

No, there are no clients that support this today. That is the primary reason this PR is marked as draft. However, it might be worth publishing anyway, to encourage adoption.

Contributor

The C# SDK now has support for sampling with tools. I am working on adding an example to the C# EverythingServer.

Member

I was going to add support imminently (and kind of assumed that VSCode already did?). Is there an example test program that would be cool to demonstrate here?

Member Author

> kind of assumed that VSCode already did?

So did I 😄, but not yet — see the note at the end of microsoft/vscode#277472.

jonathanhefner and others added 3 commits December 9, 2025 19:41
This adds a blog post explaining SEP-1577, which added tool calling
support to sampling, enabling servers to drive agentic workflows.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Add prominent description, mermaid sequence diagram, and streamline
content per reviewer suggestions.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Co-Authored-By: Peter Alexander <pja@anthropic.com>
Incorporate suggested edits for clarity and readability:
- Expand "LLMs" to "Large Language Models" on first use
- Clarify inner/outer LLM distinction in the agentic loop
- Simplify step numbering and descriptions
- Add link to MCP TypeScript SDK
- Improve wording throughout for consistency

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Co-Authored-By: Den Delimarsky <53200638+localden@users.noreply.github.com>
@jonathanhefner force-pushed the blog-sampling-with-tool-support branch from aa84a43 to e10577f on December 10, 2025 01:45
@jonathanhefner (Member Author)

@localden Thank you for the feedback! 😃 I have incorporated your suggestions, though I also iterated a bit more on top of them.
