SEP-2417: Model Preferences for Tools #2417

Open
ProductOfAmerica wants to merge 1 commit into modelcontextprotocol:main from ProductOfAmerica:sep-model-preferences-for-tools

Conversation

@ProductOfAmerica ProductOfAmerica commented Mar 17, 2026

Summary

This SEP proposes adding an optional modelPreferences field to ToolAnnotations, allowing MCP servers to signal the model capability level that would produce the best results for each tool.

  • Reuses the existing ModelPreferences type from the sampling capability (intelligencePriority, costPriority, speedPriority)
  • Purely advisory — clients MAY use hints for model routing but are not required to
  • Fully backward compatible — optional field, existing clients ignore unknown annotations
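
As a rough sketch of the proposed shape (the `annotations` placement and the example tool are this proposal's assumptions; the three priority fields reuse the existing sampling `ModelPreferences` type):

```typescript
// Sketch only: annotations placement is as proposed in this SEP, not yet in
// the spec. Priority fields are the sampling ModelPreferences names, each 0-1.
interface ModelPreferences {
  intelligencePriority?: number;
  costPriority?: number;
  speedPriority?: number;
}

interface ToolAnnotations {
  readOnlyHint?: boolean;
  modelPreferences?: ModelPreferences; // proposed optional, purely advisory field
}

// A simple listing tool: cheap, fast models are fine for this.
const listTool: { name: string; annotations: ToolAnnotations } = {
  name: "list_organizations",
  annotations: {
    readOnlyHint: true,
    modelPreferences: { intelligencePriority: 0.2, costPriority: 0.9, speedPriority: 0.8 },
  },
};
```

Clients that do not understand the field simply ignore it, which is what makes the change backward compatible.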

Motivation

MCP servers expose tools with vastly different complexity profiles. A simple list tool and a multi-dimensional analysis tool have very different model requirements, but servers currently have no way to express this. The model selection decision is made entirely client-side with no per-tool granularity.

Reference Implementation

Demo server (49 tests passing): https://github.com/ProductOfAmerica/mcp-server-model-preferences-demo

Three tools with different modelPreferences via _meta (forward-compatible with the proposed annotations location).
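
As an illustration of that workaround (the tool name and values here are hypothetical, not taken from the demo repo), the same hint is attached under `_meta` instead of `annotations`:

```typescript
// Hypothetical example of the _meta workaround; the demo server's actual
// tool names and priority values may differ.
const tool = {
  name: "diagnose_field_health",
  description: "Multi-dimensional statistical analysis of field health",
  inputSchema: { type: "object", properties: {} },
  _meta: {
    // Same three priorities as the sampling ModelPreferences type.
    modelPreferences: { intelligencePriority: 0.9, costPriority: 0.2, speedPriority: 0.3 },
  },
};
```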

Note

SEP file is named 2417-model-preferences-for-tools.md.

@ProductOfAmerica ProductOfAmerica force-pushed the sep-model-preferences-for-tools branch from 16f86fa to b7f1296 on March 17, 2026 16:56
@ProductOfAmerica ProductOfAmerica requested review from a team as code owners March 17, 2026 16:56
@localden
Contributor

Thank you for your contribution.

My initial reaction is that I am not entirely sure if this is something that belongs in the core protocol, as it could go against one of our design principles. The current SEP proposes model segmentation that I am not aware of any clients doing today (or whether they have any appetite to do so).

If _meta already solves this, I do wonder why this needs an annotation. My worry here is that we're introducing "annotation bloat" - a bunch of protocol-encoded conventions that may or may not be used by clients.

@SamMorrowDrums @sambhav @olaservo @LucaButBoring would like your input here.

Additional resources to check out:

@localden localden added proposal SEP proposal without a sponsor. SEP labels Mar 17, 2026
@localden localden changed the title SEP: Model Preferences for Tools SEP-2417: Model Preferences for Tools Mar 17, 2026
@LucaButBoring
Contributor

LucaButBoring commented Mar 17, 2026

Using _meta in the existing/demo/workaround solution implies a custom client, so I don't consider that a particularly compelling point against this in its own right unless the entire point is only to accommodate a custom client. Will think through this more later (traveling right now).

@localden
Contributor

Just to add a bit more color as to why I think it goes against the "Capability over compensation" design principle. Based on this SEP, we'd be making a long-term bet that the performance/cost delta between models will remain large enough that certain tools can be invoked with, say, cheaper and less intelligent models vs. more expensive and more intelligent ones. I don't know to what degree that will hold in the future (which, of course, is hard to predict).

This feels like something the client can potentially determine on their end, which is what VS Code does today - based on prompt and tool descriptions (since those are accessible), each client can make their own determination.

@ProductOfAmerica
Author

Thanks @localden.

On the "Capability over compensation" point: the design principle itself says "Optional context that weaker models lean on and stronger ones ignore costs nothing." That's literally what modelPreferences is. Optional, ignorable, zero cost to clients that don't care. And the bet here isn't that models stay dumb. It's that pricing tiers will always exist. Haiku is 5x cheaper than Opus right now ($1/$5 vs $5/$25 per MTok). You don't want to burn Opus tokens on list_organizations. That's not a capability gap, it's just economics.

On VS Code's auto model selection: I actually think that link supports this SEP. Their current implementation routes based on capacity and rate limits, not task complexity. The blog says they want to "dynamically switch between small and large models based on the task" but that's roadmap, not shipped. They don't have a per-tool signal to drive it. Tool descriptions tell you what a tool does, not how hard the output is to reason about. "Diagnose field health" sounds simple until the response is multi-dimensional statistical analysis with evidence chains and confidence weights.

Without something like this, every client that wants per-task routing has to build their own heuristic over tool descriptions, and they'll all classify differently. A 3-number hint from the server (who actually knows what its tools return) gives clients a consistent, machine-readable signal. Doesn't replace client judgment, just informs it.
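
To make the "informs, doesn't replace" point concrete, a client-side routing heuristic over such a hint might look like this (model names and thresholds are invented for this sketch, not part of the proposal):

```typescript
// Illustrative client-side routing over the advisory hint. Model names and
// thresholds are made up for this sketch; each client would choose its own.
interface ModelPreferences {
  intelligencePriority?: number;
  costPriority?: number;
  speedPriority?: number;
}

function pickModel(prefs?: ModelPreferences): string {
  // No hint: fall back to whatever the client would have done anyway.
  if (!prefs) return "default-model";
  const intelligence = prefs.intelligencePriority ?? 0.5;
  const cost = prefs.costPriority ?? 0.5;
  // Strongly cost-sensitive, low-complexity tools go to a small model.
  if (cost > 0.7 && intelligence < 0.4) return "small-model";
  // Complexity-heavy tools go to a frontier model.
  if (intelligence > 0.7) return "frontier-model";
  return "default-model";
}
```

The hint only shifts the default; the client remains free to override it based on its own rate limits, capacity, or user settings.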

Happy to iterate on scope. Would also welcome this being reviewed as part of the Tool Annotations IG that @SamMorrowDrums is standing up.

@localden
Contributor

A review with the Tool Annotations IG is a sensible next step, IMO.
