Pre-submission Checklist

- [x] I have verified that this discussion would not be more appropriate as an issue in a specific repository
- [x] I have searched existing discussions to avoid duplicates
Discussion Topic
Hey everyone! 😄
I am from the #financial-services-wg, and as you know, we deal with sensitive data and are heavily regulated. We want to give our users the ability to query their banking information in their favorite AI assistants (ChatGPT, Claude, Gemini) by offering an MCP server. However, a friction point is that most AI assistant providers train on consumer messages by default, which we would like to avoid if possible.
In an ideal world, where all parties are cooperative, I think something like this would mostly work:
- First, let's define a *sensitive MCP server* as one that does not want chats in which it was used to be used for future training
- In the `InitializeRequest`, we add an attribute through which the MCP client can declare that it supports excluding a chat from training when a sensitive MCP server requests it
- Sensitive MCP servers could reject the initialization request if the MCP client does not declare that support
- In the `_meta` property of `InitializeResult`, reserve a key such as `io.modelcontextprotocol/do-not-train` that sensitive MCP servers would set
- Model providers that support excluding particular chats from training would respect the `io.modelcontextprotocol/do-not-train` mark: if at least one tool call was made to a sensitive MCP server during a chat, the rest of that chat would be excluded from training (a rough wire-level sketch follows this list)
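To make this concrete, here is a rough wire-level sketch of what that handshake could look like. Every name here (`trainingOptOut`, the exact error code) is a placeholder I made up for illustration, not something in the current spec:

```typescript
// Hypothetical sketch of the proposed handshake; field names are made up.

// 1. The client advertises that it can exclude chats from training
//    via a (hypothetical) capability in the InitializeRequest.
const initializeRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2025-06-18",
    capabilities: {
      trainingOptOut: {}, // placeholder capability name
    },
    clientInfo: { name: "example-client", version: "1.0.0" },
  },
};

// 2. A sensitive server that doesn't see that capability could reject
//    the initialization outright (the error code is an assumption).
const rejection = {
  jsonrpc: "2.0",
  id: 1,
  error: {
    code: -32600,
    message: "This server requires a client that supports do-not-train",
  },
};

// 3. Otherwise it accepts and marks the session via the reserved _meta key,
//    which the model provider would honour by excluding the chat from training.
const initializeResult = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    protocolVersion: "2025-06-18",
    capabilities: { tools: {} },
    serverInfo: { name: "sensitive-banking-server", version: "1.0.0" },
    _meta: {
      "io.modelcontextprotocol/do-not-train": true,
    },
  },
};
```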
The above approach relies on trust and could easily be exploited by bad actors, so as an additional step we could IP-allowlist our sensitive MCP server so that it only responds to trusted model providers.
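For instance, if the sensitive server were exposed over HTTP behind something like Express, the allowlist could start as a simple middleware in front of the MCP endpoint (a sketch only; a real deployment would likely use CIDR ranges, mTLS, or OAuth rather than raw IPs):

```typescript
import express from "express";

// Assumption: the trusted model providers publish stable egress IPs.
const TRUSTED_PROVIDER_IPS = new Set(["203.0.113.10", "203.0.113.11"]);

const app = express();

// Reject any request that does not come from an allowlisted provider
// before it reaches the MCP endpoint.
app.use("/mcp", (req, res, next) => {
  if (!TRUSTED_PROVIDER_IPS.has(req.ip ?? "")) {
    res.status(403).json({ error: "client is not an allowlisted provider" });
    return;
  }
  next();
});

// ...the actual MCP HTTP handler would be mounted at /mcp here...

app.listen(3000);
```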
Playing devil's advocate to the approach I described above:
- Can we rely on model providers wanting to support this? Is it in their best interests? Can we offer MCP servers that provide sensitive data without something like this?
- In chats configured with multiple MCP servers, this still doesn't prevent sensitive data from flowing out of a sensitive MCP server into another MCP server that it does not trust. (Maybe an additional mechanism to tell MCP clients that we must be the only MCP server in the chat? See the sketch below.)
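To illustrate that last point, one entirely hypothetical shape for such a mechanism would be a second reserved `_meta` key, alongside the do-not-train one, that compliant clients would interpret as "do not attach any other MCP servers to this chat":

```typescript
// Hypothetical extension of the InitializeResult _meta shown earlier;
// "io.modelcontextprotocol/exclusive" is made up purely for illustration.
const sensitiveServerMeta = {
  "io.modelcontextprotocol/do-not-train": true,
  // Ask the client to make this the only MCP server in the chat, so
  // sensitive tool results cannot be forwarded to untrusted servers.
  "io.modelcontextprotocol/exclusive": true,
};
```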
Curious to hear:
- The community's feelings on this
- Whether any other organizations are in a similar dilemma