the three-party identity problem in mcp servers

2025-12-01

a few weeks ago i was building an mcp server to integrate an app into chatgpt. it took me days just to get user authentication working. turns out that binding cryptographic user identity to ai requests isn’t a solved problem. there’s no shared identity layer between the user, the llm, and your backend. this is what i call the “three-party problem”.

here is the basic flow:

  1. user logs into chatgpt (openai identity)
  2. user initiates oauth flow from chatgpt to connect my app
  3. chatgpt redirects to my oauth endpoint with standard PKCE flow
  4. the gap: cryptographically binding chatgpt’s user to my app’s user identity
  5. i built a custom token exchange: validate the oauth callback, issue a firebase custom token, then inject verified identity (X-User-Id) into every subsequent request (rough sketch below)
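
for concreteness, here is roughly the shape of that exchange. this is a minimal sketch rather than my exact implementation: it assumes an express server plus the firebase-admin sdk, and validatePkceExchange / findOrCreateAppUser are hypothetical placeholders for the app's own logic.

```typescript
import express from "express";
import { initializeApp, applicationDefault } from "firebase-admin/app";
import { getAuth } from "firebase-admin/auth";

initializeApp({ credential: applicationDefault() });
const app = express();
app.use(express.urlencoded({ extended: false }));

// hypothetical: verify the authorization code + code_verifier that chatgpt
// sends back against what was issued during the redirect
async function validatePkceExchange(code?: string, verifier?: string) {
  return code && verifier ? { email: "oscar@dundermifflin.example" } : null;
}

// hypothetical: map the verified oauth identity onto my app's user record
async function findOrCreateAppUser(profile: { email: string }) {
  return { id: "app-user-" + profile.email };
}

// token endpoint: chatgpt exchanges its pkce code for an access token.
// here the access token is a firebase custom token bound to my app's user.
app.post("/oauth/token", async (req, res) => {
  const verified = await validatePkceExchange(req.body.code, req.body.code_verifier);
  if (!verified) return res.status(400).json({ error: "invalid_grant" });

  const user = await findOrCreateAppUser(verified);
  const accessToken = await getAuth().createCustomToken(user.id, { source: "chatgpt-mcp" });
  res.json({ access_token: accessToken, token_type: "bearer" });
});

// every subsequent mcp request: verify the bearer token and inject the
// verified identity as X-User-Id before any tool handler sees it.
// (verifyIdToken expects a firebase *id* token, i.e. the custom token has
// been exchanged via signInWithCustomToken somewhere in the loop; if the
// custom token is passed straight through, you'd verify your own signature here.)
app.use(async (req, res, next) => {
  const bearer = (req.headers.authorization ?? "").replace(/^Bearer /, "");
  try {
    const decoded = await getAuth().verifyIdToken(bearer);
    req.headers["x-user-id"] = decoded.uid;
    next();
  } catch {
    res.status(401).json({ error: "unauthenticated" });
  }
});

// ...mcp request handlers mount here and read x-user-id
app.listen(3000);
```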

this works, but it’s fragile and not standardized. everyone is rolling their own version of this identity bridge inside their mcp server. and once you solve identity, you still need to build:

ai model access governance - best practice

can you believe what oscar from accounting sent to chatgpt?!

before oscar’s message gets through to chatgpt, ideally the dunder mifflin it team would make sure his message is (a rough gateway sketch follows the list):

identifiabl - who is actually making the call? (e.g., oscar from accounting)
validatabl - is oscar allowed to call the model or share that data?
transformabl - e.g., remove customer names from the revenue forecast he pasted in, or tag the request as containing sensitive info that needs vp approval
limitabl - has oscar exceeded his quota or budget?
proxyabl - e.g., forward the message to the correct model for oscar’s role
explicabl - store + explain the interaction (for when corporate flies in from scranton with kpmg and wants an audit trail)
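
here is a rough sketch of those six checks as one gateway endpoint. every helper in it (policyFor, redactCustomerNames, the in-memory quota map, modelForRole, auditLog) is a hypothetical stand-in for whatever policy engine, dlp tool, and audit store the it team actually runs.

```typescript
import express from "express";

const app = express();
app.use(express.json());

// hypothetical policy, quota, and audit stores (in-memory stand-ins)
const policyFor = (userId: string) =>
  userId === "oscar"
    ? { role: "accounting", allowedModels: ["gpt-4o-mini"] }
    : { role: "default", allowedModels: [] as string[] };
const redactCustomerNames = (text: string) => text.replace(/acme corp/gi, "[customer]"); // toy dlp
const quota = new Map<string, number>(); // characters sent per user today
const dailyLimit = 50_000;
const modelForRole = (role: string) => (role === "accounting" ? "gpt-4o-mini" : "gpt-4o");
const auditLog: object[] = [];

app.post("/v1/chat", (req, res) => {
  // identifiabl: who is actually making the call (set by the auth layer upstream)
  const userId = req.headers["x-user-id"];
  if (typeof userId !== "string") return res.status(401).json({ error: "unidentified caller" });

  const { model, prompt } = req.body as { model: string; prompt: string };
  const policy = policyFor(userId);

  // validatabl: is oscar allowed to call this model at all?
  if (!policy.allowedModels.includes(model))
    return res.status(403).json({ error: `${userId} is not allowed to call ${model}` });

  // transformabl: strip customer names before the prompt leaves the building
  const cleaned = redactCustomerNames(prompt);

  // limitabl: has oscar exceeded his quota or budget?
  const spent = quota.get(userId) ?? 0;
  if (spent + cleaned.length > dailyLimit) return res.status(429).json({ error: "quota exceeded" });
  quota.set(userId, spent + cleaned.length);

  // proxyabl: forward to the model oscar's role is supposed to use
  const target = modelForRole(policy.role);

  // explicabl: keep enough to explain the interaction when the auditors show up
  auditLog.push({ ts: new Date().toISOString(), userId, requestedModel: model, target, chars: cleaned.length });

  // ...forward `cleaned` to `target` with your llm client of choice
  res.json({ routedTo: target, prompt: cleaned });
});

app.listen(8080);
```

none of this is exotic, but every one of these checks needs a verified user id to hang off of, which is why identity has to come first.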

ai model access governance - real life

in many companies, llm usage still flows through a single org-wide api key. this means that when oscar pastes that revenue forecast into chatgpt:

— you don’t know that it’s him
— you don’t know what is being sent
— you don’t know if he has permission to access that model or send that data
— you can’t audit the call or prove compliance (e.g., with hipaa or sox)

there’s nothing preventing him from sending it, and no cryptographic proof that it was oscar. just a shared key that could belong to anyone.

why this matters… now

mcp servers are becoming commonplace and making it easy to give llms access to tools and data. but the identity and governance layer is still being pieced together.

this worked fine when llms were isolated experiments. but as they get wired into real tools, data, and workflows, the lack of a standard trust layer becomes a risk and a bottleneck.

what i’m thinking

it occurs to me that identity is just the first of a set of modular components of the same system - an agentic control plane for trust and governance.

the core insights are:

  1. llm apps are three-party systems (user ↔ llm ↔ backend) with no standard identity layer binding them together
  2. identity is a prerequisite for governance.

and i am curious…

curious what others are doing. is your team hitting these issues? how are you solving them?

so… what did your team send to chatgpt today?


david crowe - reducibl.com - working on this at gatewaystack.com
