does your team use llms securely?

2025-12-14

trust boundaries for chatgpt tool access

originally posted to linkedin

a few weeks ago i asked a business leader if they were confident their team uses llms securely.

their answer? “yes, because we use chatgpt enterprise.”

it’s true that chatgpt enterprise is the right foundation for giving your team secure llm access. it addresses the basics: your data isn’t used to train models, you get sso and admin controls, and usage is governed at the workspace level.

this is sufficient when your goal starts and ends with giving your team chatgpt access.

the real value of llms lives beyond access

the real value starts when employees can ask the llm to reach into your internal tools: query company data, call backend apis, and take action on their behalf.

these interactions introduce a new trust boundary. the boundary still includes the employee and chatgpt. but now it includes your backend systems and company data too.

why chatgpt enterprise alone isn’t sufficient (by design)

once an llm can call your backend apis, security is no longer just about chatgpt access. it’s about who the llm is acting for, which data that person is allowed to see, and whether each tool call can be traced back to them.

the linchpin? your backend needs a way to cryptographically identify the employee.

this is the three-party problem: the user, the llm, and your backend have no shared, verifiable identity context.

without this, a manager could ask “summarize performance review feedback for my team” and the llm might pull reviews from across the entire organization, exposing sensitive feedback about employees in other departments.

your backend doesn’t automatically know what “my team” means. it can’t scope the query correctly without verifying the requesting employee’s identity.
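
to make that concrete, here’s a minimal sketch (hypothetical data, python) of the difference a verified caller identity makes. without one there’s nothing to filter on; with one, “my team” becomes a simple scoped query:

    # hypothetical in-memory stand-in for the hr system's review store
    REVIEWS = [
        {"employee": "bob",   "manager": "alice", "feedback": "..."},
        {"employee": "carol", "manager": "alice", "feedback": "..."},
        {"employee": "dave",  "manager": "erin",  "feedback": "..."},
    ]

    def get_reviews_unscoped() -> list[dict]:
        # what a shared api key gets you: no caller identity, so no way to
        # interpret "my team", and the only honest answer is everything
        return REVIEWS

    def get_team_reviews(verified_user_id: str) -> list[dict]:
        # with a cryptographically verified caller, "my team" is just a filter
        return [r for r in REVIEWS if r["manager"] == verified_user_id]

    print(len(get_reviews_unscoped()))     # 3: everyone's reviews
    print(len(get_team_reviews("alice")))  # 2: only alice's direct reports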

what’s missing is a clean, standardized way to let llms act on behalf of users without being trusted with authority themselves. this is a gap i see teams hit when moving from demo to production.
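
the standard building block here is an oauth access token bound to the employee, which your backend verifies for itself. a minimal sketch, assuming rs256-signed tokens from your identity provider (the issuer, audience, and key path below are hypothetical; uses the PyJWT library):

    import jwt  # pip install pyjwt[crypto]

    IDP_PUBLIC_KEY = open("idp_public_key.pem").read()  # hypothetical key file

    def verified_user_id(access_token: str) -> str:
        """return the employee id the token was minted for, or raise."""
        claims = jwt.decode(
            access_token,
            IDP_PUBLIC_KEY,
            algorithms=["RS256"],
            audience="hr-system-api",          # hypothetical audience
            issuer="https://idp.example.com",  # hypothetical issuer
        )
        # authority comes from the user the token was issued for,
        # never from the llm's own credentials
        return claims["sub"]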

chatgpt knows who the user is. you don’t

here’s what actually happens if you rely only on chatgpt enterprise:

    user                    llm                     backend
  (alice)              (chatgpt)               (hr system api)
     │                      │                         │
     │ "show my team's data"│                         │
     ├─────────────────────►│                         │
     │                      │                         │
     │  (alice is logged in │   GET /perfReviewData   │
     │   to chatgpt)        │   shared api key        │
     │                      ├────────────────────────►│
     │                      │                         │
     │                      │ ❌ no user identity     │
     │                      │ ❌ can't filter         │
     │                      │                         │
     │                      │◄────────────────────────┤
     │                      │  returns EVERYONE's     │
     │                      │  performance review     │
     │◄─────────────────────┤        data!            │
     │ shows all perf data  │                         │
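
in code, that insecure hop looks something like this (url and key are hypothetical). every employee’s request reaches the hr api looking identical, so the backend has nothing to scope the response with:

    import requests

    SHARED_API_KEY = "sk-hr-shared-key-123"  # one key for the whole connector

    resp = requests.get(
        "https://hr.example.com/perfReviewData",
        headers={"Authorization": f"Bearer {SHARED_API_KEY}"},
    )
    # the backend only ever sees the shared key, never alice, so it
    # returns everyone's review data regardless of who asked
    print(len(resp.json()))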

you need to cryptographically verify the employee on your backend

here’s the minimum architecture required to make this safe in production:

    user                   llm             missing layer      backend
   (alice)              (chatgpt)            (identity       (hr system
                                            injection)         api)
     │                      │                    │                     │
     │ "show my team's data"│                    │                     │
     ├─────────────────────►│                    │                     │
     │                      │                    │                     │
     │  (alice is logged in │  oauth token       │                     │
     │   to chatgpt)        │  for alice         │                     │
     │                      ├───────────────────►│                     │
     │                      │                    │                     │
     │                      │                    │ ✓ verify            │
     │                      │                    │   alice's id        │
     │                      │                    │                     │
     │                      │                    │ GET /perfReviewData │
     │                      │                    │ X-User-Id:          │
     │                      │                    │   alice_123         │
     │                      │                    ├────────────────────►│
     │                      │                    │                     │
     │                      │                    │                     │ filter by
     │                      │                    │                     │ alice_123
     │                      │                    │◄────────────────────┤
     │                      │◄───────────────────┤   alice's           │
     │◄─────────────────────┤    alice's data    │   data only         │
     │ only shows perf data │        only        │                     │
     │  for alice's team    │                    │                     │
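
a minimal sketch of that “missing layer”, assuming the same rs256 oauth token for alice and a downstream hr api that accepts an X-User-Id header only from this gateway (all urls, claims, and header names are hypothetical):

    import jwt        # pip install pyjwt[crypto]
    import requests

    IDP_PUBLIC_KEY = open("idp_public_key.pem").read()
    HR_API = "https://hr.internal.example.com/perfReviewData"

    def forward_tool_call(user_access_token: str) -> list[dict]:
        # 1. cryptographically verify who the llm is acting for
        claims = jwt.decode(
            user_access_token,
            IDP_PUBLIC_KEY,
            algorithms=["RS256"],
            audience="hr-system-api",
        )
        user_id = claims["sub"]  # e.g. "alice_123"

        # 2. forward the call with the verified identity attached
        resp = requests.get(HR_API, headers={"X-User-Id": user_id})
        resp.raise_for_status()

        # 3. the backend filters on that id, so only alice's team comes back
        return resp.json()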

what end-to-end governance actually requires

to safely expose internal tools and data to llms, you need to ensure every request carries a verified user identity, every backend authorizes against that identity, and every tool call leaves an audit trail.

in short… every tool call is identifiabl, validatabl, and explicabl across system boundaries.
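
and for that last part, a minimal sketch of what “explicabl” can mean in practice: one structured audit record per tool call, tying the verified user to what was requested and what came back (field names are illustrative, not a standard):

    import json, logging, time

    logging.basicConfig(level=logging.INFO)
    audit_log = logging.getLogger("tool_audit")

    def audit_tool_call(user_id: str, tool: str, params: dict, rows: int) -> None:
        audit_log.info(json.dumps({
            "ts": time.time(),
            "user_id": user_id,     # identifiabl: who the llm acted for
            "tool": tool,           # validatabl: which backend capability was used
            "params": params,
            "rows_returned": rows,  # explicabl: what actually came back
        }))

    audit_tool_call("alice_123", "perf_review_lookup", {"scope": "my_team"}, 2)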

curious how others are handling this

have you made internal tools or data available to your team from inside of chatgpt or another llm? if so…


david crowe - reducibl.com - gatewaystack.com
