MCP Auth and Security: OAuth, Scopes, and Enterprise Permissions Guide

OAuth 2.1, scopes, audit logs, and the procurement-grade auth design enterprise buyers expect.

The fastest way to lose an enterprise procurement conversation about an MCP app is to lose the auth conversation. Security teams have spent a decade getting good at evaluating OAuth implementations, scope design, and audit posture in third-party SaaS, and they apply the same lens to MCP – with the added question of how a non-human agent’s behavior is bounded inside the granted permissions. A sloppy auth story is read, correctly, as a sloppy product. A clean one is the floor that lets the rest of the conversation happen.

The MCP specification mandates OAuth 2.1 with PKCE for remote servers, plus Dynamic Client Registration (RFC 7591) and Authorization Server Metadata (RFC 8414). The spec recommends Resource Indicators (RFC 8707) to prevent token-confusion attacks. For enterprise readiness, scopes should be structured per-resource, per-verb, per-sensitivity, with bounded token lifetimes, immediate revocation, customer-facing audit logs, and SSO support for the major identity providers. This guide walks each piece of that bar in detail.

This guide assumes the working vocabulary in our MCP terminology guide and the level distinctions in MCP embedding types.

Auth Design Is Product Strategy

Auth design is not engineering plumbing. It determines who can install you (which clients accept your auth model), what an agent can do at runtime (coarse vs fine scopes), and how procurement reacts (whether the design maps onto reviewers' existing OAuth review checklists). Treat it as a product decision; let engineering execute the design rather than choose it.

The Three MCP Auth Patterns

Three auth patterns dominate in 2026:

| Pattern | Where it's used | Strengths | Weaknesses |
|---|---|---|---|
| OAuth 2.1 with PKCE + DCR | Claude, Gemini, modern ChatGPT install paths | Cleanest model; per-tool consent possible; bounded token lifetimes; clean revocation; spec-compliant | Heavier to implement than API key |
| API key handoff | Cursor and AI-first IDEs; lightweight integrations | Cheap to implement; fast first-ship | Weak revocation; weak per-tool scoping; insufficient for enterprise; not spec-compliant for remote servers |
| Enterprise SSO via host IdP | Microsoft Copilot (Entra ID); Gemini (Google OAuth); Claude enterprise (SAML/SCIM) | Strongest procurement story; admin-controlled distribution; existing enterprise IT mental model | Heaviest implementation; per-client identity provider integration |

A serious MCP app shipping to multiple AI clients implements at least two of these, and probably all three. There is no shortcut. The cost of pretending one auth model fits all clients is shipping to fewer of them than your strategy intended.

OAuth 2.1 + PKCE + DCR Flow

OAuth 2.1 with PKCE (Proof Key for Code Exchange) is the auth pattern mandated by the MCP spec for remote servers. The user is taken through a standard OAuth flow inside the AI client, grants scopes, and the client receives an access token plus a refresh token bound to the user. PKCE protects the public-client flow against authorization code interception attacks.

The full flow

  1. Discovery. AI client fetches your /.well-known/oauth-authorization-server metadata document (RFC 8414) to discover the authorization endpoint, token endpoint, supported scopes, supported grant types, and registration endpoint.
  2. Dynamic Client Registration. AI client POSTs to your registration endpoint (RFC 7591) to register itself, providing redirect URIs and other metadata. Your authorization server returns a client_id and (for confidential clients) a client_secret.
  3. Authorization request. AI client generates a PKCE code verifier and challenge, then redirects the user to your authorization endpoint with response_type=code, code_challenge, code_challenge_method=S256, and the requested scopes.
  4. User consent. User authenticates with your service and grants the requested scopes. Your service redirects back to the AI client’s redirect URI with an authorization code.
  5. Token exchange. AI client POSTs the authorization code (plus the PKCE code verifier) to your token endpoint. You verify the PKCE challenge and return an access token, refresh token, and (optionally) an ID token.
  6. Resource indicator binding. The access token is bound to your MCP server’s resource indicator (RFC 8707), preventing replay against other MCP servers.
  7. Tool invocation. AI client uses the access token in the Authorization: Bearer ... header on JSON-RPC requests to your MCP server.
  8. Refresh. When the access token expires, AI client uses the refresh token to obtain a new one.
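Steps 3 and 5 hinge on the PKCE verifier/challenge pair. A minimal sketch of the S256 method from RFC 7636, assuming a Python implementation (function names are illustrative):

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code verifier and its S256 challenge (RFC 7636)."""
    # 32 random bytes -> 43-char base64url verifier (within the spec's 43-128 range)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    challenge = base64.urlsafe_b64encode(
        hashlib.sha256(verifier.encode()).digest()
    ).rstrip(b"=").decode()
    return verifier, challenge

def verify_pkce(verifier: str, challenge: str) -> bool:
    """Server-side check at token exchange: recompute the challenge and compare."""
    expected = base64.urlsafe_b64encode(
        hashlib.sha256(verifier.encode()).digest()
    ).rstrip(b"=").decode()
    return secrets.compare_digest(expected, challenge)
```

The client keeps the verifier secret until step 5; an attacker who intercepts the authorization code in step 4 cannot exchange it without the verifier.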

OAuth 2.1 + PKCE + DCR is the right default for any MCP app shipping to spec-compliant clients (Claude, Gemini, modern ChatGPT). For enterprise tier on Claude or for Microsoft Copilot, you additionally need to support SAML and SCIM.
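The discovery document fetched in step 1 is a small JSON object. A representative shape, expressed here as a Python dict (all URLs and scope names are placeholders for your own deployment, not normative values):

```python
# Illustrative RFC 8414 metadata, served at /.well-known/oauth-authorization-server.
# Every URL and scope below is a placeholder; only the field names come from the RFC.
OAUTH_SERVER_METADATA = {
    "issuer": "https://auth.example.com",
    "authorization_endpoint": "https://auth.example.com/oauth/authorize",
    "token_endpoint": "https://auth.example.com/oauth/token",
    "registration_endpoint": "https://auth.example.com/oauth/register",  # RFC 7591 DCR
    "revocation_endpoint": "https://auth.example.com/oauth/revoke",
    "scopes_supported": ["crm:customers:read", "crm:customers:write"],
    "response_types_supported": ["code"],
    "grant_types_supported": ["authorization_code", "refresh_token"],
    "code_challenge_methods_supported": ["S256"],  # PKCE is mandatory under OAuth 2.1
}
```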

Per-client variation

Each AI client implements auth differently, with the standing caveat that any specific claim should be verified against current documentation:

| Client | Primary auth | DCR | RFC 8414 | Scope granularity | Notable |
|---|---|---|---|---|---|
| Claude | OAuth 2.1 + PKCE | Yes (mandatory) | Yes | Per-tool consent at install; per-action confirmation | Strongest spec compliance; enterprise tier adds SAML/SCIM |
| ChatGPT | OAuth or API key | Partial | Partial | App-level; per-tool on newer builds | Verify install path before designing scopes |
| Cursor + IDEs | API key dominant; OAuth on remote | Limited | Limited | Permissive; per-server consent | Mature OAuth here is a differentiator |
| Microsoft Copilot | Entra ID | N/A (Entra) | N/A | Admin-granted org-level | Heaviest implementation; strongest procurement story |
| Gemini | Google OAuth | N/A (Google) | N/A | Google's standard scope model | Familiar to enterprise IT |

The practical implication: an MCP app shipping to Claude needs full DCR + RFC 8414 metadata support from day one. A multi-client MCP app needs all three patterns from day one. Building greenfield against a single auth model is fast to ship; it is also the most common reason teams cannot ship to client number two without a substantial rebuild six months later.

Scope Design: Per-Resource, Per-Verb, Per-Sensitivity

The most common scope mistake is the binary scope: "access your data." This is what API-key-era integrations defaulted to, and it is what enterprise procurement now reflexively pushes back on. Three fault lines produce a defensible scope taxonomy:

  1. Per-resource, not per-app. A scope for read tickets is different from a scope for read customers. Granting both should be a deliberate choice, not a side effect of installing the connector.
  2. Per-verb within resource. Within a resource, separate read from write. tickets:read and tickets:write should be distinct scopes; the user should be able to grant one without the other.
  3. Per-sensitivity within verb. Some writes are higher-stakes than others. tickets:write (create and update) should be distinct from tickets:delete. customers:write should be distinct from customers:export. Anything that exfiltrates data or causes irreversible state change deserves its own scope.

Sample scope taxonomy: a CRM MCP app

A representative scope structure for a mid-complexity B2B SaaS MCP app:

# Customer scopes
crm:customers:read
crm:customers:write
crm:customers:delete
crm:customers:export
crm:customers:merge          # irreversible

# Deal scopes
crm:deals:read
crm:deals:write
crm:deals:delete
crm:deals:export
crm:deals:close              # mark won/lost; high-stakes

# Contact scopes
crm:contacts:read
crm:contacts:write
crm:contacts:delete
crm:contacts:export
crm:contacts:bulk_email      # rate-limited; high-stakes

# Note scopes (low-sensitivity attached records)
crm:notes:read
crm:notes:write

# Configuration scopes (admin-level)
crm:settings:read
crm:settings:write           # org-level; admin-only

# Billing scopes (highest sensitivity)
crm:billing:read
crm:billing:write

The taxonomy reflects that delete, export, merge, close, bulk_email, and settings:write are higher-blast-radius than the routine read/write scopes. Procurement teams reviewing this scope list can immediately see what is at stake and which scopes need additional admin approval.

The cost of fine-grained scopes is install-time UX (longer consent screens, more questions). The benefit is two-fold: enterprise procurement moves faster because the scopes map onto reviewers' existing mental models, and the blast radius of any individual mistake is smaller. Most products should err toward fine-grained.
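A fine-grained taxonomy only pays off if it is enforced per invocation. One way to do that is a per-tool scope requirement checked on every call; a sketch, reusing scope strings from the sample taxonomy above (the tool-to-scope mapping is illustrative):

```python
# Map each tool to the single scope that authorizes it (illustrative mapping).
TOOL_SCOPES = {
    "create_customer": "crm:customers:write",
    "delete_customer": "crm:customers:delete",
    "export_customers": "crm:customers:export",
    "read_deals": "crm:deals:read",
}

def authorize_tool_call(tool: str, granted_scopes: set[str]) -> None:
    """Raise unless the granted scopes cover the tool being invoked."""
    required = TOOL_SCOPES.get(tool)
    if required is None:
        raise PermissionError(f"unknown tool: {tool}")
    if required not in granted_scopes:
        # Fail closed: write access does not imply delete or export.
        raise PermissionError(f"tool {tool!r} requires scope {required!r}")
```

The fail-closed default is the point: a user who granted only `crm:customers:write` cannot have the agent wander into `delete_customer`, no matter how the request is framed.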

Tokens, Audit, and Revocation

Token lifetimes

Short-lived access tokens (1 hour or less) with refresh tokens are the right default. Permanent tokens are a procurement red flag.

| Token type | Recommended lifetime |
|---|---|
| Access token (routine scopes) | 30–60 minutes |
| Access token (high-sensitivity scopes) | 15 minutes |
| Refresh token | 30–90 days |
| Re-consent prompt cadence | 90 days for high-stakes scopes |

Refresh should be transparent to the agent and visible to the user. Silent refresh that never surfaces to the user is acceptable for short windows; refresh that extends access indefinitely without re-prompting is not.

JWT vs opaque tokens

Opaque access tokens are the safer default for most MCP apps. Opaque tokens require a database lookup on every request, which gives you immediate revocation – the moment you delete the token from your store, requests using it fail. JWTs are stateless (operationally appealing) but cannot be revoked before they expire without a separate revocation list, which negates most of the JWT advantage.

If you use JWTs, recommended claims include sub (user ID), aud (your MCP server resource indicator), iss (your authorization server), exp (expiration), iat (issued at), client_id (the AI client), scope (granted scopes), and session_id (for cross-call session reconstruction).
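Put together, the recommended claim set looks like this (every value below is a placeholder; only the claim names follow the list above):

```python
import time

now = int(time.time())

# Illustrative JWT claim set for an MCP access token; all values are placeholders.
claims = {
    "sub": "user_1234",                           # user the agent acts on behalf of
    "aud": "https://mcp.example.com",             # RFC 8707 resource indicator
    "iss": "https://auth.example.com",            # your authorization server
    "iat": now,
    "exp": now + 900,                             # 15 minutes: high-sensitivity scopes
    "client_id": "ai-client-abc",                 # the AI client
    "scope": "crm:customers:read crm:deals:read",
    "session_id": "sess_789",                     # cross-call session reconstruction
}
```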

Revocation

Three actors must be able to revoke MCP app access at any time:

  1. The user can revoke through the AI client’s connector or app management UI
  2. The admin (for enterprise installs) can revoke through their identity provider or admin console
  3. Your service can revoke through your own admin tools

Revocation must be effective immediately, not at next token rotation. Many MCP apps have a stated revocation path that, when exercised, leaves stale tokens working for hours. Test this. The gap between policy and practice is the gap that matters when an incident is live.

Audit log requirements

Every action the agent takes through your MCP app should be auditable. The minimum bar:

  • User identity (which user the agent is acting on behalf of)
  • Service account / agent identity (if applicable, especially at level 3)
  • Host AI client (which client’s agent invoked the tool)
  • Session identifier (so a sequence of calls can be reconstructed)
  • Tool invoked (which capability was called)
  • Parameters passed (with what arguments – possibly sanitized for sensitive data)
  • Result (success, failure, error message)
  • Timestamp (with sub-second precision)
  • Source IP and user agent (for forensics)
  • Scope used (which OAuth scope authorized this call)

The reason this matters is procedural rather than technical. When something goes wrong – and at least once a year, for any MCP app touching real data, it will – the question "what did the agent do?" is a customer-facing, sometimes legally meaningful question. A team that can answer it in five minutes from a query against the audit log keeps the customer. A team that says "we'd have to reconstruct from individual logs, give us a few days" loses them.

Common Failure Mode

Audit logs have become the highest-leverage feature for enterprise close rates we have observed in MCP apps shipped over the past year. The most common failure pattern at level 2 is a team that built the log internally and never exposed it to customer admins – the audit exists technically and not procedurally, which is worse than not having it because it produces false confidence. Build the log on day one. Surface it to the customer's admin console, even minimally, by month three.

For SOC 2 Type II–compliant environments, the audit log additionally needs tamper-evident storage (append-only log; cryptographic hashing or chained hashing across entries), retention aligned with the customer's data retention requirements (typically 1–7 years), and access controls on the log itself.
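Chained hashing for tamper evidence can be as simple as folding each entry's hash into the next. A minimal sketch of the idea, not a substitute for a vetted implementation:

```python
import hashlib
import json

GENESIS = "0" * 64  # fixed starting value for the chain

def chain_entries(entries: list[dict]) -> list[dict]:
    """Append to each entry a hash covering the entry and its predecessor's hash."""
    prev = GENESIS
    out = []
    for entry in entries:
        body = json.dumps(entry, sort_keys=True)
        digest = hashlib.sha256((prev + body).encode()).hexdigest()
        out.append({**entry, "prev_hash": prev, "hash": digest})
        prev = digest
    return out

def verify_chain(chained: list[dict]) -> bool:
    """Recompute every link; any edited, reordered, or removed entry breaks it."""
    prev = GENESIS
    for rec in chained:
        body = {k: v for k, v in rec.items() if k not in ("prev_hash", "hash")}
        digest = hashlib.sha256(
            (prev + json.dumps(body, sort_keys=True)).encode()
        ).hexdigest()
        if rec["prev_hash"] != prev or rec["hash"] != digest:
            return False
        prev = digest
    return True
```

Because each hash covers the previous one, an attacker who edits entry N must rewrite every hash from N onward, which an externally anchored head hash makes detectable.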

SOC 2 mapping

MCP apps shipping into enterprise commonly need SOC 2 Type II attestation. The mapping of MCP-specific security work to SOC 2 trust services criteria:

| SOC 2 Criterion | What it requires | MCP-specific evidence |
|---|---|---|
| CC6.1 Logical access controls | Restrict access to information and IT systems | OAuth scopes; per-tool consent; revocation procedures |
| CC6.2 New users, periodic review | Onboard/offboard with appropriate access | Token lifecycle; consent re-prompts; revocation logs |
| CC6.3 Access provisioning | Grant access based on role | Scope taxonomy; admin-vs-user scope distinctions |
| CC6.6 Encrypted transmission | Encrypt data in transit | TLS for all MCP transport; bearer tokens in HTTPS only |
| CC6.7 Restrict transmission of information | Restrict data movement to authorized parties | Resource indicators; scope-based data access |
| CC7.2 System monitoring | Monitor for security events | Audit log; anomaly detection on tool invocation patterns |
| CC7.3 Incident response | Detect and respond to incidents | Revocation procedures; audit forensics; communication plan |

For most MCP apps the auth and audit work above maps cleanly onto SOC 2 controls. The work to add for SOC 2 compliance specifically is the documentation, evidence collection, and audit by a third-party assessor – typically 6–9 months and $50K–$150K for first-time SOC 2 Type II attestation.

Threat Model and Enterprise Readiness

Threat model

A working threat model for MCP apps shipping in 2026:

| Threat | Likelihood | Mitigation |
|---|---|---|
| Token theft via AI client compromise | Low–medium | Short-lived access tokens; immediate revocation; bind tokens to client_id and resource indicator |
| Scope sprawl (user grants too much at install) | High | Fine-grained scopes; clear consent UI; per-tool consent where supported |
| Confused-deputy attack | Medium | Treat agent inputs as untrusted; validate parameters server-side; never trust the agent's framing of user intent |
| Prompt injection causing unintended tool invocation | Medium–high | Intent-preview UI; high-stakes scopes require user confirmation per-action; rate limits |
| Replay attack (token replayed against different MCP server) | Low–medium | RFC 8707 Resource Indicators; bind tokens to specific MCP server identity |
| OAuth client impersonation | Low | Strict redirect URI validation; PKCE; verify client_id matches registered client |
| Audit log gap | High | Per-invocation logging from day one; expose to customer admins; tamper-evident storage |
| Stale revocation (revoked tokens still working) | High | Test revocation latency; use opaque tokens; if JWTs, maintain revocation list |
| Privilege escalation via scope inheritance | Medium | No scope inheritance; explicit grants per scope; admin scopes require step-up auth |

The two threats most often missed in initial designs are the confused-deputy attack and prompt injection. Both arise from the agent acting on parameters or framing it received from an untrusted source. Server-side validation of every parameter – not trusting the agent’s interpretation of intent – is the primary defense.
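Server-side validation means re-deriving authorization from facts the server established itself, never from the agent's stated intent. A sketch of that confused-deputy defense (the ticket-ownership table, function name, and rules are all illustrative):

```python
# Illustrative server-side record of which tenant owns which ticket.
TICKET_OWNERS = {101: "tenant_a", 202: "tenant_b"}

def validate_delete_ticket(params: dict, session: dict) -> int:
    """Validate agent-supplied parameters against server-side truth.

    `session` holds facts the server established itself (granted scopes,
    tenant); nothing the agent sent is trusted beyond its literal values.
    """
    ticket_id = params.get("ticket_id")
    if not isinstance(ticket_id, int) or ticket_id <= 0:
        raise ValueError("ticket_id must be a positive integer")
    if "tickets:delete" not in session["scopes"]:
        raise PermissionError("tickets:delete scope not granted")
    # Ownership is checked against the server's own records, not against the
    # agent's claim that "the user asked to delete this".
    if TICKET_OWNERS.get(ticket_id) != session["tenant_id"]:
        raise PermissionError("ticket belongs to a different tenant")
    return ticket_id
```

A prompt-injected request that names another tenant's ticket fails the ownership check regardless of how persuasively the agent frames the user's intent.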

Enterprise readiness checklist

If you are selling into enterprise, the auth and security shape of your MCP app needs to clear this bar before procurement begins, not during it:

  • OAuth 2.1 with PKCE as a primary auth path
  • Dynamic Client Registration (RFC 7591) supported
  • Authorization Server Metadata (RFC 8414) discovery document published
  • Resource Indicators (RFC 8707) used to bind tokens to MCP server identity
  • Scope taxonomy is per-resource, per-verb, per-sensitivity
  • Sensitivity axis genuinely separates high-blast-radius operations from routine ones
  • Token lifetimes are bounded (access tokens ≤1 hour)
  • Refresh is transparent to the user; silent windows are short
  • Re-consent prompts run on a defined cadence for high-stakes scopes
  • Revocation is effective immediately (and tested)
  • Audit logging captures user, client, session, tool, parameters, result, scope, IP, user-agent
  • Customer-admin-facing audit log surface exists in your product
  • Audit log uses tamper-evident storage (append-only, cryptographic chaining)
  • SSO via Entra ID, Okta, and Google Workspace is supported (at minimum)
  • SAML 2.0 supported
  • SCIM provisioning supported
  • SOC 2 Type II or equivalent third-party attestation is current
  • Documented incident response process for compromise scenarios
  • Penetration test report from the past 12 months available under NDA

The two items teams routinely think they have and don’t are the customer-facing audit log surface and the SCIM provisioning. Both are pre-procurement work, not post-. Both consistently get pushed past the first ship and consistently delay the first enterprise close by a quarter.

Pre-Procurement Diligence Questions

If a customer's security team asks you these questions and you cannot answer with documented evidence, you have work to do before procurement. Walk through your auth design. Show the scope taxonomy. Demonstrate revocation latency end-to-end. Pull a sample audit log entry. Demonstrate the customer-admin audit surface. Walk through your SOC 2 control mapping. Each of these is a question security teams know how to ask in their sleep – and treating any of them as "we'll figure that out after we close the deal" is how the deal stops closing.

Common MCP Auth Mistakes

Three patterns recur in MCP auth implementations that teams later regret:

  • Auth retrofit. Shipping on API keys to move fast, then retrofitting OAuth when enterprise demand materializes. The retrofit is painful, breaks existing installs, and consumes a quarter of feature work. Fix: design for OAuth from day one even if the first ship uses API keys.
  • Scope sprawl. Shipping with one or two scopes, adding tools quickly, never re-examining the scope structure. Eighteen months in, the MCP app has fifty tools and three scopes. Fix: scope review every quarter, with new tools mapped to the right scope deliberately.
  • Auth-as-blocker. Treating auth as a blocker to ship, deferring to engineering, ending up with a design that constrains future choices. Fix: treat auth as a first-class design problem owned by product.

Two additional mistakes worth naming:

  • Skipping Resource Indicators. RFC 8707 Resource Indicators bind tokens to a specific MCP server identity, preventing replay against other servers. Many early MCP implementations skipped this. The cost is a class of token-confusion attacks that are easy to mitigate but hard to recover from after a compromise.
  • Audit log without admin UI. Building the audit log internally but not exposing it to customer admins. The log exists technically; procedurally it is invisible to the buyer’s security team.

Auth and security choices interact tightly with embedding depth (MCP embedding types) and the build-vs-buy decision (MCP build vs buy). Get the auth right and the rest of the work has somewhere to land.

Frequently Asked Questions

Does MCP use OAuth?

Yes – the MCP specification mandates OAuth 2.1 with PKCE for remote MCP servers, plus Dynamic Client Registration (RFC 7591) and Authorization Server Metadata (RFC 8414). API keys are still common in local-server contexts (Cursor, AI-first IDEs) but are not spec-compliant for remote MCP servers in 2026.

What scopes should my MCP app request?

Request scopes per-resource, per-verb, per-sensitivity. A CRM MCP app should have separate scopes for customers:read, customers:write, customers:delete, customers:export, and customers:merge rather than a single crm:access scope. Fine-grained scopes are slower at install but faster through procurement.

Is MCP HIPAA / SOC 2 compliant?

The MCP protocol itself is not certified for any compliance regime; certification applies to your MCP app's implementation. To meet HIPAA, SOC 2, GDPR, or other regimes, your MCP app's auth, audit, encryption, and data handling must meet the regime's requirements. Most enterprise-grade MCP apps in 2026 hold SOC 2 Type II.

Can I use API keys for my MCP app?

You can, but only for clients that support API key install (primarily Cursor and AI-first IDEs running local servers). API keys are not spec-compliant for remote MCP servers and are insufficient for enterprise procurement. For OAuth-supporting clients with remote-server requirements, OAuth 2.1 + PKCE + DCR is the right choice.

What is Dynamic Client Registration in MCP?

Dynamic Client Registration (RFC 7591) is the OAuth feature that lets an AI client register itself with your authorization server programmatically, rather than requiring you to manually provision a client_id for each AI client. The MCP spec mandates DCR for spec-compliant remote servers. Without DCR, you cannot ship a generally-installable MCP app to clients like Claude that expect spec compliance.

How do I revoke access to my MCP app?

Three revocation paths must work: user-initiated (through the AI client's connector management), admin-initiated (through the customer's identity provider for enterprise installs), and service-initiated (through your own admin tools). All three should be effective immediately; verify this works in practice.

What is per-tool consent in MCP?

Per-tool consent is a UX pattern where, at install, the user sees a list of the specific tools the MCP app exposes (create_ticket, delete_customer, etc.) and can grant or deny access to individual tools. Claude has the strongest per-tool consent UX in 2026; other clients are less granular.

How often should MCP tokens be refreshed?

Access tokens should expire within 30–60 minutes (15 minutes for high-sensitivity scopes); refresh tokens within 30–90 days. Re-consent prompts every 90 days are appropriate for high-stakes scopes; lower-stakes scopes can run longer between re-consents.

Does MCP work with SSO?

Yes, for clients that support enterprise SSO. Microsoft Copilot delegates to Entra ID; Gemini to Google OAuth; Claude's enterprise tier supports SAML and SCIM. Enterprise MCP app sales typically require SSO support for the major IdPs.

Should I use JWTs or opaque tokens?

Opaque tokens are the safer default for most MCP apps because they support immediate revocation. Use JWTs only when token lifetimes are very short (≤5 minutes) or when you have a specific high-throughput need and have built a JWT revocation list.

What is RFC 8707 (Resource Indicators) in MCP?

Resource Indicators is an OAuth extension that binds a token to a specific resource – in MCP's case, your MCP server's identity. Without resource indicators, a token issued for one MCP server could be replayed against another, causing a token-confusion attack. The MCP spec recommends RFC 8707; spec-compliant clients enforce it.
