MCP Client Decision Matrix
All major AI clients support MCP as of mid-2026 – Claude, ChatGPT, Cursor, Microsoft Copilot, Gemini, Perplexity – but they differ sharply in distribution model, auth pattern, audience, and discovery. The protocol itself is standardized (JSON-RPC 2.0, three primitives, three transports), but two clients that both support MCP can still be radically different distribution channels. Choosing where to ship first is one of the highest-leverage product decisions in an MCP roadmap, and the matrices most product teams build for themselves stop at “which clients support MCP.” That is not the question that moves the decision.
This guide compares the six clients that account for the bulk of professional MCP-mediated use across the eight dimensions that consistently matter. It uses the working vocabulary in our MCP terminology guide: an MCP app is the user-installable artifact, an MCP server is the engineering artifact underneath, a tool is a single capability the server exposes.
Snapshot, Not Forecast
Last verified: May 2026. The MCP client landscape moves faster than this page can be updated; we re-verify quarterly. If you are about to make a meaningful product investment based on what is below, confirm specifics with each vendor. The dimensions most likely to shift between updates are monetization (no client has a mature paid-app pathway yet) and agent-led routing (the discovery channel growing fastest).
Which AI Clients Support MCP in 2026
All major AI clients support MCP as of mid-2026. Long-tail vertical AI clients (legal, medical, sales-specific) are adopting at varying rates. The protocol itself is standardized; the differences across clients are in distribution, auth, audience, and discovery, not in whether the protocol works.
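"Standardized" here means the wire format is identical everywhere: every client speaks JSON-RPC 2.0 to your server, regardless of transport. As an illustration, a `tools/call` request looks the same whether it comes from Claude or ChatGPT (the tool name and arguments below are hypothetical):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_contracts",
    "arguments": { "query": "renewal date" }
  }
}
```

The client-to-client differences this guide covers all live above this layer.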
The clients in scope for this comparison:
- Claude (Anthropic) – first-class MCP support since 2024 launch
- ChatGPT (OpenAI) – first-class as of 2025, consolidated from earlier Plugin/GPT framework
- Cursor + AI-first IDEs (Windsurf, Cline, Continue) – first-class, treats MCP as primary extension model
- Microsoft Copilot – supported via Copilot agent framework
- Gemini (Google) – first-party servers from late 2025; third-party support firming through 2026
- Perplexity – supported, retrieval-focused
The Eight Dimensions That Matter
If you are deciding which AI clients to ship to and in what order, the questions that actually move the decision are: how does each client distribute MCP apps, what is its auth model, who is its audience, and how does discovery work inside it.
The eight dimensions:
- MCP support. Whether the client supports MCP, what version of the spec it implements, whether support is first-class or retrofitted.
- User-facing term. What the client calls an installable MCP app inside its own product. Determines marketing language.
- Distribution mechanism. How a user gets your MCP app installed: curated marketplace, manual install via configuration, or admin-controlled enterprise distribution.
- Discovery. How a user finds your MCP app among alternatives: marketplace search, featured slots, agent-led routing, external catalogs.
- Auth model. OAuth 2.1 with PKCE, API-key handoff, or enterprise SSO via the host’s identity provider.
- Permissions granularity. Whether access is granted at the MCP app level, the tool level, or per-action at runtime.
- Audience. Who actually uses this client – consumer, prosumer, developer, enterprise knowledge worker, vertical specialist.
- Monetization pathway. Whether and how an MCP app developer can charge for use.
The matrix
| Client | MCP support | User-facing term | Distribution | Discovery | Auth model | Permissions | Audience | Monetization |
|---|---|---|---|---|---|---|---|---|
| Claude (Anthropic) | First-class, since 2024 launch; tracks current spec | Connector | In-product connector marketplace + manual config (claude.json) | Marketplace search, editorial featured, agent-led suggestion | OAuth 2.1 + PKCE; Dynamic Client Registration; SAML/SCIM on enterprise tier | Per-tool consent at install; per-action confirmation for mutations | Prosumer + enterprise; strong dev adoption | No first-class paid-app billing yet; vendor-side billing common |
| ChatGPT (OpenAI) | First-class as of 2025; consolidated from Plugin/GPT framework | App | Consolidated GPT/MCP app store + manual install paths | Store search, featured slots, prompt-led routing | OAuth or API key depending on install path | App-level scopes; per-tool consent on newer builds | Broad consumer + prosumer; growing enterprise via Teams/Enterprise | App store with revenue share emerging; details still firming |
| Cursor + AI-first IDEs | First-class; treats MCP as primary extension model | MCP server | Manual install via config file (mcp.json); community catalogs; one-click install URLs | Community catalogs, GitHub, word-of-mouth, Cursor’s directory | API key dominant; OAuth supported on remote servers | Largely permissive; per-server install consent | Developers, AI-first engineering teams | No first-party billing; open-source norms dominate |
| Microsoft Copilot | Supported via Copilot agent framework | Agent / Copilot extension | IT admin-controlled deployment via Microsoft 365 admin center | Internal corporate catalogs; Microsoft AppSource | Entra ID; SAML; SCIM | Admin-granted org-level scopes; per-user consent for sensitive scopes | Enterprise knowledge workers, Microsoft 365 customers | AppSource billing; co-sell programs |
| Gemini (Google) | First-party servers from late 2025; third-party support expanding through 2026 | Connector / extension (terminology in flux) | Workspace admin distribution; account-level installs for individuals | Workspace marketplace; Google search-led discovery | Google OAuth | Scope-based (Google’s standard model) | Workspace customers, Google account holders | Workspace marketplace billing pathways |
| Perplexity | Supported, focused on retrieval and research-flow tools | Connector | In-product, lightweight install | Curated; small surface | OAuth | App-level | Research-heavy professionals, prosumer | Limited; product-led growth focus |
Transport and Auth Support Per Client
The MCP spec defines three transports (stdio for local servers, SSE for early remote servers, streamable HTTP for current remote servers). Client support varies, and this affects which hosting model you can use.
Transports
| Client | stdio (local) | SSE (legacy remote) | Streamable HTTP (current remote) |
|---|---|---|---|
| Claude desktop | Yes | Yes | Yes |
| Claude.ai (web) | No | Yes | Yes |
| ChatGPT | Limited (developer mode) | Yes | Yes |
| Cursor | Yes | Yes | Yes |
| Microsoft Copilot | No | Yes | Yes |
| Gemini | Limited | Yes | Yes |
| Perplexity | No | Yes | Yes |
For a remote MCP server in 2026, streamable HTTP is the right default. SSE is supported across all clients but is the older transport and is slowly being deprecated. Stdio matters only for desktop clients and developer-mode integrations.
Auth and security capabilities
| Client | OAuth 2.1 | PKCE | Dynamic Client Registration | Authorization server metadata | Enterprise SSO | Per-tool consent |
|---|---|---|---|---|---|---|
| Claude | Yes | Required | Yes | Yes (RFC 8414) | SAML, SCIM (enterprise tier) | Yes, at install |
| ChatGPT | Yes (modern install paths) | Yes | Partial | Partial | OIDC | App-level + emerging per-tool |
| Cursor | Yes (remote servers) | Yes | Limited | Limited | Limited | No native UI |
| Microsoft Copilot | Via Entra ID | Yes | N/A (Entra-managed) | N/A (Entra-managed) | Native (Entra) | Admin-granted org-level |
| Gemini | Via Google OAuth | Yes | N/A (Google-managed) | N/A | Native (Google Workspace) | Scope-level |
| Perplexity | Yes | Yes | Limited | Limited | No | App-level |
The practical implication: an MCP app shipping to Claude needs full DCR and RFC 8414 metadata support from day one, and a multi-client MCP app needs all three auth patterns (OAuth 2.1 + PKCE, API-key handoff, and enterprise SSO). For deep auth design guidance, see MCP auth and security.
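On the OAuth 2.1 side, PKCE is mandatory across the modern install paths, and it is the one piece teams most often hand-roll. A minimal sketch of generating a verifier/challenge pair per RFC 7636 using only the Python standard library (the function name is ours, not from any SDK):

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    # RFC 7636: the verifier is 43-128 characters from the unreserved set.
    # 32 random bytes base64url-encoded (padding stripped) yields 43 chars.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    # S256 method: challenge = BASE64URL(SHA-256(verifier)), no padding.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The challenge goes in the authorization request; the verifier is sent
# only in the token exchange. Both are 43 characters for 32-byte inputs.
```

In practice the official MCP SDKs handle this for you; the sketch is here because auth reviews (Claude's in particular) check that S256, not plain, is the challenge method in use.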
Per-Client Decisions and Gotchas
Should I ship my MCP app to Claude?
Ship to Claude first if your buyer is prosumer or knowledge-worker. Claude has the most coherent buyer experience for connectors, is the strongest default if you only ship to one client, and its editorial featured slots meaningfully drive distribution.
Distinguishing characteristics: in-product connector marketplace, OAuth 2.1 + PKCE with mandatory Dynamic Client Registration (the spec compliance bar is the highest in the table), per-tool consent at install, agent-led suggestion (Claude itself surfacing your connector mid-conversation when relevant), strong adoption among developers and knowledge workers.
Claude Gotchas
Connector marketplace review takes 2–3 weeks for new submissions; rejections most often cite scope-design issues or unclear tool descriptions. Per-tool consent screens get long with many tools – connectors with 30+ tools have noticeably higher abandonment at install. Group tools by capability or split into multiple connectors. Enterprise tier has additional review requirements (SAML SSO support, admin install support, audit log surface).
Should I ship my MCP app to ChatGPT?
Ship to ChatGPT first if your audience is broadly consumer or prosumer and raw audience size matters more than depth of feature support. ChatGPT has the largest user base in the table by a wide margin, and the consolidated GPT/MCP app store is the most mature MCP marketplace as of 2026.
Distinguishing characteristics: consolidated app store, OAuth or API key depending on install path, store search + featured slots + prompt-led routing, broadest audience.
Gotchas: legacy Custom GPT documentation still leaks into developer docs (verify which install path your app actually uses); revenue share program details for the app store have been firming through 2026; featured-slot positioning is the dominant distribution lever and the long tail is hard.
Should I ship my MCP app to Cursor and the AI-first IDEs?
Ship to Cursor (and Windsurf, Cline, Continue) first only if your product is a developer tool. The AI-first IDEs are the right surface for developer-tooling companies and almost no one else; the audience does not match most B2B SaaS buyer profiles.
Distinguishing characteristics: manual install via mcp.json config file, community catalogs, no marketplace gatekeeper, API-key-dominant auth.
Gotchas: the configuration file is the install vector – users manually edit mcp.json (error-prone, higher install-fail rate than marketplace clients); one-click install URLs (cursor://...) reduce this and should be provided where possible; Cline, Continue, and Windsurf have similar install patterns but slightly different config schemas; DevRel matters here more than in any other client.
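To make the manual-install path concrete, a minimal mcp.json entry might look like the following, showing both a remote server and a local one (server names, URL, and package are hypothetical; field names follow Cursor's documented shape, and as noted above the other IDEs use slightly different schemas, so verify against each client's current docs):

```json
{
  "mcpServers": {
    "acme-contracts": {
      "url": "https://mcp.example.com/mcp"
    },
    "acme-local-dev": {
      "command": "npx",
      "args": ["-y", "@acme/mcp-server"]
    }
  }
}
```

Shipping a ready-to-paste snippet like this in your docs, alongside a one-click install URL, is the cheapest way to cut the install-fail rate on this client.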
Should I ship my MCP app to Microsoft Copilot?
Ship to Copilot first if your product sells into enterprise and your buyers already have a Microsoft 365 footprint. Copilot is the strongest enterprise distribution surface in the table, with the heaviest implementation requirements and the most mature procurement story.
Distinguishing characteristics: IT admin-controlled deployment via Microsoft 365 admin center, Entra ID identity, AppSource marketplace, enterprise knowledge worker audience, AppSource billing + co-sell programs.
Copilot Gotchas
Entra ID setup is the longest pole – plan for 2–4 weeks of identity-integration work even with experienced engineering. AppSource review can take 6–12 weeks for new MCP-app submissions; the review is more rigorous than other clients (security, accessibility, compliance). Copilot's MCP support sits inside the larger Copilot Studio + agents framework, which has its own concepts (knowledge sources, topics, actions) that overlap MCP terminology – expect a translation layer in conversations with Microsoft.
Should I ship my MCP app to Gemini?
Ship to Gemini first if your audience is Workspace-heavy. Gemini supports MCP across Google Workspace properties, with first-party MCP servers shipped by Google for Drive, Gmail, Calendar, Chat, and Chrome DevTools through late 2025 and 2026.
Distinguishing characteristics: Workspace admin distribution, Google OAuth, Workspace marketplace + Google search-led discovery, Workspace customer audience.
Gotchas: third-party MCP support is real but firming through 2026 (Google’s first-party servers shipped first, third-party path lags – verify current state before committing roadmap); Workspace admin distribution requires a Workspace customer account on the buyer side; feature parity with Anthropic is uneven.
Should I ship my MCP app to Perplexity?
Ship to Perplexity first only if your product augments research, retrieval, or a vertical-data workflow. Narrower surface than the other clients, focused on research-heavy professional use.
Time to First Install and What Will Change First
Realistic deployment timelines
Time from a working MCP server to first user install per client, assuming an experienced team with the server already built:
| Client | Time to first install |
|---|---|
| Cursor + AI-first IDEs | 1–2 days (no marketplace review; just publish install URL or community catalog entry) |
| Claude (consumer connectors) | 2–4 weeks (marketplace review, scope review) |
| Claude (enterprise tier) | 4–8 weeks (additional review for SAML/SCIM, admin distribution) |
| ChatGPT | 3–6 weeks (app store review; revenue share enrollment if applicable) |
| Gemini | 2–6 weeks (Workspace marketplace review) |
| Perplexity | 1–3 weeks (lightweight curation) |
| Microsoft Copilot + AppSource | 6–12 weeks (rigorous AppSource review) |
These ranges assume the server is built and OAuth is working. Full timeline from project kickoff to first user install runs 8–24 weeks depending on the client and build complexity.
Monetization landscape
Monetization is the weakest dimension in the matrix as of 2026, with only partial exceptions:
- Microsoft AppSource and Workspace marketplace (Gemini): mature billing infrastructure inherited from existing Microsoft / Google channels
- ChatGPT app store: revenue share programs emerging, details still firming
- Claude, Cursor, Perplexity: no first-class paid-app billing
For most B2B products in 2026, the right monetization path is your existing subscription. The MCP app is a distribution surface, not a billing channel. Revisit in twelve to eighteen months.
What will change first
Two dimensions are most likely to shift between quarterly updates of this matrix:
- Monetization. Every client knows it needs a story; none has fully shipped one. Expect meaningful changes in 2026 and 2027. The first client to ship a credible developer-revenue model will reshape distribution math.
- Agent-led routing. The degree to which the AI client itself recommends MCP apps to users mid-conversation. Tool description quality pays off disproportionately under agent-led routing, in a way it does not yet under marketplace search.
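Because agent-led routing decisions are made from tool metadata, the description field is effectively your search ranking. A sketch of a well-described tool as it would appear in a tools/list response (the tool name and fields of the inputSchema are hypothetical; the overall shape follows the MCP spec):

```json
{
  "name": "search_contracts",
  "description": "Search the contract repository by counterparty, clause type, or renewal date. Use when the user asks about contract terms, obligations, or upcoming renewals.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "query": { "type": "string", "description": "Free-text search over contract contents" }
    },
    "required": ["query"]
  }
}
```

The "use when" sentence is the part agent-led routing actually consumes; descriptions that only restate the tool name get routed past.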
Reading the matrix when you choose where to ship
A few patterns recur across the engagements we work on, and they are the patterns most absent from the matrices product teams build for themselves.
Audience first, mechanics second. The single most common mistake is choosing a client because the developer experience is good rather than because the buyer is there. Cursor has the most permissive distribution model in the table, and for most B2B SaaS products it is the wrong place to start, because the buyer of a CRM or a contracts platform is not an AI-first engineer using Cursor.
Distribution model determines distribution work. A marketplace-distributed client (Claude, ChatGPT, AppSource) means submitting, getting reviewed, optimizing for store search, and competing for featured slots – work that looks more like App Store optimization than like API integration. A manual-install client means a different motion entirely.
Auth model is a procurement signal. Enterprise buyers have radically different reactions to “OAuth with per-tool consent” versus “API key in a config file” versus “Entra-ID-mediated admin install.” If you intend to sell into enterprise, the clients with mature auth and admin-controlled distribution are higher-leverage even when the developer experience is heavier.
For the strategic question of which clients to ship to, this matrix is most useful read alongside the MCP strategy decision framework. For the technical question of what auth model to design for, see MCP auth and security. For build-vs-buy, see MCP build vs buy.
A matrix is a snapshot. The strategy is the read.
Related Guides
- What to Call MCP Apps: Terminology Guide – Working vocabulary for product teams shipping in 2026
- How to Embed Your App in AI Clients with MCP – The full strategic framework
- MCP Strategy Decision Framework – Should your app be in AI clients in 2026?
- MCP Embedding Types Explained – Read-only, actions, agent-resident
- MCP Auth and Security – OAuth 2.1, scopes, audit logs, enterprise readiness
Frequently Asked Questions
Which AI clients support MCP?
All major AI clients support MCP as of mid-2026: Claude, ChatGPT, Cursor (and other AI-first IDEs like Windsurf, Cline, Continue), Microsoft Copilot, Gemini, and Perplexity. The protocol itself is standardized across these clients; differences are in distribution, auth, audience, and discovery.
Is Claude or ChatGPT better for MCP apps?
Claude has the more coherent connector experience, stronger agent-led discovery, and the most spec-compliant OAuth implementation. ChatGPT has the larger raw audience and the more mature app store. For prosumer and knowledge-worker audiences, Claude is typically the stronger first ship. For consumer-facing products, ChatGPT typically wins on reach.
Does Microsoft Copilot support MCP?
Yes, through the Copilot agent framework (Copilot Studio + agents). Distribution is admin-controlled via the Microsoft 365 admin center, auth uses Entra ID, and AppSource provides the marketplace. Implementation overhead is higher than other clients; procurement story is stronger.
Does Cursor support MCP?
Yes, as a first-class extension model. Cursor and other AI-first IDEs (Windsurf, Cline, Continue) treat MCP as the primary extension surface. Auth is predominantly API-key based for local servers; OAuth is supported for remote servers. Distribution is via manual install (mcp.json config) and community catalogs.
Can one MCP app work in all AI clients?
The MCP protocol is standardized, so the underlying server can be reused. But each client has its own auth model, distribution mechanism, terminology, and metadata standards. Expect 30–60% additional work per added client even with full server reuse.
Which MCP client has the largest user base?
ChatGPT has the largest raw audience among the major AI clients in 2026, by a meaningful margin. Claude has the most engaged prosumer audience. Cursor leads in AI-first developer audience. Microsoft Copilot has the largest enterprise footprint via Microsoft 365 distribution.
What MCP transport should I use?
For local servers, stdio. For remote/hosted servers (the typical SaaS pattern), streamable HTTP is the right default in 2026. SSE is supported but is the older transport and is slowly being deprecated for new builds.
Does MCP support Dynamic Client Registration?
Yes – the MCP spec specifies OAuth 2.1 with Dynamic Client Registration (RFC 7591) and authorization server metadata discovery (RFC 8414) for remote servers. Claude has the strongest DCR support in 2026; other clients have partial implementations.
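Concretely, a DCR request under RFC 7591 is a POST to the authorization server's registration endpoint with a JSON body; a minimal sketch (client name and redirect URI are illustrative):

```json
{
  "client_name": "Example MCP Client",
  "redirect_uris": ["https://client.example.com/callback"],
  "grant_types": ["authorization_code"],
  "token_endpoint_auth_method": "none"
}
```

The server responds with a generated client_id, which is what lets MCP clients connect to arbitrary servers without pre-registered credentials.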