Connecting Software is Painful
If you've built an AI application in the past year - or any API integration before that - you've likely run into the same set of problems. Each data source requires a custom connector. Every API has its own authentication flow. Standards are loosely followed. And the maintenance burden compounds with every new integration.
This fragmentation isn't just a technical annoyance - it's a business problem with real costs. But it also presents opportunities for business expansion.
If you're dealing in data, digital subscriptions or SaaS products, MCP should be on your radar.
MCP: The USB-C Moment for LLMs
The Model Context Protocol (MCP) is a universal, open standard that enables AI applications to connect to data sources and tools through a single, standardized interface.
Introduced by Anthropic in November 2024, MCP was built to address the growing ecosystem of 'homegrown' solutions for connecting various data sources to LLMs. It started as a protocol mainly known to developers, but it is steadily becoming the standard for connecting things in the AI ecosystem.
If you've used Claude, ChatGPT or similar AI tools, you've likely been relying on MCP servers to connect your favourite tools to the LLM without noticing; and as of October 2025, ChatGPT apps are powered exclusively by MCP.
From Anthropic to the Open Source Community
In a significant move for the AI ecosystem, Anthropic donated MCP to the Agentic AI Foundation in December 2025 — a new directed fund under the Linux Foundation. The foundation was co-founded by Anthropic, Block, and OpenAI, with support from Google, Microsoft, Amazon Web Services, Cloudflare, and Bloomberg.
As Anthropic's Chief Product Officer Mike Krieger stated, this donation ensures MCP will remain "open, neutral, and community-driven" as it evolves into critical AI infrastructure.
The adoption has been remarkable. Within its first year, MCP has achieved:
- Over 10,000 active public MCP servers
- Integration into major AI platforms (ChatGPT, Cursor, Gemini, Microsoft Copilot)
- Support from all major cloud providers (AWS, Google Cloud, Microsoft Azure)
The Context Problem
Before diving into how MCP works, it's worth understanding why context matters so much for LLM performance.
Modern language models are incredibly capable, but they're fundamentally limited by their context window—the amount of information they can "see" at once. When you ask an LLM a question, it can only use:
- Its training data (frozen at a point in time)
- The context you provide in the conversation
- Any tools or data sources it can access
The third point is where most AI applications start to become truly powerful. An LLM without access to your customer data, internal documents, or real-time information might be a really good chat partner - but if you want to deliver consistent results grounded in real information, or even automate workflows, you need to provide more and better context.
What Happens When AI Loses the Thread
Context loss is subtle but devastating. Imagine you're having a conversation with an AI assistant about a customer account. You ask:
- "What's the status of the Acme Corp deal?"
- "Show me their recent support tickets"
- "Do we see signals of increased usage?"
- "Draft a follow-up email based on their concerns"
If each of these queries goes to a different system—or worse, requires switching between AI models—the context is lost. The AI might draft an email that doesn't reference the support tickets. Or it might lose track of which customer you're discussing.
A Stylized Example: Automated Lead Processing with MCP
Let's walk through a concrete example of how MCP enables automated, context-rich workflows.
Scenario: An AI agent automatically processes new leads as they enter your CRM. Here's what happens when a webhook fires indicating a new lead from Acme Industries:
Trigger: New lead detected in CRM
AI Agent (autonomous workflow via MCP):
Step 1: Lead Enrichment
├─ Queries CRM server → Basic contact info, form submission data
├─ Queries LinkedIn server → Company size, recent news, decision makers
├─ Queries company database server → Past interactions, similar customers
└─ Builds enrichment profile with full context
Step 2: Automated Qualification (maintaining context from Step 1)
├─ References enrichment data without re-querying
├─ Queries deal scoring server → Calculates fit score based on ICP
├─ Queries calendar server → Checks sales team capacity
├─ Queries historical data server → Win rate for similar profiles
└─ Assigns priority score and routing decision
Step 3: Intelligent Outreach (maintaining full context from Steps 1-2)
├─ References all previous enrichment and qualification context
├─ Queries email template server → Selects best template for this profile
├─ Queries recent news server → Finds timely hooks (funding, hiring, etc.)
├─ Generates personalized email draft
└─ Creates task in CRM with context summary for sales rep review
Step 4: Notification & Handoff
├─ Posts to Slack with lead summary and AI-generated insights
├─ Updates CRM with enrichment data, priority score, and draft email
└─ Schedules follow-up reminder based on priority level
The entire workflow runs in ~60 seconds without any human intervention. The sales rep receives a Slack notification with everything they need to get into the conversation.
MCP in this case allows a workflow builder to easily connect various tools and add them to the arsenal of an AI agent, making it significantly more powerful than a simple chatbot.
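The workflow above can be sketched as a simple orchestration loop. This is a toy illustration: the tool names (`crm.get_lead`, `scoring.fit_score`, `email.draft`) and the `call_tool` dispatcher are hypothetical stand-ins for real MCP tool invocations, and the responses are canned. The point it demonstrates is the one that matters - a single `context` dict carries forward through every step, so later steps never re-query what earlier steps already learned.

```python
# Toy sketch of the lead-processing workflow. Tool names and call_tool()
# are hypothetical stand-ins for real MCP tool calls; responses are canned.

def call_tool(name: str, args: dict) -> dict:
    """Stand-in for an MCP tool invocation; returns canned data."""
    canned = {
        "crm.get_lead": {"company": args.get("company"), "contact": "jane@acme.example"},
        "scoring.fit_score": {"score": 87},
        "email.draft": {"draft": f"Hi - saw the recent news about {args.get('company')}..."},
    }
    return canned[name]

def process_lead(company: str) -> dict:
    context = {}  # shared context carried across every step
    # Step 1: enrichment
    context["lead"] = call_tool("crm.get_lead", {"company": company})
    # Step 2: qualification reuses Step 1 data without re-querying the CRM
    context["score"] = call_tool("scoring.fit_score", {"lead": context["lead"]})["score"]
    # Step 3: outreach sees the full accumulated context from Steps 1-2
    context["draft"] = call_tool("email.draft",
                                 {"company": company, "score": context["score"]})["draft"]
    return context

result = process_lead("Acme Industries")
print(result["score"])
```

A real agent would route each `call_tool` through an MCP client to the appropriate server, but the context-carrying structure is the same.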
How MCP Works
At its core, MCP uses a client-host-server architecture that's both simple and powerful.
How It Compares to REST
If you're familiar with REST APIs, you might be wondering: isn't this just another API standard?
Kind of, but also no. REST is a general-purpose architectural style for building web services. It's flexible, but that flexibility means every REST API is different. You still need to read documentation, understand each endpoint, and write custom integration code.
MCP is more opinionated and purpose-built for AI systems. It defines:
- Standard primitives: Resources (data), Tools (actions), and Prompts (templates)
- Built-in context management: Conversation history and state are first-class concepts
- Capability discovery: Servers advertise what they can do; clients discover capabilities automatically
- Streaming support: Real-time data flows are baked into the protocol
We can think of REST as a box of ingredients without any instructions. You can make anything you want, but you have to figure out how to put it all together yourself. MCP is like a meal kit - it comes with everything you need, pre-portioned and ready to use.
The Three-Part Architecture
According to the official MCP documentation, there are three key participants:
1. MCP Host (The AI Application) This is your AI application—Claude Desktop, an IDE, or your custom AI tool. The host:
- Coordinates and manages one or multiple MCP clients
- Handles user authorization and security policies
- Integrates with the LLM for context and responses
- Aggregates context from multiple sources
2. MCP Client (The Connector) Each client maintains a one-to-one connection with a specific MCP server. The client:
- Handles protocol negotiation with the server
- Discovers what capabilities the server offers
- Routes messages between the host and server
- Maintains isolation between different integrations
3. MCP Server (The Data/Tool Provider) Servers are lightweight programs that expose data and capabilities. A server:
- Provides Resources (structured data like files, database records)
- Offers Tools (actions like querying a database, sending an email)
- Exposes Prompts (reusable templates for common interactions)
- Operates independently with focused responsibilities
Architecture Diagram
Here's how these components work together:
┌─────────────────────────────────────────────────────────┐
│ MCP Host │
│ (AI Application) │
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ MCP Client 1 │ │ MCP Client 2 │ │ MCP Client 3 │ │
│ └──────┬───────┘ └──────┬───────┘ └──────┬───────┘ │
└─────────┼──────────────────┼──────────────────┼─────────┘
│ │ │
│ MCP Protocol │ MCP Protocol │ MCP Protocol
│ │ │
┌─────────▼─────────┐ ┌──────▼────────┐ ┌──────▼────────┐
│ MCP Server 1 │ │ MCP Server 2 │ │ MCP Server 3 │
│ (e.g., CRM) │ │ (e.g., Email) │ │ (e.g., Docs) │
└───────────────────┘ └───────────────┘ └───────────────┘
Source: Model Context Protocol Architecture Documentation
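The host/client/server split can be illustrated with an in-process sketch. This is not the official SDK API - real MCP participants are separate processes speaking JSON-RPC, and the class names here are invented for illustration - but it shows the division of labour: the server exposes capabilities, each client wraps exactly one server, and the host aggregates and routes across all of its clients.

```python
# In-process sketch of the host/client/server roles. Real MCP runs these as
# separate processes speaking JSON-RPC; all names here are illustrative.

class ToyServer:
    """The data/tool provider: operates independently, focused responsibility."""
    def __init__(self, name, tools):
        self.name = name
        self.tools = tools  # {tool_name: callable}

    def list_tools(self):
        return sorted(self.tools)

    def call_tool(self, tool, **kwargs):
        return self.tools[tool](**kwargs)

class ToyClient:
    """One client per server: discovers capabilities, routes messages."""
    def __init__(self, server):
        self.server = server
        self.available = server.list_tools()  # capability discovery

class ToyHost:
    """The AI application: manages clients, aggregates context."""
    def __init__(self, servers):
        self.clients = [ToyClient(s) for s in servers]

    def call(self, tool, **kwargs):
        for client in self.clients:  # route to whichever server offers the tool
            if tool in client.available:
                return client.server.call_tool(tool, **kwargs)
        raise LookupError(f"no server offers {tool!r}")

crm = ToyServer("crm", {"get_deal": lambda name: {"deal": name, "stage": "proposal"}})
docs = ToyServer("docs", {"search": lambda q: [f"doc about {q}"]})
host = ToyHost([crm, docs])
print(host.call("get_deal", name="Acme Corp"))
```

Note the isolation property: the host never talks to a server directly, and neither server knows the other exists.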
Capability Discovery
MCP servers advertise their capabilities through a standardized negotiation process:
// Server declares what it can do
{
  "capabilities": {
    "resources": { "subscribe": true },
    "tools": { "listChanged": true },
    "prompts": { "listChanged": true }
  }
}
The client discovers these capabilities automatically and can route requests accordingly.
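On the client side, this negotiation result typically gates behaviour: a client should only attempt an operation the server has declared support for. The field names below follow the capabilities object shown above; the `supports` helper itself is an illustrative sketch, not an SDK function.

```python
# Sketch of gating client behaviour on the server's advertised capabilities.
# Field names match the capabilities JSON above; supports() is illustrative.

server_capabilities = {
    "resources": {"subscribe": True},
    "tools": {"listChanged": True},
    "prompts": {"listChanged": True},
}

def supports(capabilities: dict, feature: str, flag: str = None) -> bool:
    """True if the server declared the capability (and optional sub-flag)."""
    section = capabilities.get(feature)
    if section is None:
        return False
    return bool(section.get(flag)) if flag else True

# Only subscribe to resource updates if the server said it can push them.
if supports(server_capabilities, "resources", "subscribe"):
    print("client may call resources/subscribe")

# A capability the server never declared is simply unavailable.
assert not supports(server_capabilities, "logging")
```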
Transport Protocols: The Evolution
MCP's underlying transport layer has evolved to meet the needs of modern AI applications.
From HTTP+SSE to Streamable HTTP
Initially, MCP supported HTTP with Server-Sent Events (SSE) as its primary transport mechanism (standardized on 2024-11-05). This worked well for many use cases but had limitations around bidirectional streaming and connection management.
In March 2025, the protocol evolved to support Streamable HTTP (2025-03-26) as the modern standard. This new transport:
- Uses HTTP POST for client-to-server messages
- Leverages Server-Sent Events for streaming responses
- Supports standard HTTP authentication methods
- Enables remote server communication with better reliability
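The streaming half of Streamable HTTP is plain Server-Sent Events: after the client POSTs a JSON-RPC message, the server can answer with a `text/event-stream` body in which each blank-line-delimited event carries a `data:` payload. The sketch below parses such a body; the sample stream is invented for illustration, not captured from a real server.

```python
import json

# Minimal parser for a Server-Sent Events body, the format a Streamable HTTP
# server uses to stream JSON-RPC responses back after a POST.
# The sample stream below is illustrative.

sample_stream = (
    "event: message\n"
    'data: {"jsonrpc": "2.0", "id": 1, "result": {"content": "partial"}}\n'
    "\n"
    "event: message\n"
    'data: {"jsonrpc": "2.0", "id": 2, "result": {"content": "done"}}\n'
    "\n"
)

def parse_sse(body: str):
    """Yield each event's data payload; a blank line marks the event boundary."""
    data_lines = []
    for line in body.splitlines():
        if line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "" and data_lines:
            yield json.loads("\n".join(data_lines))
            data_lines = []

messages = list(parse_sse(sample_stream))
print(messages[-1]["result"]["content"])
```

In practice the SDKs do this for you; the sketch just makes the wire format concrete.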
Why the Protocol Changed
The evolution was driven by real-world usage. As companies deployed MCP in production, they needed:
- Better streaming support for real-time AI interactions
- Standard authentication compatible with enterprise security policies
- Remote server capabilities for cloud-based deployments
- Improved error handling for production reliability
The protocol also supports stdio transport for local processes—using standard input/output streams for direct process communication. This is ideal for local development and provides optimal performance with zero network overhead.
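The stdio transport can be made concrete with a subprocess sketch: the host spawns the server as a child process and exchanges newline-delimited JSON-RPC over its stdin/stdout. The child below is a toy stand-in that answers a single request - a real MCP server implements the full protocol - and the `serverName` field is invented for illustration.

```python
import json
import subprocess
import sys

# Sketch of the stdio transport: newline-delimited JSON-RPC over a child
# process's stdin/stdout. The child is a toy stand-in, not a real MCP server.

child_code = r"""
import json, sys
request = json.loads(sys.stdin.readline())
response = {"jsonrpc": "2.0", "id": request["id"], "result": {"serverName": "toy"}}
sys.stdout.write(json.dumps(response) + "\n")
sys.stdout.flush()
"""

proc = subprocess.Popen(
    [sys.executable, "-c", child_code],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)
request = {"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {}}
proc.stdin.write(json.dumps(request) + "\n")
proc.stdin.flush()
response = json.loads(proc.stdout.readline())
proc.stdin.close()
proc.wait()
print(response["result"]["serverName"])
```

No sockets, no TLS, no network stack - which is exactly why stdio is the right default for local development.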
What Developers Need to Know
If you're building with MCP today, use Streamable HTTP for remote servers and stdio for local development. The official MCP SDKs handle the transport layer for you, so you can focus on building servers and clients rather than managing protocols. MCP Inspector is a great developer tool for debugging and understanding the flow of data and context in your MCP application.
Security and Authentication
When AI systems start accessing sensitive business data, security becomes the conversation that can make or break a deal. MCP supports multiple authentication patterns to meet different deployment scenarios.
Authentication Options
The right authentication approach depends on who's using your AI system and where it's deployed:
OAuth 2.1 with PKCE (For user-facing applications)
This is what you want when individual users need to grant your AI access to their personal data. Think "Sign in with Google" for AI applications. OAuth handles the authorization flow, and PKCE adds an extra layer of security for public clients.
Use this when: Your AI assistant needs to access a user's Google Drive, Slack workspace, or any third-party service on their behalf.
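The PKCE half of this flow is simple enough to sketch with the standard library. Per RFC 7636, the client generates a random `code_verifier`, sends its SHA-256 `code_challenge` with the authorization request, and later presents the verifier so the authorization server can confirm the same client is redeeming the code.

```python
import base64
import hashlib
import secrets

# Sketch of PKCE (RFC 7636, S256 method): derive a code_challenge from a
# random code_verifier; the server repeats the derivation to validate it.

def make_pkce_pair():
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

def check(verifier: str, challenge: str) -> bool:
    """What the authorization server does when the code is redeemed."""
    digest = hashlib.sha256(verifier.encode()).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode() == challenge

verifier, challenge = make_pkce_pair()
print(check(verifier, challenge))
```

Because the challenge is a one-way hash, an attacker who intercepts the authorization code still cannot redeem it without the verifier - that is the extra layer of security PKCE adds for public clients.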
SAML 2.0 (For enterprise SSO integration)
SAML is the enterprise standard for single sign-on. If you're selling to large organizations, they'll expect your AI system to integrate with their existing identity provider—whether that's Okta, Azure AD (now Microsoft Entra ID), Ping Identity, or another enterprise IdP.
Use this when: You're deploying AI tools within an enterprise that already has centralized identity management and wants employees to use their existing corporate credentials.
API Keys (For internal tools and trusted environments)
Sometimes simple is better. API keys work well when you control both sides of the integration and don't need the overhead of OAuth flows or SAML assertions.
Use this when: Your internal AI tools need to access company databases or services within your own infrastructure.
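Even in this simple case, validate keys carefully. A minimal sketch of server-side key checking, assuming a hypothetical in-memory key store (the service names and key values below are placeholders): the one non-obvious detail is using a constant-time comparison so the check doesn't leak information through timing.

```python
import hmac

# Sketch of server-side API-key validation. Service names and key values
# are placeholders; compare_digest avoids timing side channels.

VALID_KEYS = {
    "svc-reporting": "key-for-reporting-service",
    "svc-etl": "key-for-etl-service",
}

def authenticate(service: str, presented_key: str) -> bool:
    expected = VALID_KEYS.get(service)
    if expected is None:
        return False
    return hmac.compare_digest(expected, presented_key)

print(authenticate("svc-reporting", "key-for-reporting-service"))  # True
print(authenticate("svc-reporting", "wrong-key"))                  # False
```

In production the store would be a secrets manager rather than a dict, and keys would be rotated on a schedule - but the comparison logic stays the same.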
Choosing the Right Approach
Here's how to think about authentication for different scenarios:
If you're building a SaaS product, start with OAuth 2.1. Your customers expect the familiar authorization flow, and it gives you proper consent management out of the box.
If you're selling to enterprises, you'll want to consider SAML integration. Enterprise security teams want centralized control over who can access what. They want to enforce multi-factor authentication through their existing systems, maintain audit logs in one place, and be able to revoke access instantly when someone leaves the company. SAML gives them all of this without requiring them to manage yet another set of credentials.
If you're building internal tools, API keys are often sufficient. You control the environment, you can rotate keys as needed, and you don't need the complexity of user authorization flows.
Privacy and End-to-End Encryption
Beyond authentication, MCP deployments can layer end-to-end encryption over the transport for sensitive context data. This ensures that even if the transport is compromised, the actual conversation context remains secure.
For regulated industries (healthcare, finance, legal), this is table stakes. For everyone else, it's good practice.
What's Next
The Model Context Protocol represents a fundamental shift in how we build AI applications—from fragmented, custom integrations to standardized, composable infrastructure.
Whether you're a business owner looking to deploy AI internally, a product builder serving customers, or an engineer implementing AI features, MCP offers a path forward that's more sustainable, more scalable, and more open than the status quo.
The ecosystem is growing rapidly. This fast-changing environment presents challenges - but also immense opportunity to reposition your product within the market as use-cases and workflows evolve.