Building with MCP: Tools, Resources, Authentication, and Rich UI
The Model Context Protocol is not just another API specification — it's the connective tissue for how enterprises will weave AI into their operations. While the industry debates whether LLMs are commodities or moats, the real question is simpler: how do you give these models the context and capabilities to generate actual business value?
MCP provides the answer. It's a standardized protocol for connecting AI applications to data sources, business systems, and execution environments. Think of it as the USB-C of AI integration — a universal standard that lets you plug any AI application into your business infrastructure without writing custom integrations for every combination of model and data source.
After building MCP servers and working with the protocol, I'm convinced this will become as foundational to enterprise AI as REST APIs were to the web. Here's why it matters, and how to leverage its capabilities.
The Core Concept: Why MCP Exists
Large language models are remarkably capable, but they're fundamentally constrained by their training data. They know nothing about your customer database, your internal workflows, or the real-time state of your systems. MCP bridges this gap by providing a standardized way for AI applications to:
- Read data from your systems (databases, APIs, file systems)
- Execute actions with proper authorization and auditing
- Maintain context across multi-turn conversations
- Return rich interfaces beyond simple text responses
The protocol is designed to be model-agnostic and transport-agnostic. Whether you're using Claude, GPT, or local models, whether you're communicating over Streamable HTTP or stdio, MCP provides a consistent interface.
Tools: Giving AI the Ability to Act
Tools are the most fundamental primitive in MCP. They're functions that the AI can invoke to perform actions in the real world. Unlike traditional APIs where you control the invocation, with MCP the model decides when and how to call your tools based on the user's intent.
Basic Tool Registration
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js'
import { z } from 'zod'
const server = new McpServer({
name: 'crm-server',
version: '1.0.0',
}, {
capabilities: {
tools: { listChanged: true }
}
})
server.registerTool(
'search_customers',
{
title: 'Search Customers',
description: 'Search customer database by name, email, or company',
inputSchema: {
query: z.string().describe('Search query'),
limit: z.number().default(10).describe('Maximum results to return'),
},
},
async ({ query, limit }) => {
const customers = await db.customers.search(query, { limit })
return {
content: [{
type: 'text',
text: `Found ${customers.length} customers matching "${query}"`,
}],
structuredContent: { customers }
}
}
)
Tool Annotations: Communicating Intent
This is where MCP gets sophisticated. Tools can declare their behavior characteristics through annotations, helping clients make intelligent decisions about when and how to invoke them:
type ToolAnnotations = {
readOnlyHint?: boolean // Doesn't modify data
destructiveHint?: boolean // May delete/overwrite data
idempotentHint?: boolean // Safe to call multiple times
openWorldHint?: boolean // Interacts with external systems
}
server.registerTool(
'delete_customer',
{
title: 'Delete Customer',
description: 'Permanently delete a customer record',
annotations: {
destructiveHint: true,
idempotentHint: true,
openWorldHint: false,
} satisfies ToolAnnotations,
inputSchema: {
customerId: z.string().uuid()
}
},
async ({ customerId }) => {
await db.customers.delete(customerId)
return {
content: [{
type: 'text',
text: `Customer ${customerId} deleted successfully`
}]
}
}
)
These annotations aren't security controls — they're hints for UX. A client might warn users before invoking destructive operations, or allow read-only tools to execute without confirmation.
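To make the client side concrete, here's a minimal sketch of a confirmation policy built on these hints. The helper name and the policy choices are illustrative, not part of the SDK:

```typescript
// Illustrative client-side policy: decide whether to ask the user for
// confirmation before invoking a tool, based on its annotations.
type ToolAnnotations = {
  readOnlyHint?: boolean
  destructiveHint?: boolean
  idempotentHint?: boolean
  openWorldHint?: boolean
}

function needsConfirmation(annotations?: ToolAnnotations): boolean {
  // Unannotated tools: assume the worst and confirm
  if (!annotations) return true
  // Read-only tools can run without interrupting the user
  if (annotations.readOnlyHint) return false
  // Destructive tools always warrant a prompt
  if (annotations.destructiveHint) return true
  // Non-destructive writes: confirm only when they reach external systems
  return annotations.openWorldHint ?? true
}
```

A client could run this check before each tool invocation and surface a dialog only when it returns true.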
Structured Output Schemas
One of MCP's most powerful features is the ability to define structured output schemas. This enables reliable tool chaining and workflow automation:
const customerSchema = z.object({
id: z.string().uuid(),
name: z.string(),
email: z.string().email(),
totalRevenue: z.number(),
})
server.registerTool(
'get_customer',
{
title: 'Get Customer',
description: 'Retrieve customer details by ID',
outputSchema: { customer: customerSchema },
inputSchema: {
customerId: z.string().uuid()
}
},
async ({ customerId }) => {
const customer = await db.customers.findById(customerId)
return {
structuredContent: { customer },
content: [{
type: 'text',
text: JSON.stringify(customer, null, 2)
}]
}
}
)
The structured output is machine-parseable and type-safe. Clients can validate responses, enable downstream tool calls, and build reliable automation pipelines.
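Because the schema is published alongside the tool, a client can verify structuredContent before chaining into the next call. Here's a hand-rolled sketch of that boundary check (the guard and its names are mine, not the SDK's):

```typescript
// Illustrative guard: validate a tool's structuredContent before passing
// it to a downstream tool call.
type Customer = {
  id: string
  name: string
  email: string
  totalRevenue: number
}

function isCustomer(value: unknown): value is Customer {
  if (typeof value !== 'object' || value === null) return false
  const v = value as Record<string, unknown>
  return (
    typeof v.id === 'string' &&
    typeof v.name === 'string' &&
    typeof v.email === 'string' &&
    typeof v.totalRevenue === 'number'
  )
}

function extractCustomer(structuredContent: unknown): Customer {
  const candidate = (structuredContent as { customer?: unknown })?.customer
  if (!isCustomer(candidate)) {
    throw new Error('Tool returned malformed customer data')
  }
  return candidate
}
```

In practice you'd validate with the same zod schema used in outputSchema, but the principle holds either way: reject malformed responses at the boundary instead of letting them propagate through the pipeline.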
Resources: Contextualizing the Conversation
While tools are AI-initiated actions, resources are application-driven context. They let you expose data structures, files, or API responses that the AI can reference during a conversation. This distinction is crucial: resources are about giving the model visibility into your system's state; tools are about giving it agency.
Static Resources
server.registerResource(
'customer_segments',
'crm://segments',
{
title: 'Customer Segments',
description: 'Available customer segmentation categories',
},
async (uri) => {
const segments = await db.segments.list()
return {
contents: [{
mimeType: 'application/json',
text: JSON.stringify(segments, null, 2),
uri: uri.toString(),
}]
}
}
)
Resource Templates: Dynamic Resources
Resource templates enable parameterized resources, making them far more powerful:
import { ResourceTemplate } from '@modelcontextprotocol/sdk/server/mcp.js'
server.registerResource(
'customer',
new ResourceTemplate('crm://customers/{id}', {
list: async () => {
const customers = await db.customers.list({ limit: 100 })
return {
resources: customers.map(c => ({
name: c.name,
uri: `crm://customers/${c.id}`,
mimeType: 'application/json',
}))
}
},
complete: {
async id(value) {
const customers = await db.customers.search(value)
return customers
.map(c => c.id)
.slice(0, 100)
}
}
}),
{
title: 'Customer',
description: 'Get customer details by ID',
},
async (uri, { id }) => {
if (typeof id !== 'string') throw new Error('Customer ID required')
const customer = await db.customers.findById(id)
return {
contents: [{
mimeType: 'application/json',
text: JSON.stringify(customer, null, 2),
uri: uri.toString(),
}]
}
}
)
The list callback enables efficient resource discovery without loading full data. The complete callback provides intelligent autocompletion as users type resource URIs. This is resource management at scale.
Embedding vs Linking Resources in Tool Responses
Tools can return resources in two ways:
Embedded Resources (for small data):
return {
content: [{
type: 'resource',
resource: {
uri: `crm://customers/${customerId}`,
mimeType: 'application/json',
text: JSON.stringify(customer),
}
}]
}
Linked Resources (for large data):
return {
content: [
{
type: 'text',
text: `Found ${customers.length} customers`,
},
...customers.map(c => ({
type: 'resource_link',
uri: `crm://customers/${c.id}`,
name: c.name,
description: `Customer: ${c.name}`,
mimeType: 'application/json',
}))
]
}
Linked resources keep responses lightweight while giving the AI (or user) the ability to fetch details on demand.
Prompts: Democratizing AI Workflows
Not everyone wants to be a prompt engineer. MCP prompts let you encapsulate common workflows into reusable, parameterized templates. Users select from a menu of prompts rather than crafting natural language requests from scratch.
server.registerPrompt(
'quarterly_review',
{
title: 'Generate Quarterly Business Review',
description: 'Analyze customer metrics and generate QBR report',
argsSchema: {
customerId: z.string().uuid().describe('Customer ID'),
quarter: z.enum(['Q1', 'Q2', 'Q3', 'Q4']).describe('Quarter to analyze'),
},
},
async ({ customerId, quarter }) => {
const customer = await db.customers.findById(customerId)
const metrics = await analytics.getQuarterlyMetrics(customerId, quarter)
return {
messages: [
{
role: 'user',
content: {
type: 'resource',
resource: {
uri: `crm://customers/${customerId}`,
mimeType: 'application/json',
text: JSON.stringify(customer),
}
}
},
{
role: 'user',
content: {
type: 'text',
text: `Using the customer data and these metrics, generate a comprehensive QBR:\n${JSON.stringify(metrics, null, 2)}`
}
}
]
}
}
)
The prompt preloads context (customer data, metrics) so the AI can immediately generate useful output without multiple tool calls. This reduces latency and improves response quality.
Elicitation: Human-in-the-Loop Workflows
Sometimes the AI needs additional information from the user mid-workflow. Elicitation enables this through structured form requests:
server.registerTool(
'export_data',
{
title: 'Export Customer Data',
description: 'Export customer data to CSV',
inputSchema: {
segmentId: z.string()
}
},
async ({ segmentId }) => {
const segment = await db.segments.findById(segmentId)
const recordCount = await db.customers.count({ segmentId })
// Check if client supports elicitation
const capabilities = server.server.getClientCapabilities()
if (capabilities?.elicitation) {
const result = await server.server.elicitInput({
message: `Export ${recordCount} records from segment "${segment.name}"?`,
requestedSchema: {
type: 'object',
properties: {
confirmed: {
type: 'boolean',
description: 'Confirm export operation',
},
format: {
type: 'string',
enum: ['csv', 'xlsx', 'json'],
description: 'Export format',
}
}
}
})
if (result.action !== 'accept' || !result.content?.confirmed) {
return {
content: [{
type: 'text',
text: 'Export cancelled'
}]
}
}
const format = result.content.format || 'csv'
await exports.create(segmentId, format)
return {
content: [{
type: 'text',
text: `Exported ${recordCount} records as ${format}`
}]
}
}
// Fallback: proceed without confirmation
await exports.create(segmentId, 'csv')
return {
content: [{
type: 'text',
text: `Exported ${recordCount} records`
}]
}
}
)
Elicitation is not secure — don't use it for sensitive data like passwords. But for confirmations, format selections, and workflow parameters, it creates a natural conversational flow.
Sampling: Borrowing the User's Model
This is one of MCP's most elegant features. Sampling lets your server request LLM completions from the client's model. Instead of embedding your own model or API keys, you leverage the user's existing AI infrastructure:
async function generateTagsForEntry(entryId: string) {
const entry = await db.entries.findById(entryId)
const existingTags = await db.tags.list()
// Check if the connected client supports sampling
const capabilities = server.server.getClientCapabilities()
if (!capabilities?.sampling) {
return { tags: [] }
}
const systemPrompt = `You are a content tagging assistant.
Review the journal entry and suggest 3-5 relevant tags from the existing tag list.
If existing tags don't fit well, suggest new tags using kebab-case format.
Existing tags: ${existingTags.map(t => t.name).join(', ')}
Respond with a JSON array of tag names only.`
// Sampling messages only allow 'user' and 'assistant' roles;
// the system prompt goes in the dedicated systemPrompt field
const response = await server.server.createMessage({
systemPrompt,
messages: [
{
role: 'user',
content: { type: 'text', text: entry.content }
}
],
maxTokens: 200,
temperature: 0.7,
})
const tags = response.content.type === 'text'
? JSON.parse(response.content.text)
: []
return { tags }
}
Sampling enables powerful agentic behaviors while respecting user control over model selection, costs, and privacy. The user sees the sampling request and can decline it.
Long-Running Tasks: Progress and Cancellation
Many real-world operations take time. MCP provides first-class support for progress notifications and graceful cancellation:
server.registerTool(
'analyze_codebase',
{
title: 'Analyze Codebase',
description: 'Run static analysis across entire codebase',
inputSchema: {
path: z.string(),
checks: z.array(z.string()).default(['security', 'performance', 'style'])
}
},
async ({ path, checks }, { signal, _meta }) => {
// The progress token arrives in the request metadata
const progressToken = _meta?.progressToken
const files = await getFilesRecursively(path)
let processed = 0
const issues: Issue[] = []
// Setup cancellation handler
function onAbort() {
log.info('Analysis cancelled by user')
}
signal?.addEventListener('abort', onAbort)
try {
for (const file of files) {
// Check for cancellation
if (signal?.aborted) {
return {
content: [{
type: 'text',
text: `Analysis cancelled after ${processed} files`
}]
}
}
// Analyze file
const fileIssues = await analyzeFile(file, checks)
issues.push(...fileIssues)
processed++
// Send progress update
if (progressToken) {
await server.server.notification({
method: 'notifications/progress',
params: {
progressToken,
progress: processed,
total: files.length,
message: `Analyzing ${file}...`
}
})
}
}
return {
content: [{
type: 'text',
text: `Analysis complete. Found ${issues.length} issues in ${processed} files.`
}],
structuredContent: { issues }
}
} finally {
signal?.removeEventListener('abort', onAbort)
}
}
)
The progressToken and signal are provided by the SDK. Progress updates keep users informed; cancellation ensures resources are cleaned up properly. This is essential for production systems where operations might time out or users might change their minds.
Changes: Dynamic Capabilities
MCP servers can be dynamic. Available tools, resources, and prompts can change based on authentication state, feature flags, or external system availability. The list_changed notification tells clients to refresh their capability lists:
const server = new McpServer({
name: 'crm-server',
version: '1.0.0',
}, {
capabilities: {
tools: { listChanged: true },
resources: { listChanged: true, subscribe: true },
}
})
// Conditionally register tools based on user permissions
async function initializeCapabilities(authInfo: AuthInfo) {
const user = await db.users.findById(authInfo.userId)
// Everyone gets read tools
registerReadTools(server)
// Only admins get write tools
if (user.role === 'admin') {
registerWriteTools(server)
// Notify clients that tool list changed
server.sendToolListChanged()
}
// Register resources based on accessible data sources
const dataSources = await getAccessibleDataSources(user)
dataSources.forEach(source => {
registerResourceForDataSource(server, source)
})
server.sendResourceListChanged()
}
Resource Subscriptions
For dynamic resources that change frequently, subscriptions enable push notifications:
import { SubscribeRequestSchema, UnsubscribeRequestSchema } from '@modelcontextprotocol/sdk/types.js'
const subscriptions = new Set<string>()
server.server.setRequestHandler(SubscribeRequestSchema, async ({ params }) => {
subscriptions.add(params.uri)
// Start watching the resource
watchResource(params.uri, (changes) => {
server.server.notification({
method: 'notifications/resources/updated',
params: {
uri: params.uri,
title: `Resource updated: ${params.uri}`,
}
})
})
return {}
})
server.server.setRequestHandler(UnsubscribeRequestSchema, async ({ params }) => {
subscriptions.delete(params.uri)
stopWatchingResource(params.uri)
return {}
})
This enables real-time workflows: a customer record updates, and the AI conversation automatically refreshes with new data. Critical for operational tools.
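The watchResource and stopWatchingResource helpers above are left undefined. One way to back them, purely as a sketch, is an in-memory registry keyed by URI; a real server would hook into database triggers or file watchers instead:

```typescript
// Illustrative in-memory backing for watchResource/stopWatchingResource.
type ChangeListener = (changes: unknown) => void

const watchers = new Map<string, ChangeListener>()

function watchResource(uri: string, onChange: ChangeListener) {
  watchers.set(uri, onChange)
}

function stopWatchingResource(uri: string) {
  watchers.delete(uri)
}

// Called by whatever layer mutates the underlying data
function publishChange(uri: string, changes: unknown) {
  watchers.get(uri)?.(changes)
}
```

Whatever the backing store, the contract stays the same: one listener per subscribed URI, torn down on unsubscribe so notifications stop when the client loses interest.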
MCP-UI: Rich Interfaces Beyond Text
MCP-UI extends the protocol to support rich, interactive interfaces. While the base MCP spec is text-focused, MCP-UI enables servers to return HTML, React components, or full iframe-based applications.
Raw HTML Responses
import { createUIResource } from '@mcp-ui/server'
server.registerTool(
'visualize_metrics',
{
title: 'Visualize Customer Metrics',
description: 'Show customer metrics as an interactive chart',
inputSchema: {
customerId: z.string().uuid()
}
},
async ({ customerId }) => {
const metrics = await analytics.getMetrics(customerId)
const htmlString = `
<div style="padding: 20px; font-family: system-ui;">
<h2>Customer Metrics: ${metrics.name}</h2>
<div style="display: grid; grid-template-columns: repeat(3, 1fr); gap: 16px;">
<div style="padding: 16px; border: 1px solid #e0e0e0; border-radius: 8px;">
<div style="font-size: 32px; font-weight: bold; color: #2563eb;">
${metrics.totalRevenue.toLocaleString('en-US', { style: 'currency', currency: 'USD' })}
</div>
<div style="color: #666; margin-top: 8px;">Total Revenue</div>
</div>
<div style="padding: 16px; border: 1px solid #e0e0e0; border-radius: 8px;">
<div style="font-size: 32px; font-weight: bold; color: #16a34a;">
${metrics.activeUsers}
</div>
<div style="color: #666; margin-top: 8px;">Active Users</div>
</div>
<div style="padding: 16px; border: 1px solid #e0e0e0; border-radius: 8px;">
<div style="font-size: 32px; font-weight: bold; color: #dc2626;">
${metrics.churnRate}%
</div>
<div style="color: #666; margin-top: 8px;">Churn Rate</div>
</div>
</div>
</div>
`
return {
content: [
createUIResource({
uri: `ui://metrics/${customerId}`,
content: {
type: 'rawHtml',
htmlString,
},
encoding: 'text',
})
]
}
}
)
Remote DOM: Consistent Component Systems
For more sophisticated UIs, Remote DOM (from Shopify) provides a component-based approach with consistent styling:
function createMetricsDashboard(metrics: CustomerMetrics): string {
return `
const stack = document.createElement('ui-stack')
stack.setAttribute('direction', 'vertical')
stack.setAttribute('spacing', '16')
const title = document.createElement('ui-text')
title.setAttribute('content', 'Customer Metrics Dashboard')
title.setAttribute('size', 'large')
title.setAttribute('weight', 'bold')
stack.appendChild(title)
const grid = document.createElement('ui-grid')
grid.setAttribute('columns', '3')
grid.setAttribute('spacing', '16')
${metrics.cards.map(card => `
const card${card.id} = document.createElement('ui-card')
const cardContent = document.createElement('ui-stack')
cardContent.setAttribute('direction', 'vertical')
cardContent.setAttribute('spacing', '8')
const value = document.createElement('ui-text')
value.setAttribute('content', '${card.value}')
value.setAttribute('size', 'xlarge')
value.setAttribute('weight', 'bold')
cardContent.appendChild(value)
const label = document.createElement('ui-text')
label.setAttribute('content', '${card.label}')
label.setAttribute('color', 'secondary')
cardContent.appendChild(label)
card${card.id}.appendChild(cardContent)
grid.appendChild(card${card.id})
`).join('\n')}
stack.appendChild(grid)
root.appendChild(stack)
`.trim()
}
server.registerTool(
'show_dashboard',
{
title: 'Show Metrics Dashboard',
description: 'Display customer metrics in a visual dashboard',
inputSchema: {
customerId: z.string().uuid()
}
},
async ({ customerId }) => {
const metrics = await analytics.getMetrics(customerId)
return {
content: [
createUIResource({
uri: `ui://dashboard/${customerId}`,
content: {
type: 'remoteDOM',
script: createMetricsDashboard(metrics),
},
encoding: 'text',
})
]
}
}
)
Iframe-Based Applications
For complex workflows requiring full framework support, MCP-UI supports iframe embedding:
server.registerTool(
'open_editor',
{
title: 'Open Customer Editor',
description: 'Open full-featured customer editor interface',
inputSchema: {
customerId: z.string().uuid()
}
},
async ({ customerId }) => {
const customer = await db.customers.findById(customerId)
return {
content: [
createUIResource({
uri: `ui://editor/${customerId}/${Date.now()}`,
content: {
type: 'externalUrl',
iframeUrl: `https://app.example.com/customers/${customerId}/edit`,
},
encoding: 'text',
uiMetadata: {
'preferred-frame-size': ['800px', '600px'],
'initial-render-data': { customer }
}
})
]
}
}
)
The iframe can communicate back to the host using postMessage:
// Inside your iframe application
import { sendMcpMessage } from './mcp-client'
async function saveCustomer(customer: Customer) {
// Call MCP tool from within iframe
const result = await sendMcpMessage('tool', {
toolName: 'update_customer',
params: { customerId: customer.id, data: customer }
})
if (result.success) {
showNotification('Customer saved successfully')
}
}
async function requestAdditionalData() {
// Send prompt to AI from within iframe
await sendMcpMessage('prompt', {
prompt: 'Analyze this customer\'s purchase history and suggest upsell opportunities'
})
}
This creates a seamless experience where rich UI and AI conversation are deeply integrated. Users get interactive dashboards, data visualizations, and complex forms — all within their AI chat interface.
Authentication: OAuth Integration
Production MCP servers require proper authentication. The MCP authorization spec builds on OAuth 2.1; a common server-side pattern is validating bearer tokens via token introspection:
Service Discovery
Clients discover authentication requirements through well-known endpoints:
// /.well-known/oauth-protected-resource/mcp
export async function handleOAuthProtectedResourceRequest(request: Request) {
return Response.json({
resource: 'https://api.example.com/mcp',
authorization_servers: ['https://auth.example.com'],
scopes_supported: [
'customers:read',
'customers:write',
'analytics:read',
'admin:all'
]
})
}
Token Introspection
Validate tokens and extract user context:
const introspectResponseSchema = z.discriminatedUnion('active', [
z.object({
active: z.literal(true),
client_id: z.string(),
scope: z.string(),
sub: z.string(), // User ID
}),
z.object({
active: z.literal(false),
}),
])
export async function resolveAuthInfo(
authHeader: string | null
): Promise<AuthInfo | null> {
const token = authHeader?.match(/^Bearer\s+(.+)$/i)?.[1]
if (!token) return null
const resp = await fetch('https://auth.example.com/oauth/introspection', {
method: 'POST',
headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
body: new URLSearchParams({ token }),
})
if (!resp.ok) return null
const data = introspectResponseSchema.parse(await resp.json())
if (!data.active) return null
return {
token,
clientId: data.client_id,
scopes: data.scope.split(' '),
extra: { userId: data.sub }
}
}
Scope Validation
Implement fine-grained permissions using OAuth scopes:
const supportedScopes = [
'customers:read',
'customers:write',
'analytics:read',
'admin:all'
] as const
type SupportedScope = typeof supportedScopes[number]
function validateScopes(
authInfo: AuthInfo,
requiredScopes: SupportedScope[]
): boolean {
return requiredScopes.every(scope =>
authInfo.scopes.includes(scope)
)
}
// Conditionally register tools based on scopes
class CRMServer extends McpAgent<Env, State, Props> {
async init() {
const authInfo = this.requireAuthInfo()
// Everyone gets read access
if (authInfo.scopes.includes('customers:read')) {
this.registerReadTools()
}
// Only users with write scope get mutating tools
if (authInfo.scopes.includes('customers:write')) {
this.registerWriteTools()
}
// Admin tools require admin scope
if (authInfo.scopes.includes('admin:all')) {
this.registerAdminTools()
}
}
}
Error Handling
Return proper OAuth challenge headers:
export function handleUnauthorized(request: Request): Response {
const hasAuthHeader = request.headers.has('authorization')
const metadataUrl = new URL('/.well-known/oauth-protected-resource/mcp', request.url)
const authParams = [
'Bearer realm="CRM API"',
`resource_metadata="${metadataUrl}"`,
]
if (hasAuthHeader) {
// Invalid/expired token
authParams.push(
'error="invalid_token"',
'error_description="The access token is invalid or expired"'
)
}
return new Response('Unauthorized', {
status: 401,
headers: {
'WWW-Authenticate': authParams.join(', ')
}
})
}
export function handleInsufficientScope(required: SupportedScope[]): Response {
return new Response('Forbidden', {
status: 403,
headers: {
'WWW-Authenticate': [
'Bearer realm="CRM API"',
'error="insufficient_scope"',
`error_description="Required scopes: ${required.join(', ')}"`
].join(', ')
}
})
}
This provides standards-compliant authentication with clear error messages and proper OAuth semantics.
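On the other side of the wire, a client can recover the metadata URL from that challenge. A simplified sketch follows; a full RFC 9110 challenge parser is more involved, and the function name is mine:

```typescript
// Illustrative: pull the resource_metadata URL out of a WWW-Authenticate
// challenge like the one emitted by handleUnauthorized above.
function parseResourceMetadataUrl(wwwAuthenticate: string): string | null {
  const match = wwwAuthenticate.match(/resource_metadata="([^"]+)"/)
  return match ? match[1] : null
}
```

The client fetches that URL, reads authorization_servers from the response, and starts the OAuth flow against the advertised server.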
Why This Matters for Your Business
MCP isn't just a technical specification — it's an architectural pattern for the AI-native enterprise. Here's what becomes possible:
Unified AI Integration: Instead of building custom integrations for every AI application, you build MCP servers once and connect any compatible client. Your Salesforce data, Postgres database, Stripe billing — all exposed through standardized interfaces.
Security and Auditability: OAuth 2.0 integration, scope-based permissions, and structured tool definitions mean you can grant AI systems access to production systems with confidence. Every action is attributed to a user, every permission is explicitly granted.
Rich User Experiences: MCP-UI transforms AI from text-based chat into interactive applications. Users get dashboards, data visualizations, and complex workflows — all within their AI interface.
Model Flexibility: MCP is model-agnostic. Switch from Claude to GPT-4 to local models without rewriting integration code. The protocol abstracts the model layer.
Composability: Tools, resources, and prompts compose naturally. One MCP server can call another's tools. Resources can embed other resources. Prompts can orchestrate complex multi-step workflows.
MCP Is Shaping the Future of Websites and the Internet
Recent developments in AI and open agent protocols like the Model Context Protocol are fundamentally reshaping how we access, consume, and interact with information online. This isn't just about better integrations — it's about a fundamental shift in how the internet operates.
Turning AI Apps Into Gateways: MCP bridges AI-powered applications like ChatGPT, Claude, and Cursor with external services and data sources. Users and agents can now perform real-world actions and access user-specific, gated data directly rather than browsing static websites. The AI interface becomes the primary point of interaction with digital services.
Beyond Traditional Browsers: The browser paradigm — built for human users to view documents — is being replaced by something more fundamental. AI enables interaction at the token level, remixes information seamlessly, and generates responses or triggers actions directly. This shift moves away from document-centric navigation toward structured, agent-based interactions. Why click through a website when you can simply ask an AI to complete the task?
Aligning Publisher Incentives: MCP allows publishers and service owners to control how their data is used by AI agents, set terms of access, and build new business models tailored for agent and bot consumers. Rather than optimizing for ad clicks, businesses can implement affiliate links, subscriptions, or API-based pricing models designed for programmatic access. This solves the fundamental economic question: how do content creators and service providers get compensated in an AI-first world?
Replacing Websites with Endpoints: Instead of maintaining elaborate websites for human users, businesses can build dedicated AI endpoints or MCP servers. These focus on providing structured access to services and data for agents, making traditional web "pages" increasingly secondary. The homepage becomes less important than the API documentation. The user interface becomes less important than the machine interface.
Ecosystem Growth: The MCP and agent ecosystem is expanding rapidly, with hundreds of servers already online and new use cases emerging weekly. Its open-source nature means broad, cross-platform adoption and rapid protocol evolution. We're witnessing the same kind of explosive growth that characterized the early web — but this time, it's infrastructure designed for machines first, humans second.
The implications are profound. In the near future, websites as user-facing "documents" may become obsolete for many use cases, replaced by structured, agent-ready services accessed directly by AI applications. The web will evolve from a human-centric, browsing-first experience to an ecosystem optimized for machine, agent, and bot interactions — all orchestrated by standardized protocols like MCP.
This isn't about AI replacing humans. It's about AI becoming the interface layer between humans and digital services. The user expresses intent in natural language; the AI translates that into structured API calls; the service responds with data or actions; the AI presents results in a human-friendly format. The website in the middle — designed for visual navigation and manual data entry — becomes unnecessary friction.
The Path Forward
If you're building enterprise AI applications, MCP should be your default integration pattern. The protocol is young, but the design is solid and the ecosystem is growing at remarkable speed. Anthropic, OpenAI, and other major players are investing heavily in MCP tooling, and the broader developer community is following suit.
Start with a simple server exposing a few tools. Add resources as you identify common context needs. Implement prompts for frequently-used workflows. Build MCP-UI components where visual interfaces add real value. Layer in OAuth when you're ready for production.
But think bigger than just internal tools. The companies that embrace MCP early will have a significant architectural advantage — not just for their own AI applications, but in how they position themselves in an AI-native internet. They'll have standardized, composable AI integrations while competitors maintain brittle, model-specific code. They'll be discoverable and accessible to AI agents while others are invisible. They'll be able to iterate quickly, experiment with new models, and build sophisticated workflows — all on a foundation built to last.
Consider your business model in an agent-first world. If you're a content publisher, how will AI agents access and cite your work? If you're a SaaS platform, will users interact with your UI or call your MCP server through their AI assistant? If you're an e-commerce business, how will AI agents help users discover and purchase your products? The answers to these questions will define competitive advantage in the coming years.
MCP is the infrastructure layer for AI-native applications and, increasingly, for the internet itself. The traditional website is becoming optional — a nice-to-have rather than a necessity. The MCP endpoint is becoming essential. If you're not building on this foundation yet, you're already behind. The question isn't whether to adopt MCP; it's how quickly you can position your services for the agent-driven web that's emerging right now.
Further Reading
- MCP Documentation
- MCP-UI
- Diving into MCP Advanced Server Capabilities: A Comprehensive Guide
- AI Is Making Websites Obsolete With MCP
- What is Model Context Protocol (MCP)?
- MCP vs gRPC: How AI Agents & LLMs Connect to Tools & Data
- Code Mode: the better way to use MCP
Need help building production MCP servers? I specialize in custom integrations that connect AI systems to business infrastructure. Get in touch to discuss your requirements.