If you've read the introductory post on MCP, you know the elevator pitch: it's a standard protocol that lets AI assistants connect to external tools and data sources. This post skips that part entirely. It's for people who want to actually run MCP servers, understand what's happening under the hood, and know which servers are worth setting up vs which ones are still rough around the edges.
We'll cover the 3 core primitives, how the client-server handshake actually works, how to set up 5 servers that are genuinely useful, how to build your own server in Python, and an honest look at the ecosystem in 2026.
What MCP actually does (and two misconceptions worth clearing up)
Before getting into the mechanics, two mistakes come up constantly when people talk about MCP.
Misconception 1: "MCP is just function calling."
It isn't. Function calling — also called tool use — requires the LLM provider to define and host the tools. You send tool definitions with every API request and the model returns a structured call when it wants to use one. It's stateless and lives entirely within your API call.
MCP is different. You run a tool server as a separate process, anywhere you want — on your laptop, on a remote server, inside your company's network. The AI client connects to it via a standard interface and discovers its capabilities. The tools don't have to be re-described with every message. The server process persists across your conversation.
Misconception 2: "MCP only works with Claude."
MCP is an open standard published by Anthropic, but it's not Claude-only. Any model that supports the protocol can use MCP servers. Claude via Claude Desktop and OpenClaw are the most widely-used implementations right now, but Cursor, Continue, and other developer tools have MCP support. The spec itself is model-agnostic.
What MCP actually does: it's a client-server protocol. The AI client (Claude Desktop, OpenClaw, etc.) connects to MCP servers over stdio or Server-Sent Events (SSE). The server exposes capabilities. The client discovers them, presents them to the model as available tools, and routes calls back to the server when the model uses them.
The 3 MCP primitives
Everything an MCP server exposes falls into one of three categories.
Resources
Resources are read-only data the server makes available. Think of them like files or database records — the client can read them but can't execute them. A filesystem MCP server exposes your project files as resources. A Postgres MCP server might expose table schemas as resources.
The practical value here is context without copy-pasting. Claude can read your package.json, your .env.example, your schema file — whatever the server exposes — without you having to paste the contents into the chat. This keeps your conversation focused and gives Claude accurate, live data instead of whatever you pasted 10 minutes ago.
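On the wire, reading a resource is a pair of JSON-RPC messages. Here's a sketch of the shapes (field names follow the MCP spec; the URI and file contents are invented for illustration):

```python
import json

# Hypothetical resources/read exchange, shown as Python dicts.
# The file:// URI and its contents are made up for illustration.
read_request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "resources/read",
    "params": {"uri": "file:///Users/you/projects/app/package.json"},
}

# The server answers with the resource contents; text resources
# carry their data in a "text" field alongside the mimeType.
read_result = {
    "jsonrpc": "2.0",
    "id": 3,
    "result": {
        "contents": [
            {
                "uri": "file:///Users/you/projects/app/package.json",
                "mimeType": "application/json",
                "text": json.dumps({"name": "app", "version": "1.0.0"}),
            }
        ]
    },
}
```

The client reads the `text` field out of the result and places it in the model's context, which is all "Claude reads your package.json" really means.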
Tools
Tools are callable functions that take parameters and return results. This is where the real action is.
A GitHub MCP server exposes tools like create_issue(title, body, labels) or search_code(query, repo). A Postgres MCP server exposes execute_query(sql). A Playwright server exposes navigate(url) and click(selector).
When Claude decides to use a tool, the client sends the call to the server, the server executes it, and the result comes back into Claude's context. Claude sees the result and continues the conversation with real, live data.
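That round trip is a tools/call request and a content-carrying result. A sketch for the create_issue tool mentioned above (shapes follow the MCP spec; the arguments and issue number are invented):

```python
# Hypothetical tools/call exchange for a GitHub-style create_issue tool.
call_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "create_issue",
        "arguments": {
            "title": "Auth refactor: flaky session test",
            "body": "Repro steps are in the linked thread.",
            "labels": ["bug"],
        },
    },
}

# Tool results come back as a list of content blocks; text is the most
# common type. isError lets servers report tool-level failures without
# raising a protocol error.
call_result = {
    "jsonrpc": "2.0",
    "id": 7,
    "result": {
        "content": [{"type": "text", "text": "Created issue #412"}],
        "isError": False,
    },
}
```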
Prompts
Prompts are reusable prompt templates the server exposes. They're the least commonly used primitive, but they're worth knowing about.
A documentation MCP server might expose a summarize_api_endpoint prompt template. When invoked with a specific endpoint path, it expands into a structured prompt that Claude uses to analyze the documentation. It's a way to standardize how Claude approaches certain tasks — useful if you're building tooling for a team and want consistent behavior.
Most developers focus on resources and tools. Prompts become useful when you're building something for other people to use.
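For completeness, invoking a prompt is a prompts/get request that the server expands into concrete chat messages for the client to hand to the model. A sketch with made-up content, following the spec's message shape:

```python
# Hypothetical prompts/get exchange for the summarize_api_endpoint
# template described above. Endpoint path and text are invented.
prompt_request = {
    "jsonrpc": "2.0",
    "id": 11,
    "method": "prompts/get",
    "params": {
        "name": "summarize_api_endpoint",
        "arguments": {"endpoint": "/v2/customers"},
    },
}

# The server expands the template into ready-to-use messages.
prompt_result = {
    "jsonrpc": "2.0",
    "id": 11,
    "result": {
        "messages": [
            {
                "role": "user",
                "content": {
                    "type": "text",
                    "text": "Summarize the /v2/customers endpoint: "
                            "parameters, response shape, and error codes.",
                },
            }
        ]
    },
}
```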
How the protocol works
Here's the simplified but accurate version of what happens when you start a conversation with MCP configured:
- The client (Claude Desktop, OpenClaw) starts your configured MCP server processes via stdio, or connects to them over SSE if they're remote
- The server responds to the initialize request with a capabilities list — the tools it exposes, the resources available, and any prompts
- The client includes the available tools in the context it presents to the LLM (without you seeing this — it's handled automatically)
- You have a conversation normally. When the LLM decides it needs a tool, it signals that intent
- The client intercepts it and sends a tools/call request to the appropriate server
- The server executes the function and returns a result
- The result goes back into the LLM's context, and the response continues
The key thing to internalize: the LLM doesn't call the tool directly. The client does, on the model's behalf. The model just knows what tools are available and signals when it wants to use one.
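The capabilities exchange in step two is worth seeing concretely. Here's a hedged sketch of an initialize response (top-level shape follows the MCP spec; the server name and version are placeholders):

```python
# Sketch of an initialize response. The capabilities object tells the
# client which primitives this server supports; empty dicts mean
# "supported, no optional sub-features". Server name is a placeholder.
initialize_result = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "protocolVersion": "2024-11-05",
        "capabilities": {
            "tools": {},      # server exposes tools
            "resources": {},  # ...and resources
            "prompts": {},    # ...and prompt templates
        },
        "serverInfo": {"name": "my-custom-server", "version": "0.1.0"},
    },
}
```

After this handshake, the client calls tools/list, resources/list, and prompts/list to enumerate what's actually available.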
Setting up 5 useful MCP servers
These are the servers that are genuinely worth the setup time.
1. Filesystem MCP (start here)
This is the one that changes how you work most immediately. Claude can read and write files in your project without you copy-pasting code into the chat.
# Test it runs
npx @modelcontextprotocol/server-filesystem /path/to/your/project
Add it to your Claude Desktop config at ~/Library/Application Support/Claude/claude_desktop_config.json:
{
"mcpServers": {
"filesystem": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/projects"]
}
}
}
Restart Claude Desktop and you'll see the filesystem server listed in the tools panel. Now you can say "look at my src/lib/mdx.ts and tell me what's missing for pagination support" and Claude actually reads the file instead of working from whatever you've described.
2. GitHub MCP
{
"mcpServers": {
"github": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-github"],
"env": {
"GITHUB_PERSONAL_ACCESS_TOKEN": "your_token_here"
}
}
}
}
You need a GitHub personal access token with repo permissions. Create one at github.com/settings/tokens.
What this unlocks: create issues, open PRs, read file contents from any repo you have access to, search code across your organization. The search tool alone is worth it — "find all files in this org that import from @deprecated/package" is something Claude can actually do.
3. Playwright MCP (web browsing)
{
"mcpServers": {
"playwright": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-playwright"]
}
}
}
This gives Claude a real browser. It can navigate to URLs, click elements, fill forms, take screenshots, and read page content. The practical use cases are better than they sound:
- "Go to this competitor's pricing page and summarize their tier structure"
- "Navigate to our staging environment and check if the form submission works"
- "Find the current version of Next.js on their releases page"
It's not magic — pages with heavy JavaScript or complex auth flows can trip it up — but for reading public web content and simple interactions, it works well.
4. Notion MCP
{
"mcpServers": {
"notion": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-notion"],
"env": {
"NOTION_API_KEY": "secret_your_key_here"
}
}
}
}
You'll also need to authorize your Notion integration on specific pages via the Notion UI — the API key alone isn't enough for page access.
Once configured, Claude can read your Notion databases and pages, search content, and write back to pages. Useful for "pull all our Q1 action items from the planning database and draft a status update."
5. Slack MCP
{
"mcpServers": {
"slack": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-slack"],
"env": {
"SLACK_BOT_TOKEN": "xoxb-your-token",
"SLACK_TEAM_ID": "T0XXXXXXX"
}
}
}
}
The Slack server is best for reading and searching message history. The use case that makes it worth configuring: "search the #engineering channel for any discussion about the auth refactor from last month and give me a summary of what was decided."
Posting messages and updating channels also work, but most teams are careful about letting Claude post to Slack without review.
Building your own MCP server in Python
When to build your own: when you need to expose your own internal API, database, or custom logic. No public MCP server knows about your CRM, your internal metrics system, or your proprietary data.
The official Python SDK makes this straightforward. Install it:
pip install mcp
Here's a minimal working server that exposes a customer lookup tool:
from mcp.server import NotificationOptions, Server
from mcp.server.models import InitializationOptions
import mcp.server.stdio
import mcp.types as types

server = Server("my-custom-server")

@server.list_tools()
async def handle_list_tools() -> list[types.Tool]:
    return [
        types.Tool(
            name="get_customer",
            description="Look up a customer by ID in our database",
            inputSchema={
                "type": "object",
                "properties": {
                    "customer_id": {
                        "type": "string",
                        "description": "The customer ID"
                    }
                },
                "required": ["customer_id"]
            }
        )
    ]

@server.call_tool()
async def handle_call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    if name == "get_customer":
        customer_id = arguments["customer_id"]
        # Your actual database call goes here (db is a placeholder client)
        customer = db.get_customer(customer_id)
        return [types.TextContent(type="text", text=str(customer))]
    raise ValueError(f"Unknown tool: {name}")

async def main():
    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
        await server.run(
            read_stream,
            write_stream,
            InitializationOptions(
                server_name="my-custom-server",
                server_version="0.1.0",
                capabilities=server.get_capabilities(
                    notification_options=NotificationOptions(),
                    experimental_capabilities={}
                )
            )
        )

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
Add it to your Claude Desktop config like this:
{
"mcpServers": {
"my-custom-server": {
"command": "python",
"args": ["/path/to/your/server.py"]
}
}
}
One thing worth paying attention to: the description field in the Tool definition is what Claude "reads" to understand what the tool does and when to use it. Write it like you're writing documentation for another developer — specific, accurate, with clear guidance on when to invoke it vs when not to. A vague description leads to Claude calling the wrong tool or missing when it should call yours.
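To make that concrete, here is a hypothetical vague description next to one written like documentation (both invented for illustration):

```python
# Too vague: Claude can't tell when to call this instead of other tools.
vague_description = "Gets customer data"

# Specific: says what it takes, what it returns, and when to use
# a different tool instead. Tool names and ID format are hypothetical.
specific_description = (
    "Look up a single customer by internal ID (e.g. 'cus_8f2k1'). "
    "Returns name, email, plan tier, and signup date as JSON. "
    "Use only when the user references a specific customer ID; "
    "for lookups by name or email, use a search tool instead."
)
```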
To expose resources in addition to tools, add a list_resources and read_resource handler using the same decorator pattern. The SDK documentation has examples for both.
MCP vs function calling vs tool use
This hierarchy trips people up, so here's the clean version:
Function calling / tool use is defined at the API level. You're building a custom application, you send tool definitions with every API request, and the model returns structured output when it wants to use a tool. It's stateless, works with any HTTP client, and is the right choice for custom apps that need specific controlled capabilities.
MCP is a persistent server process. Tools are discovered once on connection, not re-sent with every message. The server runs independently of any individual conversation. It's the right choice for persistent tool connections in AI assistants — Claude Desktop, OpenClaw, and similar tools where you want consistent access across all your conversations.
Use function calling when you're building a custom application with a focused set of tools and need full control over the API layer. Use MCP when you want to give an AI assistant ongoing, persistent access to your systems — your filesystem, your GitHub, your databases — without configuring it per-conversation.
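The contrast is easiest to see in the request itself. With function calling, a definition like this travels inside every API call, in the shape the Anthropic Messages API uses (the tool itself is hypothetical and the model name is a placeholder); with MCP, the server advertises the same schema once when the client connects:

```python
# Function calling: this definition is re-sent with every request.
tool_definition = {
    "name": "get_customer",
    "description": "Look up a customer by ID in our database",
    "input_schema": {
        "type": "object",
        "properties": {
            "customer_id": {"type": "string", "description": "The customer ID"}
        },
        "required": ["customer_id"],
    },
}

request_body = {
    "model": "claude-sonnet-4-5",  # placeholder model name
    "max_tokens": 1024,
    "tools": [tool_definition],    # shipped on every single call
    "messages": [{"role": "user", "content": "Pull up customer cus_8f2k1"}],
}
```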
The MCP ecosystem in 2026 — honest assessment
The official Anthropic-maintained servers are stable and worth trusting:
Worth using now:
- @modelcontextprotocol/server-filesystem — mature, stable, genuinely useful daily
- @modelcontextprotocol/server-github — solid, actively maintained
- @modelcontextprotocol/server-playwright — works well for most web tasks
- @modelcontextprotocol/server-postgres and server-sqlite — reliable for database access
- Brave Search MCP — good for giving Claude current web search capability
Useful but rough:
- Notion — sync delays and limited write capabilities are real pain points
- Slack — works for reading, posting is inconsistent across workspace configurations
- Jira — functional but noticeably slow on large projects
Approach with caution: Many community-built MCP servers are proof-of-concept code that hasn't been updated in months. Before installing any community server, check the GitHub stars, last commit date, and whether there's an active issue tracker. Don't give a server you found on a list access to production data or credentials without reading through the source code.
The official MCP server repository at github.com/modelcontextprotocol/servers is the most reliable starting point. Everything in there has been reviewed by Anthropic.
OpenClaw and MCP
OpenClaw (Claude Code) supports MCP natively. Any server you configure in Claude Desktop also works in OpenClaw — the config file is the same and the servers are shared. This is how you give OpenClaw persistent access to your databases, internal APIs, and custom tooling without having to paste credentials or data into every session.
The combination of OpenClaw's agentic capabilities with MCP's persistent tool access is where things get genuinely powerful. OpenClaw can use your filesystem server to read code, your GitHub server to file issues, and your custom server to query your internal metrics — all in one session, without you manually handing it any of that context.
For building more complex setups with OpenClaw, the hooks guide covers how to add custom behaviors that complement MCP server access.
Where to go from here
MCP is still maturing, but the core is stable and the filesystem and GitHub servers alone are worth the 10-minute setup. The biggest productivity win is stopping the habit of copy-pasting file contents and context into Claude — once the filesystem server is running, you just tell Claude where to look.
For the conceptual foundations, the intro MCP post has the background if you need it. For building more sophisticated agent workflows that combine MCP tools with structured reasoning, the function calling lesson in the Agents track covers how models decide when and how to use tools — which applies directly to how Claude works with your MCP servers.