MCP (Model Context Protocol) is how AI assistants call tools. It's JSON-RPC 2.0 over stdin/stdout. Your IDE starts a server process, they do a handshake, and then the AI can fetch URLs, query APIs, read files — whatever the server exposes. One protocol, works across Claude Desktop, Cursor, VS Code, Windsurf.
"MCP" is one of those terms that's suddenly everywhere — GitHub READMEs, Twitter, conference talks — and somehow nobody explains what's actually going on under the hood. You either get the Anthropic marketing version ("a universal open standard for connecting AI systems with data sources!") or a spec link and good luck.
I spent a week reading the spec and building tooling on top of it. Here's the version I wish someone had written before I started.
MCP in one paragraph
Your AI assistant (Claude, Cursor, whatever) needs to do things beyond just generating text. Fetch a webpage. Create a GitHub issue. Read a file from disk. MCP is the protocol that makes this happen. Your IDE starts a server process, they exchange some JSON over stdin/stdout, and then the AI can call that server's tools. The AI decides when to call a tool. The server handles the how.
Why not just... build integrations?
People did. For years. That's the problem.
Want Claude to search the web? Custom plugin. Want it to read your filesystem? Different plugin. Company API? Build from scratch. Each integration is a one-off piece of plumbing between one specific AI tool and one specific service. Works fine when you have two of them. Turns into a maintenance nightmare at ten.
And then Cursor wants the same integrations. And VS Code Copilot. And Windsurf. Now you're maintaining the same logic four times.
MCP standardizes the plumbing. One protocol, one config format. Write a server once, it works with every IDE that speaks MCP. Which, at this point, is most of them.
What the config looks like
Every MCP-powered IDE has a JSON config that says "start these servers." Here's a real one from my setup:
{
  "mcpServers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    },
    "github": {
      "command": "uvx",
      "args": ["mcp-server-github"],
      "env": {
        "GITHUB_TOKEN": "ghp_..."
      }
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/user/projects"]
    }
  }
}
Three servers. Each is a command the IDE will spawn as a subprocess. fetch grabs web pages. github talks to the GitHub API (needs a token in the env). filesystem reads and writes files in a specific directory.
When you open your IDE, it starts all three, runs a handshake with each, and adds their tools to the model's toolbox. The AI doesn't know or care which server a tool comes from. It just sees "I have a fetch tool and a list_issues tool and a read_file tool."
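Each config entry maps almost directly onto a subprocess call. Here's a minimal sketch in Python of what the IDE does with one entry — the config content is from above, but the `spawn` helper is my name, not a real API, and actually launching requires uvx installed, so that line is left commented:

```python
import json
import os
import subprocess

# One entry from the config above (trimmed to just the fetch server).
CONFIG = """{
  "mcpServers": {
    "fetch": { "command": "uvx", "args": ["mcp-server-fetch"] }
  }
}"""

def spawn(entry):
    # Per-server env vars are layered over the parent environment --
    # this is how GITHUB_TOKEN would reach the github server.
    env = {**os.environ, **entry.get("env", {})}
    return subprocess.Popen(
        [entry["command"], *entry.get("args", [])],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        env=env,
        text=True,
    )

servers = json.loads(CONFIG)["mcpServers"]
# proc = spawn(servers["fetch"])  # needs uvx + mcp-server-fetch installed
```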
The handshake
This is the part that took me longest to grok, because most MCP explanations just wave their hands and say "it negotiates capabilities." So here's the actual JSON. I'll walk through it message by message.
Want to see this live? Run mcptools proxy --server fetch --no-tui and watch these exact messages fly by in real time.
Message 1 — IDE says hello:
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": {
      "name": "claude-desktop",
      "version": "1.0.0"
    }
  }
}
JSON-RPC 2.0 request. The id: 1 means "I expect a response with this same ID." The protocol version tells the server which MCP version we're speaking.
Message 2 — Server says what it can do:
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "2024-11-05",
    "capabilities": {
      "tools": {},
      "prompts": {}
    },
    "serverInfo": {
      "name": "mcp-fetch",
      "version": "1.26.0"
    }
  }
}
The capabilities object is the important part. This server is saying "I have tools and prompts." No resources. So the IDE knows: don't bother asking for resources, this server doesn't have any.
Message 3 — IDE confirms:
{
  "jsonrpc": "2.0",
  "method": "notifications/initialized"
}
No id field. This is a notification — fire and forget, no response expected. Just "OK, we're good, let's go."
The distinction between requests (have an id, expect a response) and notifications (no id, fire and forget) tripped me up at first. It's a JSON-RPC 2.0 thing, not an MCP thing. Once you see it, the whole protocol makes more sense.
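In code, the distinction is nothing more than the presence of the id field. A small helper pair — hypothetical names, plain JSON-RPC 2.0, nothing MCP-specific:

```python
import itertools
import json

_ids = itertools.count(1)  # monotonically increasing request ids

def request(method, params=None):
    # Has an "id": the peer must answer with a response carrying the same id.
    msg = {"jsonrpc": "2.0", "id": next(_ids), "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

def notification(method, params=None):
    # No "id": fire and forget, no response allowed.
    msg = {"jsonrpc": "2.0", "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

print(request("tools/list"))
print(notification("notifications/initialized"))
```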
Message 4 — IDE asks what tools exist:
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/list"
}
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "tools": [
      {
        "name": "fetch",
        "description": "Fetches a URL from the internet and extracts its contents as markdown.",
        "inputSchema": {
          "type": "object",
          "properties": {
            "url": { "type": "string" },
            "max_length": { "type": "integer" },
            "raw": { "type": "boolean" }
          },
          "required": ["url"]
        }
      }
    ]
  }
}
That inputSchema is a standard JSON Schema. The model reads it and knows: "I have a fetch tool. It needs a URL. Optionally I can set a max length and whether I want raw content." It's not guessing — it's reading a schema. This is why MCP tools work so much more reliably than prompting the AI to "use this API."
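Because it's plain JSON Schema, the host can also reject bad arguments before they ever reach the server. Here's a toy checker covering just the required/type subset the fetch schema uses — real hosts would run a full JSON Schema validator, and `check_args` is my name for it:

```python
# Toy validator: handles only "required" and primitive "type" checks,
# which is all the fetch schema above needs. Not full JSON Schema.
PY_TYPES = {"string": str, "integer": int, "boolean": bool, "object": dict}

def check_args(schema, args):
    for key in schema.get("required", []):
        if key not in args:
            return f"missing required argument: {key}"
    for key, val in args.items():
        expected = schema["properties"].get(key, {}).get("type")
        if expected and not isinstance(val, PY_TYPES[expected]):
            return f"{key} should be {expected}"
    return None  # valid

FETCH_SCHEMA = {
    "type": "object",
    "properties": {
        "url": {"type": "string"},
        "max_length": {"type": "integer"},
        "raw": {"type": "boolean"},
    },
    "required": ["url"],
}

print(check_args(FETCH_SCHEMA, {"url": "https://example.com"}))  # None
print(check_args(FETCH_SCHEMA, {"max_length": 500}))             # missing required argument: url
```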
Calling a tool
Later, the AI decides it needs to fetch a webpage. The exchange is short:
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "fetch",
    "arguments": { "url": "https://example.com" }
  }
}
{
  "jsonrpc": "2.0",
  "id": 3,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "# Example Domain\n\nThis domain is for use in illustrative examples..."
      }
    ]
  }
}
Request, response. ~200ms. The AI gets the text and uses it in its answer. Done.
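The whole lifecycle — spawn, handshake, call — fits in a short script. A sketch that drives mcp-server-fetch directly (assumes uvx and the server package are installed, which is why the last line is commented out; the helper names are mine):

```python
import json
import subprocess

def frame(msg):
    # Stdio framing: one JSON object per line.
    return json.dumps(msg) + "\n"

def call_fetch(url):
    # Spawn the server exactly as an IDE would.
    proc = subprocess.Popen(
        ["uvx", "mcp-server-fetch"],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
    )

    def send(msg):
        proc.stdin.write(frame(msg))
        proc.stdin.flush()

    def recv():
        return json.loads(proc.stdout.readline())

    # Handshake, then the call -- the same messages walked through above.
    send({"jsonrpc": "2.0", "id": 1, "method": "initialize",
          "params": {"protocolVersion": "2024-11-05", "capabilities": {},
                     "clientInfo": {"name": "demo-client", "version": "0.0.1"}}})
    recv()
    send({"jsonrpc": "2.0", "method": "notifications/initialized"})
    send({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
          "params": {"name": "fetch", "arguments": {"url": url}}})
    reply = recv()
    proc.terminate()
    return reply["result"]["content"][0]["text"]

# print(call_fetch("https://example.com"))  # needs uvx + network access
```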
Tools, resources, prompts
MCP servers can expose three types of things. In practice, most servers only do the first one.
Tools are actions. "Fetch this URL." "Create a GitHub issue." "Run this SQL query." The AI decides when to use them. Each has a name, description, and JSON Schema for its parameters. This is the one you'll see everywhere.
Resources are data the AI can read. Think files, database records, API responses — each identified by a URI like file:///path/to/doc.txt. Less common than tools. Useful when you want the AI to pull in specific context without an "action."
Prompts are templates the server provides. A code review server might offer a "review this PR" prompt that takes a PR URL as input. I've seen very few servers use this in the wild, but it's there if you need it.
Gotcha: The server only gets asked about capabilities it declared in the handshake. If a server says "capabilities": {"tools": {}}, the IDE will never ask it for resources or prompts, even if the server implements them. I've seen people debug this for ages. Check the initialize response first.
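You can check this mechanically: parse the initialize response and see which capability keys the server actually declared. A small sketch, using the fetch server's response from above (`declared` is a hypothetical helper, not part of any SDK):

```python
import json

# The fetch server's initialize response from the handshake above.
INIT_RESPONSE = """{
  "jsonrpc": "2.0", "id": 1,
  "result": {
    "protocolVersion": "2024-11-05",
    "capabilities": { "tools": {}, "prompts": {} },
    "serverInfo": { "name": "mcp-fetch", "version": "1.26.0" }
  }
}"""

def declared(init_response):
    # Only the keys present here get follow-up list requests from the IDE.
    caps = json.loads(init_response)["result"]["capabilities"]
    return sorted(k for k in ("tools", "resources", "prompts") if k in caps)

print(declared(INIT_RESPONSE))  # ['prompts', 'tools'] -- no resources
```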
Stdio transport
MCP runs over stdin/stdout. This confused me initially — I expected HTTP, or WebSockets, or at least some kind of port. But no. It's just a subprocess:
- IDE starts the server as a child process
- Messages go in through stdin
- Responses come back through stdout
- One JSON message per line, terminated by \n
- stderr is yours for logging — doesn't touch the protocol
This turns out to be a really good design decision. No ports to manage. No HTTP overhead. No authentication needed — it's local, the process boundary is the security boundary. And because it's just stdin/stdout, you can write an MCP server in any language. Python, Node, Go, Rust, even a bash script if you're feeling adventurous.
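To make "any language" concrete, here's a toy MCP server in Python: one echo tool, stdio transport, no framework. This is a sketch of the message shapes from the handshake above, not a production implementation — no error handling, no resources or prompts, and the names (echo-server, serve) are mine:

```python
import json
import sys

ECHO_TOOL = {
    "name": "echo",
    "description": "Echo the input text back.",
    "inputSchema": {
        "type": "object",
        "properties": {"text": {"type": "string"}},
        "required": ["text"],
    },
}

def handle(msg):
    # Returns a result dict for requests, None for notifications.
    method = msg.get("method")
    if method == "initialize":
        return {"protocolVersion": "2024-11-05",
                "capabilities": {"tools": {}},
                "serverInfo": {"name": "echo-server", "version": "0.1.0"}}
    if method == "tools/list":
        return {"tools": [ECHO_TOOL]}
    if method == "tools/call":
        text = msg["params"]["arguments"]["text"]
        return {"content": [{"type": "text", "text": text}]}
    return None  # e.g. notifications/initialized

def serve(inp=sys.stdin, out=sys.stdout):
    # One JSON message per line in, one per line out.
    for line in inp:
        msg = json.loads(line)
        result = handle(msg)
        if "id" in msg and result is not None:
            out.write(json.dumps({"jsonrpc": "2.0", "id": msg["id"],
                                  "result": result}) + "\n")
            out.flush()

# serve()  # uncomment to run it as a real stdio server
```

Point a config entry at it ("command": "python3", "args": ["echo_server.py"]) and any MCP-speaking IDE can discover and call the echo tool.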
There's an SSE (Server-Sent Events) transport for remote servers, but I haven't seen it used much in practice. Everything I've worked with is stdio. The simplicity is hard to beat.
See it yourself
You don't need an IDE to watch MCP happen. mcptools lets you poke at any server from your terminal:
$ pip install git+https://github.com/jannik-cas/mcptools.git

# What does this server expose?
$ mcptools inspect uvx mcp-server-fetch

┌───────────────────────── MCP Server ─────────────────────────┐
│ mcp-fetch v1.26.0                                            │
└──────────────────────────────────────────────────────────────┘

Tools (1)

  Name    Description                   Parameters
  fetch   Fetches a URL from the        url: string*,
          internet and extracts its     max_length: integer,
          contents as markdown.         raw: boolean
That just did the full handshake — initialize, notifications/initialized, tools/list — and formatted the result. Same messages your IDE sends, but you can see them.
# Is my config healthy?
$ mcptools doctor

Config: ~/.claude/claude_desktop_config.json

Checking fetch...       ✓ 1 tool, 1 prompt
Checking github...      ✓ 35 tools
Checking filesystem...  ✓ 5 tools, 2 resources

Summary: 3 healthy
Starts every server, runs the handshake, counts capabilities, measures latency. Two seconds.
Where to find MCP servers
There's already a decent ecosystem. The ones I've actually used:
- mcp-server-fetch — Grab web pages as markdown. The first one everyone installs, and honestly the most useful.
- mcp-server-github — Full GitHub API. 35 tools. Slow to start but very capable.
- mcp-server-filesystem — Sandboxed file access in a specific directory.
- mcp-server-time — Current time and timezone conversions. Sounds trivial, surprisingly useful.
- mcp-server-sqlite / mcp-server-postgres — Query databases directly.
Most are installable with uvx (Python) or npx (Node). The modelcontextprotocol GitHub org has the official ones. But anyone can write a server — it's just a process that reads a line of JSON and writes a line of JSON.
The mental model
Your IDE
↓ starts subprocess, sends JSON-RPC over stdin
MCP Server (just a process — any language)
↓ responds over stdout
Your IDE
↓ gives the response to the AI
AI decides what to do next
A subprocess that speaks JSON-RPC. The config says which ones to start. The protocol handles discovery and execution. The AI sees tools and calls them when it wants to.
The spec is surprisingly small. The whole thing fits in your head once you've seen the messages. And now you have.