🔌 Plugins & MCP
Claude doesn't know if a tool is built-in or from an MCP server — by design
Claude Code ships with ~30 built-in tools, but you can add unlimited more via MCP (Model Context Protocol) — external servers that provide tools through a standard interface. The LLM doesn't know whether a tool is built-in or from an MCP server; it calls them the same way.
- MCP servers communicate via stdio or HTTP/SSE transport
- MCP tools use the same schema as built-in tools — the LLM sees no difference
- Plugins bundle skills + hooks + MCP servers in a single installable package
MCP Tool Dispatch
What you are seeing
How the agent dispatches tool calls to either the built-in registry or an MCP server, transparently. The LLM generates the same tool_use block regardless of where the tool lives.
What to try
Compare the flow for a built-in tool (Grep) vs an MCP tool (my-db-query). Notice how the LLM interaction is identical — only the dispatch path differs.
// LLM generates tool call (same format for both)
tool_use(name="Grep", input={pattern: "TODO"})
// Dispatch: is this an MCP tool?
→ "Grep" not in MCP registry → built-in execution
// MCP tool call (same LLM format)
tool_use(name="my-db-query", input={sql: "SELECT ..."})
// Dispatch: is this an MCP tool?
→ "my-db-query" belongs to "db-server" → MCP dispatch
→ JSON-RPC: tools/call {name, input}
→ Server executes query, returns result
→ Result formatted as tool_result (same as built-in)
// .mcp.json configuration
{
  "mcpServers": {
    "db-server": {
      "command": "node",
      "args": ["db-server.js"],
      "transport": "stdio"
    }
  }
}
The Intuition
What you're seeing: the two MCP transports — stdio (local pipe) vs HTTP/SSE (remote network) — with their tradeoffs labeled. What to try: notice why OAuth authentication forces HTTP/SSE.
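One way to see why OAuth forces HTTP/SSE, as a sketch (the function and option names are illustrative, not Claude Code's actual API): stdio is a bare local pipe, so there is no HTTP layer to carry an Authorization header or complete an OAuth redirect flow.

```typescript
type Transport = "stdio" | "sse";

// Sketch: pick a transport from a server's requirements.
function pickTransport(opts: { remote: boolean; needsOAuth: boolean }): Transport {
  // OAuth needs HTTP endpoints (token exchange, Authorization headers),
  // and a remote server has no local process to pipe to.
  if (opts.needsOAuth || opts.remote) return "sse";
  // Local process with no auth requirement: stdio is simplest and fastest.
  return "stdio";
}
```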
Why Make MCP Tools Indistinguishable?
If Claude knew "Bash" is built-in but "slack_send" is MCP, it might trust one more than the other — second-guessing tool calls based on origin rather than purpose. Treating all tools equally means the LLM focuses on what the tool does, not where it comes from. This also means new MCP tools get the same reasoning quality as built-in tools with zero extra prompting.
Model Context Protocol
MCP is a standard for connecting AI agents to external tool servers. Think of it like LSP (Language Server Protocol) for AI tools: the agent is the client, external servers provide capabilities, and a standard protocol handles communication. Servers can be local processes (stdio transport) or remote services (HTTP/SSE transport).
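The wire format is plain JSON-RPC 2.0. A sketch of the tools/list exchange — message shapes follow the MCP specification, but the `search_db` tool itself is hypothetical:

```typescript
// JSON-RPC 2.0 request the client sends after connecting (discovery).
const listRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
};

// A server response advertising one tool. Each tool carries a JSON Schema
// for its inputs — the same shape the LLM sees for built-in tools.
const listResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    tools: [
      {
        name: "search_db",
        description: "Run a read-only SQL query",
        inputSchema: {
          type: "object",
          properties: { sql: { type: "string" } },
          required: ["sql"],
        },
      },
    ],
  },
};
```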
Transparent Integration
The critical design decision: MCP tools are indistinguishable from built-in tools to the LLM. They use the same JSON Schema for input validation, the same tool_use/tool_result message format, and appear in the same tool list. The LLM has no way to tell — and no reason to care — whether a tool is built-in or external.
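One way to picture this (a sketch, not the actual implementation): the client merges built-in and MCP-discovered tool definitions into a single flat list before sending them to the model, so the tool's origin is erased at the boundary.

```typescript
interface ToolDefinition {
  name: string;
  description: string;
  inputSchema: object; // JSON Schema — identical shape for both sources
}

// Hypothetical built-in tool and MCP-discovered tool.
const builtIn: ToolDefinition[] = [
  { name: "Grep", description: "Search file contents", inputSchema: { type: "object" } },
];
const fromMCP: ToolDefinition[] = [
  { name: "my-db-query", description: "Query the database", inputSchema: { type: "object" } },
];

// The model receives one flat list; nothing marks which entry is external.
const toolsForLLM: ToolDefinition[] = [...builtIn, ...fromMCP];
```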
Configuration
MCP servers are declared in .mcp.json: server name, command to start it, arguments, and transport type. At connection time, the client calls tools/list to discover available tools dynamically — no hardcoded tool definitions.
Plugin Architecture
Plugins are packages that can provide multiple extension types:
- Skills — markdown files injected as prompts
- Hooks — shell scripts that run on events (PreToolUse, PostToolUse)
- MCP servers — external tool servers bundled with the plugin
- Custom tools — additional tools added to the registry
The plugin lifecycle: install → load hooks → register tools → connect MCP servers. Uninstall reverses each step.
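The lifecycle above can be sketched as symmetric install/uninstall steps — names and types here are illustrative, not Claude Code's actual plugin API:

```typescript
interface Plugin {
  name: string;
  skills: string[];      // markdown files injected as prompts
  hooks: string[];       // shell scripts bound to events
  mcpServers: string[];  // server configs to connect
}

const steps = ["load hooks", "register tools", "connect MCP servers"];

function install(plugin: Plugin): string[] {
  // Contributions are enumerated in the manifest, so install
  // is an ordered list of steps...
  return steps.map((s) => `${plugin.name}: ${s}`);
}

function uninstall(plugin: Plugin): string[] {
  // ...and uninstall is exactly those steps, reversed.
  return [...steps].reverse().map((s) => `${plugin.name}: undo ${s}`);
}
```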
Server Lifecycle Management
MCP servers are external processes — they can crash, hang, or become unreachable. The client must handle this without surfacing process management complexity to the LLM. On startup, the client spawns each configured server (for stdio transport) or opens a connection (for SSE), then calls tools/list to discover available tools. If a server crashes mid-session, the client catches the broken pipe, attempts a reconnect with exponential backoff, and if reconnection fails, removes that server's tools from the registry and notifies the LLM via a tool_result error on the next call. This graceful degradation means a flaky MCP server only breaks its own tools, not the entire session. A key implication: the tool list the LLM sees at turn N may differ from turn N+10 if a server restarted and now exposes a different tool set — this is why tools/list is called at connection time, not hardcoded at startup.
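The reconnect logic described above can be sketched as follows — a simplified sketch, where the function name, attempt count, and delays are assumptions, not the real implementation's values:

```typescript
async function reconnectWithBackoff(
  connect: () => Promise<void>,
  maxAttempts = 4,
  baseDelayMs = 500,
): Promise<boolean> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      await connect();
      return true; // server is back; its tools stay in the registry
    } catch {
      // 500ms, 1s, 2s, 4s — exponential backoff between attempts
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  return false; // give up: remove this server's tools, keep the session alive
}
```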
Dynamic Discovery and Tool Versioning
Because MCP servers advertise their tools dynamically via tools/list at connection time, servers can evolve their tool schemas without requiring changes to the agent. A database MCP server can add a new explain_query tool in v2.0 — the next time the agent starts and reconnects, the new tool appears automatically in the registry and the LLM can use it. This is the same versioning strategy as LSP (Language Server Protocol): the client queries capabilities at connection time rather than encoding them at compile time. The downside is that the system prompt token count for tool definitions varies per session depending on which servers are available — a server with 20 tools adds ~600 extra tokens to every API call, which compounds across 50+ turns. For this reason, MCP servers should expose focused, cohesive tool sets rather than dumping every possible operation as a tool.
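The token math above, sketched out — the per-tool figure is a rough assumption back-derived from the text's ~600 tokens for 20 tools:

```typescript
// Rough cost model for MCP tool definitions in the system prompt.
const tokensPerToolDef = 30;  // assumed average (name + description + schema)
const toolsOnServer = 20;
const turnsPerSession = 50;

// Definitions are resent on every API call, so the cost compounds per turn.
const tokensPerCall = toolsOnServer * tokensPerToolDef;   // ~600 tokens
const tokensPerSession = tokensPerCall * turnsPerSession; // ~30,000 tokens
```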
Key Code Patterns
MCP Configuration (.mcp.json)
{
  "mcpServers": {
    "my-server": {
      "command": "node",
      "args": ["server.js"],
      "transport": "stdio"
    },
    "remote-api": {
      "url": "https://api.example.com/mcp",
      "transport": "sse"
    }
  }
}

MCP Client (TypeScript pseudocode)
class MCPClient {
  private transport: Transport | null = null;

  connect(config: MCPServerConfig): void {
    // Start server process, establish transport
    if (config.transport === "stdio") {
      const process = spawn(config.command, config.args);
      this.transport = new StdioTransport(process);
    } else if (config.transport === "sse") {
      this.transport = new SSETransport(config.url);
    }
  }

  async listTools(): Promise<ToolDefinition[]> {
    // Get tools from server — same schema as built-in
    return this.transport!.request("tools/list");
  }

  async callTool(name: string, input: unknown): Promise<ToolResult> {
    // Execute tool on server — returns same format as built-in
    return this.transport!.request("tools/call", { name, arguments: input });
  }
}

Tool Dispatch (built-in vs MCP)
async function dispatchToolCall(call: ToolCall): Promise<ToolResult> {
  // Route to built-in or MCP — transparent to LLM
  // Check if this tool belongs to an MCP server
  const mcpServer = findMCPServerForTool(call.name);
  let result: unknown;
  if (mcpServer) {
    // MCP dispatch — JSON-RPC over transport
    result = await mcpServer.callTool(call.name, call.input);
  } else {
    // Built-in dispatch — direct execution
    const tool = registry.get(call.name);
    result = await tool.call(call.input);
  }
  // Same tool_result format regardless of source
  return new ToolResult({ content: result });
}
Real-World Numbers
| Metric | Value |
|---|---|
| Transport types | stdio (local), HTTP/SSE (remote) |
| Discovery protocol | tools/list (JSON-RPC at connection time) |
| Config file | .mcp.json (per-project or global) |
| Tool schema | Same JSON Schema as built-in tools |
| Plugin contributions | Skills, hooks, MCP servers, custom tools |
| LLM visibility | Indistinguishable from built-in tools |
Key Takeaways
What to remember for interviews
1. MCP tools are indistinguishable from built-in tools to the LLM — same JSON Schema, same tool_use/tool_result format — so the model selects tools by capability, not origin.
2. MCP uses JSON-RPC over stdio (local processes) or HTTP/SSE (remote servers); tools are discovered dynamically via tools/list at connection time, not hardcoded.
3. If an MCP server crashes, only its tools are removed from the registry — the rest of the session continues unaffected via graceful degradation.
4. Each MCP server with 20 tools adds ~600 tokens to every API call; servers should expose focused, cohesive tool sets rather than dumping every operation as a tool.
5. Plugins bundle skills, hooks, MCP servers, and custom tools in a single declarative manifest — install reverses cleanly because contributions are enumerated, not scripted.
Further Reading
- Model Context Protocol Specification — The open standard for connecting AI agents to external tools and data sources.
- Claude Code (source) — Production implementation of MCP client integration and plugin architecture.
- JSON-RPC 2.0 Specification — The transport protocol underlying MCP communication between client and server.
- Language Server Protocol — Inspiration for MCP — a similar protocol for editor-language server communication.
- MCP Servers Registry — Official list of reference MCP server implementations — filesystem, GitHub, Postgres, Puppeteer, and more.
- MCP Quickstart — Step-by-step guide to building and connecting your first MCP server — covers stdio and Streamable HTTP transports.
- OAuth 2.0 Authorization Framework (RFC 6749) — The auth standard underlying MCP's remote server authentication — critical for securing third-party plugin access.
- An MCP server registers a tool called `search_db`. From the LLM's perspective, how does this differ from the built-in `Grep` tool?
- When should you prefer HTTP/SSE transport over stdio for an MCP server?
- What is the key constraint that makes a plugin manifest declarative rather than imperative?
- Which attack surface is unique to stdio-based MCP transport compared to HTTP-based transport?
Interview Questions
- Design a protocol for extending an AI agent with external tool servers. ★★★
- What are the security implications of running external tool servers? ★★☆
- How would you design a plugin system that provides skills, hooks, MCP servers, and custom tools through a single package? ★★★
- What security risks does stdio-based MCP transport introduce compared to HTTP, and how would you mitigate them? ★★★