Model Context Protocol (MCP), and notes on implementing an MCP server in Go
A longread about the MCP specification and its implementation in Go.
This document describes the Model Context Protocol (MCP) and offers short notes on implementing an MCP server in Go, covering message structure and the protocol specification.
Model Context Protocol (MCP) Overview
Model Context Protocol (MCP) is an open, standardized framework (introduced by Anthropic in late 2024) for connecting AI language models to external data sources, tools, and systems. Its goal is to solve the “N×M integration” problem by providing a universal interface for things like reading files, executing functions (tools), and using contextual prompts across different applications. MCP is not a proprietary or internal protocol; it’s an open standard with an official specification and open-source reference implementation. In fact, major AI providers (including OpenAI and Google DeepMind) announced support for MCP after its introduction, underscoring that it’s intended as a broadly adopted standard rather than a vendor-specific solution.
MCP Purpose and Architecture
MCP aims to standardize how applications provide context to LLMs – the analogy often used is “a USB-C port for AI applications”. By defining a common protocol, MCP lets AI assistants and tools seamlessly interface with databases, file systems, APIs, and other resources without custom one-off integrations. This helps language models generate more relevant, up-to-date responses by giving them secure access to the data they need.
Architecture: MCP follows a client–server model with clear role separation:
- MCP Host: The parent application (e.g. a chat client or IDE) that manages connections. It contains one or more MCP clients (connectors).
- MCP Client: A connector instance (within the host) that establishes a 1:1 session with an MCP server. The client handles the session lifecycle, routes messages, and enforces any user permissions or security policies.
- MCP Server: A lightweight service that exposes specific capabilities (access to certain data or functions) via the MCP protocol. Each server might wrap a data source (files, DB, API, etc.) or tool. Multiple servers can run in parallel, each providing different integrations.
- Data Sources/Services: The actual resources that servers interface with – this can include local files and databases or remote services (web APIs, SaaS apps, etc.). The MCP server acts as an adapter to these resources, ensuring the LLM only accesses data through the standardized protocol.
This design is inspired by the Language Server Protocol (LSP) from the IDE world. Just as LSP lets any editor support any programming language via a common protocol, MCP lets any AI application connect to any data/tool integration that speaks MCP. This decoupling means AI tool developers can write an MCP server once and have it work with many AI clients, and AI application developers can add new integrations simply by plugging in an MCP server, avoiding bespoke integration code.
Protocol and Message Structure
Communication: MCP communication is built on persistent, stateful sessions using JSON-RPC 2.0 messages. All requests and responses conform to JSON-RPC’s format (with a `"jsonrpc": "2.0"` field, method names, params, and correlating IDs). Either side – client or server – may send requests or notifications, enabling two-way interaction. An MCP session typically begins with a handshake:
- The client initiates with an `initialize` request, proposing a protocol version and advertising its capabilities (which features it supports). For example, the client might indicate it can handle server-driven “sampling” requests or provide certain roots for file access. The server responds with its own supported protocol version and capabilities, finalizing which features are enabled for this session (MCP uses a capability negotiation system similar to optional features in LSP). If critical capabilities or the version are incompatible, the connection is gracefully aborted.
- Upon agreement, the client sends an `initialized` notification to mark readiness. After this, normal operations can proceed. The session remains open for a continuous exchange of JSON-RPC messages until one side issues a shutdown.
Transports: MCP doesn’t mandate a single transport – it works over any channel that can carry JSON text. Commonly, an MCP server is run as a subprocess and communicates over STDIO (stdin/stdout pipes) for local integrations. This is analogous to how language servers operate and is convenient for local tools (the host can launch the server process and pipe messages). Alternatively, MCP servers can run as independent services accessible via HTTP. The MCP spec defines a streaming HTTP transport where the server exposes a single HTTP endpoint for JSON-RPC calls (clients POST requests, and the server can respond or stream results via Server-Sent Events for long-running operations). In either case, messages are UTF-8 JSON lines, and the protocol supports streaming responses and server-initiated messages (the HTTP+SSE approach allows the server to push notifications or partial results asynchronously). Security guidelines recommend that local servers bind to localhost and validate `Origin` headers to prevent unwanted remote access, and that proper auth (e.g. tokens or OAuth flows) be used for remote servers.
Message Format: MCP leverages JSON-RPC’s three message types: Requests, Responses, and Notifications. A request contains an `id`, a `method` string, and (optionally) `params` (usually a JSON object of arguments). The receiver must reply with a corresponding response (with matching `id`) containing either a `result` or an `error` object. Notifications are one-way messages with a `method` and `params` but no `id` (so they do not get a response). MCP imposes a few rules on top of base JSON-RPC (for example, `id` must be non-null and not reused during a session) to maintain clarity.
Session and State: The connection is considered stateful – the client and server maintain context about each other’s capabilities and possibly some session state (such as subscriptions to changes or ongoing operations). There are also defined procedures for graceful shutdown (e.g. a client may send a shutdown request or simply close the transport; servers should handle cleanup, and both sides implement timeouts for hanging operations). Error handling follows JSON-RPC conventions (error responses have a code and message), and the spec defines standard error codes for certain conditions (e.g. permission denied, tool not found). MCP also provides utilities for cross-cutting concerns: there are built-in notifications for progress updates, cancellation of a long-running request (`CancelledNotification`), logging/debug messages, and configuration changes. These help manage long or complex interactions (the client can cancel an in-progress tool call, or the server can log warnings to the client).
MCP Features and Operations
Once initialized, an MCP session enables the exchange of context and commands in a structured way. The core MCP server-side features are Prompts, Resources, and Tools (each of which the server declares if it supports during initialization):
- Prompts: Pre-defined prompt templates or instructions that the server can supply to the client. These are typically user-triggered helpers (the user explicitly chooses a prompt to insert into the conversation, e.g. via a slash command in the UI). MCP provides methods to list available prompts and retrieve a prompt’s content. For example, a client can call `prompts/list` to get a list of prompt templates (each with a name, description, and optional parameters). To fetch a prompt, the client uses `prompts/get` with the prompt’s name and any argument values; the server then returns the prompt content (often as a set of messages that the client will inject into the LLM’s context). Prompts allow reuse of complex instructions or workflows (e.g. a “code review template”) that a user can invoke on demand. Servers indicate a `prompts` capability (with optional sub-features like `listChanged` to notify the client if the set of prompts changes dynamically).
- Resources: Structured data or content that provides context to the model. Resources are typically things like files, documents, or database entries – information that an AI assistant might read or reference. MCP standardizes how resources are identified and transferred: each resource has a URI identifier (e.g. `file:///path/to/file.txt` or a custom scheme for databases). Clients can query what resources are available via `resources/list` (the server may expose a directory tree, a list of recent documents, etc.). The server’s response includes metadata for each resource (URI, name, type, description, etc.). The client can then request the content of a specific resource with `resources/read`, passing the URI. The server replies with the resource content, which could be text (for files) or structured data (MCP supports different content types – text, JSON, binary, etc. – with MIME types). There is also support for resource templates (parameterized resources identified by template URIs that the client can fill in, e.g. a database query where the user provides a parameter). If enabled, servers can send notifications when resources change (e.g. `notifications/resources/updated`) or allow clients to subscribe to changes on a resource (`resources/subscribe`). In MCP’s design, resources are application-controlled context: the host application (client) typically decides which resource content to actually feed into the model’s prompt (often after user confirmation or based on UI context).
- Tools: Executable functions or actions that the server exposes for the model to invoke. Tools represent operations the AI can perform – e.g. call an external API, run a database query, send an email, or modify a file. Each tool has a name and a JSON Schema for its input (and optionally output) parameters, so the AI (or client) knows what arguments it expects. Tools are typically model-controlled: the idea is that the language model (agent) decides if and when to use a tool during a conversation to fulfill the user’s request. However, for safety, a human user or the host app may mediate tool use (e.g. require a confirmation click). Using tools in MCP involves two main operations: listing and calling. A client can call `tools/list` to get the available tools and their schemas. For example, a server might list a tool `get_weather` with a description and an input schema that requires a “location” string. Then, when the model decides to use a tool (or the user invokes it), the client sends a `tools/call` request with the tool’s name and a JSON object of arguments. The server executes the function and returns the result, typically as a `result.content` field that can contain text or structured data (MCP supports returning multiple content parts, e.g. text plus an image, though text is common). A simple example: calling a `get_weather` tool might return a text payload like “Current weather in New York: 72°F, partly cloudy” as the content for the assistant to present. Tools can also indicate errors (the response has an `isError` flag or an error object if something went wrong). Like prompts and resources, the `tools` capability can have an optional `listChanged` flag to notify when available tools change at runtime (e.g. a dynamic plugin loaded/unloaded).
In addition to the above server-offered features, MCP also defines client-offered features (capabilities that servers can leverage if the client supports them). These include Sampling, Roots, and Elicitation:
- Sampling allows a server to request that the client (and its LLM) perform model inference within the session. For instance, a server could initiate an LLM call (perhaps to continue a chain of thought or to summarize something) by sending a `sampling/createMessage` request – the client would then prompt the model and return the outcome. This enables agentic behaviors where the server can drive the AI to assist in its own sub-tasks. (All such actions are subject to user approval and policy – e.g. a user might have to opt in to let a server trigger the model for additional queries.)
- Roots allows the server to inquire about or operate within certain allowed file system or URI roots. The client can provide a list of “root” directories/URIs that the server is permitted to access, via `roots/list`. This is a security feature ensuring the server knows its boundaries (e.g. which folder trees it can read from).
- Elicitation lets the server ask the client to obtain more information from the user if needed. For example, if a tool needs a missing piece of info that wasn’t provided, the server can send an elicitation request, which the client (UI) would translate into a user prompt (“The X integration needs your API key, please enter it”). This way the server can interactively gather input via the client.
These features are all optional and negotiated up front. A key design aspect of MCP is that capability negotiation happens during initialization – the client and server advertise which of the above features they support, so both sides know what operations are available in the session. For example, if a server doesn’t declare the `tools` capability, the client won’t attempt any `tools/list` or `tools/call` operations with it. This extensibility means MCP can evolve with new features over time while maintaining backward compatibility (unsupported methods simply won’t be used if not negotiated).
Implementations, SDKs, and Building an MCP Server (especially in Go)
Official Specification & Documentation: The authoritative MCP specification is openly available, including a formal schema of all message types. It’s maintained on the Model Context Protocol website and GitHub. The spec is defined in a TypeScript schema file (with a corresponding JSON Schema) that precisely documents all requests, responses, and structures. The documentation site (modelcontextprotocol.io) provides guides, a FAQ, and detailed breakdowns of each feature and message type, as well as an “MCP Inspector” tool for interactive debugging. While MCP is not (yet) an IETF or ISO standard, it is developed as an open standard with community input and uses familiar RFC 2119 terminology for requirements. It’s an evolving protocol (versions are date-stamped; e.g. 2025-06-18 is a recent revision), with a versioning policy to manage changes.
Reference Implementations: Anthropic open-sourced a number of MCP server connectors and SDKs when introducing MCP. There is a GitHub organization, `modelcontextprotocol`, that hosts the spec and several repositories. Notably, a “servers” repository contains a collection of pre-built MCP server implementations for common services and data sources. These serve as reference integrations and can often be used out-of-the-box or as templates for custom servers. For example, the official repo includes servers for Google Drive (file access and search in Google Drive), Slack (workspace messaging and channel content), GitHub/Git (code repository context), PostgreSQL (read-only database queries with schema info), Google Maps (location and directions API), Puppeteer (web browsing and scraping), and many more. By installing or running these servers, an AI application like Claude or Cursor can immediately gain that integration. There’s also a community-driven MCP registry service (open-source in Go) for indexing available servers, and many third-party contributions extending MCP to various domains (from CRMs to blockchain data).
SDKs and Libraries: To facilitate building your own MCP servers/clients, there are official SDKs in multiple languages. As of 2025, the project provides SDKs for TypeScript/Node, Python, Java (and Kotlin), C# (developed with Microsoft), Ruby (with Shopify), Swift, and others. These libraries handle the protocol plumbing – e.g. managing the JSON-RPC transport, implementing the spec schema, and providing helper APIs to register tools or serve resources. For instance, the TypeScript SDK can be used to quickly write a server in Node.js, and the Python SDK allows integrating MCP in Python applications. The SDK approach means developers don’t have to manually construct JSON-RPC messages or implement the full state machine; instead, they call high-level methods to send requests or publish capabilities.
Go Implementation: Go has emerged as a popular choice for MCP servers due to its performance and concurrency strengths (good for handling many simultaneous requests). An official Go SDK is now available, maintained in collaboration with the Go team at Google. (This was announced around April 2025, with the first stable release slated for August 2025.) The Go SDK provides a package `mcp` for building clients/servers and a `jsonschema` helper for tool schemas. Using the Go SDK, developers can create an MCP server with just a few calls. For example, you can instantiate a new server with a name and version, then add tools via `AddTool` by providing a tool definition (name, description, input schema) along with a Go handler function to execute when that tool is called. The SDK takes care of exposing the tool in the protocol (advertising it in `tools/list` and handling `tools/call` requests). Similarly, you could expose resources or prompts with analogous APIs. Finally, you run the server – for instance, `server.Run(ctx, mcp.NewStdioTransport())` will start processing JSON-RPC messages over stdio until the client disconnects. On the client side, the Go SDK can spawn a subprocess and connect via `mcp.NewCommandTransport(exec.Command("myserver"))`; the client can then call `session.CallTool(ctx, params)` to invoke a tool and get the result in Go code.
Example: The official Go SDK documentation shows a simple “greeter” server. The server registers a tool `"greet"` that takes a name and returns a greeting string. The client then calls this tool by name and prints the result. This illustrates the basic pattern: define tool -> client calls tool -> get result. Under the hood, this corresponds to JSON-RPC messages (`"method": "tools/call"` with `params: {"name": "greet", ...}`, and a response containing `result.content` with text) as defined by the MCP spec.
Before the official Go SDK was released, the community created its own Go libraries. Notably, Ed Zynda’s `mcp-go` project (mark3labs/mcp-go) was widely used and influenced the design of the official SDK. Another library, `mcp-golang` by Metoro, provided a Go implementation and API (a Dev community blog post by Elton Minetto uses this library as of early 2025). These community SDKs allowed Go developers to experiment with MCP early on – for example, one tutorial shows how to build an MCP server that looks up Brazilian postal codes (CEP) by exposing a “zipcode” tool via the Metoro `mcp-golang` library. In that example, the Go server registers a function that calls an external API to find an address from a ZIP code and returns the result as text – allowing an AI assistant to fetch address info on demand through MCP. Another guide demonstrates wrapping a custom in-memory database (DiceDB) as an MCP server using the mark3labs `mcp-go` SDK: it defines a `ping` tool to check DB connectivity and other tools for data operations. These examples highlight how straightforward it can be to create an MCP integration: most of the code is just business logic (API calls, DB queries, etc.), while the SDK handles the JSON-RPC wiring.
Building an MCP Server in Go (Tutorial Highlights)
To outline the process, here’s a typical flow with the Go SDK or similar library:
- Set up the Server: Initialize a new server instance with basic info (name, version, and declared supported capabilities). For example, in Go, `server := mcp.NewServer("MyServer", "1.0.0", nil)` will create a server that (by default) supports core protocol features. You can enable specific capabilities like prompts/resources/tools via options or simply by registering those features (adding a tool or resource implies that capability).
- Register Features: Add the functionalities you want to expose:
  - If exposing Tools, define each tool’s schema and handler. E.g. using the Go SDK’s `AddTool`: provide a `mcp.Tool{Name: "...", Description: "..."}` and a handler func that takes the call request and returns a result (which may include text or other content). The SDK will auto-generate a JSON Schema for inputs from your handler’s parameter types (or you can specify it).
  - If exposing Resources, you might use an API to register resource listings or a callback for reading content. In the Python SDK, for instance, you can subclass a ResourceProvider; in Go, the SDK is still evolving, but you would likely provide functions for listing and reading resources. Each resource should have a stable URI.
  - If exposing Prompts, define prompt templates (which could be static files or strings) and register them with names and optional parameters. The server will advertise them so the client can fetch and display them to users.
- Implement Transport: Decide how the server will run. The simplest for local use is stdio – e.g. `server.Run(ctx, mcp.NewStdioTransport())` in Go will start reading JSON-RPC from stdin. If your server should be networked, you might implement an HTTP handler that uses the Go SDK to accept JSON-RPC over HTTP (the official Go SDK may soon include a helper for the HTTP/SSE transport as well).
- Client Testing: You can test the server with an MCP-compatible client. For instance, Anthropic’s Claude for Desktop supports loading local MCP servers; you would configure Claude to launch or connect to your server binary. There’s also a CLI tool called `mcp-cli` and the MCP Inspector GUI for testing servers without a full AI client – these tools send MCP requests to your server and show the results, helping with debugging.
- Security & Permissions: When building a server, consider authentication and scoping. For local servers, the host might run it with certain OS permissions or provide API keys via environment variables. For remote servers, use auth headers or OAuth flows. MCP includes an authorization spec for HTTP transports (the server can require a token and the client can send it). Always ensure the server only accesses data the user allowed (e.g. respect the root directories provided by the client, and do not leak data elsewhere) – the MCP guidelines emphasize user consent, data privacy, and tool safety as fundamental.
In summary, MCP is a formal yet flexible protocol for bridging LLMs with the outside world. It is not an internal API tied to one company, but an open standard with growing adoption and a rich ecosystem of integrations. The protocol defines clear message structures (JSON-RPC based) and a set of operations (methods for prompts, tools, resources, etc.) that any compliant client/server can implement. Official documentation and specs are available, and numerous SDKs, libraries, and example servers (including in Go) make it easier to implement. By using MCP, developers can build AI-powered applications that safely leverage existing data and services, without re-inventing integration logic for each new model or dataset.
Useful links
- https://www.anthropic.com/news/model-context-protocol
- https://modelcontextprotocol.io/introduction
- https://github.com/modelcontextprotocol/go-sdk - The official Go SDK for Model Context Protocol servers and clients. Maintained in collaboration with Google.