How to Create an MCP Server in Python
A step-by-step guide to building a Model Context Protocol (MCP) server using Python and FastMCP, from basic tools to dynamic resources.
So you want to build a Model Context Protocol (MCP) server in Python. The goal is to create a service that can provide tools and data to AI models like Claude, Gemini, or others that support the protocol. While the MCP specification is powerful, implementing it from scratch involves a lot of boilerplate: handling JSON-RPC, managing session state, and correctly formatting requests and responses.
This is where FastMCP comes in. It’s a high-level framework that handles all the protocol complexities for you, letting you focus on what matters: writing the Python functions that power your server.
This guide will walk you through creating a fully-featured MCP server from scratch using FastMCP.
Every code block in this tutorial is a complete, runnable example. You can copy and paste it into a file and run it, or paste it directly into a Python REPL like IPython to try it out.
Prerequisites
Make sure you have FastMCP installed. If not, follow the installation guide.
Step 1: Create the Basic Server
Every FastMCP application starts with an instance of the `FastMCP` class. This object acts as the container for all your tools and resources.

Create a new file called `my_mcp_server.py`:
That’s it! You have a valid (though empty) MCP server. Now, let’s add some functionality.
Step 2: Add a Tool
Tools are functions that an LLM can execute. Let’s create a simple tool that adds two numbers.
To do this, simply write a standard Python function and decorate it with `@mcp.tool`.
FastMCP automatically handles the rest:
- Tool Name: It uses the function name (`add`) as the tool’s name.
- Description: It uses the function’s docstring as the tool’s description for the LLM.
- Schema: It inspects the type hints (`a: int`, `b: int`) to generate a JSON schema for the inputs.
This is the core philosophy of FastMCP: write Python, not protocol boilerplate.
Step 3: Expose Data with Resources
Resources provide read-only data to the LLM. You can define a resource by decorating a function with `@mcp.resource`, providing a unique URI.
Let’s expose a simple configuration dictionary as a resource.
When a client requests the URI `resource://config`, FastMCP will execute the `get_config` function and return its output (serialized as JSON) to the client. The function is only called when the resource is requested, enabling lazy loading of data.
Step 4: Generate Dynamic Content with Resource Templates
Sometimes, you need to generate resources based on parameters. This is what Resource Templates are for. You define them using the same `@mcp.resource` decorator, but with placeholders in the URI.
Let’s create a template that provides a personalized greeting.
Now, clients can request dynamic URIs:

- `greetings://Ford` will call `personalized_greeting(name="Ford")`.
- `greetings://Marvin` will call `personalized_greeting(name="Marvin")`.

FastMCP automatically maps the `{name}` placeholder in the URI to the `name` parameter in your function.
Step 5: Run the Server
To make your server executable, add a `__main__` block to your script that calls `mcp.run()`.
Now you can run your server from the command line:
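For example, using the filename from Step 1:

```shell
python my_mcp_server.py
```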
This starts the server using the default STDIO transport, which is how clients like Claude Desktop communicate with local servers. To learn about other transports, like HTTP, see the Running Your Server guide.
The Complete Server
Here is the full code for `my_mcp_server.py`:
Next Steps
You’ve successfully built an MCP server! From here, you can explore more advanced topics:
- Tools in Depth: Learn about asynchronous tools, error handling, and custom return types.
- Resources & Templates: Discover different resource types, including files and HTTP endpoints.
- Prompts: Create reusable prompt templates for your LLM.
- Running Your Server: Deploy your server with different transports like HTTP.