Add cross-cutting functionality to your MCP server with middleware that can inspect, modify, and respond to all MCP requests and responses.
New in version 2.9.0
MCP middleware is a powerful concept that allows you to add cross-cutting functionality to your FastMCP server. Unlike traditional web middleware, MCP middleware is designed specifically for the Model Context Protocol, providing hooks for different types of MCP operations like tool calls, resource reads, and prompt requests.
MCP middleware is a FastMCP-specific concept and is not part of the official MCP protocol specification. This middleware system is designed to work with FastMCP servers and may not be compatible with other MCP implementations.
MCP middleware is a brand new concept and may be subject to breaking changes in future versions.
MCP middleware lets you intercept and modify MCP requests and responses as they flow through your server. Think of it as a pipeline where each piece of middleware can inspect what's happening, make changes, and then pass control to the next middleware in the chain.

Common use cases for MCP middleware include:
Authentication and Authorization: Verify client permissions before executing operations
Logging and Monitoring: Track usage patterns and performance metrics
Rate Limiting: Control request frequency per client or operation type
Request/Response Transformation: Modify data before it reaches tools or after it leaves
Caching: Store frequently requested data to improve performance
Error Handling: Provide consistent error responses across your server
FastMCP middleware operates on a pipeline model. When a request comes in, it flows through your middleware in the order they were added to the server. Each middleware can:
Inspect the incoming request and its context
Modify the request before passing it to the next middleware or handler
Execute the next middleware/handler in the chain by calling call_next()
Inspect and modify the response before returning it
Handle errors that occur during processing
The key insight is that middleware forms a chain where each piece decides whether to continue processing or stop the chain entirely.

If you're familiar with ASGI middleware, the basic structure of FastMCP middleware will feel familiar. At its core, middleware is a callable class that receives a context object containing information about the current JSON-RPC message and a handler function to continue the middleware chain.

It's important to understand that MCP operates on the JSON-RPC specification. While FastMCP presents requests and responses in a familiar way, these are fundamentally JSON-RPC messages, not HTTP request/response pairs like you might be used to in web applications. FastMCP middleware works with all transport types, including local stdio transport and HTTP transports, though not all middleware implementations are compatible across all transports (e.g., middleware that inspects HTTP headers won't work with stdio transport).

The most fundamental way to implement middleware is by overriding the __call__ method on the Middleware base class:
```python
from fastmcp.server.middleware import Middleware, MiddlewareContext

class RawMiddleware(Middleware):
    async def __call__(self, context: MiddlewareContext, call_next):
        # This method receives ALL messages regardless of type
        print(f"Raw middleware processing: {context.method}")
        result = await call_next(context)
        print(f"Raw middleware completed: {context.method}")
        return result
```
This gives you complete control over every message that flows through your server, but requires you to handle all message types manually.
To make it easier for users to target specific types of messages, FastMCP middleware provides a variety of specialized hooks. Instead of implementing the raw __call__ method, you can override specific hook methods that are called only for certain types of operations, allowing you to target exactly the level of specificity you need for your middleware logic.
FastMCP provides multiple hooks that are called with varying levels of specificity. Understanding this hierarchy is crucial for effective middleware design.

When a request comes in, multiple hooks may be called for the same request, going from general to specific:
on_message - Called for ALL MCP messages (both requests and notifications)
on_request or on_notification - Called based on the message type
Operation-specific hooks - Called for specific MCP operations like on_call_tool
For example, when a client calls a tool, your middleware will receive multiple hook calls:
on_message and on_request for any initial tool discovery operations (list_tools)
on_message (because it’s any MCP message) for the tool call itself
on_request (because tool calls expect responses) for the tool call itself
on_call_tool (because it’s specifically a tool execution) for the tool call itself
Note that the MCP SDK may perform additional operations like listing tools for caching purposes, which will trigger additional middleware calls beyond just the direct tool execution.

This hierarchy allows you to target your middleware logic with the right level of specificity. Use on_message for broad concerns like logging, on_request for authentication, and on_call_tool for tool-specific logic like performance monitoring.
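To see this layering in practice, a single middleware can override several hooks at once. The sketch below, built only on the Middleware base class described above, prints a line at each level; for a single tool call the output appears in order from most general to most specific:

```python
from fastmcp.server.middleware import Middleware, MiddlewareContext

class LayeredMiddleware(Middleware):
    """Targets each concern at the appropriate level of specificity."""

    async def on_message(self, context: MiddlewareContext, call_next):
        # Broad concern: runs for every message (requests and notifications)
        print(f"[message] {context.method}")
        return await call_next(context)

    async def on_request(self, context: MiddlewareContext, call_next):
        # Request-level concern: e.g. authentication checks
        print(f"[request] {context.method}")
        return await call_next(context)

    async def on_call_tool(self, context: MiddlewareContext, call_next):
        # Tool-specific concern: e.g. per-tool monitoring
        print(f"[tool] {context.message.name}")
        return await call_next(context)
```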
on_message: Called for all MCP messages (requests and notifications)
on_request: Called specifically for MCP requests (that expect responses)
on_notification: Called specifically for MCP notifications (fire-and-forget)
on_call_tool: Called when tools are being executed
on_read_resource: Called when resources are being read
on_get_prompt: Called when prompts are being retrieved
on_list_tools: Called when listing available tools
on_list_resources: Called when listing available resources
on_list_resource_templates: Called when listing resource templates
on_list_prompts: Called when listing available prompts
New in version 2.13.0
on_initialize: Called when a client connects and initializes the session (returns None)
The on_initialize hook receives the client’s initialization request but returns None rather than a result. The initialization response is handled internally by the MCP protocol and cannot be modified by middleware. This hook is useful for client detection, logging connections, or initializing session state, but not for modifying the initialization handshake itself.
Example:
```python
from fastmcp.server.middleware import Middleware, MiddlewareContext
from mcp import McpError
from mcp.types import ErrorData

class InitializationMiddleware(Middleware):
    async def on_initialize(self, context: MiddlewareContext, call_next):
        # Check client capabilities before initialization
        client_info = context.message.params.get("clientInfo", {})
        client_name = client_info.get("name", "unknown")

        # Reject unsupported clients BEFORE call_next
        if client_name == "unsupported-client":
            raise McpError(ErrorData(code=-32000, message="This client is not supported"))

        # Log successful initialization
        await call_next(context)
        print(f"Client {client_name} initialized successfully")
```
If you raise McpError in on_initialize after calling call_next(), the error will only be logged and will not be sent to the client. The initialization response has already been sent at that point. Always raise McpError before call_next() if you want to reject the initialization.
New in version 2.13.1
The MCP session and request context are not available during certain phases like initialization. When middleware runs during these phases, context.fastmcp_context.request_context returns None rather than the full MCP request context.

This typically occurs when:
The on_request hook fires during client initialization
The MCP handshake hasn’t completed yet
To handle this in middleware, check if the MCP request context is available before accessing MCP-specific attributes. Note that the MCP request context is distinct from the HTTP request - for HTTP transports, you can use HTTP helpers to access request data even when the MCP session is not available:
```python
from fastmcp.server.middleware import Middleware, MiddlewareContext

class SessionAwareMiddleware(Middleware):
    async def on_request(self, context: MiddlewareContext, call_next):
        ctx = context.fastmcp_context

        if ctx.request_context:
            # MCP session available - can access session-specific attributes
            session_id = ctx.session_id
            request_id = ctx.request_id
        else:
            # MCP session not available yet - use HTTP helpers for request data
            # (if using HTTP transport)
            from fastmcp.server.dependencies import get_http_headers
            headers = get_http_headers()
            # Access HTTP data for auth, logging, etc.

        return await call_next(context)
```
For HTTP request data (headers, client IP, etc.) when using HTTP transports, use get_http_request() or get_http_headers() from fastmcp.server.dependencies, which work regardless of MCP session availability. See HTTP Requests for details.
Understanding how to access component information (tools, resources, prompts) in middleware is crucial for building powerful middleware functionality. The access patterns differ significantly between listing operations and execution operations.
If you need to check component properties (like tags) during execution operations, use the FastMCP server instance available through the context:
```python
from fastmcp.server.middleware import Middleware, MiddlewareContext
from fastmcp.exceptions import ToolError

class TagBasedMiddleware(Middleware):
    async def on_call_tool(self, context: MiddlewareContext, call_next):
        # Access the tool object to check its metadata
        if context.fastmcp_context:
            try:
                tool = await context.fastmcp_context.fastmcp.get_tool(context.message.name)

                # Check if this tool has a "private" tag
                if "private" in tool.tags:
                    raise ToolError("Access denied: private tool")

                # Check if tool is enabled
                if not tool.enabled:
                    raise ToolError("Tool is currently disabled")
            except ToolError:
                # Propagate access-denied errors to the client
                raise
            except Exception:
                # Tool not found or other error - let execution continue
                # and handle the error naturally
                pass

        return await call_next(context)
```
The same pattern works for resources and prompts:
```python
from fastmcp.server.middleware import Middleware, MiddlewareContext
from fastmcp.exceptions import ResourceError, PromptError

class ComponentAccessMiddleware(Middleware):
    async def on_read_resource(self, context: MiddlewareContext, call_next):
        if context.fastmcp_context:
            try:
                resource = await context.fastmcp_context.fastmcp.get_resource(context.message.uri)
                if "restricted" in resource.tags:
                    raise ResourceError("Access denied: restricted resource")
            except ResourceError:
                # Propagate the access-denied error to the client
                raise
            except Exception:
                # Resource not found or other error - let execution continue
                pass
        return await call_next(context)

    async def on_get_prompt(self, context: MiddlewareContext, call_next):
        if context.fastmcp_context:
            try:
                prompt = await context.fastmcp_context.fastmcp.get_prompt(context.message.name)
                if not prompt.enabled:
                    raise PromptError("Prompt is currently disabled")
            except PromptError:
                # Propagate the error to the client
                raise
            except Exception:
                # Prompt not found or other error - let execution continue
                pass
        return await call_next(context)
```
For listing operations, the middleware call_next function returns a list of FastMCP components prior to being converted to MCP format. You can filter or modify this list and return it to the client. For example:
```python
from fastmcp.server.middleware import Middleware, MiddlewareContext

class ListingFilterMiddleware(Middleware):
    async def on_list_tools(self, context: MiddlewareContext, call_next):
        result = await call_next(context)

        # Filter out tools with "private" tag
        filtered_tools = [
            tool for tool in result
            if "private" not in tool.tags
        ]

        # Return modified list
        return filtered_tools
```
This filtering happens before the components are converted to MCP format and returned to the client. Tags are accessible during filtering and are also included in the component's meta field in the final listing response.
When filtering components in listing operations, ensure you also prevent execution of filtered components in the corresponding execution hooks (on_call_tool, on_read_resource, on_get_prompt) to maintain consistency.
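One straightforward way to keep the two in sync is to apply the same check in both hooks. This sketch reuses the "private" tag convention from the examples above; it hides matching tools from listings and refuses to execute them:

```python
from fastmcp.server.middleware import Middleware, MiddlewareContext
from fastmcp.exceptions import ToolError

class PrivateToolMiddleware(Middleware):
    """Hides 'private' tools from listings and blocks their execution."""

    async def on_list_tools(self, context: MiddlewareContext, call_next):
        result = await call_next(context)
        # Hide private tools from the listing
        return [tool for tool in result if "private" not in tool.tags]

    async def on_call_tool(self, context: MiddlewareContext, call_next):
        if context.fastmcp_context:
            try:
                tool = await context.fastmcp_context.fastmcp.get_tool(context.message.name)
            except Exception:
                tool = None
            # Block execution of anything that was filtered from the listing
            if tool is not None and "private" in tool.tags:
                raise ToolError("Access denied: private tool")
        return await call_next(context)
```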
You can deny access to specific tools by raising a ToolError in your middleware. This is the correct way to block tool execution, as it integrates properly with the FastMCP error handling system.
```python
from fastmcp.server.middleware import Middleware, MiddlewareContext
from fastmcp.exceptions import ToolError

class AuthMiddleware(Middleware):
    async def on_call_tool(self, context: MiddlewareContext, call_next):
        tool_name = context.message.name

        # Deny access to restricted tools
        if tool_name.lower() in ["delete", "admin_config"]:
            raise ToolError("Access denied: tool requires admin privileges")

        # Allow other tools to proceed
        return await call_next(context)
```
When denying tool calls, always raise ToolError rather than returning ToolResult objects or other values. ToolError ensures proper error propagation through the middleware chain and converts to the correct MCP error response format.
For execution operations like tool calls, you can modify arguments before execution or transform results afterward:
```python
from fastmcp.server.middleware import Middleware, MiddlewareContext

class ToolCallMiddleware(Middleware):
    async def on_call_tool(self, context: MiddlewareContext, call_next):
        # Modify arguments before execution
        if context.message.name == "calculate":
            # Ensure positive inputs
            if context.message.arguments.get("value", 0) < 0:
                context.message.arguments["value"] = abs(context.message.arguments["value"])

        result = await call_next(context)

        # Transform result after execution
        if context.message.name == "get_data":
            # Add metadata to result
            if result.structured_content:
                result.structured_content["processed_at"] = "2024-01-01T00:00:00Z"

        return result
```
For more complex tool rewriting scenarios, consider using Tool Transformation patterns which provide a more structured approach to creating modified tool variants.
Every middleware hook follows the same pattern. Let’s examine the on_message hook to understand the structure:
```python
async def on_message(self, context: MiddlewareContext, call_next):
    # 1. Pre-processing: Inspect and optionally modify the request
    print(f"Processing {context.method}")

    # 2. Chain continuation: Call the next middleware/handler
    result = await call_next(context)

    # 3. Post-processing: Inspect and optionally modify the response
    print(f"Completed {context.method}")

    # 4. Return the result (potentially modified)
    return result
```
New in version 2.11.0
In addition to modifying the request and response, you can also store state data that your tools can (optionally) access later. To do so, use the FastMCP Context to either set_state or get_state as appropriate. For more information, see the Context State Management docs.
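As a minimal sketch of this pattern, the middleware below stores a value with set_state during on_request, and a tool reads it back with get_state later in the same request. The "request_source" key and the whoami tool are illustrative names, not part of any built-in API:

```python
from fastmcp import FastMCP, Context
from fastmcp.server.middleware import Middleware, MiddlewareContext

class RequestTaggingMiddleware(Middleware):
    async def on_request(self, context: MiddlewareContext, call_next):
        if context.fastmcp_context:
            # Store a value that downstream handlers can read back later
            context.fastmcp_context.set_state("request_source", str(context.source))
        return await call_next(context)

mcp = FastMCP("StateDemo")
mcp.add_middleware(RequestTaggingMiddleware())

@mcp.tool
async def whoami(ctx: Context) -> str:
    # Read the value stored by the middleware earlier in this request
    return str(ctx.get_state("request_source"))
```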
FastMCP middleware is implemented by subclassing the Middleware base class and overriding the hooks you need. You only need to implement the hooks that are relevant to your use case.
```python
from fastmcp import FastMCP
from fastmcp.server.middleware import Middleware, MiddlewareContext

class LoggingMiddleware(Middleware):
    """Middleware that logs all MCP operations."""

    async def on_message(self, context: MiddlewareContext, call_next):
        """Called for all MCP messages."""
        print(f"Processing {context.method} from {context.source}")
        result = await call_next(context)
        print(f"Completed {context.method}")
        return result

# Add middleware to your server
mcp = FastMCP("MyServer")
mcp.add_middleware(LoggingMiddleware())
```
This creates a basic logging middleware that will print information about every request that flows through your server.
When using Server Composition with mount or import_server, middleware behavior follows these rules:
Parent server middleware runs for all requests, including those routed to mounted servers
Mounted server middleware only runs for requests handled by that specific server
Middleware order is preserved within each server
This allows you to create layered middleware architectures where parent servers handle cross-cutting concerns like authentication, while child servers focus on domain-specific middleware.
```python
# Parent server with middleware
parent = FastMCP("Parent")
parent.add_middleware(AuthenticationMiddleware("token"))

# Child server with its own middleware
child = FastMCP("Child")
child.add_middleware(LoggingMiddleware())

@child.tool
def child_tool() -> str:
    return "from child"

# Mount the child server
parent.mount(child, prefix="child")
```
When a client calls “child_tool”, the request will flow through the parent’s authentication middleware first, then route to the child server where it will go through the child’s logging middleware.
FastMCP includes several middleware implementations that demonstrate best practices and provide immediately useful functionality. Let’s explore how each type works by building simplified versions, then see how to use the full implementations.
Performance monitoring is essential for understanding your server's behavior and identifying bottlenecks. FastMCP includes timing middleware at fastmcp.server.middleware.timing. Here's an example of how it works:
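A simplified version, built only on the Middleware base class shown earlier, might time each request like this (a sketch for illustration, not the built-in implementation):

```python
import time

from fastmcp.server.middleware import Middleware, MiddlewareContext

class SimpleTimingMiddleware(Middleware):
    async def on_request(self, context: MiddlewareContext, call_next):
        start = time.perf_counter()
        try:
            result = await call_next(context)
            duration_ms = (time.perf_counter() - start) * 1000
            print(f"{context.method} completed in {duration_ms:.2f}ms")
            return result
        except Exception:
            duration_ms = (time.perf_counter() - start) * 1000
            print(f"{context.method} failed after {duration_ms:.2f}ms")
            raise
```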
To use the full version with proper logging and configuration:
```python
from fastmcp.server.middleware.timing import (
    TimingMiddleware,
    DetailedTimingMiddleware
)

# Basic timing for all requests
mcp.add_middleware(TimingMiddleware())

# Detailed per-operation timing (tools, resources, prompts)
mcp.add_middleware(DetailedTimingMiddleware())
```
The built-in versions include custom logger support, proper formatting, and DetailedTimingMiddleware provides operation-specific hooks like on_call_tool and on_read_resource for granular timing.
Prompt tool middleware is a compatibility middleware for clients that are unable to list or get prompts. It provides two tools: list_prompts and get_prompt which allow clients to list and get prompts respectively using only tool calls.
```python
from fastmcp.server.middleware.tool_injection import PromptToolMiddleware

mcp.add_middleware(PromptToolMiddleware())
```
Resource tool middleware is a compatibility middleware for clients that are unable to list or read resources. It provides two tools: list_resources and read_resource which allow clients to list and read resources respectively using only tool calls.
```python
from fastmcp.server.middleware.tool_injection import ResourceToolMiddleware

mcp.add_middleware(ResourceToolMiddleware())
```
Caching middleware is essential for improving performance and reducing server load. FastMCP provides caching middleware at fastmcp.server.middleware.caching. Here's how to use the full version:
```python
from fastmcp.server.middleware.caching import ResponseCachingMiddleware

mcp.add_middleware(ResponseCachingMiddleware())
```
Out of the box, it caches tool calls and listings, resource reads and listings, and prompt retrievals and listings in an in-memory cache with TTL-based expiration. Cache entries expire based on their TTL; there is no event-based cache invalidation. List calls are stored under global keys, so when sharing a storage backend across multiple servers, consider namespacing collections to prevent conflicts. See Storage Backends for advanced configuration options.

Each method can be configured individually, for example, caching list tools for 30 seconds, limiting caching to specific tools, and disabling caching for resource reads:
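The sketch below only illustrates the idea; the keyword argument names are hypothetical placeholders rather than the middleware's actual parameters, so check the caching middleware reference for the real configuration options:

```python
from fastmcp.server.middleware.caching import ResponseCachingMiddleware

# NOTE: these keyword arguments are hypothetical placeholders used to
# illustrate per-method configuration; see the caching middleware
# reference for the actual parameter names.
mcp.add_middleware(ResponseCachingMiddleware(
    list_tools_ttl=30,             # cache tool listings for 30 seconds
    cached_tools=["get_weather"],  # only cache calls to specific tools
    cache_resource_reads=False,    # disable caching for resource reads
))
```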
By default, caching uses in-memory storage, which is fast but doesn't persist across restarts. For production or persistent caching across server restarts, configure a different storage backend. See Storage Backends for complete options including disk, Redis, DynamoDB, and custom implementations.

Disk-based caching example:
```python
from fastmcp.server.middleware.caching import ResponseCachingMiddleware
from key_value.aio.stores.disk import DiskStore

mcp.add_middleware(ResponseCachingMiddleware(
    cache_storage=DiskStore(directory="cache"),
))
```
Redis for distributed deployments:
```python
from fastmcp.server.middleware.caching import ResponseCachingMiddleware
from key_value.aio.stores.redis import RedisStore

mcp.add_middleware(ResponseCachingMiddleware(
    cache_storage=RedisStore(host="redis.example.com", port=6379),
))
```
The caching middleware collects operation statistics (hits, misses, etc.) through the underlying storage layer. Access statistics from the middleware instance:
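For example (the stats attribute below is an assumed name used for illustration; consult the caching middleware reference for the exact accessor):

```python
from fastmcp.server.middleware.caching import ResponseCachingMiddleware

caching = ResponseCachingMiddleware()
mcp.add_middleware(caching)

# Later, inspect hit/miss counters collected by the middleware.
# NOTE: the exact attribute for reading statistics is an assumption here;
# check the caching middleware reference for the real accessor.
print(caching.stats)
```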
Request and response logging is crucial for debugging, monitoring, and understanding usage patterns in your MCP server. FastMCP provides comprehensive logging middleware at fastmcp.server.middleware.logging. Here's an example of how it works:
```python
from fastmcp.server.middleware import Middleware, MiddlewareContext

class SimpleLoggingMiddleware(Middleware):
    async def on_message(self, context: MiddlewareContext, call_next):
        print(f"Processing {context.method} from {context.source}")
        try:
            result = await call_next(context)
            print(f"Completed {context.method}")
            return result
        except Exception as e:
            print(f"Failed {context.method}: {e}")
            raise
```
To use the full versions with advanced features:
```python
from fastmcp.server.middleware.logging import (
    LoggingMiddleware,
    StructuredLoggingMiddleware
)

# Human-readable logging with payload support
mcp.add_middleware(LoggingMiddleware(
    include_payloads=True,
    max_payload_length=1000
))

# JSON-structured logging for log aggregation tools
mcp.add_middleware(StructuredLoggingMiddleware(include_payloads=True))
```
The built-in versions include payload logging, structured JSON output, custom logger support, payload size limits, and operation-specific hooks for granular control.
Rate limiting is essential for protecting your server from abuse, ensuring fair resource usage, and maintaining performance under load. FastMCP includes sophisticated rate limiting middleware at fastmcp.server.middleware.rate_limiting. Here's an example of how it works:
```python
import time
from collections import defaultdict

from fastmcp.server.middleware import Middleware, MiddlewareContext
from mcp import McpError
from mcp.types import ErrorData

class SimpleRateLimitMiddleware(Middleware):
    def __init__(self, requests_per_minute: int = 60):
        self.requests_per_minute = requests_per_minute
        self.client_requests = defaultdict(list)

    async def on_request(self, context: MiddlewareContext, call_next):
        current_time = time.time()
        client_id = "default"  # In practice, extract from headers or context

        # Clean old requests and check limit
        cutoff_time = current_time - 60
        self.client_requests[client_id] = [
            req_time for req_time in self.client_requests[client_id]
            if req_time > cutoff_time
        ]

        if len(self.client_requests[client_id]) >= self.requests_per_minute:
            raise McpError(ErrorData(code=-32000, message="Rate limit exceeded"))

        self.client_requests[client_id].append(current_time)
        return await call_next(context)
```
To use the full versions with advanced algorithms:
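A typical setup looks something like the following; the class names come from fastmcp.server.middleware.rate_limiting, while the specific parameter names and values should be treated as illustrative assumptions (check the rate limiting middleware reference for the exact API):

```python
from fastmcp.server.middleware.rate_limiting import (
    RateLimitingMiddleware,
    SlidingWindowRateLimitingMiddleware
)

# NOTE: parameter names below are assumptions for illustration.
# Pick one strategy; adding both would apply two limits to every request.

# Token bucket rate limiting (allows short bursts above the steady rate)
mcp.add_middleware(RateLimitingMiddleware(
    max_requests_per_second=10,
    burst_capacity=20
))

# Sliding window rate limiting (strict time-window limits)
mcp.add_middleware(SlidingWindowRateLimitingMiddleware(
    max_requests=100,
    window_minutes=1
))
```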
The built-in versions include token bucket algorithms, per-client identification, global rate limiting, and async-safe implementations with configurable client identification functions.
Consistent error handling and recovery is critical for robust MCP servers. FastMCP provides comprehensive error handling middleware at fastmcp.server.middleware.error_handling. Here's an example of how it works: