STDIO transport is perfect for local development and desktop applications. But to unlock the full potential of MCP—centralized services, multi-client access, and network availability—you need remote HTTP deployment.
This guide walks you through deploying your FastMCP server as a remote MCP service that’s accessible via a URL. Once deployed, your MCP server will be available over the network, allowing multiple clients to connect simultaneously and enabling integration with cloud-based LLM applications. This guide focuses specifically on remote MCP deployment, not local STDIO servers.

Choosing Your Approach

FastMCP provides two ways to deploy your server as an HTTP service. Understanding the trade-offs helps you choose the right approach for your needs.
The direct HTTP server approach is simpler and perfect for getting started quickly. You modify your server’s run() method to use HTTP transport, and FastMCP handles all the web server configuration. This approach works well for standalone deployments where you want your MCP server to be the only service running on a port.
The ASGI application approach gives you more control and flexibility. Instead of running the server directly, you create an ASGI application that can be served by Uvicorn. This approach is better when you need advanced server features like multiple workers or custom middleware, or when you’re integrating with existing web applications.

Direct HTTP Server

The simplest way to get your MCP server online is to use the built-in run() method with HTTP transport. This approach handles all the server configuration for you and is ideal when you want a standalone MCP server without additional complexity.
server.py
from fastmcp import FastMCP

mcp = FastMCP("My Server")

@mcp.tool
def process_data(input: str) -> str:
    """Process data on the server"""
    return f"Processed: {input}"

if __name__ == "__main__":
    mcp.run(transport="http", host="0.0.0.0", port=8000)
Run your server with a simple Python command:
python server.py
Your server is now accessible at http://localhost:8000/mcp/ (or use your server’s actual IP address for remote access). This approach is ideal when you want to get online quickly with minimal configuration. It’s perfect for internal tools, development environments, or simple deployments where you don’t need advanced server features. The built-in server handles all the HTTP details, letting you focus on your MCP implementation.

ASGI Application

For production deployments, you’ll often want more control over how your server runs. FastMCP can create a standard ASGI application that works with any ASGI server like Uvicorn, Gunicorn, or Hypercorn. This approach is particularly useful when you need to configure advanced server options, run multiple workers, or integrate with existing infrastructure.
app.py
from fastmcp import FastMCP

mcp = FastMCP("My Server")

@mcp.tool
def process_data(input: str) -> str:
    """Process data on the server"""
    return f"Processed: {input}"

# Create ASGI application
app = mcp.http_app()
Run with any ASGI server - here’s an example with Uvicorn:
uvicorn app:app --host 0.0.0.0 --port 8000
Your server is accessible at the same URL: http://localhost:8000/mcp/ (or use your server’s actual IP address for remote access). The ASGI approach shines in production environments where you need reliability and performance. You can run multiple worker processes to handle concurrent requests, add custom middleware for logging or monitoring, integrate with existing deployment pipelines, or mount your MCP server as part of a larger application.

Configuring Your Server

Custom Path

By default, your MCP server is accessible at /mcp/ on your domain. You can customize this path to fit your URL structure or avoid conflicts with existing endpoints. This is particularly useful when integrating MCP into an existing application or following specific API conventions.
# Option 1: With mcp.run()
mcp.run(transport="http", host="0.0.0.0", port=8000, path="/api/mcp/")

# Option 2: With ASGI app
app = mcp.http_app(path="/api/mcp/")
Now your server is accessible at http://localhost:8000/api/mcp/.

Authentication

Authentication is highly recommended for remote MCP servers. Some LLM clients require authentication for remote servers and will refuse to connect without it.
FastMCP supports multiple authentication methods to secure your remote server. See the Authentication Overview for complete configuration options including Bearer tokens, JWT, and OAuth. If you’re mounting an authenticated server under a path prefix, see Mounting Authenticated Servers below for important routing considerations.

Health Checks

Health check endpoints are essential for monitoring your deployed server and ensuring it’s responding correctly. FastMCP allows you to add custom routes alongside your MCP endpoints, making it easy to implement health checks that work with both deployment approaches.
from starlette.responses import JSONResponse

@mcp.custom_route("/health", methods=["GET"])
async def health_check(request):
    return JSONResponse({"status": "healthy", "service": "mcp-server"})
This health endpoint will be available at http://localhost:8000/health and can be used by load balancers, monitoring systems, or deployment platforms to verify your server is running.
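Once deployed, the endpoint can be probed from any HTTP client. Here’s a minimal sketch using only the Python standard library, assuming the server from this guide is running on localhost:8000 (the URL and expected payload are illustrative):

```python
import json
import urllib.request


def check_health(url: str = "http://localhost:8000/health", timeout: float = 5.0) -> bool:
    """Return True if the health endpoint responds with status 'healthy'."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            payload = json.loads(resp.read().decode())
            return resp.status == 200 and payload.get("status") == "healthy"
    except (OSError, ValueError):
        # Connection errors, timeouts, and malformed JSON all count as unhealthy
        return False


if __name__ == "__main__":
    print("healthy" if check_health() else "unreachable")
```

A monitoring system or deployment script can call check_health() in a retry loop to wait for the server to become ready.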

Custom Middleware

New in version: 2.3.2
Add custom Starlette middleware to your FastMCP ASGI apps:
from fastmcp import FastMCP
from starlette.middleware import Middleware
from starlette.middleware.cors import CORSMiddleware

# Create your FastMCP server
mcp = FastMCP("MyServer")

# Define middleware
middleware = [
    Middleware(
        CORSMiddleware,
        allow_origins=["*"],
        allow_methods=["*"],
        allow_headers=["*"],
    )
]

# Create ASGI app with middleware
http_app = mcp.http_app(middleware=middleware)

Integration with Web Frameworks

If you already have a web application running, you can add MCP capabilities by mounting a FastMCP server as a sub-application. This allows you to expose MCP tools alongside your existing API endpoints, sharing the same domain and infrastructure. The MCP server becomes just another route in your application, making it easy to manage and deploy.

Mounting in Starlette

Mount your FastMCP server in a Starlette application:
from fastmcp import FastMCP
from starlette.applications import Starlette
from starlette.routing import Mount

# Create your FastMCP server
mcp = FastMCP("MyServer")

@mcp.tool
def analyze(data: str) -> dict:
    return {"result": f"Analyzed: {data}"}

# Create the ASGI app
mcp_app = mcp.http_app(path='/mcp')

# Create a Starlette app and mount the MCP server
app = Starlette(
    routes=[
        Mount("/mcp-server", app=mcp_app),
        # Add other routes as needed
    ],
    lifespan=mcp_app.lifespan,
)
The MCP endpoint will be available at /mcp-server/mcp/ of the resulting Starlette app.
For Streamable HTTP transport, you must pass the lifespan context from the FastMCP app to the resulting Starlette app, as nested lifespans are not recognized. Otherwise, the FastMCP server’s session manager will not be properly initialized.

Nested Mounts

You can create complex routing structures by nesting mounts:
from fastmcp import FastMCP
from starlette.applications import Starlette
from starlette.routing import Mount

# Create your FastMCP server
mcp = FastMCP("MyServer")

# Create the ASGI app
mcp_app = mcp.http_app(path='/mcp')

# Create nested application structure
inner_app = Starlette(routes=[Mount("/inner", app=mcp_app)])
app = Starlette(
    routes=[Mount("/outer", app=inner_app)],
    lifespan=mcp_app.lifespan,
)
In this setup, the MCP server is accessible at the /outer/inner/mcp/ path.

FastAPI Integration

For FastAPI-specific integration patterns including both mounting MCP servers into FastAPI apps and generating MCP servers from FastAPI apps, see the FastAPI Integration guide. Here’s a quick example showing how to add MCP to an existing FastAPI application:
from fastapi import FastAPI
from fastmcp import FastMCP

# Create your MCP server
mcp = FastMCP("API Tools")

@mcp.tool
def query_database(query: str) -> dict:
    """Run a database query"""
    return {"result": "data"}

# Create the MCP ASGI app first so its lifespan can be passed to FastAPI
mcp_app = mcp.http_app()

# Your existing API; pass the MCP app's lifespan so its session
# manager is properly initialized (see the Starlette note above)
api = FastAPI(lifespan=mcp_app.lifespan)

@api.get("/api/status")
def status():
    return {"status": "ok"}

# Mount MCP under /mcp
api.mount("/mcp", mcp_app)

# Run with: uvicorn app:api --host 0.0.0.0 --port 8000
Your existing API remains at http://localhost:8000/api/ while MCP is available at http://localhost:8000/mcp/mcp/ (the /mcp mount prefix plus the app’s internal /mcp path).

Mounting Authenticated Servers

New in version: 2.13.0
This section only applies if you’re mounting an OAuth-protected FastMCP server under a path prefix (like /api) inside another application using Mount(). If you’re deploying your FastMCP server at root level without any Mount() prefix, the well-known routes are automatically included in mcp.http_app() and you don’t need to do anything special.
OAuth specifications (RFC 8414 and RFC 9728) require discovery metadata to be accessible at well-known paths under the root level of your domain. When you mount an OAuth-protected FastMCP server under a path prefix like /api, this creates a routing challenge: your operational OAuth endpoints move under the prefix, but discovery endpoints must remain at the root.
Common Mistakes to Avoid:
  1. Forgetting to mount .well-known routes at root - FastMCP cannot do this automatically when your server is mounted under a path prefix. You must explicitly mount well-known routes at the root level.
  2. Including mount prefix in both base_url AND mcp_path - The mount prefix (like /api) should only be in base_url, not in mcp_path. Otherwise you’ll get double paths. Correct:
    base_url = "http://localhost:8000/api"
    mcp_path = "/mcp"
    # Result: /api/mcp
    
    Wrong:
    base_url = "http://localhost:8000/api"
    mcp_path = "/api/mcp"
    # Result: /api/api/mcp (double prefix!)
    
  3. Not setting issuer_url when mounting - Without issuer_url set to root level, OAuth discovery will attempt path-scoped discovery first (which will 404), adding unnecessary error logs.
Follow the configuration instructions below to set up mounting correctly.

Route Types

OAuth-protected MCP servers expose two categories of routes: Operational routes handle the OAuth flow and MCP protocol:
  • /authorize - OAuth authorization endpoint
  • /token - Token exchange endpoint
  • /auth/callback - OAuth callback handler
  • /mcp - MCP protocol endpoint
Discovery routes provide metadata for OAuth clients:
  • /.well-known/oauth-authorization-server - Authorization server metadata
  • /.well-known/oauth-protected-resource/* - Protected resource metadata
When you mount your MCP app under a prefix, operational routes move with it, but discovery routes must stay at root level for RFC compliance.

Configuration Parameters

Three parameters control where routes are located and how they combine: base_url tells clients where to find operational endpoints. This includes any Starlette Mount() path prefix (e.g., /api):
base_url="http://localhost:8000/api"  # Includes mount prefix
mcp_path is the internal FastMCP endpoint path, which gets appended to base_url:
mcp_path="/mcp"  # Internal MCP path, NOT the mount prefix
issuer_url tells clients where to find discovery metadata. This should point to the root level of your server where well-known routes are mounted:
issuer_url="http://localhost:8000"  # Root level, no prefix
Key Invariant: base_url + mcp_path = actual externally-accessible MCP URL. Example:
  • base_url: http://localhost:8000/api (mount prefix /api)
  • mcp_path: /mcp (internal path)
  • Result: http://localhost:8000/api/mcp (final MCP endpoint)
Note that the mount prefix (/api from Mount("/api", ...)) goes in base_url, while mcp_path is just the internal MCP route. Don’t include the mount prefix in both places or you’ll get /api/api/mcp.
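The invariant can be sketched as a small helper (hypothetical, for illustration only):

```python
def mcp_url(base_url: str, mcp_path: str) -> str:
    """Compose the externally visible MCP URL from base_url and mcp_path."""
    return base_url.rstrip("/") + "/" + mcp_path.lstrip("/")


# Correct: the mount prefix appears only in base_url
print(mcp_url("http://localhost:8000/api", "/mcp"))
# -> http://localhost:8000/api/mcp

# Wrong: the prefix duplicated in mcp_path
print(mcp_url("http://localhost:8000/api", "/api/mcp"))
# -> http://localhost:8000/api/api/mcp (double prefix!)
```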

Mounting Strategy

When mounting an OAuth-protected server under a path prefix, declare your URLs upfront to make the relationships clear:
from fastmcp import FastMCP
from fastmcp.server.auth.providers.github import GitHubProvider
from starlette.applications import Starlette
from starlette.routing import Mount

# Define the routing structure
ROOT_URL = "http://localhost:8000"
MOUNT_PREFIX = "/api"
MCP_PATH = "/mcp"
Create the auth provider with both issuer_url and base_url:
auth = GitHubProvider(
    client_id="your-client-id",
    client_secret="your-client-secret",
    issuer_url=ROOT_URL,  # Discovery metadata at root
    base_url=f"{ROOT_URL}{MOUNT_PREFIX}",  # Operational endpoints under prefix
)
Create the MCP app, which generates operational routes at the specified path:
mcp = FastMCP("Protected Server", auth=auth)
mcp_app = mcp.http_app(path=MCP_PATH)
Retrieve the discovery routes from the auth provider. The mcp_path argument should match the path used when creating the MCP app:
well_known_routes = auth.get_well_known_routes(mcp_path=MCP_PATH)
Finally, mount everything in the Starlette app with discovery routes at root and the MCP app under the prefix:
app = Starlette(
    routes=[
        *well_known_routes,  # Discovery routes at root level
        Mount(MOUNT_PREFIX, app=mcp_app),  # Operational routes under prefix
    ],
    lifespan=mcp_app.lifespan,
)
This configuration produces the following URL structure:
  • MCP endpoint: http://localhost:8000/api/mcp
  • OAuth authorization: http://localhost:8000/api/authorize
  • OAuth callback: http://localhost:8000/api/auth/callback
  • Authorization server metadata: http://localhost:8000/.well-known/oauth-authorization-server
  • Protected resource metadata: http://localhost:8000/.well-known/oauth-protected-resource/api/mcp

Complete Example

Here’s a complete working example showing all the pieces together:
from fastmcp import FastMCP
from fastmcp.server.auth.providers.github import GitHubProvider
from starlette.applications import Starlette
from starlette.routing import Mount
import uvicorn

# Define routing structure
ROOT_URL = "http://localhost:8000"
MOUNT_PREFIX = "/api"
MCP_PATH = "/mcp"

# Create OAuth provider
auth = GitHubProvider(
    client_id="your-client-id",
    client_secret="your-client-secret",
    issuer_url=ROOT_URL,
    base_url=f"{ROOT_URL}{MOUNT_PREFIX}",
)

# Create MCP server
mcp = FastMCP("Protected Server", auth=auth)

@mcp.tool
def analyze(data: str) -> dict:
    return {"result": f"Analyzed: {data}"}

# Create MCP app
mcp_app = mcp.http_app(path=MCP_PATH)

# Get discovery routes for root level
well_known_routes = auth.get_well_known_routes(mcp_path=MCP_PATH)

# Assemble the application
app = Starlette(
    routes=[
        *well_known_routes,
        Mount(MOUNT_PREFIX, app=mcp_app),
    ],
    lifespan=mcp_app.lifespan,
)

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
For more details on OAuth authentication, see the Authentication guide.

Production Deployment

Running with Uvicorn

When deploying to production, you’ll want to optimize your server for performance and reliability. Uvicorn provides several options to improve your server’s capabilities:
# Run with basic configuration
uvicorn app:app --host 0.0.0.0 --port 8000

# Run with multiple workers for production
uvicorn app:app --host 0.0.0.0 --port 8000 --workers 4
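The same configuration can be expressed programmatically. Note that Uvicorn requires the application as an import string (rather than an object) when workers is greater than one; "app:app" here assumes your module is named app.py:

```python
import uvicorn

if __name__ == "__main__":
    # Workers > 1 requires the app as an import string, not an object
    uvicorn.run("app:app", host="0.0.0.0", port=8000, workers=4)
```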

Environment Variables

Production deployments should never hardcode sensitive information like API keys or authentication tokens. Instead, use environment variables to configure your server at runtime. This keeps your code secure and makes it easy to deploy the same code to different environments with different configurations. Here’s an example using bearer token authentication (though OAuth is recommended for production):
import os
from fastmcp import FastMCP
from fastmcp.server.auth import BearerTokenAuth

# Read configuration from environment
auth_token = os.environ.get("MCP_AUTH_TOKEN")
if auth_token:
    auth = BearerTokenAuth(token=auth_token)
    mcp = FastMCP("Production Server", auth=auth)
else:
    mcp = FastMCP("Production Server")

app = mcp.http_app()
Deploy with your secrets safely stored in environment variables:
MCP_AUTH_TOKEN=secret uvicorn app:app --host 0.0.0.0 --port 8000

OAuth Token Security

New in version: 2.13.0
If you’re using the OAuth Proxy, FastMCP issues its own JWT tokens to clients instead of forwarding upstream provider tokens. This maintains proper OAuth 2.0 token boundaries, but requires specific production configuration to ensure tokens survive server restarts.
Development vs Production: By default, token cryptographic keys are ephemeral: generated from a random salt at startup and not persisted anywhere. This means keys change on every restart, invalidating all tokens and triggering client re-authentication. This works fine for development and testing, where re-auth after restart is acceptable.
For production, tokens should survive restarts to avoid disrupting clients. This requires four things working together:
  1. Explicit JWT signing key for signing tokens issued to clients
  2. Explicit token encryption key for encrypting upstream OAuth tokens at rest
  3. Persistent storage so encrypted upstream tokens survive restart
  4. HTTPS deployment for secure cookie handling
The two keys can be any secret strings (environment variables, secret manager, etc.) and should be different from each other. FastMCP derives proper cryptographic keys from whatever you provide using HKDF.
Configuration: Add two parameters to your auth provider and use persistent storage and HTTPS:
import os

from fastmcp.server.auth.providers.github import GitHubProvider
# The RedisStore import depends on your chosen key-value storage backend

auth = GitHubProvider(
    client_id=os.environ["GITHUB_CLIENT_ID"],
    client_secret=os.environ["GITHUB_CLIENT_SECRET"],
    jwt_signing_key=os.environ["JWT_SIGNING_KEY"],
    token_encryption_key=os.environ["TOKEN_ENCRYPTION_KEY"],
    client_storage=RedisStore(host="redis.example.com", ...),
    base_url="https://your-server.com",  # use HTTPS
)
Without explicit keys, new keys are generated each time the server starts. Without persistent storage, encrypted tokens are lost. Both cause token validation to fail after restart, requiring all clients to re-authenticate. For more details on the token architecture, see OAuth Proxy Token Architecture.
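The two secrets can be generated with Python’s standard library; this is a sketch, and the environment variable names from the configuration above are just examples:

```python
import secrets

# Generate two independent, URL-safe secrets (e.g. for JWT_SIGNING_KEY and
# TOKEN_ENCRYPTION_KEY); FastMCP derives the actual cryptographic keys via HKDF
jwt_signing_key = secrets.token_urlsafe(32)
token_encryption_key = secrets.token_urlsafe(32)

print(jwt_signing_key)
print(token_encryption_key)
```

Store the generated values in your secret manager or deployment environment, not in source control.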

Testing Your Deployment

Once your server is deployed, you’ll need to verify it’s accessible and functioning correctly. For comprehensive testing strategies including connectivity tests, client testing, and authentication testing, see the Testing Your Server guide.

Hosting Your Server

This guide has shown you how to create an HTTP-accessible MCP server, but you’ll still need a hosting provider to make it available on the internet. Your FastMCP server can run anywhere that supports Python web applications:
  • Cloud VMs (AWS EC2, Google Compute Engine, Azure VMs)
  • Container platforms (Cloud Run, Container Instances, ECS)
  • Platform-as-a-Service (Railway, Render, Vercel)
  • Edge platforms (Cloudflare Workers)
  • Kubernetes clusters (self-managed or managed)
The key requirements are Python 3.10+ support and the ability to expose an HTTP port. Most providers will require you to package your server (requirements.txt, Dockerfile, etc.) according to their deployment format. For managed, zero-configuration deployment, see FastMCP Cloud.