When defining FastMCP tools, your functions might need to interact with the underlying MCP session or access server capabilities. FastMCP provides the Context object for this purpose.

What Is Context?

The Context object provides a clean interface to access MCP features within your tool functions, including:

  • Logging: Send debug, info, warning, and error messages back to the client
  • Progress Reporting: Update the client on the progress of long-running operations
  • Resource Access: Read data from resources registered with the server
  • LLM Sampling: Request the client’s LLM to generate text based on provided messages
  • Request Information: Access metadata about the current request
  • Server Access: When needed, access the underlying FastMCP server instance

Accessing Context

To use the Context object within your tool function, add a parameter to your function signature and type-hint it as Context. FastMCP automatically injects the context instance when your tool is called.

from fastmcp import FastMCP, Context

mcp = FastMCP(name="ContextDemo")

@mcp.tool()
async def process_file(file_uri: str, ctx: Context) -> str:
    """Processes a file, using context for logging and resource access."""
    request_id = ctx.request_id
    await ctx.info(f"[{request_id}] Starting processing for {file_uri}")

    try:
        # Use context to read a resource
        contents_list = await ctx.read_resource(file_uri)
        if not contents_list:
            await ctx.warning(f"Resource {file_uri} is empty.")
            return "Resource empty"

        data = contents_list[0].content # Assuming TextResourceContents
        await ctx.debug(f"Read {len(data)} characters from {file_uri}")

        # Report progress
        await ctx.report_progress(progress=50, total=100)
        
        # Simulate work
        processed_data = data.upper() # Example processing

        await ctx.report_progress(progress=100, total=100)
        await ctx.info(f"Processing complete for {file_uri}")

        return f"Processed data length: {len(processed_data)}"

    except Exception as e:
        # Use context to log errors
        await ctx.error(f"Error processing {file_uri}: {str(e)}")
        raise # Re-raise to send error back to client

Key Points:

  • The parameter name (e.g., ctx, context) doesn’t matter; only the Context type hint is important.
  • The context parameter can be placed anywhere in your function’s signature.
  • The context is optional - tools that don’t need it can omit the parameter.
  • Context is only available within tool functions during a request; attempting to use context methods outside a request will raise errors.
  • Context methods are async, so your tool function usually needs to be async as well.

Context Capabilities

Logging

Send log messages back to the MCP client. This is useful for debugging and providing visibility into tool execution during a request.

@mcp.tool()
async def analyze_data(data: list[float], ctx: Context) -> dict:
    """Analyze numerical data with logging."""
    await ctx.debug("Starting analysis of numerical data")
    await ctx.info(f"Analyzing {len(data)} data points")
    
    try:
        result = sum(data) / len(data)
        await ctx.info(f"Analysis complete, average: {result}")
        return {"average": result, "count": len(data)}
    except ZeroDivisionError:
        await ctx.warning("Empty data list provided")
        return {"error": "Empty data list"}
    except Exception as e:
        await ctx.error(f"Analysis failed: {str(e)}")
        raise

Available Logging Methods:

  • ctx.debug(message: str): Low-level details useful for debugging
  • ctx.info(message: str): General information about tool execution
  • ctx.warning(message: str): Potential issues that didn’t prevent execution
  • ctx.error(message: str): Errors that occurred during execution
  • ctx.log(level: Literal["debug", "info", "warning", "error"], message: str, logger_name: str | None = None): Generic log method supporting custom logger names

Progress Reporting

For long-running tools, notify the client about the progress of the operation. This allows clients to display progress indicators and provide a better user experience.

import asyncio

@mcp.tool()
async def process_items(items: list[str], ctx: Context) -> dict:
    """Process a list of items with progress updates."""
    total = len(items)
    results = []
    
    for i, item in enumerate(items):
        # Report progress as items completed out of the total
        await ctx.report_progress(progress=i, total=total)
        
        # Process the item (simulated with a sleep)
        await asyncio.sleep(0.1)
        results.append(item.upper())
    
    # Report 100% completion
    await ctx.report_progress(progress=total, total=total)
    
    return {"processed": len(results), "results": results}

Method signature:

  • ctx.report_progress(progress: float, total: float | None = None)
    • progress: Current progress value (e.g., 24)
    • total: Optional total value (e.g., 100). When provided, clients can derive a percentage from progress / total.

Progress reporting requires the client to have sent a progressToken in the initial request. If the client doesn’t support progress reporting, these calls will have no effect.

Resource Access

Read data from resources registered with your FastMCP server. This allows tools to access files, configuration, or dynamically generated content.

@mcp.tool()
async def summarize_document(document_uri: str, ctx: Context) -> str:
    """Summarize a document by its resource URI."""
    # Read the document content
    content_list = await ctx.read_resource(document_uri)
    
    if not content_list:
        return "Document is empty"
    
    document_text = content_list[0].content
    
    # Example: Generate a simple summary (length-based)
    words = document_text.split()
    total_words = len(words)
    
    await ctx.info(f"Document has {total_words} words")
    
    # Return a simple summary
    if total_words > 100:
        summary = " ".join(words[:100]) + "..."
        return f"Summary ({total_words} words total): {summary}"
    else:
        return f"Full document ({total_words} words): {document_text}"

Method signature:

  • ctx.read_resource(uri: str | AnyUrl) -> list[ReadResourceContents]
    • uri: The resource URI to read
    • Returns a list of resource content parts (usually containing just one item)

The returned content is typically accessed via content_list[0].content and can be text or binary data depending on the resource.

LLM Sampling

New in version 2.0.0

Request the client’s LLM to generate text based on provided messages. This is useful when your tool needs to leverage the LLM’s capabilities to process data or generate responses.

@mcp.tool()
async def analyze_sentiment(text: str, ctx: Context) -> dict:
    """Analyze the sentiment of a text using the client's LLM."""
    # Create a sampling prompt asking for sentiment analysis
    prompt = f"Analyze the sentiment of the following text as positive, negative, or neutral. Just output a single word - 'positive', 'negative', or 'neutral'. Text to analyze: {text}"
    
    # Send the sampling request to the client's LLM
    response = await ctx.sample(prompt)
    
    # Process the LLM's response
    sentiment = response.text.strip().lower()
    
    # Map to standard sentiment values
    if "positive" in sentiment:
        sentiment = "positive"
    elif "negative" in sentiment:
        sentiment = "negative"
    else:
        sentiment = "neutral"
    
    return {"text": text, "sentiment": sentiment}

Method signature:

  • ctx.sample(messages: str | list[str | SamplingMessage], system_prompt: str | None = None, temperature: float | None = None, max_tokens: int | None = None) -> TextContent | ImageContent
    • messages: A string or list of strings/message objects to send to the LLM
    • system_prompt: Optional system prompt to guide the LLM’s behavior
    • temperature: Optional sampling temperature (controls randomness)
    • max_tokens: Optional maximum number of tokens to generate (defaults to 512)
    • Returns the LLM’s response as TextContent or ImageContent

When providing a simple string, it’s treated as a user message. For more complex scenarios, you can provide a list of messages with different roles.

@mcp.tool()
async def generate_example(concept: str, ctx: Context) -> str:
    """Generate a Python code example for a given concept."""
    # Using a system prompt and a user message
    response = await ctx.sample(
        messages=f"Write a simple Python code example demonstrating '{concept}'.",
        system_prompt="You are an expert Python programmer. Provide concise, working code examples without explanations.",
        temperature=0.7,
        max_tokens=300
    )
    
    code_example = response.text
    return f"```python\n{code_example}\n```"

See Client Sampling for more details on how clients handle these requests.

Request Information

Access metadata about the current request and client.

@mcp.tool()
async def request_info(ctx: Context) -> dict:
    """Return information about the current request."""
    return {
        "request_id": ctx.request_id,
        "client_id": ctx.client_id or "Unknown client"
    }

Available Properties:

  • ctx.request_id -> str: Get the unique ID for the current MCP request
  • ctx.client_id -> str | None: Get the ID of the client making the request, if provided during initialization

Advanced Access

For advanced use cases, you can access the underlying MCP session and FastMCP server.

@mcp.tool()
async def advanced_tool(ctx: Context) -> str:
    """Demonstrate advanced context access."""
    # Access the FastMCP server instance
    server_name = ctx.fastmcp.name
    
    # Low-level session access (rarely needed)
    session = ctx.session
    request_context = ctx.request_context
    
    return f"Server: {server_name}"

Advanced Properties:

  • ctx.fastmcp -> FastMCP: Access the server instance the context belongs to
  • ctx.session: Access the raw mcp.server.session.ServerSession object
  • ctx.request_context: Access the raw mcp.shared.context.RequestContext object

Direct use of session or request_context requires understanding the low-level MCP Python SDK and may be less stable than using the methods provided directly on the Context object.

Using Context in Other Components

Currently, Context is primarily designed for use within tool functions. Support for Context in other components like resources and prompts is planned for future releases.