New in version: 2.0.0
LLM sampling allows MCP tools to ask the client's LLM to generate text from provided messages. This is useful when tools need to leverage the LLM's capabilities to process data, generate responses, or perform text-based analysis.
Why Use LLM Sampling?
LLM sampling enables tools to:
- Leverage AI capabilities: Use the client's LLM for text generation and analysis
- Offload complex reasoning: Let the LLM handle tasks requiring natural language understanding
- Generate dynamic content: Create responses, summaries, or transformations based on data
- Maintain context: Use the same LLM instance that the user is already interacting with
Basic Usage
Use `ctx.sample()` to request text generation from the client's LLM:
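A minimal sketch of a tool that calls `ctx.sample()`; the server name, tool, and prompt here are illustrative:

```python
from fastmcp import FastMCP, Context

mcp = FastMCP("Sampling Example")

@mcp.tool()
async def summarize_text(text: str, ctx: Context) -> str:
    """Summarize text using the client's LLM."""
    # ctx.sample() sends the request to the connected client,
    # which runs it through its own LLM and returns the result.
    response = await ctx.sample(f"Summarize the following text:\n\n{text}")
    # The response is a content object; .text holds the generated string.
    return response.text
```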
Method Signature
Context.sample: request text generation from the client's LLM.
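A simplified sketch of the signature; the parameter list follows the FastMCP 2.x API, but consult the API reference for your installed version for the authoritative definition:

```python
from mcp.types import ModelPreferences, SamplingMessage, TextContent

async def sample(
    self,
    messages: str | list[str | SamplingMessage],
    system_prompt: str | None = None,        # optional steering prompt
    temperature: float | None = None,        # sampling temperature
    max_tokens: int | None = None,           # cap on generated tokens
    model_preferences: ModelPreferences | str | list[str] | None = None,
) -> TextContent:
    """Request text generation from the client's LLM."""
    # Return type simplified here; see the API reference for details.
    ...
```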
Simple Text Generation
Basic Prompting
Generate text with simple string prompts:
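A sketch continuing the `mcp` server defined above; the tool and prompt are illustrative:

```python
@mcp.tool()
async def explain_concept(concept: str, ctx: Context) -> str:
    """Explain a concept in plain language."""
    # A bare string is wrapped in a single user message for the LLM.
    response = await ctx.sample(f"Explain {concept} in one short paragraph.")
    return response.text
```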
System Prompt
Use system prompts to guide the LLM's behavior:
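For example (a sketch; the prompt text and parameter values are illustrative):

```python
@mcp.tool()
async def generate_haiku(topic: str, ctx: Context) -> str:
    """Generate a haiku about the given topic."""
    response = await ctx.sample(
        f"Write a haiku about {topic}.",
        # The system prompt constrains the LLM's tone and format.
        system_prompt="You are a poet. Reply with the poem only.",
        temperature=0.7,
        max_tokens=50,
    )
    return response.text
```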
Model Preferences
Specify model preferences for different use cases:
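`model_preferences` accepts a model-name hint or an ordered list of hints; the model names below are illustrative, and the client always makes the final model choice:

```python
@mcp.tool()
async def classify_spam(text: str, ctx: Context) -> str:
    """Cheap, fast classification: hint at a small model."""
    response = await ctx.sample(
        f"Answer 'spam' or 'not spam' only: {text}",
        model_preferences="claude-3-haiku",  # single hint; client decides
        max_tokens=5,
    )
    return response.text

@mcp.tool()
async def deep_analysis(text: str, ctx: Context) -> str:
    """Complex reasoning: hint at more capable models."""
    response = await ctx.sample(
        f"Analyze the argument structure of:\n\n{text}",
        model_preferences=["claude-3-opus", "gpt-4"],  # ordered hints
    )
    return response.text
```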
Complex Message Structures
Use structured messages for more complex interactions:
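A sketch of a scripted multi-turn exchange, assuming the `SamplingMessage` and `TextContent` types from the underlying `mcp` package; the conversation content is illustrative:

```python
from mcp.types import SamplingMessage, TextContent

@mcp.tool()
async def follow_up(ctx: Context) -> str:
    """Continue a scripted exchange with the client's LLM."""
    messages = [
        SamplingMessage(
            role="user",
            content=TextContent(type="text", text="What is the capital of France?"),
        ),
        SamplingMessage(
            role="assistant",
            content=TextContent(type="text", text="The capital of France is Paris."),
        ),
        SamplingMessage(
            role="user",
            content=TextContent(type="text", text="What is its population?"),
        ),
    ]
    # Structured messages let the tool supply prior turns as context.
    response = await ctx.sample(messages)
    return response.text
```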
Client Requirements
LLM sampling requires client support:
- Clients must implement a sampling handler to process requests (a sketch follows below)
- If the client doesn't support sampling, calls to `ctx.sample()` will fail
- See Client Sampling for details on implementing client-side sampling handlers
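For reference, a hedged sketch of registering a handler on the FastMCP `Client`; the handler body is a placeholder, and the `sampling_handler` keyword and import path follow the FastMCP 2.x client API, so verify them against your installed version:

```python
from fastmcp import Client
from fastmcp.client.sampling import (
    SamplingMessage,
    SamplingParams,
    RequestContext,
)

async def sampling_handler(
    messages: list[SamplingMessage],
    params: SamplingParams,
    context: RequestContext,
) -> str:
    # Placeholder: forward `messages` to the LLM of your choice
    # and return the generated text.
    return "Generated response from your LLM"

# Attach the handler so the server's ctx.sample() calls succeed.
client = Client(mcp, sampling_handler=sampling_handler)
```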