New in version 2.0.0

Use this when you need to retrieve server-defined message templates for LLM interactions.

Prompts are reusable message templates exposed by MCP servers. They can accept arguments to generate personalized message sequences for LLM interactions.
```python
async with client:
    # Simple prompt without arguments
    result = await client.get_prompt("welcome_message")
    # result -> mcp.types.GetPromptResult

    # Access the generated messages
    for message in result.messages:
        print(f"Role: {message.role}")
        print(f"Content: {message.content}")
```
Pass arguments to customize the prompt:
```python
async with client:
    result = await client.get_prompt("user_greeting", {
        "name": "Alice",
        "role": "administrator"
    })

    for message in result.messages:
        print(f"Generated message: {message.content}")
```
New in version 2.9.0

FastMCP automatically serializes complex arguments to JSON strings, as required by the MCP specification. You can pass typed objects directly:
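For illustration, here is a minimal sketch of what that looks like. The `UserProfile` dataclass and the `analyze_profile` prompt name are hypothetical; the serialization shown is what the client performs for you (`pydantic_core` ships as a dependency of pydantic):

```python
from dataclasses import dataclass
from pydantic_core import to_json

@dataclass
class UserProfile:
    name: str
    role: str

profile = UserProfile(name="Alice", role="administrator")

# What travels on the wire: a JSON string, since the MCP spec
# requires prompt arguments to be strings
serialized = to_json(profile).decode()
print(serialized)

# You would pass the typed object directly; serialization happens
# inside the client (hypothetical prompt name):
# result = await client.get_prompt("analyze_profile", {"profile": profile})
```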
The client handles serialization using pydantic_core.to_json() for consistent formatting. FastMCP servers automatically deserialize these JSON strings back to the expected types.
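The round trip can be sketched as follows. The `Filters` model is a hypothetical server-side parameter type; `TypeAdapter.validate_json` stands in here for the deserialization the server performs:

```python
from pydantic import BaseModel, TypeAdapter
from pydantic_core import to_json

class Filters(BaseModel):
    tags: list[str]
    limit: int

# Client side: a complex argument becomes a compact JSON string
wire_value = to_json({"tags": ["urgent", "review"], "limit": 5}).decode()

# Server side (sketch): the JSON string is validated back into
# the annotated parameter type
restored = TypeAdapter(Filters).validate_json(wire_value)
print(restored.tags, restored.limit)
```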
The `get_prompt()` method returns a `GetPromptResult` containing a list of messages:
```python
async with client:
    result = await client.get_prompt("conversation_starter", {"topic": "climate"})

    for i, message in enumerate(result.messages):
        print(f"Message {i + 1}:")
        print(f"  Role: {message.role}")
        print(f"  Content: {message.content.text if hasattr(message.content, 'text') else message.content}")
```
Prompts can generate different message types. System messages configure LLM behavior:
```python
async with client:
    result = await client.get_prompt("system_configuration", {
        "role": "helpful assistant",
        "expertise": "python programming"
    })

    # Access the returned message (a single system message here)
    message = result.messages[0]
    print(f"Prompt: {message.content}")
```
Conversation templates generate multi-turn flows:
```python
async with client:
    result = await client.get_prompt("interview_template", {
        "candidate_name": "Alice",
        "position": "Senior Developer"
    })

    # Multiple messages for a conversation flow
    for message in result.messages:
        print(f"{message.role}: {message.content}")
```
New in version 3.0.0

When a server exposes multiple versions of a prompt, you can request a specific version:
```python
async with client:
    # Get the highest version (default)
    result = await client.get_prompt("summarize", {"text": "..."})

    # Get a specific version
    result_v1 = await client.get_prompt("summarize", {"text": "..."}, version="1.0")
```
See Metadata for how to discover available versions.