Creating Reusable Prompt Templates in MCP
Integrating large language models (LLMs) into applications requires a robust framework, and the Model Context Protocol (MCP) provides one by enabling structured, bidirectional communication between clients and servers. Here, we delve into creating reusable prompt templates within MCP to streamline and enhance LLM interactions.
Understanding the Core Architecture
MCP architecture connects hosts, clients, and servers. Hosts are the applications that initiate connections, while clients within the hosts establish direct communication with servers. Servers provide the core functionality, including tools, prompts, and resources that clients can leverage.
flowchart LR
    subgraph "Host"
        client1[MCP Client]
        client2[MCP Client]
    end
    subgraph serverA["Server Process"]
        server1[MCP Server]
    end
    subgraph serverB["Server Process"]
        server2[MCP Server]
    end
    client1 <-->|Transport Layer| server1
    client2 <-->|Transport Layer| server2
This architecture’s client-server design allows for scalability and modular enhancements, making it suitable for various AI-driven applications.
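To make this concrete, here is a minimal sketch of a host-side client connecting to a server over the stdio transport using the TypeScript SDK. The server command and file name are illustrative placeholders, not part of the protocol.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the server as a child process and communicate over stdio.
// "node" and "prompts-server.js" are illustrative placeholders.
const transport = new StdioClientTransport({
  command: "node",
  args: ["prompts-server.js"]
});

const client = new Client(
  { name: "example-client", version: "1.0.0" },
  { capabilities: {} }
);

// Performs the initialization handshake with the server.
await client.connect(transport);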
Prompts: A Key Ingredient
Overview
Prompts in MCP are reusable templates that guide LLM interactions and can:
- Accept dynamic arguments
- Include context from resources
- Chain multiple interactions
- Guide specific workflows
Prompts are user-controlled, ensuring users explicitly select them for use.
Structure
A prompt is defined with:
{
name: string; // Unique identifier for the prompt
description?: string; // Human-readable description
arguments?: [ // Optional list of arguments
{
name: string; // Argument identifier
description?: string; // Argument description
required?: boolean; // Whether argument is required
}
]
}
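For example, a summarization prompt (an illustrative definition, not taken from the spec) would fill in this shape as:

{
  name: "summarize-text",
  description: "Summarize a passage of text",
  arguments: [
    {
      name: "text",
      description: "The text to summarize",
      required: true
    }
  ]
}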
Discovering and Using Prompts
Clients can discover available prompts via the prompts/list endpoint:
// Request
{
method: "prompts/list"
}
// Response
{
prompts: [
{
name: "analyze-code",
description: "Analyze code for potential improvements",
arguments: [
{
name: "language",
description: "Programming language",
required: true
}
]
}
]
}
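With the TypeScript SDK, a client can issue this request through the listPrompts convenience method. A minimal sketch, assuming a client already connected as shown earlier:

// `client` is an already-connected Client instance.
const { prompts } = await client.listPrompts();
for (const prompt of prompts) {
  console.log(`${prompt.name}: ${prompt.description ?? "(no description)"}`);
}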
To use a prompt, clients send a prompts/get request:
// Request
{
method: "prompts/get",
params: {
name: "analyze-code",
arguments: {
language: "python"
}
}
}
// Response
{
description: "Analyze Python code for potential improvements",
messages: [
{
role: "user",
content: {
type: "text",
text: "Please analyze the following Python code for potential improvements:\n\n```python\ndef calculate_sum(numbers):\n total = 0\n for num in numbers:\n total = total + num\n return total\n\nresult = calculate_sum([1, 2, 3, 4, 5])\nprint(result)\n```"
}
}
]
}
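On the client side, the SDK's getPrompt helper wraps this exchange; a minimal sketch:

// `client` is an already-connected Client instance.
const result = await client.getPrompt({
  name: "analyze-code",
  arguments: { language: "python" }
});

// The returned messages are ready to forward to an LLM.
for (const message of result.messages) {
  console.log(message.role, message.content);
}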
Dynamic Prompts
Prompts can be dynamic, embedding resource context and enabling multi-step workflows.
{
"name": "analyze-project",
"description": "Analyze project logs and code",
"arguments": [
{
"name": "timeframe",
"description": "Time period to analyze logs",
"required": true
},
{
"name": "fileUri",
"description": "URI of code file to review",
"required": true
}
]
}
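A handler for such a prompt can embed resource contents directly in the returned messages. Below is a sketch of the branch you might add inside a prompts/get handler like the one in the next section; readLogs and readFile are hypothetical helpers standing in for your server's own resource access.

// Hypothetical branch inside an async prompts/get handler.
if (request.params.name === "analyze-project") {
  const { timeframe, fileUri } = request.params.arguments ?? {};
  return {
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          text: "Analyze these logs and this code file for problems:"
        }
      },
      {
        role: "user",
        content: {
          type: "resource",
          resource: {
            uri: `logs://recent?timeframe=${timeframe}`,
            text: await readLogs(timeframe), // hypothetical helper
            mimeType: "text/plain"
          }
        }
      },
      {
        role: "user",
        content: {
          type: "resource",
          resource: {
            uri: fileUri,
            text: await readFile(fileUri), // hypothetical helper
            mimeType: "text/plain"
          }
        }
      }
    ]
  };
}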
Implementation Example
Here’s how you might implement prompts in an MCP server:
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import {
  ListPromptsRequestSchema,
  GetPromptRequestSchema
} from "@modelcontextprotocol/sdk/types.js";
const PROMPTS = {
"git-commit": {
name: "git-commit",
description: "Generate a Git commit message",
arguments: [
{
name: "changes",
description: "Git diff or description of changes",
required: true
}
]
},
"explain-code": {
name: "explain-code",
description: "Explain how code works",
arguments: [
{
name: "code",
description: "Code to explain",
required: true
},
{
name: "language",
description: "Programming language",
required: false
}
]
}
};
const server = new Server({
name: "example-prompts-server",
version: "1.0.0"
}, {
capabilities: {
prompts: {}
}
});
// List available prompts
server.setRequestHandler(ListPromptsRequestSchema, async () => {
return {
prompts: Object.values(PROMPTS)
};
});
// Get specific prompt
server.setRequestHandler(GetPromptRequestSchema, async (request) => {
const prompt = PROMPTS[request.params.name];
if (!prompt) {
throw new Error(`Prompt not found: ${request.params.name}`);
}
if (request.params.name === "git-commit") {
return {
messages: [
{
role: "user",
content: {
type: "text",
text: `Generate a concise but descriptive commit message for these changes:\n\n${request.params.arguments?.changes}`
}
}
]
};
}
if (request.params.name === "explain-code") {
const language = request.params.arguments?.language || "Unknown";
return {
messages: [
{
role: "user",
content: {
type: "text",
text: `Explain how this ${language} code works:\n\n${request.params.arguments?.code}`
}
}
]
};
}
throw new Error("Prompt implementation not found");
});
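The handlers above are registered, but the server is not yet listening; connecting it to a stdio transport completes the example:

import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

// Serve requests over standard input/output.
const transport = new StdioServerTransport();
await server.connect(transport);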
Best Practices
- Use clear, descriptive prompt names and detailed descriptions.
- Validate all required arguments (a validation sketch follows this list).
- Handle missing arguments gracefully.
- Version prompt templates where necessary.
- Implement error handling.
- Consider prompt composability and test with various inputs.
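As a concrete example of the validation and error-handling points above, here is a minimal sketch built around a hypothetical validateArguments helper:

// Hypothetical helper: throws if any required argument is missing.
function validateArguments(
  prompt: { arguments?: { name: string; required?: boolean }[] },
  args: Record<string, string> = {}
): void {
  for (const arg of prompt.arguments ?? []) {
    if (arg.required && !(arg.name in args)) {
      throw new Error(`Missing required argument: ${arg.name}`);
    }
  }
}

// Inside the prompts/get handler, before building messages:
// validateArguments(prompt, request.params.arguments);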
UI Integration
Prompts can surface in client UIs as:
- Slash commands
- Quick actions
- Context menu items
- Command palette entries
- Guided workflows
- Interactive forms
Conclusion
Creating reusable prompt templates in MCP gives LLM interactions structure: prompts can be discovered, parameterized with arguments, enriched with resource context, and composed into guided workflows. By following the best practices above and leveraging MCP's protocol, developers can build AI applications that use prompt templates for streamlined user interactions and scalable, maintainable functionality.