Deployment Basics
This guide covers the fundamental approaches to deploying MCP Catie (Context Aware Traffic Ingress Engine) in various environments.
Deployment Options
MCP Catie can be deployed in several ways depending on your infrastructure requirements:
- Standalone Binary: Direct deployment of the compiled Go binary
- Docker Container: Containerized deployment using Docker
- Kubernetes: Orchestrated deployment in Kubernetes clusters
- Cloud Platforms: Deployment on managed cloud services
Standalone Binary Deployment
For simple deployments or development environments, you can run MCP Catie directly as a compiled binary.
Prerequisites
- Go 1.18 or higher (for building)
- Linux, macOS, or Windows environment
Steps
- Build the binary:

```shell
go build -o mcp-catie ./cmd/main.go
```

- Create your configuration file:

```shell
cp router_config.yaml.example router_config.yaml
# Edit router_config.yaml to match your environment
```

- Run the service:

```shell
./mcp-catie
```

- Verify the service is running:

```shell
curl http://localhost:80/health
```
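To keep the binary running across restarts and reboots, it can be wrapped in a process manager. A minimal systemd unit sketch, assuming the binary and config live under `/opt/mcp-catie` (a hypothetical path, not taken from the project):

```ini
# /etc/systemd/system/mcp-catie.service — paths are assumptions, adjust to your layout
[Unit]
Description=MCP Catie router
After=network.target

[Service]
WorkingDirectory=/opt/mcp-catie
ExecStart=/opt/mcp-catie/mcp-catie
Environment=CONFIG_FILE=/opt/mcp-catie/router_config.yaml
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After `systemctl daemon-reload`, enable it with `systemctl enable --now mcp-catie`.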
Docker Deployment
Docker provides a consistent deployment environment and simplifies dependency management.
Prerequisites
- Docker installed on your host system
Steps
- Build the Docker image:

```shell
docker build -t mcp-catie:latest .
```

- Run the container:

```shell
docker run -d \
  --name mcp-catie \
  -p 80:80 \
  -v $(pwd)/router_config.yaml:/app/router_config.yaml \
  mcp-catie:latest
```

- Check container status:

```shell
docker ps
docker logs mcp-catie
```
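The build step above assumes a Dockerfile at the repository root. If you need to write one, a multi-stage sketch along these lines keeps the runtime image small; the base images, Go version, and paths here are assumptions, not taken from the project:

```dockerfile
# Hypothetical multi-stage build — adjust versions and paths to the actual repo
FROM golang:1.18 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /mcp-catie ./cmd/main.go

FROM alpine:3.19
# curl is needed if you use the curl-based healthcheck shown below
RUN apk add --no-cache curl
WORKDIR /app
COPY --from=build /mcp-catie /app/mcp-catie
EXPOSE 80
ENTRYPOINT ["/app/mcp-catie"]
```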
Docker Compose Deployment
For multi-container deployments, Docker Compose offers a convenient way to define and run services.
Prerequisites
- Docker and Docker Compose installed
Steps
- Create a `docker-compose.yml` file:

```yaml
version: '3'
services:
  mcp-catie:
    build: .
    ports:
      - "80:80"
    volumes:
      - ./router_config.yaml:/app/router_config.yaml
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/health"]
      interval: 30s
      timeout: 10s
      retries: 3
```
- Start the services:

```shell
docker-compose up -d
```

- View logs:

```shell
docker-compose logs -f
```
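Per-environment settings can be layered on without editing the main compose file by using an override file. A sketch, assuming the environment variables listed in the Environment Variables section of this page:

```yaml
# docker-compose.override.yml — values here are examples, not defaults
services:
  mcp-catie:
    environment:
      LOG_LEVEL: debug
      METRICS_ENABLED: "true"
```

Docker Compose merges this file automatically when you run `docker-compose up -d` in the same directory.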
Kubernetes Deployment
For production environments, Kubernetes provides scalability, high availability, and automated management.
Prerequisites
- Kubernetes cluster
- kubectl configured
Steps
- Create a ConfigMap for the router configuration:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mcp-catie-config
data:
  router_config.yaml: |
    resources:
      "^weather/.*": "http://weather-service:8080/mcp"
      "^database/.*": "http://database-service:8080/mcp"
    tools:
      "^calculator$": "http://calculator-service:8080/mcp"
      "^translator$": "http://translator-service:8080/mcp"
    default: "http://default-service:8080/mcp"
    ui:
      username: "admin"
      password: "your_secure_password"
```
- Create a Deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mcp-catie
  labels:
    app: mcp-catie
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mcp-catie
  template:
    metadata:
      labels:
        app: mcp-catie
    spec:
      containers:
        - name: mcp-catie
          image: mcp-catie:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 80
          volumeMounts:
            - name: config-volume
              mountPath: /app/router_config.yaml
              subPath: router_config.yaml
          livenessProbe:
            httpGet:
              path: /health
              port: 80
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 5
      volumes:
        - name: config-volume
          configMap:
            name: mcp-catie-config
```
- Create a Service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mcp-catie
spec:
  selector:
    app: mcp-catie
  ports:
    - port: 80
      targetPort: 80
  type: ClusterIP
```
- Apply the configurations:

```shell
kubectl apply -f configmap.yaml
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
```

- Check deployment status:

```shell
kubectl get pods
kubectl logs -l app=mcp-catie
```
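To scale beyond the fixed `replicas: 2`, a HorizontalPodAutoscaler can be layered on top of the Deployment. A sketch, assuming the cluster has the metrics server installed; the replica bounds and target utilization are example values:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: mcp-catie
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: mcp-catie
  minReplicas: 2
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Note that CPU-based autoscaling only works once the Deployment's containers declare CPU requests.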
Environment Variables
MCP Catie supports the following environment variables for configuration:
| Variable | Description | Default |
|---|---|---|
| `CONFIG_FILE` | Path to the router configuration file | `router_config.yaml` |
| `PORT` | Port to listen on | `80` |
| `LOG_LEVEL` | Logging level (debug, info, warn, error) | `info` |
| `METRICS_ENABLED` | Enable Prometheus metrics | `true` |
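The defaults in the table above can be mirrored in a small launch wrapper, which is handy when a process manager doesn't support per-variable defaults. This is a sketch; the final `exec` line is commented out so the snippet stands alone:

```shell
#!/bin/sh
# Apply the documented defaults when the variables are unset or empty.
CONFIG_FILE="${CONFIG_FILE:-router_config.yaml}"
PORT="${PORT:-80}"
LOG_LEVEL="${LOG_LEVEL:-info}"
METRICS_ENABLED="${METRICS_ENABLED:-true}"
echo "config=${CONFIG_FILE} port=${PORT} log=${LOG_LEVEL} metrics=${METRICS_ENABLED}"
# exec env CONFIG_FILE="$CONFIG_FILE" PORT="$PORT" LOG_LEVEL="$LOG_LEVEL" ./mcp-catie
```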
Production Considerations
When deploying to production environments, consider the following:
- High Availability: Deploy multiple instances behind a load balancer
- Monitoring: Set up Prometheus and Grafana for metrics visualization
- Logging: Configure centralized logging with ELK or similar stack
- Security:
- Use HTTPS with proper certificates
- Implement network policies to restrict access
- Regularly update dependencies and the base image
- Configuration Management: Use secrets management for sensitive configuration
- Resource Limits: Set appropriate CPU and memory limits
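The resource-limits point can be applied directly to the container spec in the Deployment shown earlier. A sketch with example values; tune them from observed usage rather than adopting these numbers as-is:

```yaml
# Add under the mcp-catie container in the Deployment spec — values are examples
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi
```

Requests also matter for scheduling and for any CPU-based autoscaling, so set both rather than limits alone.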
Troubleshooting
Common deployment issues and their solutions:
- Service Unavailable:
  - Check if the service is running: `docker ps` or `kubectl get pods`
  - Verify network connectivity: `curl http://service-ip/health`
  - Check logs for errors: `docker logs mcp-catie` or `kubectl logs pod-name`
- Configuration Issues:
  - Validate YAML syntax: `yamllint router_config.yaml`
  - Check if the configuration file is mounted correctly
  - Verify the configuration is being loaded (check logs)
- Performance Problems:
  - Check resource usage: `docker stats` or Kubernetes metrics
  - Review Prometheus metrics for bottlenecks
  - Consider scaling horizontally by adding more replicas
Next Steps
After basic deployment, consider exploring: