A client application for interacting with Model Context Protocol (MCP) servers. It provides an AI-powered chat interface that can call tools on various MCP servers to perform tasks such as web browsing, Slack operations, and more.
The Model Context Protocol (MCP) enables AI models to securely connect to external tools and data sources. This client acts as a bridge between your conversations and the various MCP servers that provide different capabilities.
- 🤖 AI-Powered Chat Interface - Natural language interaction with MCP tools
- 🔧 Configurable Server Connections - Easy JSON-based server configuration
- 🐳 Docker Support - Containerized server deployment
- 📊 Multiple Server Types:
  - Hello Server - Local Python server for testing
  - Puppeteer Server - Web browser automation and scraping
  - Slack Integration - Send messages and interact with Slack workspaces
- 📈 Logfire Integration - Comprehensive logging and monitoring
- ⚡ Async Processing - Non-blocking server communication
From the root folder, install Python dependencies:
pip install -r requirements.txt
Pull the required Docker images for containerized servers:
docker pull mcp/puppeteer:latest
docker pull mcp/slack:latest
Create a `.env` file in the root directory and configure the following variables:
# Required: OpenAI API Key for the AI agent
OPENAI_API_KEY=sk-your-openai-api-key-here
# Optional: Environment setting (development/production)
ENVIRONMENT=development
# Optional: Logfire configuration for monitoring
LOGFIRE_TOKEN=your-logfire-token-here
# Optional: Alternative LLM API key (if using different provider)
LLM_API_KEY=your-alternative-llm-key
# Optional: Logfire Pydantic plugin recording setting
LOGFIRE_PYDANTIC_PLUGIN_RECORD=all
- `OPENAI_API_KEY` (Required): Your OpenAI API key. Get one from the OpenAI Platform.
- `ENVIRONMENT` (Optional): Set to `development` or `production` to control logging levels.
- `LOGFIRE_TOKEN` (Optional): Token for the Logfire monitoring service. Sign up at Logfire.
- `LLM_API_KEY` (Optional): Alternative API key if using a different LLM provider.
- `LOGFIRE_PYDANTIC_PLUGIN_RECORD` (Optional): Controls what Logfire records (`all`, `failure`, or `off`).
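How these variables reach the client depends on the implementation, but a common pattern is to load the `.env` file at startup and fail fast if the required key is missing. A minimal sketch, assuming python-dotenv is installed (the actual loading code in `client/main.py` may differ):

```python
# Minimal sketch of loading .env values at startup (assumes python-dotenv;
# the real client may load these differently).
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory

if not os.getenv("OPENAI_API_KEY"):
    raise RuntimeError("OPENAI_API_KEY is not set; add it to your .env file")

environment = os.getenv("ENVIRONMENT", "development")
logfire_token = os.getenv("LOGFIRE_TOKEN")  # optional; monitoring is skipped if unset
```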
The default configuration in `client/servers_config.json` should work out of the box. You can customize server settings if needed:
{
"mcpServers": {
"hello": {
"command": "python",
"args": ["/path/to/server/main.py"]
},
"puppeteer": {
"command": "docker",
"args": ["run", "-i", "--rm", "--init", "-e", "DOCKER_CONTAINER=true", "mcp/puppeteer"]
}
}
}
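The client reads this file to decide which servers to launch and how. As an illustration only (the real parsing code in `client/main.py` may differ), the entries above could be turned into stdio launch parameters with the official MCP Python SDK like this:

```python
# Sketch: mapping servers_config.json entries to stdio launch parameters.
# Assumes the official `mcp` Python SDK; the real client may structure this differently.
import json

from mcp import StdioServerParameters

with open("client/servers_config.json") as f:
    config = json.load(f)

server_params = {
    name: StdioServerParameters(
        command=entry["command"],
        args=entry.get("args", []),
        env=entry.get("env"),  # optional per-server environment variables
    )
    for name, entry in config["mcpServers"].items()
}
```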
Start the MCP client:
python client/main.py
Once the client starts, you'll see a chat interface. The AI agent has access to all configured MCP servers and can use their tools to help you.
You: Can you browse to example.com and tell me the page title?
Assistant: I'll use the Puppeteer server to browse to example.com and get the page title for you...
You: Send a message to the #general channel in Slack saying "Hello team!"
Assistant: I'll use the Slack server to send that message to the #general channel...
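Under the hood, requests like these are answered by calling tools exposed by the configured servers. If you want to check by hand which tools a server offers, the sketch below connects to the local hello server over stdio and lists them; it assumes the official MCP Python SDK and the default server path, which may differ from your setup.

```python
# Sketch: connect to one configured server over stdio and list its tools.
# Assumes the official `mcp` Python SDK and the local hello server path.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

params = StdioServerParameters(command="python", args=["server/main.py"])

async def main() -> None:
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")

asyncio.run(main())
```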
- Type your questions or requests naturally
- Use `quit` or `exit` to end the session
- Press `Ctrl+C` to interrupt a long-running operation (see the sketch below)
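The interaction model boils down to a simple read-eval loop. The sketch below shows its general shape; the actual loop in `client/main.py` may differ, and `ask_agent` is a hypothetical stand-in for the call into the AI agent.

```python
# Sketch of the chat loop's exit and interrupt handling (illustrative only;
# `ask_agent` is a hypothetical placeholder for the real agent call).
def ask_agent(prompt: str) -> str:
    return f"(agent reply to: {prompt})"  # placeholder

while True:
    try:
        user_input = input("You: ").strip()
        if user_input.lower() in {"quit", "exit"}:
            break
        print("Assistant:", ask_agent(user_input))
    except KeyboardInterrupt:
        # Ctrl+C aborts the current operation but keeps the session alive
        print("\nInterrupted. Type 'quit' or 'exit' to end the session.")
```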
For Slack integration, you'll need to:
- Create a Slack app in your workspace
- Get your bot token (starts with `xoxb-`)
- Add these to your environment or `servers_config.json` (see the example below):
SLACK_TEAM_ID=T01234567
SLACK_CHANNEL_IDS=C01234567,C76543210
SLACK_BOT_TOKEN=xoxb-your-bot-token
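For example, a Slack entry in `servers_config.json` could look like the snippet below. The `env` block and `-e` flags follow the same pattern as the other Docker servers; confirm the exact variable names the `mcp/slack` image expects against that server's documentation.

```json
"slack": {
  "command": "docker",
  "args": ["run", "-i", "--rm",
           "-e", "SLACK_BOT_TOKEN", "-e", "SLACK_TEAM_ID", "-e", "SLACK_CHANNEL_IDS",
           "mcp/slack"],
  "env": {
    "SLACK_BOT_TOKEN": "xoxb-your-bot-token",
    "SLACK_TEAM_ID": "T01234567",
    "SLACK_CHANNEL_IDS": "C01234567,C76543210"
  }
}
```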
mcp-video/
├── client/
│ ├── main.py # Main client application
│ ├── servers_config.json # MCP server configuration
│ └── config_logfire.py # Logfire logging setup
├── server/
│ ├── main.py # Local hello server
│ └── README.md # Server-specific documentation
├── requirements.txt # Python dependencies
├── .env # Environment variables (create this)
├── .gitignore # Git ignore rules
└── README.md # This file
"MCP server initialization timed out"
- Check if Docker is running: `docker --version`
- Verify server images are pulled: `docker images | grep mcp`
- Increase timeout in the code if needed (see the sketch below)
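Where exactly the timeout lives depends on the client code; the sketch below shows the usual pattern of wrapping the MCP handshake in `asyncio.wait_for`, with a hypothetical constant name.

```python
# Sketch: raising the startup timeout (the constant name and location are
# assumptions; adjust whatever value client/main.py actually uses).
import asyncio

INIT_TIMEOUT_SECONDS = 60  # hypothetical; increase if Docker servers start slowly

async def initialize_with_timeout(session) -> None:
    # Fail loudly if a server does not complete the MCP handshake in time.
    await asyncio.wait_for(session.initialize(), timeout=INIT_TIMEOUT_SECONDS)
```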
"OpenAI API key not found"
- Ensure the `.env` file exists in the root directory
- Check that `OPENAI_API_KEY` is properly set
- Verify your API key is valid and has credits
Docker servers not responding
- Ensure Docker Desktop is running
- Check Docker daemon status: `docker info`
- Try restarting the Docker service
Slack authentication errors
- Create an app on Slack
- Verify your tokens and credentials
- Check API permissions and scopes
- Ensure your workspace/account has necessary access
Enable verbose logging by setting:
ENVIRONMENT=development
LOGFIRE_PYDANTIC_PLUGIN_RECORD=all
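If you need to adjust monitoring itself, `client/config_logfire.py` is where the Logfire setup lives. A minimal sketch of what such a module typically contains (the real file may differ):

```python
# Sketch of a Logfire setup similar to client/config_logfire.py (illustrative only).
import os

import logfire

logfire.configure(
    token=os.getenv("LOGFIRE_TOKEN"),
    send_to_logfire="if-token-present",  # only export when a token is configured
)
```

Logfire's Pydantic plugin reads `LOGFIRE_PYDANTIC_PLUGIN_RECORD` from the environment, so no extra code is needed for that setting.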
The client automatically performs health checks on HTTP-based servers. Check the logs for server status information.
- Python: 3.8 or higher
- Docker: For Puppeteer and Slack servers
- OpenAI API Key: For the AI agent
- Internet Connection: For external MCP servers