A comprehensive Model Context Protocol (MCP) server for Ollama integration with advanced features including script management, multi-agent workflows, and process leak prevention.
- 🔄 Async Job Management: Execute long-running tasks in the background
- 📝 Script Templates: Create reusable prompt templates with variable substitution
- 🤖 Fast-Agent Integration: Multi-agent workflows (chain, parallel, router, evaluator)
- 🛡️ Process Leak Prevention: Proper cleanup and resource management
- 📊 Comprehensive Monitoring: Job tracking, status monitoring, and output management
- 🎯 Built-in Prompts: Interactive guidance templates for common tasks
- ⚡ Multiple Model Support: Work with any locally installed Ollama model
- Python 3.8+ with uv package manager
- Ollama installed and running
- Claude Desktop for MCP integration
- Setup Environment:
cd /path/to/ollama-mcp-server
uv venv --python 3.12 --seed
source .venv/bin/activate
uv add "mcp[cli]" python-dotenv
- Configure Claude Desktop: Copy the configuration from `example_claude_desktop_config.json` (not `example_of_bad_ai_gen_mcp_config_do_not_use.json`) into your Claude Desktop config file:
  - Linux: `~/.config/Claude/claude_desktop_config.json`
  - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
  - Windows: `%APPDATA%\Claude\claude_desktop_config.json`
- Update paths in the config to match your system (see the example config below)
- Restart Claude Desktop
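For reference, the config usually takes this shape (a minimal sketch; the server name, `uv` invocation, and paths below are placeholders to adapt, not copied from the repo's example file):

```json
{
  "mcpServers": {
    "ollama-mcp-server": {
      "command": "uv",
      "args": [
        "run",
        "--directory", "/path/to/ollama-mcp-server",
        "python", "-m", "ollama_mcp_server.server"
      ]
    }
  }
}
```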
- `list_ollama_models` - Show all available Ollama models
- `run_ollama_prompt` - Execute prompts with any model (sync/async)
- `get_job_status` - Check job completion status
- `list_jobs` - View all running and completed jobs
- `cancel_job` - Stop running jobs
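These tools are normally invoked from Claude, but any MCP client can drive them. A minimal sketch using the MCP Python SDK (the `model`/`prompt` argument names are assumptions; inspect the real schemas with `session.list_tools()`):

```python
# Hedged sketch: the argument keys below are illustrative, not verified
# against this server's actual tool schemas.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

params = StdioServerParameters(
    command="uv",
    args=["run", "python", "-m", "ollama_mcp_server.server"],
)

async def main() -> None:
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Fire off a prompt; for async jobs the result should carry a
            # job id that get_job_status / cancel_job can act on.
            result = await session.call_tool(
                "run_ollama_prompt",
                {"model": "llama3.2", "prompt": "Write a haiku about rivers."},
            )
            print(result.content)

asyncio.run(main())
```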
- `save_script` - Create reusable prompt templates
- `list_scripts` - View saved templates
- `get_script` - Read template content
- `run_script` - Execute templates with variables
- `create_fastagent_script` - Single-agent scripts
- `create_fastagent_workflow` - Multi-agent workflows
- `run_fastagent_script` - Execute agent workflows
- `list_fastagent_scripts` - View available workflows
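As an illustration of the workflow types, a chain might be created like this (hypothetical: it reuses the `session` from the client sketch above, and the `workflow_type`/`agents` argument shape is invented for this example, not taken from the server's schema):

```python
# Hypothetical shape: "workflow_type" and "agents" are invented names for
# illustration; consult the tool's schema before relying on them.
await session.call_tool("create_fastagent_workflow", {
    "name": "draft_then_review",
    "workflow_type": "chain",  # chain | parallel | router | evaluator
    "agents": [
        {"name": "drafter", "instruction": "Write a first draft."},
        {"name": "reviewer", "instruction": "Critique and tighten the draft."},
    ],
})
```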
- `run_bash_command` - Execute system commands safely
- `run_workflow` - Multi-step workflow execution
Interactive prompts to guide common tasks:
- `ollama_guide` - Interactive user guide
- `ollama_run_prompt` - Simple prompt execution
- `model_comparison` - Compare multiple models
- `fast_agent_workflow` - Multi-agent workflows
- `script_executor` - Template execution
- `batch_processing` - Multiple prompt processing
- `iterative_refinement` - Content improvement workflows
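These prompts can also be fetched programmatically; for example (reusing the `session` from the client sketch above; the `model` argument is hypothetical):

```python
# Hypothetical argument; list the real ones with session.list_prompts().
prompt = await session.get_prompt("ollama_run_prompt", {"model": "llama3.2"})
```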
ollama-mcp-server/
├── src/ollama_mcp_server/
│ └── server.py # Main server code
├── outputs/ # Generated output files
├── scripts/ # Saved script templates
├── workflows/ # Workflow definitions
├── fast-agent-scripts/ # Fast-agent Python scripts
├── prompts/ # Usage guides
│ ├── tool_usage_guide.md
│ ├── prompt_templates_guide.md
│ └── setup_guide.md
├── example_mcp_config.json # Claude Desktop config
└── README.md
cd ollama-mcp-server
uv run python -m ollama_mcp_server.server
mcp dev src/ollama_mcp_server/server.py
The server includes comprehensive process leak prevention:
- Signal Handling: Proper SIGTERM/SIGINT handling
- Background Task Tracking: All async tasks monitored
- Resource Cleanup: Automatic process termination
- Memory Management: Prevents accumulation of zombie processes
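These guarantees boil down to a standard asyncio pattern: track every background task and cancel the whole set on shutdown. A minimal sketch of the idea (illustrative, not the server's actual code; `add_signal_handler` is Unix-only):

```python
# Minimal sketch of the cleanup pattern (not the server's exact code).
import asyncio
import signal

background_tasks = set()  # every background job gets tracked here

def track(task: asyncio.Task) -> None:
    """Register a task so shutdown can cancel it; drop it when it finishes."""
    background_tasks.add(task)
    task.add_done_callback(background_tasks.discard)

async def main() -> None:
    loop = asyncio.get_running_loop()
    stop = asyncio.Event()
    # Unix-only: translate SIGTERM/SIGINT into a graceful-shutdown event.
    for sig in (signal.SIGTERM, signal.SIGINT):
        loop.add_signal_handler(sig, stop.set)

    track(asyncio.create_task(asyncio.sleep(3600)))  # stand-in for a real job

    await stop.wait()
    for task in list(background_tasks):
        task.cancel()
    # Await the cancellations so no orphaned work or zombie process remains.
    await asyncio.gather(*background_tasks, return_exceptions=True)

asyncio.run(main())
```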
Monitor health with:
ps aux | grep mcp | wc -l # Should show <10 processes
1. Use "ollama_run_prompt" prompt in Claude
2. Specify model and prompt text
3. Get immediate results
1. Use "fast_agent_workflow" prompt
2. Choose workflow type (chain/parallel/router/evaluator)
3. Define agents and initial prompt
4. Monitor execution
1. Create a template with `save_script`
2. Use variables in the template body: `{variable_name}`
3. Execute it with `run_script`
4. Pass variable values as a JSON object (see the sketch below)
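Putting those four steps together (a self-contained sketch; the `name`/`content`/`variables` argument keys are assumptions to verify against the tool schemas):

```python
# Hedged sketch: the argument keys below are illustrative, not verified
# against the server's actual save_script/run_script schemas.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

params = StdioServerParameters(
    command="uv",
    args=["run", "python", "-m", "ollama_mcp_server.server"],
)

async def main() -> None:
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Steps 1-2: save a template whose body uses {variable} placeholders.
            await session.call_tool("save_script", {
                "name": "summarize",
                "content": "Summarize the following in {style} style:\n\n{text}",
            })
            # Steps 3-4: run it, passing variable values as a JSON object.
            result = await session.call_tool("run_script", {
                "name": "summarize",
                "variables": {"style": "bullet-point", "text": "MCP servers expose tools..."},
            })
            print(result.content)

asyncio.run(main())
```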
- Model not found: Use `list_ollama_models` to get exact model names
- Connection issues: Start Ollama with `ollama serve`
- High process count: The server prevents process leaks automatically
- Job stuck: Use `cancel_job` to stop problematic tasks
- Follow the MCP Python SDK development guidelines
- Use proper type hints and docstrings
- Test all new features thoroughly
- Ensure process cleanup in all code paths
This project follows the same license terms as the MCP Python SDK.
Built on the Model Context Protocol and Ollama with process management patterns from MCP best practices.
Ready to get started? Check `prompts/setup_guide.md` for detailed installation instructions!