A powerful, modular command-line interface for interacting with AI models, enhanced with Model Context Protocol (MCP) tool integration. A centralized architecture makes it easy to add new LLM providers while providing robust tool integration and subagent management capabilities.
- Multiple AI Backends: Support for Anthropic Claude, OpenAI GPT, DeepSeek, Google Gemini, and OpenRouter with easy extensibility
- MCP Model Server: Expose all AI models as standardized MCP tools with persistent conversations
- Modular Architecture: Provider-model separation with centralized base agent for maximum flexibility
- MCP Server Integration: Connect to multiple MCP servers for extended functionality
- Persistent Conversations: Maintain conversation context across multiple tool calls for each AI model
- Interactive Chat: Real-time conversation with AI models and comprehensive tool access
- Subagent System: Spawn focused subagents for complex tasks with automatic coordination
- Command-Line Tools: Manage MCP servers and query models directly
- Built-in Tools: File operations, bash execution, web fetching, todo management, and task delegation
- Enhanced Tool Display: Full parameter visibility and complete response output (no truncation)
- Clone the repository:

  ```bash
  git clone https://github.com/amranu/mcp-agent.git
  cd mcp-agent
  ```

- Install the package:

- Configure API keys (environment variables):

  ```bash
  # Set environment variables for the providers you want to use
  export OPENAI_API_KEY=your_openai_api_key_here
  export DEEPSEEK_API_KEY=your_deepseek_api_key_here
  export ANTHROPIC_API_KEY=your_anthropic_api_key_here
  export GEMINI_API_KEY=your_gemini_api_key_here
  export OPENROUTER_API_KEY=your_openrouter_api_key_here
  ```

- Start the agent:

  ```bash
  # Start with automatic provider selection
  agent chat

  # Or specify a particular provider-model combination
  agent chat --model openai:gpt-4-turbo-preview
  ```

Smart Provider Selection: The agent automatically selects a configured provider based on available API keys.
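The automatic selection can be pictured as a priority scan over the configured API keys. The function name, priority order, and environment handling below are illustrative assumptions, not the agent's actual implementation:

```python
import os

# Hypothetical priority order -- the real agent's ordering may differ
PROVIDER_ENV_KEYS = [
    ("anthropic", "ANTHROPIC_API_KEY"),
    ("openai", "OPENAI_API_KEY"),
    ("deepseek", "DEEPSEEK_API_KEY"),
    ("gemini", "GEMINI_API_KEY"),
    ("openrouter", "OPENROUTER_API_KEY"),
]

def select_provider(env=os.environ):
    """Return the first provider whose API key is set, else None."""
    for provider, key in PROVIDER_ENV_KEYS:
        if env.get(key):
            return provider
    return None
```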
Configuration is automatically saved to ~/.config/mcp-agent/config.json and persists across sessions.
Start an interactive chat session with your configured AI model and MCP tools:
Start the MCP model server to expose all AI models as standardized MCP tools with persistent conversations:
The model server exposes AI models from 5 providers:
- Anthropic: Claude models
- OpenAI: GPT models
- DeepSeek: Chat and reasoning models
- Gemini: Google's Gemini models
- OpenRouter: Multi-provider access
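One way to picture persistent conversations behind the model tools is a store keyed by conversation id, with each tool call appending to history. This sketch is illustrative only and not the server's actual data model:

```python
class ConversationStore:
    """Hypothetical per-conversation message history, keyed by id."""

    def __init__(self):
        self._conversations = {}

    def append(self, conversation_id, role, content):
        """Add a message, creating the conversation on first use."""
        history = self._conversations.setdefault(conversation_id, [])
        history.append({"role": role, "content": content})
        return history

    def clear(self, conversation_id):
        """Drop a conversation's context entirely."""
        self._conversations.pop(conversation_id, None)

    def history(self, conversation_id):
        return self._conversations.get(conversation_id, [])
```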
Ask a one-time question without entering interactive mode:
Switch between different AI models using the provider-model format (configuration persists automatically):
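Parsing the provider-model format can be done by splitting on the first colon; the helper name and the decision to split only once are our assumptions, not the project's code:

```python
def parse_model_spec(spec):
    """Split a 'provider:model' string into its two parts.

    Splits on the first colon only, so model identifiers that
    themselves contain colons would be preserved intact.
    """
    provider, sep, model = spec.partition(":")
    if not sep or not provider or not model:
        raise ValueError(f"expected 'provider:model', got {spec!r}")
    return provider, model
```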
Or use slash commands within interactive chat:
The agent uses an automatic persistent configuration system that saves settings to ~/.config/mcp-agent/config.json:
- API Keys: Set via environment variables
- Model Preferences: Automatically saved when using switch commands
- MCP Servers: Managed through the CLI and persisted across sessions
- Tool Permissions: Configurable with session-based approval system
Configure the agent through environment variables:
Configuration changes made via commands (like model switching) are automatically persisted and don't require manual .env file editing.
The agent comes with comprehensive built-in tools:
- File Operations: Read, write, edit, and search files with surgical precision
- Directory Operations: List directories, get current path, navigate filesystem
- Shell Execution: Run bash commands with full output capture
- Web Fetching: Download and process web content
- Todo Management: Organize and track tasks across sessions
- Task Delegation: Spawn focused subagents for complex or context-heavy tasks
- Text Processing: Search, replace, and manipulate text content
All model tools support:
- Persistent Conversations: Maintain context across calls
- Conversation Management: Create, continue, or clear conversations
- Full Parameter Control: Temperature, max_tokens, system prompts
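A hypothetical model-tool request combining these controls might be assembled as below. Every field name here is an assumption about shape, not the server's documented schema:

```python
def build_model_call(prompt, conversation_id=None, temperature=0.7,
                     max_tokens=1024, system=None):
    """Assemble an illustrative model-tool request, dropping unset fields."""
    payload = {
        "prompt": prompt,
        "temperature": temperature,
        "max_tokens": max_tokens,
        "conversation_id": conversation_id,  # continue an existing conversation
        "system": system,                    # optional system prompt
    }
    return {k: v for k, v in payload.items() if v is not None}
```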
Within the interactive chat, use these slash commands:
- /help - Show available commands
- /tools - List all available tools
- /clear - Clear conversation history
- /model - Show current model
- /tokens - Show token usage
- /compact - Compact conversation history
- /switch <provider>:<model> - Switch to any provider-model combination
- /task - Spawn a subagent for complex tasks
For example, in chat: /switch openai:gpt-4-turbo-preview changes the active model, and /task delegates work to a focused subagent.
For complex or context-heavy tasks, delegate to focused subagents:
Subagents work independently and automatically return results to the main conversation.
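Subagent isolation of this kind is often implemented by handing the child a restricted tool set. A minimal sketch under that assumption (the names are ours, not the project's):

```python
def make_subagent_spec(task, available_tools, allowed_tools):
    """Build a restricted tool list for a focused subagent.

    The subagent only sees tools in the intersection, so a
    context-heavy task cannot reach tools it was not granted.
    """
    granted = [t for t in available_tools if t in set(allowed_tools)]
    return {"task": task, "tools": granted}
```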
- Provider-Model Separation: API providers decoupled from model characteristics
- MCP Model Server: Standardized access to all AI models via MCP protocol
- Persistent Conversations: Conversation context maintained across tool calls
- Easy Extensibility: Adding new providers or models requires minimal code
- Robust Tool Integration: Unified tool execution with provider-specific optimizations
- Intelligent Subagent System: Automatic task delegation and coordination
- Multi-Provider Access: Same model accessible through different providers
- Enhanced Visibility: Full parameter display and complete response output
Please read our CONTRIBUTING.md file for more details on our code of conduct and the process for submitting pull requests.
- Fork the repository
- Create a feature branch: git checkout -b feature-name
- Make your changes
- Add tests if applicable
- Commit your changes: git commit -m 'Add feature'
- Push to the branch: git push origin feature-name
- Submit a pull request
- Python 3.10+
- API keys for desired providers:
- Anthropic API key (for Claude models)
- OpenAI API key (for GPT models)
- DeepSeek API key (for DeepSeek models)
- Google AI Studio API key (for Gemini models)
- OpenRouter API key (for multi-provider access)
- FastMCP for MCP server functionality
- Node.js (for MCP servers that require it)
- API Keys: Stored as environment variables
- Configuration: Automatically managed in user home directory (~/.config/mcp-agent/)
- MCP Servers: Local configurations with session-based tool permissions
- Tool Execution: Built-in permission system for sensitive operations
- Subagent Isolation: Subagents run in controlled environments with specific tool access
- Subagent Permissions: Can be configured to bypass permission checks for automated workflows via SUBAGENT_PERMISSIONS_BYPASS=true
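A permission gate that honors such a bypass flag might look like the sketch below. Only the SUBAGENT_PERMISSIONS_BYPASS name comes from the project; the function and its logic are illustrative:

```python
import os

def needs_approval(tool_name, sensitive_tools, env=os.environ):
    """Decide whether a subagent tool call requires user approval."""
    if env.get("SUBAGENT_PERMISSIONS_BYPASS", "").lower() == "true":
        return False  # automated workflows skip the approval prompt
    return tool_name in sensitive_tools
```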
This project is licensed under the MIT License - see the LICENSE file for details.
- Model Context Protocol (MCP) for the extensible tool integration framework
- DeepSeek for the powerful reasoning models
- Google AI for Gemini model access
- FastMCP for the Python MCP client implementation
- 🐛 Report Issues
- 💬 Discussions
- 📖 Wiki
Happy coding with MCP Agent! 🤖✨