Allos is an open-source, provider-agnostic agentic SDK for building production-ready AI agents that work with any LLM provider. Inspired by Anthropic's Claude Code, Allos delivers comparable capabilities without locking you into a single ecosystem.
The Problem: Most agentic frameworks force you to choose between vendors, making it expensive and risky to switch models.
The Solution: Allos provides a unified interface across OpenAI, Anthropic, Ollama, Google, and more—so you can use the best model for each task without rewriting your code.
Switch seamlessly between OpenAI, Anthropic, Ollama, and other LLM providers. Use GPT-4 for one task, Claude for another, or run models locally—all with the same code.
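The idea can be sketched in a few lines. This is not the actual Allos API, just a minimal self-contained illustration of the provider-agnostic pattern: every backend implements the same interface, so the calling code never changes when you swap providers (the `Provider`, `OpenAIProvider`, and `AnthropicProvider` names here are hypothetical stand-ins).

```python
from typing import Protocol

class Provider(Protocol):
    """Minimal provider interface: every backend exposes the same method."""
    def complete(self, prompt: str) -> str: ...

class OpenAIProvider:
    def complete(self, prompt: str) -> str:
        # A real implementation would call the OpenAI API here.
        return f"[openai] {prompt}"

class AnthropicProvider:
    def complete(self, prompt: str) -> str:
        # A real implementation would call the Anthropic API here.
        return f"[anthropic] {prompt}"

def run_task(provider: Provider, prompt: str) -> str:
    # The calling code is identical; only the provider instance differs.
    return provider.complete(prompt)

print(run_task(OpenAIProvider(), "summarize this file"))
print(run_task(AnthropicProvider(), "summarize this file"))
```

Because the agent depends only on the interface, adding a new backend (a local Ollama model, say) means writing one new class, not rewriting your agent.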
Built-in tools for:
- 📁 File operations (read, write, edit)
- 💻 Shell command execution
- 🌐 Web search and fetching (coming soon)
- 🔌 MCP (Model Context Protocol) extensibility (coming soon)
- ⚡ Context Management: Automatic context window optimization
- 🔐 Fine-grained Permissions: Control what your agent can and cannot do
- 💾 Session Management: Save and resume conversations
- 📊 Production Ready: Built-in error handling, logging, and monitoring
- 🎨 Extensible: Easy to add custom tools and providers
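To illustrate the extensibility point, here is a hedged sketch of what a custom tool can look like. The `Tool` base class and registry below are illustrative stand-ins, not the actual Allos classes; the real SDK's API may differ.

```python
class Tool:
    """Base class for tools: a name, a description, and an execute method."""
    name: str = ""
    description: str = ""

    def execute(self, **kwargs) -> str:
        raise NotImplementedError

class WordCountTool(Tool):
    name = "word_count"
    description = "Count the words in a piece of text."

    def execute(self, text: str = "") -> str:
        return str(len(text.split()))

# A registry maps tool names to instances so the agent can dispatch calls by name.
registry = {t.name: t for t in [WordCountTool()]}
print(registry["word_count"].execute(text="the quick brown fox"))  # → 4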
| Feature | Allos | Claude Code | Other Frameworks |
|---|---|---|---|
| Provider Agnostic | ✅ | ❌ (Anthropic only) | ⚠️ (Complex) |
| Local Models Support | ✅ | ❌ | ⚠️ |
| Simple API | ✅ | ✅ | ❌ |
| Built-in Tools | ✅ | ✅ | ⚠️ |
| MCP Support | 🚧 | ✅ | ❌ |
| Production Ready | ✅ | ✅ | ⚠️ |
| Open Source | ✅ MIT | ⚠️ Limited | ✅ |
See the full workflow in action by running our CLI demo script:
We recommend using uv, a fast Python package manager.
The allos CLI is the quickest way to use the agent.
- Provider Layer: Unified interface for all LLM providers
- Tool System: Extensible toolkit with built-in and custom tools
- Agent Core: Main agentic loop with planning and execution
- Context Manager: Automatic context window optimization
- CLI: User-friendly command-line interface
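The Agent Core's agentic loop can be sketched as follows. This is a toy, self-contained version (with a stubbed model) meant only to show the shape of the loop, ask the model, execute any requested tool, feed the observation back, stop on a final answer; the real Allos implementation is more involved.

```python
def agent_loop(llm, tools, user_goal, max_steps=5):
    """Toy agentic loop: the model either requests a tool or finishes."""
    history = [("user", user_goal)]
    for _ in range(max_steps):
        action = llm(history)                  # model decides the next step
        if action["type"] == "final":
            return action["text"]
        tool_result = tools[action["tool"]](**action["args"])
        history.append(("tool", tool_result))  # observation goes back into context
    return "step limit reached"

# Stubbed model: first requests a tool call, then produces a final answer.
def fake_llm(history):
    if history[-1][0] == "user":
        return {"type": "tool", "tool": "add", "args": {"a": 2, "b": 3}}
    return {"type": "final", "text": f"result is {history[-1][1]}"}

print(agent_loop(fake_llm, {"add": lambda a, b: a + b}, "add 2 and 3"))  # → result is 5
```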
| Provider | Status | Models | Features |
|---|---|---|---|
| OpenAI | ✅ Ready | GPT-5, GPT-4, GPT-4o | Tool calling, streaming |
| Anthropic | ✅ Ready | Claude 3, Claude 4 (Opus, Sonnet, Haiku) | Tool calling, streaming |
| Ollama | 🚧 Coming Soon | Llama, Mistral, Qwen, etc. | Local models |
| Google | 🚧 Coming Soon | Gemini Pro, Gemini Ultra | Tool calling |
| Cohere | 📋 Planned | Command R, Command R+ | Tool calling |
| Custom | ✅ Ready | Any OpenAI-compatible API | Extensible |
| Tool | Description | Default Permission |
|---|---|---|
| read_file | Read file contents | Always Allow |
| write_file | Write/create files | Ask User |
| edit_file | Edit files (string replace) | Ask User |
| list_directory | List directory contents | Always Allow |
| shell_exec | Execute shell commands | Ask User |
| web_search | Search the web | 📋 Planned |
| web_fetch | Fetch web page content | 📋 Planned |
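The permission model above can be sketched as a simple gate in front of every tool call. This is an illustrative stand-in, not the actual Allos permission code: safe tools are auto-approved, risky ones prompt the user (the `PERMISSIONS` dict and `is_allowed` helper are hypothetical).

```python
# Default permission policy per tool, mirroring the table above.
PERMISSIONS = {
    "read_file": "always_allow",
    "list_directory": "always_allow",
    "write_file": "ask_user",
    "edit_file": "ask_user",
    "shell_exec": "ask_user",
}

def is_allowed(tool_name, ask=input):
    """Gate a tool call: auto-approve safe tools, prompt for risky ones."""
    policy = PERMISSIONS.get(tool_name, "ask_user")  # unknown tools default to asking
    if policy == "always_allow":
        return True
    return ask(f"Allow {tool_name}? [y/N] ").strip().lower() == "y"

print(is_allowed("read_file"))                      # auto-approved, prints True
print(is_allowed("shell_exec", ask=lambda _: "y"))  # injected approval, prints True
```

Injecting the `ask` callable keeps the gate testable and lets a host application replace the terminal prompt with its own approval UI.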
- Getting Started - Installation and first steps
- Quickstart Guide - 5-minute tutorial
- Providers - Provider configuration
- Tools - Using built-in tools
- Custom Tools - Creating your own tools
- CLI Reference - Command-line options
- API Reference - Python API documentation
- Architecture - System design
- Initial architecture design
- Directory structure
- Provider layer (OpenAI, Anthropic)
- Tool system (filesystem, shell) with user-approval permissions
- Agent core with agentic loop and session management
- CLI interface
- Comprehensive unit, integration, and E2E test suites
- Final documentation and launch prep
See MVP_ROADMAP.md for detailed MVP timeline.
- Ollama integration (local models)
- Google Gemini support
- Web search and fetch tools
- Advanced context management
- Plugin system
- Configuration files (YAML/JSON)
- Session management improvements
- MCP (Model Context Protocol) support
- Subagents and delegation
- Pydantic AI integration
- Smolagents compatibility
- Multi-modal support
- Advanced monitoring and observability
- Cloud deployment support
The current MVP of the Allos Agent SDK is focused on providing a robust foundation. It intentionally excludes some advanced features that are planned for future releases:
- No Streaming Support: The agent currently waits for the full response from the LLM and tools. Real-time streaming of responses is a post-MVP feature.
- Limited Context Management: The agent performs a basic check to prevent exceeding the context window but does not yet implement advanced context compaction or summarization for very long conversations.
- No Async Support: The core Agent and Tool classes are synchronous. An async-first version is planned for a future release.
- Limited Provider Support: The MVP includes openai and anthropic. Support for ollama, google, and others is on the roadmap.
- No Web Tools: Built-in tools for web search (web_search) and fetching URLs (web_fetch) are planned but not yet implemented.
- Basic Error Recovery: While the agent can recover from tool execution errors (like permission denied), it does not yet have sophisticated strategies for retrying failed API calls or self-correcting flawed plans.
Please see our full ROADMAP.md for more details on our plans for these and other features.
🔵 MVP Development is almost complete
All major features for the MVP are implemented and tested.
- ✅ Providers: OpenAI and Anthropic are fully supported.
- ✅ Tools: Secure filesystem and shell tools are included.
- ✅ Agent Core: The agentic loop, permissions, and session management are functional.
- ✅ CLI: A polished and powerful CLI is the primary user interface.
- ✅ Python API: The underlying Python API is stable and ready for use.
Expected MVP Release: 6-8 weeks from project start
We welcome early contributors! See Contributing below.
We're building Allos in the open and would love your help! Whether you're:
- 🐛 Reporting bugs
- 💡 Suggesting features
- 📖 Improving documentation
- 🔧 Submitting PRs
- ⭐ Starring the repo (helps a lot!)
All contributions are welcome! See CONTRIBUTING.md for guidelines.
Ensure you have uv installed. Check out UV Installation Instructions for more information.
A huge thank you to our first 100 stargazers! You're helping build the future of AI agent development. 🚀
No stargazers yet. Be the first! ⭐
Not featured yet? ⭐ Star us on GitHub to join the Hall of Fame!
Allos (Greek: ἄλλος) means "other" or "different" - representing our core philosophy of choice and flexibility. Just as the word implies alternatives and options, Allos gives you the freedom to choose any LLM provider without constraints.
Allos is open source and available under the MIT License.
Inspired by:
- Anthropic's Claude Code - For showing what's possible with agentic coding
- LangChain - For pioneering LLM frameworks
- AutoGPT - For autonomous agent patterns
- GitHub Issues: Report bugs or request features
- Discussions: Join the conversation
- Twitter: @allos_sdk (coming soon)
- Discord: Join our community (coming soon)