- 🚀 Introduction - Understand the vision and architecture behind ROMA
- 📦 Setup - Installation and environment setup
- 🤖 Agents Guide - Learn how to create and customize your own agents
- ⚙️ Configuration - Detailed configuration options
- 🗺️ Roadmap - See what's coming next for ROMA
ROMA is a meta-agent framework that uses recursive hierarchical structures to solve complex problems. By breaking down tasks into parallelizable components, ROMA enables agents to tackle sophisticated reasoning challenges while maintaining transparency that makes context-engineering and iteration straightforward. The framework offers parallel problem solving where agents work simultaneously on different parts of complex tasks, transparent development with a clear structure for easy debugging, and proven performance demonstrated through our search agent's strong benchmark results. We've shown the framework's effectiveness, but this is just the beginning. As an open-source and extensible platform, ROMA is designed for community-driven development, allowing you to build and customize agents for your specific needs while benefiting from the collective improvements of the community.
The ROMA framework processes tasks through a recursive plan–execute loop:
- Atomizer – Decides whether a request is atomic (directly executable) or requires planning.
- Planner – If planning is needed, the task is broken into smaller subtasks. Each subtask is fed back into the Atomizer, making the process recursive.
- Executors – Handle atomic tasks. Executors can be LLMs, APIs, or even other agents — as long as they implement an agent.execute() interface.
- Aggregator – Collects and integrates results from subtasks. Importantly, the Aggregator produces the answer to the original parent task, not just raw child outputs.
The task flow runs in three directions:
- Top-down: Tasks are decomposed into subtasks recursively.
- Bottom-up: Subtask results are aggregated upwards into solutions for parent tasks.
- Left-to-right: If a subtask depends on the output of a previous one, it waits until that subtask completes before execution.
This structure makes the system flexible, recursive, and dependency-aware — capable of decomposing complex problems into smaller steps while ensuring results are integrated coherently.
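The loop is easiest to see as code. Below is a minimal Python sketch of the recursive cycle; the Atomizer, Planner, Executor, and Aggregator objects and their method names mirror the roles described above, but they are illustrative assumptions for this sketch, not ROMA's actual classes or API.

```python
# Illustrative sketch of the recursive plan-execute loop described above.
# Object and method names are assumptions, not ROMA's real API.

def solve(task, atomizer, planner, executor, aggregator):
    """Recursively solve a task: execute it directly if atomic,
    otherwise plan subtasks, solve them, and aggregate the results."""
    if atomizer.is_atomic(task):
        # Atomic tasks go straight to an executor (an LLM, an API, or another agent).
        return executor.execute(task)

    # Non-atomic tasks are decomposed; each subtask re-enters the same loop (top-down).
    subtasks = planner.plan(task)
    results = {}
    for subtask in subtasks:  # left-to-right: respect declared dependencies
        # A subtask may depend on earlier outputs, so pass those results along.
        subtask.context = {dep: results[dep] for dep in subtask.depends_on}
        results[subtask.id] = solve(subtask, atomizer, planner, executor, aggregator)

    # Bottom-up: the aggregator answers the ORIGINAL parent task,
    # not just a concatenation of child outputs.
    return aggregator.aggregate(task, results)
```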
Choose between:
- Docker Setup (Recommended) - One-command setup with isolation
- Native Setup - Direct installation for development
- Framework: Built on AgnoAgents
- Backend: Python 3.12+ with FastAPI/Flask
- Frontend: React + TypeScript with real-time WebSocket
- LLM Support: Any provider via LiteLLM (see the executor sketch after this list)
- Data Persistence: Enterprise S3 mounting with security validation
  - 🔒 goofys FUSE mounting for low-latency file access
  - 🛡️ Path injection protection with comprehensive validation
  - 🔐 AWS credentials verification before operations
  - 📁 Dynamic Docker Compose with secure volume mounting
- Code Execution: E2B sandboxes with unified S3 integration
- Security: Production-grade validation and error handling
- Features: Multi-modal, tools, MCP, hooks, caching
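Because model calls go through LiteLLM, any supported provider can sit behind the same executor interface. The sketch below shows what such a wrapper could look like; the LLMExecutor class and its execute() method are assumptions for illustration, not ROMA's actual classes, and only litellm.completion is a real library call.

```python
from litellm import completion  # pip install litellm


class LLMExecutor:
    """Illustrative executor that routes an atomic task to any LiteLLM-supported model.
    Class and method names are assumptions for this sketch, not ROMA's API."""

    def __init__(self, model: str = "openai/gpt-4o"):
        # Any LiteLLM model string works here, e.g. "anthropic/claude-3-5-sonnet-20240620"
        # or "ollama/llama3" for a local model. The provider's API key is assumed to be
        # set in the environment (e.g. OPENAI_API_KEY).
        self.model = model

    def execute(self, task: str) -> str:
        response = completion(
            model=self.model,
            messages=[{"role": "user", "content": task}],
        )
        return response.choices[0].message.content


# Usage: swap providers by changing only the model string.
# print(LLMExecutor("anthropic/claude-3-5-sonnet-20240620").execute("Summarize RFC 9110"))
```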
See setup docs for detailed instructions.
For secure code execution capabilities, optionally set up E2B sandboxes (a usage sketch follows the feature list):
E2B Features:
- 🔒 Secure Code Execution - Run untrusted code in isolated sandboxes
- ☁️ S3 Integration - Automatic data sync between local and sandbox environments
- 🚀 goofys Mounting - High-performance S3 filesystem mounting
- 🔧 AWS Credentials - Passed securely via Docker build arguments
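As a rough illustration of what sandboxed execution looks like with E2B's Python SDK: the snippet below is a sketch, exact class and method names vary between SDK versions (check the E2B docs), and it assumes E2B_API_KEY is set in the environment.

```python
# Sketch of isolated code execution with E2B's code-interpreter SDK.
# Assumes `pip install e2b-code-interpreter` and E2B_API_KEY in the environment;
# method names may differ slightly between SDK versions.
from e2b_code_interpreter import Sandbox

untrusted_code = """
import pandas as pd
print(pd.DataFrame({"tvl": [1.2, 3.4]}).describe())
"""

sandbox = Sandbox()                            # spins up an isolated cloud sandbox
execution = sandbox.run_code(untrusted_code)   # code runs inside the sandbox, not locally
print(execution.logs)                          # stdout/stderr captured from the sandbox
sandbox.kill()                                 # tear the sandbox down when done
```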
Note: These agents are demonstrations built using ROMA's framework through simple vibe-prompting and minimal manual tuning. They showcase how easily you can create high-performance agents with ROMA, rather than being production-ready solutions. Our mission is to empower the community to build, share, and get rewarded for creating innovative agent recipes and use-cases.
ROMA comes with example agents that demonstrate the framework's capabilities:
A versatile agent powered by ChatGPT Search Preview for handling diverse tasks:
- Intelligent Search: Leverages OpenAI's latest search capabilities for real-time information
- Flexible Planning: Adapts task decomposition based on query complexity
- Multi-Domain: Handles everything from technical questions to creative projects
- Quick Prototyping: Perfect for testing ROMA's capabilities without domain-specific setup
Perfect for: General research, fact-checking, exploratory analysis, quick information gathering
A comprehensive research system that breaks down complex research questions into manageable sub-tasks:
- Smart Task Decomposition: Automatically splits research topics into search, analysis, and synthesis phases
- Parallel Information Gathering: Executes multiple searches simultaneously for faster results (see the sketch below)
- Multi-Source Integration: Combines results from web search, Wikipedia, and specialized APIs
- Intelligent Synthesis: Aggregates findings into coherent, well-structured reports
Perfect for: Academic research, market analysis, competitive intelligence, technical documentation
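The parallel information-gathering step boils down to fanning out independent searches concurrently. Here is a generic sketch of that pattern; the search_web function is a hypothetical placeholder, not a ROMA or provider API.

```python
import asyncio


async def search_web(query: str) -> str:
    """Hypothetical placeholder for a single web/Wikipedia/API search call."""
    await asyncio.sleep(0.1)  # stand-in for network latency
    return f"results for: {query}"


async def gather_information(queries: list[str]) -> list[str]:
    # Independent searches have no dependencies on each other,
    # so they can run concurrently and be aggregated afterwards.
    return await asyncio.gather(*(search_web(q) for q in queries))


# results = asyncio.run(gather_information([
#     "ROMA meta-agent framework",
#     "recursive hierarchical planning agents",
# ]))
```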
Specialized financial analysis agent with deep blockchain and DeFi expertise:
- Real-Time Market Data: Integrates with Binance, CoinGecko, and DefiLlama APIs (see the sketch below)
- On-Chain Analytics: Access to Arkham Intelligence for wallet tracking and token flows
- Technical Analysis: Advanced charting with OHLC data and market indicators
- DeFi Metrics: TVL tracking, yield analysis, protocol comparisons
- Secure Execution: Runs analysis in E2B sandboxes with data persistence
Perfect for: Token research, portfolio analysis, DeFi protocol evaluation, market trend analysis
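For a flavor of the kind of data tool such an agent calls, here is a minimal sketch using CoinGecko's public simple/price endpoint. The function name is illustrative and not part of the agent's actual toolkit; Binance, DefiLlama, and Arkham integrations would follow the same request pattern with their own endpoints and API keys.

```python
import requests


def fetch_spot_prices(coin_ids: list[str], vs_currency: str = "usd") -> dict:
    """Illustrative market-data tool: spot prices from CoinGecko's public API.
    Not the crypto agent's actual toolkit; shown only to make the pattern concrete."""
    resp = requests.get(
        "https://api.coingecko.com/api/v3/simple/price",
        params={"ids": ",".join(coin_ids), "vs_currencies": vs_currency},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"bitcoin": {"usd": 64000.0}, "ethereum": {"usd": 3100.0}}


# print(fetch_spot_prices(["bitcoin", "ethereum"]))
```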
All three agents demonstrate ROMA's recursive architecture in action, showing how complex queries that would overwhelm single-pass systems can be elegantly decomposed and solved. They serve as templates and inspiration for building your own specialized agents.
Access all the pre-defined agents through the frontend at localhost:3000 after setting up the backend at localhost:5000. Please check out the Setup and Agents guides to get started!
We evaluate our simple implementation of a search system using ROMA, called ROMA-Search, across three benchmarks: SEAL-0, FRAMES, and SimpleQA.
Below are the performance graphs for each benchmark.
SealQA is a new challenging benchmark for evaluating Search-Augmented Language models on fact-seeking questions where web search yields conflicting, noisy, or unhelpful results.
View full results
A comprehensive evaluation dataset designed to test the capabilities of Retrieval-Augmented Generation (RAG) systems across factuality, retrieval accuracy, and reasoning.
View full results
A factuality benchmark that measures the ability of language models to answer short, fact-seeking questions.
| Feature | Description |
|---------|-------------|
| Hierarchical task decomposition | Automatically breaks down complex tasks into manageable subtasks with intelligent dependency management, and runs independent subtasks in parallel. |
| Agent-agnostic LLM support | Works with any provider (OpenAI, Anthropic, Google, local models) through a unified interface; as long as it exposes an agent.run() command, you can use it. |
| Transparent stage tracing | Stage tracing shows exactly what happens at each step, so you can debug and optimize with full visibility. |
| Tool and MCP integration | Seamlessly integrate external tools and protocols with configurable intervention points; already includes production-grade connectors such as E2B, file read/write, and more. |
This framework would not have been possible without these amazing open-source contributions!
- Inspired by the hierarchical planning approach described in "Beyond Outlining: Heterogeneous Recursive Planning" by Xiong et al.
- Pydantic - Data validation using Python type annotations
- Agno - Framework for building AI agents
- E2B - Cloud runtime for AI agents
If you use the ROMA repo in your research, please cite:
This project is licensed under the MIT License - see the LICENSE file for details.