A complete AI agent platform that lets users create, deploy, and manage intelligent agents with real-world integrations. Built with a microservices architecture using Next.js, Python, and the Model Context Protocol (MCP).
Logos is a platform where people can create AI agents that actually get things done. Think "AI assistant that can read your emails, schedule meetings, and manage your Google Drive" rather than just a chatbot. Users describe what they want their agent to do in plain English, and the system builds a working agent that can interact with their apps and services.
Our system is built with several key components that work together:
This monorepo contains three main applications:
The main user interface built with Next.js 14. This is where users:
- Create and configure AI agents
- Chat with their agents in real-time
- Manage integrations with external services
- View agent execution logs and history
Tech: Next.js 14, TypeScript, Tailwind CSS, Supabase
Role: Layer 1 + Agent Builder from the architecture diagram
The backend service that handles all external integrations. This Python server:
- Manages OAuth tokens and API authentication
- Provides tools for Gmail, Slack, Google Drive, Airtable, etc.
- Uses the Model Context Protocol for clean agent communication
- Handles rate limiting and error recovery
Tech: Python, FastAPI, MCP
Role: MCP Gateway Server + Individual Integration Servers from the architecture diagram
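At its core, the gateway maps tool names to callables and dispatches agent requests to them. A minimal, self-contained sketch of that registration pattern (the `ToolRegistry` class and `gmail_search` tool are illustrative stand-ins, not the actual logos-I code or the MCP SDK):

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ToolRegistry:
    """Minimal stand-in for an MCP server's tool table."""
    tools: dict = field(default_factory=dict)

    def tool(self, name: str):
        """Decorator that registers a callable under a tool name."""
        def wrap(fn: Callable):
            self.tools[name] = fn
            return fn
        return wrap

    def call(self, name: str, **kwargs):
        """Dispatch an agent's tool request by name."""
        if name not in self.tools:
            raise KeyError(f"unknown tool: {name}")
        return self.tools[name](**kwargs)

registry = ToolRegistry()

@registry.tool("gmail_search")
def gmail_search(query: str, max_results: int = 10) -> list[dict]:
    # A real tool would call the Gmail API with the user's stored OAuth token.
    return [{"id": i, "subject": f"stub result for {query!r}"} for i in range(max_results)]
```

An agent request then becomes a lookup plus a call, e.g. `registry.call("gmail_search", query="invoices", max_results=2)`.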
The development and testing environment where agents are validated before deployment:
- Test agent workflows safely without affecting real data
- Debug agent behavior with comprehensive logging
- Prototype new features and integrations
- Run automated tests for reliability
Tech: Python, Workflow Engine, Test Framework
Role: Agent Sandbox from the architecture diagram
Each component needs its own .env file. See the individual README files for detailed setup:
- Frontend: .env.local with Supabase and API keys
- Integration Gateway: .env with OAuth credentials and database URL
- Sandbox: .env with testing configuration
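For the frontend, the Supabase-related entries typically follow the standard Next.js/Supabase naming convention; the variable names below are that convention, not a copy of the actual file, so check the component's README for the authoritative list:

```shell
# logos-frontend/.env.local (variable names illustrative; values are placeholders)
NEXT_PUBLIC_SUPABASE_URL=https://your-project.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=your-anon-key
```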
Visit http://localhost:3000 to see the application.
- User describes agent in the frontend → "I want an agent that summarizes my daily emails"
- Frontend parses intent → Identifies goal, constraints, required integrations (Gmail)
- Agent configuration saved → Stored in database with workflow definition
- User connects Gmail → OAuth flow handled by frontend, tokens stored securely
- User triggers agent → Through chat interface or scheduled trigger
- Frontend requests execution → Sends agent config + user context
- Integration gateway called → Retrieves user's Gmail emails using stored OAuth tokens
- Results returned → Email summaries displayed in chat interface
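End to end, the execution path above can be sketched in a few lines. All names here are hypothetical, and the real gateway call goes over HTTP/MCP rather than a direct function call:

```python
def gateway_fetch_emails(user_id: str) -> list[dict]:
    # Stand-in for the integration gateway: the real service would
    # use the user's stored OAuth tokens to call the Gmail API.
    return [{"subject": "Build passed", "body": "CI is green."}]

def run_agent(agent_config: dict, user_id: str) -> str:
    """Execute a 'summarize my daily emails' agent configuration."""
    if "gmail" not in agent_config["integrations"]:
        raise ValueError("agent is not connected to Gmail")
    emails = gateway_fetch_emails(user_id)
    # A real agent would call an LLM to summarize; we just join subjects.
    return f"{len(emails)} email(s): " + ", ".join(e["subject"] for e in emails)

config = {"goal": "summarize daily emails", "integrations": ["gmail"]}
print(run_agent(config, user_id="u123"))  # → 1 email(s): Build passed
```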
- New integration developed → Created in logos-I with proper MCP tools
- Testing in sandbox → Validated with test workflows and mock data
- Frontend integration → UI components added for new integration
- Production deployment → All components deployed together
| Service | Capabilities | OAuth |
| --- | --- | --- |
| Gmail | Read, send, search emails | Yes |
| Google Calendar | Create events, check availability | Yes |
| Google Drive | Upload, download, search files | Yes |
| Google Sheets | Read, write spreadsheet data | Yes |
| Slack | Send messages, read channels | Yes |
| Airtable | Query databases, create records | API Key |
| Exa Search | Semantic web search | API Key |
1. Create integration in logos-I

   ```bash
   cd logos-I/integrations
   # Create new_service.py with OAuth and tools
   ```

2. Add to tool registry

   ```python
   # In logos-I/tools/__init__.py
   from .new_service import register_tools

   register_tools(mcp)
   ```

3. Add frontend UI

   ```bash
   cd logos-frontend/src/components/integrations
   # Create integration button and OAuth flow
   ```

4. Test in sandbox

   ```bash
   cd logos-sandbox
   # Create test workflow using new integration
   python workflow_tests.py
   ```
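The `register_tools` hook in the new service might look like the following. This is a hypothetical sketch: `FakeMCP` is a toy stand-in for the shared server object, and `new_service_ping` is an invented tool name:

```python
class FakeMCP:
    """Tiny stand-in for the shared MCP server object."""
    def __init__(self):
        self.tools = {}

    def tool(self, name: str):
        def wrap(fn):
            self.tools[name] = fn
            return fn
        return wrap

def register_tools(mcp):
    """Attach this service's tools to the shared server (the hook
    imported from new_service.py in step 2)."""
    @mcp.tool("new_service_ping")
    def ping() -> str:
        # A real tool would authenticate and call the external API.
        return "pong"

mcp = FakeMCP()
register_tools(mcp)
print(mcp.tools["new_service_ping"]())  # → pong
```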
Each component has comprehensive logging:
- Frontend: Browser dev tools + Next.js logs
- Integration Gateway: Python logs with request/response details
- Sandbox: Detailed execution traces and performance metrics
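For the gateway's request/response logging, a minimal stdlib setup might look like this (logger name and log fields are illustrative, not the actual logos-I configuration):

```python
import logging

logger = logging.getLogger("integration_gateway")
logger.setLevel(logging.INFO)
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)

def log_call(tool: str, params: dict, result_count: int) -> None:
    """Emit one line per external API call with request/response details."""
    logger.info("tool=%s params=%s results=%d", tool, params, result_count)

log_call("gmail_search", {"query": "invoices"}, 3)
```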
- OAuth tokens encrypted in database
- API keys in environment variables only
- Request validation at all service boundaries
- Rate limiting on external API calls
- Audit logging for all agent actions
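Rate limiting on outbound API calls is commonly done with a token bucket. A minimal sketch (the rate and burst numbers are made up, not the platform's real quotas):

```python
import time

class TokenBucket:
    """Allow `rate` calls per second, with bursts up to `capacity`."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=2)  # burst of 2, then 5 calls/sec
results = [bucket.allow() for _ in range(3)]
print(results)  # → [True, True, False]
```

The gateway would check `bucket.allow()` before each external call and back off (or queue the request) when it returns False.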
- Frontend: npm run dev (localhost:3000)
- Gateway: python main.py (localhost:8080)
- Database: Supabase hosted or local instance
- Frontend: Deploy to Vercel, Netlify, or similar
- Gateway: Deploy to Railway, Render, or container platform
- Database: Managed Supabase instance
- Environment variables: Set in deployment platform
Each repository has its own contribution guidelines, but generally:
- Follow existing patterns - Look at current code structure
- Add comprehensive tests - Especially for new integrations
- Update documentation - Keep READMEs current
- Test cross-service compatibility - Ensure changes work across all components
- Integration issues: Check logos-I README and logs
- Frontend bugs: Check logos-frontend README and browser console
- Testing problems: Check logos-sandbox README and execution logs
- Architecture questions: Review this README and the individual component docs
This platform is designed to make AI agents actually useful in the real world. Each component plays a crucial role in making that happen: from the user-friendly interface to the robust integration handling to the comprehensive testing environment.





