Agentic AI-Powered Interactive GitHub Repository Analysis with Advanced Chat Interface
A modern, web-based agentic AI assistant that helps you understand, analyze, and explore entire GitHub repositories through an interactive chat interface. Powered by ReAct (Reasoning + Acting) agents that autonomously use tools, reason through complex problems, and provide deep, code-aware insights with beautiful presentation, smart context retrieval, and cost-optimized AI interactions.
- 🤖 Agentic AI System - ReAct agents that think, plan, and use tools autonomously to solve complex coding questions
- 🧠 Real-time Reasoning - Watch the AI think through problems with live Thought/Action/Observation streaming
- 🛠️ Autonomous Tool Usage - AI independently explores files, searches code, and analyzes patterns across your repository
- 🎯 Interactive Repository Chat - ChatGPT-style conversation with full repository context and agent reasoning
- 🌲 Smart Codebase Explorer - High-performance file picker with search and tree navigation
- 📋 Enhanced Markdown - Context-aware emojis, file linking, and professional formatting
- 📎 Smart File & Folder Mentions - Use @filename.py or @folder/path with autocomplete to reference specific files/folders
- 👀 Integrated File Viewer - Side-by-side file viewing pane with syntax highlighting and quick actions
- ⚡ Real-time Streaming - Live AI responses with typing indicators and cost optimization
- 🎨 Modern UI/UX - Professional, responsive interface with session management
- 🔍 Advanced RAG System - Smart context sizing with 10-25 sources from indexed repositories
- 💰 Cost-Optimized AI - OpenRouter prompt caching for 25-90% cost savings
Our system uses ReAct (Reasoning + Acting) agents that don't just answer questions - they autonomously reason through complex problems and take actions to solve them (a minimal sketch of this loop follows the list below):
- Multi-step problem solving - Breaks down complex questions into logical steps
- Dynamic tool selection - Chooses the right tools for each task automatically
- Context-aware decisions - Adapts approach based on repository structure and user intent
- Self-correction - Re-evaluates and adjusts when initial approaches don't work
- File exploration - Autonomously navigates and examines repository structure
- Smart code search - Semantically searches across files for relevant patterns
- Pattern analysis - Identifies architectural patterns and relationships
- Code generation - Creates context-aware examples based on existing codebase patterns
- Live reasoning display - See exactly how the AI thinks through problems
- Tool execution tracking - Watch as tools are selected and executed
- Step-by-step breakdown - Understand the agent's decision-making process
- Interactive feedback - Agent adapts based on your responses and clarifications
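To make the Thought/Action/Observation cycle concrete, here is a minimal, self-contained sketch of a ReAct-style loop. The canned `fake_llm` stub and the two toy tools are hypothetical stand-ins for illustration only, not the project's actual agent code:

```python
# Minimal sketch of the Thought/Action/Observation loop (illustrative only).
# `fake_llm` and the tool bodies are hypothetical stand-ins for the real agent.

TOOLS = {
    "search_codebase": lambda query: f"(matches for {query!r})",
    "read_file": lambda path: f"(contents of {path})",
}

def fake_llm(transcript: str) -> dict:
    # A real agent would call an LLM here; this stub hard-codes two steps for the demo.
    if "Observation:" not in transcript:
        return {"thought": "I should search the code first.",
                "action": "search_codebase", "action_input": "authentication"}
    return {"thought": "I have enough context to answer.",
            "answer": "Authentication is handled in the auth module."}

def react_answer(question: str, max_steps: int = 8) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = fake_llm(transcript)                        # Thought: decide what to do next
        transcript += f"Thought: {step['thought']}\n"
        if "answer" in step:                               # Answer: the agent is done
            return step["answer"]
        observation = TOOLS[step["action"]](step["action_input"])  # Action -> Observation
        transcript += (f"Action: {step['action']}({step['action_input']!r})\n"
                       f"Observation: {observation}\n")
    return "No answer within the step budget."

print(react_answer("How does the authentication work?"))
```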
| Feature | Description |
|---------|-------------|
| Real-time Agent Reasoning | Live display of Thought/Action/Observation cycles as the AI works |
| Context-Aware Headers | Headers automatically get relevant emojis (🐛 for bugs, 🔧 for fixes, etc.) |
| Smart Bullet Points | List items get contextual icons (❌ for errors, ✅ for success, 💡 for tips) |
| File Cross-linking | Automatic detection and highlighting of referenced files |
| @-Mention Autocomplete | Type @ to get intelligent file/folder suggestions with fuzzy search |
| Integrated File Viewer | Click any file in the tree to view it in a side pane with syntax highlighting |
| Agentic Step Visualization | Beautiful UI for displaying AI reasoning steps with type-specific styling |
| Numbered Step Badges | Visual badges for step-by-step instructions |
| Professional Code Blocks | VS Code theme with copy buttons and language detection |
| Enhanced Blockquotes | Important notes with lightbulb indicators |
| Responsive Design | Works beautifully on desktop and mobile |
| Session Management | Multiple concurrent repository sessions with history |
| High-Performance File Picker | Optimized for repositories with 3000+ files |
- Frontend: React 18, TypeScript, Tailwind CSS, shadcn/ui
- Backend: Python FastAPI, FAISS Vector Store, Advanced RAG
- AI Models: OpenAI, OpenRouter (Claude, Mistral, Llama) with prompt caching
- Code Analysis: Tree-sitter parsers for 20+ languages with smart fallbacks
- Real-time: Server-Sent Events for streaming responses
- Performance: Multi-level caching, smart context sizing, async processing
- LlamaIndex-powered agents with autonomous tool selection (a registration sketch follows the tool list below)
- Multi-step reasoning with self-correction capabilities
- Real-time streaming of thought processes and tool executions
- Context-aware memory that remembers previous interactions and reasoning chains
- explore_directory - Autonomous file system navigation with metadata
- search_codebase - Semantic code search across entire repositories
- read_file - Smart file reading with size optimization
- analyze_file_structure - Deep structural analysis and pattern recognition
- find_related_files - Relationship discovery through imports and naming
- semantic_content_search - AI-powered content understanding
- generate_code_example - Context-aware code generation from repository patterns
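As a rough illustration of how tools like these can be wired into a LlamaIndex ReAct agent, here is a sketch assuming a recent llama-index release. The tool bodies, repository path, and model name are simplified assumptions, not the project's actual implementation:

```python
# Sketch of registering repository tools with a LlamaIndex ReAct agent.
# Assumes a recent llama-index release; tool bodies are simplified placeholders.
from pathlib import Path

from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI

REPO_ROOT = Path("./workspace/repo")  # hypothetical clone location

def explore_directory(path: str = ".") -> str:
    """List entries under a repository directory."""
    target = (REPO_ROOT / path).resolve()
    return "\n".join(sorted(p.name + ("/" if p.is_dir() else "") for p in target.iterdir()))

def read_file(path: str, max_chars: int = 8000) -> str:
    """Read a file, truncated to keep the context small."""
    return (REPO_ROOT / path).read_text(errors="ignore")[:max_chars]

agent = ReActAgent.from_tools(
    [FunctionTool.from_defaults(fn=explore_directory),
     FunctionTool.from_defaults(fn=read_file)],
    llm=OpenAI(model="gpt-4o-mini"),  # any configured provider works
    verbose=True,                     # prints Thought/Action/Observation steps
)

print(agent.chat("What does this repository do?"))
```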
The UI provides real-time insight into the agent's decision-making:
- 🧠 Purple Thought bubbles - Shows reasoning and planning
- ⚡ Blue Action blocks - Displays tool selection and parameters
- 👁️ Green Observation panels - Shows tool execution results
- 💬 Teal Answer sections - Final synthesized responses
- ⚠️ Status indicators - Real-time processing updates
- Clone and setup the repository
- Start the backend with repository indexing
- Launch the frontend for the chat interface
- Create a new session by entering any GitHub repo URL
- Watch the AI agent clone, index, and analyze the repository
- Ask complex questions and watch the agent reason through solutions step-by-step
- Use @-mentions for specific file context and explore with the file tree
- Observe real-time reasoning as the agent explores, searches, and synthesizes answers
Create a .env file in the root directory:
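A minimal example is shown below; the exact variable names are assumptions based on the providers listed above, so check the backend's settings module for the authoritative names:

```
# Hypothetical variable names - adjust to match the backend's settings
OPENAI_API_KEY=sk-...
OPENROUTER_API_KEY=sk-or-...
```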
- Visit http://localhost:3000
- Paste a GitHub repository URL (public repos supported)
- Wait for repository cloning and indexing (progress shown)
- Start chatting with the AI about the codebase
- Use the sidebar to explore files and folders
- Click any file in the tree to open it in the integrated viewer pane
- Type @ in the chat for intelligent file/folder autocomplete
- Mention files with @filename.py or folders with @folder/subfolder
- Overview: "What is this repository about?" - Agent explores structure, reads key files, analyzes patterns
- Architecture: "Explain the project structure" - Agent navigates directories, examines relationships
- Code exploration: "Show me the main components" - Agent searches, categorizes, and explains findings
- Debugging: "Help me understand this error in @file.py" - Agent reads file, searches for related code, analyzes context
- Implementation: "How does the authentication work?" - Agent searches auth patterns, traces code flow
- Thought steps show the agent's reasoning process
- Action steps display which tools the agent chooses to use
- Observation steps show the results of tool execution
- Answer steps provide the final synthesized response
- File references: "Look at @config.py for the settings"
- Folder exploration: "What's in @folder/src/components?"
- Multi-file analysis: "Compare @file1.js and @file2.js"
- Complex queries: "Find all database-related files and explain the schema"
- Autocomplete suggestions: Type @ to see file/folder suggestions with fuzzy search
- Integrated file viewing: Click files in the explorer to view them alongside chat
- Use @ for file autocomplete (optimized for large repos)
- Arrow keys navigate autocomplete suggestions, Enter/Tab to select
- Ask complex questions - the agent will break them down automatically
- Watch the reasoning - each step shows how the AI approaches your problem
- Folder mentions automatically include all files in context
- File viewer pane opens when clicking files in the explorer tree
- The AI remembers conversation history and previous reasoning
- Query complexity automatically adjusts context size (10-25 sources)
- Prompt caching reduces costs for repetitive repository questions
| Endpoint | Method | Description |
|----------|--------|-------------|
| /assistant/sessions | POST | Create new repository session |
| /assistant/sessions | GET | List all sessions |
| /assistant/sessions/{id}/messages | GET | Get conversation history |
| /sessions/{id}/messages | POST | Send message to AI (streaming) |
| /api/files | GET | Get repository file list |
| /api/tree | GET | Get repository tree structure |
| /api/file-content | GET | Get specific file content |
| /cache-stats | GET | Monitor performance metrics |
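As a rough sketch, a session could be created and the streaming endpoint consumed like this. The backend address and the request/response field names are assumptions; consult the backend schemas for the real payloads:

```python
# Rough client sketch for the endpoints above (field names are assumptions).
import json
import requests

BASE = "http://localhost:8000"  # assumed backend address

# Create a session for a repository (payload shape is a guess).
session = requests.post(f"{BASE}/assistant/sessions",
                        json={"repo_url": "https://github.com/owner/repo"}).json()
session_id = session["id"]  # assumed response field

# Send a message and stream the Server-Sent Events response.
with requests.post(f"{BASE}/sessions/{session_id}/messages",
                   json={"content": "What is this repository about?"},
                   stream=True) as resp:
    for line in resp.iter_lines(decode_unicode=True):
        if line and line.startswith("data:"):
            event = json.loads(line[len("data:"):].strip())
            print(event)  # e.g. thought / action / observation / answer steps
```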
- Understand new codebases quickly with AI-guided exploration
- Debug complex issues across multiple files and components
- Learn best practices from AI analysis of well-structured projects
- Navigate large repositories efficiently with smart search
- Get context on unfamiliar parts of the codebase
- Understand impact of proposed changes
- Generate insights for review comments
- Explore dependencies and related code
- Generate documentation from AI understanding of code
- Create tutorials based on actual implementation
- Explain complex systems with AI assistance
- Understand APIs and integration patterns
- Onboard new developers with AI-guided codebase tours
- Share knowledge through AI-explained code patterns
- Consistent analysis across team members
- Collaborative exploration of system architecture
- Repository cloning with full history and branch support
- FAISS vector indexing for semantic code search with 3000+ documents
- Smart context sizing - automatically adjusts 10-25 sources based on query complexity (see the retrieval sketch below)
- Multi-language support via tree-sitter parsers with intelligent fallbacks
- Query complexity analysis - detects repository overview vs. specific file questions
- OpenRouter prompt caching - 25-90% cost savings on repetitive queries
- Multi-level caching - RAG context, responses, and folder structure
- Async processing - background repository indexing and context retrieval
- Smart file picker - optimized for repositories with 3000+ files (30 folders + 20 files initially)
- Memory management - intelligent conversation context compression
- Session persistence - conversations survive page reloads
- Multiple repository sessions - work with different repos simultaneously
- Real-time status - repository cloning and indexing progress
- Advanced file picker - search, folders, and performance optimization
- @-mention autocomplete - intelligent file/folder suggestions with keyboard navigation
- Integrated file viewer - side-by-side code viewing with syntax highlighting and actions
- Responsive chat interface - mobile-friendly with typing indicators
- Configurable AI providers - OpenAI, OpenRouter, multiple models
- Performance monitoring - cache statistics and usage metrics
- Error handling - graceful fallbacks and detailed logging
- Security - file access restrictions and path validation
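The sketch below illustrates the idea behind FAISS-backed retrieval with query-complexity-based context sizing. The heuristic keywords, embedding dimension, and thresholds are assumptions for illustration, not the project's exact logic:

```python
# Illustrative sketch: query-complexity-based context sizing over a FAISS index.
# The heuristic, embedding dimension, and thresholds are assumptions.
import faiss                      # pip install faiss-cpu
import numpy as np

DIM = 384                         # assumed embedding dimension

def choose_top_k(query: str) -> int:
    """Broad, repository-level questions get more sources; narrow ones get fewer."""
    broad = {"overview", "architecture", "structure", "about", "explain"}
    if any(word in broad for word in query.lower().split()):
        return 25                 # repository overview: widest context
    if "@" in query:              # an @-mentioned file narrows the scope
        return 10
    return 15                     # default middle ground

def retrieve(index: faiss.Index, embed, chunks: list[str], query: str) -> list[str]:
    k = choose_top_k(query)
    query_vec = np.asarray([embed(query)], dtype="float32")
    _, ids = index.search(query_vec, k)                  # nearest chunks by vector similarity
    return [chunks[i] for i in ids[0] if i != -1]

# Usage: build the index once per repository, then retrieve per query.
# index = faiss.IndexFlatIP(DIM); index.add(np.asarray(chunk_vectors, dtype="float32"))
# context = retrieve(index, embed_fn, chunks, "Explain the project structure")
```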
- Anthropic Claude: 90% savings on cached content (0.1x cost)
- OpenAI GPT models: 25-50% savings on cached content (0.5x-0.75x cost)
- Automatic caching: Triggers for repository context >1000 tokens
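A rough sketch of marking a large, stable repository context as cacheable on OpenRouter with an Anthropic model follows; the model slug and message layout follow OpenRouter's prompt-caching conventions, while the payload details are assumptions (OpenAI models cache automatically without any markup):

```python
# Sketch: mark the repository context block as cacheable for Anthropic models via OpenRouter.
import os
import requests

repository_context = "..."  # large, stable block of repository context (>1000 tokens)

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "anthropic/claude-3.5-sonnet",
        "messages": [
            {"role": "system", "content": [
                {"type": "text", "text": "You are a repository analysis assistant."},
                # The cache_control marker lets repeated requests reuse this large block.
                {"type": "text", "text": repository_context,
                 "cache_control": {"type": "ephemeral"}},
            ]},
            {"role": "user", "content": "What is this repository about?"},
        ],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```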
- Index size: 1000-5000+ documents per repository
- Context sources: 10-25 dynamically selected based on query
- Response time: <2s for most queries with caching
- Memory usage: Optimized for large repositories
- Large repos: Handles 3000+ files smoothly
- Initial load: 30 folders + 20 files for fast startup
- Search results: Capped at 100 items for performance
- Tree navigation: Lazy loading for deep directory structures
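A simplified sketch of the capped initial load described above; the 30/20 split mirrors the numbers listed, while the function itself is illustrative rather than the project's implementation:

```python
# Illustrative sketch of capping the initial file-picker payload for very large repositories.
from pathlib import Path

def initial_listing(repo_root: str, max_folders: int = 30, max_files: int = 20) -> dict:
    """Return only the first top-level folders and files; deeper levels load lazily on expand."""
    root = Path(repo_root)
    folders = sorted(p.name for p in root.iterdir() if p.is_dir() and not p.name.startswith("."))
    files = sorted(p.name for p in root.iterdir() if p.is_file())
    return {
        "folders": folders[:max_folders],
        "files": files[:max_files],
        "truncated": len(folders) > max_folders or len(files) > max_files,
    }
```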
- Private repository support - GitHub App integration for private repos
- Multi-branch analysis - Compare code across different branches
- Code change analysis - Understand diffs and pull requests
- Integration with IDEs - VS Code extension for seamless workflow
- Team collaboration - Shared sessions and annotations
- Export functionality - Save conversations and insights
We love contributions! Here's how to get started:
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Built with React and FastAPI
- UI components from shadcn/ui
- AI providers: OpenAI and OpenRouter
- Vector search powered by FAISS
- Code parsing via tree-sitter