Anges is an LLM-powered engineering agent system designed to be easy to use, highly customizable, and minimalist.
# Install from PyPI
pip install anges
# Set your API key. Anges defaults to using Anthropic's Claude.
# (See configuration below to use other models like Gemini or OpenAI)
export ANTHROPIC_API_KEY=<YOUR_API_KEY_HERE>

# Run your first task
anges -q "What is the OS version?"
# Interactive mode (for conversational tasks)
anges -i
# Direct task execution from the command line
anges -q "List all python files in the current directory."# Execute a task described in a file
anges -f task_description.txt
# Launch the web interface
anges ui --port 5000 --password your_password
# Help menu
anges -h
A quick demonstration of Anges checking the OS and listing files.
A quick look at the Anges UI.
The default configuration is located at anges/configs/default_config.yaml.
You can override these settings by creating a config.yaml file at ~/.anges/config.yaml.
For example, to configure the default agent to use Google's Gemini Pro:
# Create the config file to switch the default model
cat > ~/.anges/config.yaml <<EOF
agents:
  default_agent:
    model_name: "gemini"
EOF

# Export the corresponding API key
export GOOGLE_API_KEY=<YOUR_GEMINI_API_KEY>
Working Directory: You can set the agent's working directory from the UI or CLI. This sets the default location for operations but does not enforce a strict permission boundary.
Prefix Command: You can configure a prefix command (e.g., export MY_VAR=... &&) that will be executed before every command the agent runs. This is useful for setting up a consistent environment.
Default Agent vs. Orchestrator:
Default Agent: Ideal for simple, single-step tasks that a human could complete in a few minutes. It's fast and direct.
Orchestrator: For complex, multi-step problems that require research, planning, and code iteration. The orchestrator agent can break down the task and delegate to other agents.
Event Streams: Every action, thought process, and command is logged as a JSON file in ~/.anges/data/event_streams. This provides full transparency and creates a valuable dataset for fine-tuning or analysis.
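Since these are ordinary JSON files, they are easy to inspect with a few lines of Python. The sketch below only lists the files and peeks at their top-level structure; the exact schema is not documented here, so treat it as illustrative:

import json
from pathlib import Path

# Event streams are stored as JSON files under ~/.anges/data/event_streams
stream_dir = Path.home() / ".anges" / "data" / "event_streams"

for path in sorted(p for p in stream_dir.iterdir() if p.is_file()):
    with path.open() as f:
        stream = json.load(f)
    # Schema is not documented here, so just peek at the top-level structure
    shape = list(stream)[:5] if isinstance(stream, dict) else f"{len(stream)} items"
    print(f"{path.name}: {shape}")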
We're used to LLMs being advisors—they sit behind a chat box, waiting for copy-pasted context and offering suggestions you still have to run yourself.
But what if you gave an AI real access to your shell, tools, and working environment? What if it could work alongside you, not just talk to you?
Anges turns that idea into a practical, hackable reality, giving LLMs controlled execution power while keeping engineers fully in the loop.
Real Automation, Not Just Advice
Anges doesn’t just suggest commands—it runs them. It reads output, handles errors, and plans its next move. It's a doer, not a talker.
Model Agnostic
Use any model you want—Claude, OpenAI, Gemini, Llama, local models—all are easily configurable. You control the brain.
Flexible Interfaces
Work from your terminal, in a container, or through the web UI on your phone. Anges meets you where you are.
Hackable by Design
Written in clean, modular Python. Everything is exposed and easy to modify. There are no heavy abstractions hiding the prompts or logic.
Built-in Orchestration
Tackle complex tasks with a multi-agent system that can decompose problems, delegate work, and execute recursively—with zero boilerplate.
Transparent Event Logs
Every command, decision, and observation is saved to a local event stream. You have a perfect, replayable audit trail of the agent's work.
As an Engineering Assistant
Ask it to install packages, inspect logs, restart services, or modify configuration files—all within a single, natural language query.
For DevOps & Maintenance
Automate infrastructure chores like updating dependencies, cleaning up disk space, or managing your local development environment.
For Data & File Workflows
Let it move, rename, clean, or parse files. Have it pipe commands together to build quick data pipelines without writing scripts.
For Learning & Debugging
Watch how the agent breaks down a task and plans its execution. It’s a great way to learn new system tools or understand complex commands.
To Build Custom Domain Agents
Need an agent that knows your specific codebase, product, or workflow? Fork Anges, wire in your custom logic, and create a specialized assistant.
Anges follows a modular, event-driven architecture with distinct components for task interpretation, command generation, execution management, and result processing. The framework is built around four core concepts:
BaseAgent Lifecycle: Manages agent creation, initialization, and execution flow
Event Stream: Provides persistent state management and execution history
Action System: Defines extensible actions for agent capabilities
Prompt Construction: Builds contextual prompts from event history
The BaseAgent class serves as the foundation for all agent types in Anges, managing the complete lifecycle from initialization to task completion.
Event Stream: Creates or inherits an EventStream for state persistence
Inference Function: LLM integration for decision making
Execution Context: Working directory and command prefix configuration
Interrupt Handling: Optional callback for graceful task interruption
Summarization: Automatic event summarization for long conversations
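These components surface as constructor arguments in the usage examples later in this document. A minimal sketch, with parameter names taken from those examples (the event stream and inference function presumably fall back to defaults when not supplied, as they are in the examples):

import logging
from anges.agents.default_agent import DefaultAgent

def no_interrupt():
    # Interrupt handling: return True to ask the agent to stop gracefully
    return False

agent = DefaultAgent(
    cmd_init_dir="/tmp/workspace",            # Execution context: working directory
    prefix_cmd="",                            # Execution context: command prefix
    interrupt_check=no_interrupt,             # Interrupt handling callback
    max_consecutive_actions_to_summarize=10,  # Summarization threshold
    logging_level=logging.INFO,
)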
The agent follows a structured execution loop:
Request Handling: Processes new or follow-up requests
Event Creation: Adds request events to the event stream
Decision Loop: Continuously prompts LLM and executes actions until completion
Interruption Checks: Monitors for user interrupts or execution limits
State Persistence: Saves event stream after each significant action
def run_with_new_request(self, task_description, event_stream=None):
    # Initialize run configuration
    run_config = self._build_run_config(task_description, event_stream)
    # Add new request to event stream
    self._handle_received_new_request(run_config)
    while True:
        # Check for interruptions
        if self._check_interruption(run_config):
            return event_stream
        # Check execution limits
        if self._check_exhausted(run_config):
            return event_stream
        # Get next action and execute
        if self._prefict_next_event_and_handle_actions(event_stream, run_config):
            return event_stream
The Event Stream provides persistent state management and execution history, enabling agents to maintain context across complex, multi-step tasks.
Each event captures a discrete action or state change:
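For illustration, here is a minimal sketch of creating an event, using only the constructor arguments that appear in the usage examples and the prompt format later in this document (whether reasoning can be passed at construction time is an assumption):

from anges.agents.agent_utils.events import Event

# A request event, exactly as used in the usage examples later in this document
request = Event(event_type="NEW_REQUEST", content="List all Python files in the repo")

# Action/result events also carry the agent's reasoning when rendered into the
# prompt; passing it at construction time is an assumption, so it stays commented out.
# action = Event(event_type="ACTION", content="<command output>", reasoning="<why this command>")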
The system supports multiple agent types optimized for different scenarios:
Purpose: General-purpose task execution with full action capabilities
Use Cases: Complex multi-step tasks, file operations, system administration
Actions: All available actions (shell commands, file editing, help requests)
Purpose: Focused command execution with minimal overhead
Use Cases: Simple command execution, quick operations
Actions: Primarily shell commands and basic responses
Purpose: Task decomposition and analysis without execution
Use Cases: Planning, task breakdown, requirement analysis
Actions: Text responses, analysis outputs
Error Recovery: Automatic retry and fallback mechanisms
State Persistence: Recovery from unexpected failures
User Notifications: Clear error communication
Anges provides a unified CLI with multiple operation modes: interactive sessions (anges -i), single-shot queries (anges -q), task files (anges -f), and the web UI (anges ui), as shown in the quick start above.
The Action system provides a modular, extensible framework for defining agent capabilities through discrete, composable actions.
All actions inherit from the base Action class:
class Action:
    def __init__(self):
        self.type = ""                 # Action identifier
        self.guide_prompt = ""         # LLM instruction text
        self.user_visible = False      # Whether action results are shown to user
        self.unique_action = False     # Whether action must be used alone
        self.returning_action = False  # Whether action terminates the execution loop

    def handle_action_in_parsed_response(self, run_config, parsed_response_dict, action_json):
        raise NotImplementedError("Subclasses must implement this method")
RUN_SHELL_CMD: Execute shell commands with timeout and background support
Registering Actions: Add to agent's action registry
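As a rough sketch (not the exact Anges API), a custom action subclasses the Action base shown above, fills in its fields, and implements the handler. The import path and the registered_actions attribute below are placeholders; adapt them to your checkout:

from anges.agents.agent_utils.actions import Action  # import path is an assumption

class AnnounceAction(Action):
    """Hypothetical action that echoes a short status message to the user."""

    def __init__(self):
        super().__init__()
        self.type = "ANNOUNCE"           # Identifier the LLM emits to select this action
        self.guide_prompt = "Use ANNOUNCE to send a short status message to the user."
        self.user_visible = True         # Results are shown to the user
        self.unique_action = False
        self.returning_action = False

    def handle_action_in_parsed_response(self, run_config, parsed_response_dict, action_json):
        # A real handler would append an event to the stream; printing keeps the sketch minimal
        print(action_json.get("content", ""))

# Registration is agent-specific; `registered_actions` is a placeholder attribute name
# agent.registered_actions.append(AnnounceAction())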
The prompt construction system builds contextual prompts from event history, enabling agents to maintain awareness of previous actions and current state.
Prompts are built using a template-based approach:
def construct_prompt_for_event_stream(event_stream, prompt_template=None, agent_config=None):
    # Build event history string
    event_stream_string = construct_events_str_with_summary(event_stream, agent_config=agent_config)
    # Replace template placeholders
    prompt = prompt_template.replace("PLACEHOLDER_EVENT_STREAM", event_stream_string)
    # Add context-specific instructions
    if event_stream.events_list and event_stream.events_list[-1].type == "edit_file":
        prompt += "\n<!!The last event was a file editing operation...>"
    return prompt
Events are formatted into structured text for LLM consumption:
## Event 1 TYPE: NEW_REQUEST
REASONING:
[Agent's reasoning for this action]
CONTENT:
[Event content - command output, file changes, etc.]
## Event 2 TYPE: ACTION
REASONING:
[Why this command was executed]
CONTENT:
[Command output and results]
Truncation Strategy: Long content is truncated for older events while preserving recent context
Recent Events: Full content preserved (configurable threshold)
Older Events: Truncated with "...N lines omitted..." indicators (see the sketch after the example below)
Summaries: Replace ranges of events with AI-generated summaries
Summary Integration: Event summaries are seamlessly integrated into the prompt:
## Summary of Events 5 to 12
[AI-generated summary of actions and results]
## Event 13 TYPE: ACTION
[Current event details]
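The truncation side of this can be pictured with a short, illustrative sketch (not the Anges implementation): recent events keep their full content, while older event bodies collapse to a line-count marker:

def truncate_older_events(event_texts, keep_recent=3, max_lines=5):
    """Illustrative only: keep recent events verbatim, truncate older ones."""
    truncated = []
    for i, text in enumerate(event_texts):
        is_recent = i >= len(event_texts) - keep_recent
        lines = text.splitlines()
        if is_recent or len(lines) <= max_lines:
            truncated.append(text)
        else:
            omitted = len(lines) - max_lines
            truncated.append("\n".join(lines[:max_lines]) + f"\n...{omitted} lines omitted...")
    return truncated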
PLACEHOLDER_EVENT_STREAM: Replaced with formatted event history
PLACEHOLDER_ACTION_INSTRUCTIONS: Replaced with available action documentation
Example template structure:
# INSTRUCTION
You are an AI agent...
# EVENT STREAM
PLACEHOLDER_EVENT_STREAM
# AVAILABLE ACTIONS
PLACEHOLDER_ACTION_INSTRUCTIONS
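Filling the template is plain string replacement, as in the construct_prompt_for_event_stream snippet above. A self-contained sketch, with literal stand-ins for the strings that the event-stream formatter and action registry would normally produce:

template = """# INSTRUCTION
You are an AI agent...

# EVENT STREAM
PLACEHOLDER_EVENT_STREAM

# AVAILABLE ACTIONS
PLACEHOLDER_ACTION_INSTRUCTIONS
"""

# Literal stand-ins keep the sketch self-contained
prompt = (
    template
    .replace("PLACEHOLDER_EVENT_STREAM", "## Event 1 TYPE: NEW_REQUEST\nCONTENT:\n[task text]")
    .replace("PLACEHOLDER_ACTION_INSTRUCTIONS", "RUN_SHELL_CMD: Execute shell commands ...")
)
print(prompt)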
#!/usr/bin/env python3"""Example: Using Default Agent for file system operations"""fromanges.agents.default_agentimportDefaultAgentfromanges.agents.agent_utils.eventsimportEventimportlogging# Create a default agent instanceagent=DefaultAgent(
cmd_init_dir="/home/user/projects",
prefix_cmd="",
logging_level=logging.INFO,
auto_entitle=True
)
# Execute a simple tasktask_description="List all Python files in the current directory and show their sizes"print(f"Executing task: {task_description}")
# Add the task as a new request eventevent=Event(event_type="NEW_REQUEST", content=task_description)
agent.event_stream.add_event(event)
# Run the agentresult=agent.run()
print(f"Task completed with status: {agent.status}")
# Access the event stream to see what happenedforeventinagent.event_stream.events:
print(f"Event: {event.event_type} - {event.content[:100]}...")
Default Agent with Custom Configuration
#!/usr/bin/env python3"""Example: Default Agent with custom settings and error handling"""fromanges.agents.default_agentimportDefaultAgentfromanges.agents.agent_utils.eventsimportEvent, EventStreamfromanges.utils.inference_apiimportINFERENCE_FUNC_DICTimportloggingdefinterrupt_check():
"""Custom interrupt check function"""# Add your custom interrupt logic herereturnFalse# Create agent with custom configurationagent=DefaultAgent(
cmd_init_dir="/tmp/workspace",
prefix_cmd="sudo", # Run commands with sudointerrupt_check=interrupt_check,
max_consecutive_actions_to_summarize=5,
logging_level=logging.DEBUG,
auto_entitle=False
)
# Complex file management tasktask="""Create a backup directory structure:1. Create /tmp/backup/logs and /tmp/backup/configs2. Copy all .log files from /var/log to /tmp/backup/logs3. Create a summary report of copied files4. Set appropriate permissions (755 for directories, 644 for files)"""try:
# Add task eventevent=Event(event_type="NEW_REQUEST", content=task)
agent.event_stream.add_event(event)
# Execute the taskresult=agent.run()
ifagent.status=="completed":
print("✅ Backup task completed successfully")
# Save the event stream for later analysisfromanges.utils.data_handlerimportsave_event_streamsave_event_stream(agent.event_stream, "backup_task_log.json")
else:
print(f"❌ Task failed with status: {agent.status}")
exceptExceptionase:
print(f"Error executing task: {e}")
# Handle errors gracefully
Basic Orchestrator Example
#!/usr/bin/env python3"""Example: Using Orchestrator for complex multi-step workflows"""fromanges.agents.orchestratorimportOrchestratorfromanges.agents.agent_utils.eventsimportEventfromanges.agents.agent_utils.agent_factoryimportAgentFactory, AgentConfigimportlogging# Create orchestrator with custom configurationorchestrator=Orchestrator(
cmd_init_dir="/home/user/development",
remaining_recursive_depth=3, # Allow 3 levels of sub-agentsmax_consecutive_actions_to_summarize=10,
logging_level=logging.INFO,
auto_entitle=True
)
# Complex development workflow taskcomplex_task="""Set up a new Python project with the following requirements:1. Create project directory structure (src/, tests/, docs/, requirements/)2. Initialize git repository3. Create virtual environment and activate it4. Install development dependencies (pytest, black, flake8)5. Create basic project files (setup.py, README.md, .gitignore)6. Run initial code quality checks7. Create first commit with initial project structure"""print("Starting complex project setup workflow...")
# Add the complex taskevent=Event(event_type="NEW_REQUEST", content=complex_task)
orchestrator.event_stream.add_event(event)
# Execute the workflowresult=orchestrator.run()
print(f"Workflow completed with status: {orchestrator.status}")
# The orchestrator automatically manages sub-agents and coordinates the workflowprint(f"Total events processed: {len(orchestrator.event_stream.events)}")
print(f"Orchestrator UID: {orchestrator.uid}")
Advanced Orchestrator with Agent Factory
#!/usr/bin/env python3"""Example: Advanced Orchestrator usage with AgentFactory"""fromanges.agents.agent_utils.agent_factoryimportAgentFactory, AgentConfig, AgentTypefromanges.agents.agent_utils.eventsimportEventfromanges.utils.data_handlerimportsave_event_stream, read_event_streamimportloggingimportosdefsetup_monitoring_system():
"""Set up a comprehensive monitoring system using Orchestrator"""# Create agent configurationconfig=AgentConfig(
agent_type=AgentType.ORCHESTRATOR.value,
cmd_init_dir=os.getcwd(),
prefix_cmd="",
logging_level=logging.INFO,
auto_entitle=True,
remaining_recursive_depth=4,
orchestrator_config={
'remaining_recursive_depth': 4,
'max_consecutive_actions_to_summarize': 15
}
)
# Create orchestrator using factoryfactory=AgentFactory()
orchestrator=factory.create_agent(config)
# Complex monitoring setup taskmonitoring_task=""" Set up a comprehensive system monitoring solution: Phase 1: System Metrics Collection - Install and configure system monitoring tools (htop, iotop, nethogs) - Set up log rotation for system logs - Create custom scripts to collect CPU, memory, and disk metrics Phase 2: Log Analysis Setup - Configure centralized logging - Set up log parsing and analysis tools - Create alerting rules for critical system events Phase 3: Dashboard and Reporting - Generate system health reports - Create automated daily/weekly summary emails - Set up real-time monitoring dashboard Phase 4: Automation and Maintenance - Create cron jobs for regular maintenance tasks - Set up automated cleanup procedures - Configure backup verification and reporting """print("🚀 Starting comprehensive monitoring system setup...")
# Add the complex taskevent=Event(event_type="NEW_REQUEST", content=monitoring_task)
orchestrator.event_stream.add_event(event)
# Execute the comprehensive workflowtry:
result=orchestrator.run()
iforchestrator.status=="completed":
print("✅ Monitoring system setup completed successfully!")
# Save detailed execution loglog_file=f"monitoring_setup_{orchestrator.uid}.json"save_event_stream(orchestrator.event_stream, log_file)
print(f"📝 Execution log saved to: {log_file}")
# Print summary of actions takenaction_count=sum(1foreventinorchestrator.event_stream.eventsifevent.event_type=="ACTION")
print(f"📊 Total actions executed: {action_count}")
else:
print(f"❌ Setup failed with status: {orchestrator.status}")
exceptExceptionase:
print(f"💥 Error during setup: {e}")
returnorchestratorif__name__=="__main__":
orchestrator=setup_monitoring_system()
Development Environment Setup
#!/usr/bin/env python3"""Example: Automated development environment setup"""fromanges.agents.agent_utils.agent_factoryimportAgentFactory, AgentConfigfromanges.agents.agent_utils.eventsimportEventdefsetup_dev_environment(project_name: str, tech_stack: str):
"""Set up a complete development environment for a new project"""# Use orchestrator for complex setupconfig=AgentConfig(
agent_type="orchestrator",
cmd_init_dir=f"/home/user/projects",
auto_entitle=True,
remaining_recursive_depth=3
)
factory=AgentFactory()
agent=factory.create_agent(config)
setup_task=f""" Create a complete {tech_stack} development environment for project '{project_name}': 1. Project Structure: - Create project directory: {project_name} - Set up standard directory structure - Initialize version control (git) 2. Environment Setup: - Create and configure virtual environment - Install base dependencies for {tech_stack} - Set up development tools (linting, formatting, testing) 3. Configuration Files: - Create appropriate config files (.gitignore, requirements.txt, etc.) - Set up CI/CD configuration templates - Create development documentation templates 4. Initial Code: - Create basic project structure with example code - Set up testing framework with sample tests - Create initial documentation 5. Verification: - Run initial tests to verify setup - Check code quality tools are working - Validate all dependencies are correctly installed """event=Event(event_type="NEW_REQUEST", content=setup_task)
agent.event_stream.add_event(event)
print(f"🔧 Setting up {tech_stack} development environment for '{project_name}'...")
result=agent.run()
returnagent.status=="completed"# Example usageif__name__=="__main__":
success=setup_dev_environment("my-web-app", "Python Flask")
ifsuccess:
print("✅ Development environment setup completed!")
else:
print("❌ Setup failed. Check logs for details.")
Log Analysis and Monitoring
#!/usr/bin/env python3"""Example: Automated log analysis and monitoring"""fromanges.agents.default_agentimportDefaultAgentfromanges.agents.agent_utils.eventsimportEventimportloggingfromdatetimeimportdatetimedefanalyze_system_logs():
"""Analyze system logs for issues and generate reports"""agent=DefaultAgent(
cmd_init_dir="/var/log",
logging_level=logging.INFO,
auto_entitle=True
)
analysis_task=f""" Perform comprehensive system log analysis for {datetime.now().strftime('%Y-%m-%d')}: 1. Error Detection: - Scan system logs for ERROR, CRITICAL, and FATAL messages - Identify recurring error patterns - Extract relevant timestamps and context 2. Security Analysis: - Check authentication logs for failed login attempts - Look for suspicious network activity - Identify potential security threats 3. Performance Monitoring: - Analyze resource usage patterns - Identify performance bottlenecks - Check for memory or disk space issues 4. Report Generation: - Create summary report with findings - Generate recommendations for issues found - Save detailed analysis to /tmp/log_analysis_report.txt 5. Alerting: - If critical issues found, create alert file - Prepare notification content for system administrators """event=Event(event_type="NEW_REQUEST", content=analysis_task)
agent.event_stream.add_event(event)
print("📊 Starting system log analysis...")
result=agent.run()
ifagent.status=="completed":
print("✅ Log analysis completed successfully")
returnTrueelse:
print(f"❌ Log analysis failed: {agent.status}")
returnFalseif__name__=="__main__":
analyze_system_logs()
Performance Optimization Workflows
#!/usr/bin/env python3"""Example: Automated performance optimization workflow"""fromanges.agents.orchestratorimportOrchestratorfromanges.agents.agent_utils.eventsimportEventimportloggingdefoptimize_system_performance():
"""Run comprehensive system performance optimization"""orchestrator=Orchestrator(
cmd_init_dir="/",
remaining_recursive_depth=3,
logging_level=logging.INFO,
auto_entitle=True
)
optimization_task=""" Perform comprehensive system performance optimization: Phase 1: System Analysis - Analyze current system performance metrics - Identify resource bottlenecks (CPU, memory, disk, network) - Check running processes and services - Analyze system startup time and services Phase 2: Cleanup Operations - Clean temporary files and caches - Remove old log files and rotate current ones - Clean package manager caches - Remove orphaned packages and dependencies Phase 3: Configuration Optimization - Optimize system swappiness settings - Configure I/O scheduler for better performance - Adjust network buffer sizes if needed - Optimize filesystem mount options Phase 4: Service Optimization - Disable unnecessary startup services - Optimize database configurations if present - Configure web server settings for better performance - Set up proper caching mechanisms Phase 5: Monitoring Setup - Install performance monitoring tools - Set up automated performance tracking - Create performance baseline measurements - Schedule regular optimization maintenance Phase 6: Verification and Reporting - Run performance benchmarks before and after - Generate detailed optimization report - Create recommendations for future improvements - Set up alerting for performance degradation """event=Event(event_type="NEW_REQUEST", content=optimization_task)
orchestrator.event_stream.add_event(event)
print("🚀 Starting system performance optimization workflow...")
result=orchestrator.run()
returnorchestrator.status=="completed"if__name__=="__main__":
success=optimize_system_performance()
ifsuccess:
print("✅ Performance optimization completed!")
else:
print("❌ Optimization workflow failed.")
This documentation is part of an ongoing improvement initiative. Additional technical details, architecture diagrams, and comprehensive examples will be added in subsequent updates.