Trae Agent is an LLM-based agent for general-purpose software engineering tasks. It provides a powerful command-line interface (CLI) that can understand natural language instructions and execute complex software engineering workflows using various tools and LLM providers.
Please note that this project is still in an alpha stage and under active development. We welcome contributions of all kinds from the community.
- Unit tests
- Richer CLI support
- Migrate to Rust
- 🌊 Lakeview: Provides concise summaries of agent steps
- 🤖 Multi-LLM Support: Works with OpenAI and Anthropic official APIs
- 🛠️ Rich Tool Ecosystem: File editing, bash execution, sequential thinking, and more
- 🎯 Interactive Mode: Conversational interface for iterative development
- 📊 Trajectory Recording: Detailed logging of all agent actions for debugging and analysis
- ⚙️ Flexible Configuration: JSON-based configuration with environment variable support
- 🚀 Easy Installation: Simple pip-based installation
We strongly recommend using UV to set up the project.
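For example, a uv-based setup might look like the following sketch (the repository URL is an assumption; substitute your own clone):

```bash
# Assumed repository location; adjust to your fork or clone
git clone https://github.com/bytedance/trae-agent.git
cd trae-agent

# uv creates a virtual environment (if needed) and installs dependencies
uv sync
```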
We recommend configuring Trae Agent using the config file.
You can also set your API keys as environment variables:
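For example, in a POSIX shell:

```bash
# Set provider API keys for the current shell session
export OPENAI_API_KEY="your-openai-api-key"
export ANTHROPIC_API_KEY="your-anthropic-api-key"
```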
The main entry point is the `trae` command with several subcommands:
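A minimal sketch of two common invocations follows; the subcommand names here are assumptions based on the features described in this README, so check the command's help output for the authoritative list:

```bash
# Run a one-off task described in natural language (assumed "run" subcommand)
trae run "Fix the failing test in tests/test_parser.py"

# Start the conversational interface (assumed "interactive" subcommand)
trae interactive
```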
In interactive mode, you can:
- Type any task description to execute it
- Use `status` to see agent information
- Use `help` for available commands
- Use `clear` to clear the screen
- Use `exit` or `quit` to end the session
Trae Agent uses a JSON configuration file (`trae_config.json`) for settings:
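A rough sketch of what `trae_config.json` might contain is shown below. The field names and values are illustrative assumptions only; refer to the example configuration shipped with the repository for the actual schema:

```json
{
  "default_provider": "anthropic",
  "max_steps": 20,
  "model_providers": {
    "openai": {
      "api_key": "your-openai-api-key",
      "model": "gpt-4o",
      "max_tokens": 4096,
      "temperature": 0.5
    },
    "anthropic": {
      "api_key": "your-anthropic-api-key",
      "model": "claude-sonnet-4-20250514",
      "max_tokens": 4096,
      "temperature": 0.5
    }
  }
}
```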
Configuration Priority:
1. Command-line arguments (highest)
2. Configuration file values
3. Environment variables
4. Default values (lowest)
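For instance, a provider or model passed on the command line would override the value from `trae_config.json`. The flag names in this sketch are assumptions; consult the command's help output for the real ones:

```bash
# Command-line values take precedence over the config file and environment variables (flag names assumed)
trae run "Add type hints to utils.py" --provider openai --model gpt-4o
```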
- `OPENAI_API_KEY` - OpenAI API key
- `ANTHROPIC_API_KEY` - Anthropic API key
Trae Agent comes with several built-in tools:
- `str_replace_based_edit_tool`: Create, edit, view, and manipulate files
  - `view` - Display file contents or directory listings
  - `create` - Create new files
  - `str_replace` - Replace text in files
  - `insert` - Insert text at specific lines
- `bash`: Execute shell commands and scripts
  - Run commands with persistent state
  - Handle long-running processes
  - Capture output and errors
- `sequential_thinking`: Structured problem-solving and analysis
  - Break down complex problems
  - Iterative thinking with revision capabilities
  - Hypothesis generation and verification
- `task_done`: Signal task completion
  - Mark tasks as successfully completed
  - Provide final results and summaries
Trae Agent automatically records detailed execution trajectories for debugging and analysis:
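A hedged example of writing a trajectory to a specific file is shown below; the flag name is an assumption, so see TRAJECTORY_RECORDING.md for the documented options:

```bash
# Record the full execution trajectory to a JSON file (flag name assumed)
trae run "Refactor the config loader" --trajectory-file trajectory.json
```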
Trajectory files contain:
- LLM Interactions: All messages, responses, and tool calls
- Agent Steps: State transitions and decision points
- Tool Usage: Which tools were called and their results
- Metadata: Timestamps, token usage, and execution metrics
For more details, see TRAJECTORY_RECORDING.md.
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Make your changes
4. Add tests for new functionality
5. Commit your changes (`git commit -m 'Add amazing feature'`)
6. Push to the branch (`git push origin feature/amazing-feature`)
7. Open a Pull Request
- Follow PEP 8 style guidelines
- Add tests for new features
- Update documentation as needed
- Use type hints where appropriate
- Ensure all tests pass before submitting
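For example, assuming the test suite is driven by pytest inside the uv-managed environment:

```bash
# Run the test suite (assumes pytest)
uv run pytest
```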
- Python 3.12+
- OpenAI API key (for OpenAI models)
- Anthropic API key (for Anthropic models)
Import Errors:
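A generic first check (not project-specific guidance) is to make sure the package is installed into the environment you are actually running, and that it resolves from the project root:

```bash
# Reinstall the project and its dependencies into the uv-managed environment
uv sync

# Assumes the package is named trae_agent; adjust if the layout differs
uv run python -c "import trae_agent; print(trae_agent.__file__)"
```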
API Key Issues:
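Verify that the keys are exported in the shell that launches the agent:

```bash
# Values print only if the variables are set in the current shell
echo $OPENAI_API_KEY
echo $ANTHROPIC_API_KEY
```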
Permission Errors:
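Check that the working directory is readable and writable by your user (a generic check):

```bash
# Confirm the current user can read, write, and traverse the project directory
ls -ld /path/to/your/project
chmod -R u+rwX /path/to/your/project
```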
This project is licensed under the MIT License - see the LICENSE file for details.
We thank Anthropic for building the anthropic-quickstart project that served as a valuable reference for the tool ecosystem.