EvoAgentX: The First Self-Evolving AI Agent Framework


Building a Self-Evolving Ecosystem of AI Agents


An automated framework for evaluating and evolving agentic workflows.

  • [May 2025] 🎉 EvoAgentX has been officially released!

We recommend installing EvoAgentX using pip:

pip install git+https://github.com/EvoAgentX/EvoAgentX.git

For local development or detailed setup (e.g., using conda), refer to the Installation Guide for EvoAgentX.

Example (optional, for local development):
git clone https://github.com/EvoAgentX/EvoAgentX.git
cd EvoAgentX

# Create a new conda environment
conda create -n evoagentx python=3.10

# Activate the environment
conda activate evoagentx

# Install the package
pip install -r requirements.txt
# OR install in development mode
pip install -e .

To use LLMs with EvoAgentX (e.g., OpenAI), you must set up your API key.

Option 1: Set API Key via Environment Variable
  • Linux/macOS:
export OPENAI_API_KEY=<your-openai-api-key>
  • Windows Command Prompt:
set OPENAI_API_KEY=<your-openai-api-key>
  • Windows PowerShell:
$env:OPENAI_API_KEY="<your-openai-api-key>" # the quotation marks are required

Once set, you can access the key in your Python code with:

import os

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
Option 2: Use .env File
  • Create a .env file in your project root and add the following:
OPENAI_API_KEY=<your-openai-api-key>

Then load it in Python:

from dotenv import load_dotenv
import os

load_dotenv()  # Loads environment variables from the .env file
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")

Configure and Use the LLM

Once the API key is set, initialise the LLM with:

import os

from evoagentx.models import OpenAILLMConfig, OpenAILLM

# Load the API key from the environment
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")

# Define the LLM configuration
openai_config = OpenAILLMConfig(
    model="gpt-4o-mini",        # Specify the model name
    openai_key=OPENAI_API_KEY,  # Pass the key directly
    stream=True,                # Enable streaming responses
    output_response=True        # Print the response to stdout
)

# Initialize the language model
llm = OpenAILLM(config=openai_config)

# Generate a response from the LLM
response = llm.generate(prompt="What is Agentic Workflow?")

📖 More details on supported models and config options: LLM module guide.

Automatic WorkFlow Generation

Once your API key and language model are configured, you can automatically generate and execute multi-agent workflows in EvoAgentX.

🧩 Core Steps:

  1. Define a natural language goal
  2. Generate the workflow with WorkFlowGenerator
  3. Instantiate agents using AgentManager
  4. Execute the workflow via WorkFlow

💡 Minimal Example:

from evoagentx.workflow import WorkFlowGenerator, WorkFlowGraph, WorkFlow
from evoagentx.agents import AgentManager

goal = "Generate html code for the Tetris game"

# Generate a workflow graph from the natural language goal
workflow_graph = WorkFlowGenerator(llm=llm).generate_workflow(goal)

# Instantiate the agents required by the workflow
agent_manager = AgentManager()
agent_manager.add_agents_from_workflow(workflow_graph, llm_config=openai_config)

# Execute the workflow
workflow = WorkFlow(graph=workflow_graph, agent_manager=agent_manager, llm=llm)
output = workflow.execute()
print(output)

You can also:

  • 📊 Visualise the workflow: workflow_graph.display()
  • 💾 Save/load workflows: save_module() / from_file() (see the sketch below)
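Below is a minimal sketch of persisting and reloading a generated workflow using the methods listed above. The file name and the exact call signatures are assumptions for illustration; check the workflow documentation for the precise API.

# Display the generated workflow structure
workflow_graph.display()

# Save the workflow graph to disk (file name is an illustrative assumption)
workflow_graph.save_module("tetris_workflow.json")

# Later, reload the graph and re-attach agents before execution
loaded_graph = WorkFlowGraph.from_file("tetris_workflow.json")
agent_manager = AgentManager()
agent_manager.add_agents_from_workflow(loaded_graph, llm_config=openai_config)
workflow = WorkFlow(graph=loaded_graph, agent_manager=agent_manager, llm=llm)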

📂 For a complete working example, check out workflow_demo.py

Demo video: EvoAgentX_demo.mov (also available on YouTube)

In this demo, we showcase the workflow generation and execution capabilities of EvoAgentX through two examples:

  • Application 1: Intelligent Job Recommendation from Resume
  • Application 2: Visual Analysis of A-Share Stocks

We have integrated some existing agent/workflow evolution algorithms into EvoAgentX, including TextGrad, MIPRO and AFlow.

To evaluate their performance, we use them to optimize the same agent system on three different tasks: multi-hop QA (HotPotQA), code generation (MBPP), and reasoning (MATH). We randomly sample 50 examples for validation and another 100 examples for testing.

Tip: The benchmark and evaluation code for these tasks is integrated into EvoAgentX. Please refer to the benchmark and evaluation tutorial for more details.

Method     HotPotQA (F1 %)   MBPP (Pass@1 %)   MATH (Solve Rate %)
Original   63.58             69.00             66.00
TextGrad   71.02             71.00             76.00
AFlow      65.09             79.00             71.00
MIPRO      69.16             68.00             72.30
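
For reference, here is a generic Python sketch of how two of these metrics are commonly computed: token-level F1 (HotPotQA) and Pass@1 with a single sample per problem (MBPP). It is an illustration of the metrics, not the evaluation code shipped with EvoAgentX.

from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    # Token-level F1 between a predicted answer and a reference answer
    pred_tokens, ref_tokens = prediction.split(), reference.split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

def pass_at_1(passed: list) -> float:
    # Pass@1 with one generated program per problem: the fraction of
    # problems whose program passes all unit tests
    return sum(passed) / len(passed)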

Please refer to the examples/optimization folder for more details.

We use our framework to optimize existing multi-agent systems on the GAIA benchmark. We select Open Deep Research and OWL, two representative multi-agent frameworks from the GAIA leaderboard that are open-source and runnable.

We apply EvoAgentX to optimize their prompts. The performance of the optimized agents on the GAIA benchmark validation set is shown in the figure below.

Full Optimization Reports: Open Deep Research and OWL.

💡 New to EvoAgentX? Start with the Quickstart Guide for a step-by-step introduction.

Explore how to effectively use EvoAgentX with the following resources:

🛠️ Follow the tutorials to build and optimize your EvoAgentX workflows.

🚀 We're actively working on expanding our library of use cases and optimization strategies. More coming soon — stay tuned!

  • Modularize Evolution Algorithms: Abstract optimization algorithms into plug-and-play modules that can be easily integrated into custom workflows.
  • Develop Task Templates and Agent Modules: Build reusable templates for typical tasks and standardized agent components to streamline application development.
  • Integrate Self-Evolving Agent Algorithms: Incorporate more recent and advanced agent self-evolution across multiple dimensions, including prompt tuning, workflow structures, and memory modules.
  • Enable Visual Workflow Editing Interface: Provide a visual interface for workflow structure display and editing to improve usability and debugging.

📢 Stay connected and be part of the EvoAgentX journey!
🚩 Join our community to get the latest updates, share your ideas, and collaborate with AI enthusiasts worldwide.

  • Discord — Chat, discuss, and collaborate in real-time.
  • X (formerly Twitter) — Follow us for news, updates, and insights.
  • WeChat — Connect with our Chinese community.

If you have any questions or feedback about this project, please feel free to contact us. We highly appreciate your suggestions!

We will respond to all questions within 2-3 business days.

🙌 Contributing to EvoAgentX

Thanks go to these awesome contributors

We appreciate your interest in contributing to our open-source initiative. We provide a document of contributing guidelines which outlines the steps for contributing to EvoAgentX. Please refer to this guide to ensure smooth collaboration and successful contributions. 🤝🚀

Star History Chart

This project builds upon several outstanding open-source projects: AFlow, TextGrad, DSPy, LiveCodeBench, and more. We would like to thank the developers and maintainers of these frameworks for their valuable contributions to the open-source community.

Source code in this repository is made available under the MIT License.
