Ebiose is a distributed artificial intelligence factory: an open-source project from the startup incubator of Inria, the French national institute for research in digital science and technology. Our vision: enabling humans and agents to collaborate in building tomorrow's AI in an open and democratic way.
"AI can just as easily become the weapon of a surveillance capitalism dystopia as the foundation of a democratic renaissance."
👀 Must read 👀
- Founding blog post (10 min)
- Glossary (3 min)
This first beta version implements the foundations of our vision.
- Architect agents: Specialized AIs for designing and evolving other agents
- Darwinian engine: Evolutionary system enabling continuous improvement of agents through mutation and selection
- Forges: Isolated environments where architect agents create custom agents to solve specific problems
- LangGraph Compatibility: Integration with the LangGraph ecosystem for agent orchestration
With the latest release (June 2025):
- A shared centralized ecosystem: Use Ebiose's cloud to kickstart a forge cycle with curated agents from our shared ecosystem. The top-performing agents are automatically promoted and reintegrated, making the ecosystem stronger with every cycle. 👉 Access the Ebiose cloud now.
- LiteLLM support: Ebiose now integrates with LiteLLM to simplify the management of your own LLMs.
- Proof of concept: Don't expect complex or production-ready agents
- Initial architect agent to be improved: The first implemented architect agent is still simple
- Early stage: Be prepared to work through initial issues and contribute to improvements! 😇
First, clone the repository:
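For example, assuming the repository is hosted on GitHub under the ebiose-ai organization (adjust the URL if yours differs):

```bash
git clone https://github.com/ebiose-ai/ebiose.git
cd ebiose
```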
Initialize the project by running the following command:
This command will perform the following actions:
- Copy the model_endpoints_template.yml file to model_endpoints.yml if the file doesn't exist, and instruct you to fill it with your API keys.
- Copy the .env.example file to .env if the file doesn't exist.
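If you prefer to set these files up by hand, the equivalent shell commands are:

```bash
cp model_endpoints_template.yml model_endpoints.yml  # then fill in your API keys
cp .env.example .env
```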
There are two ways to start running Ebiose:
- The most straightforward way is to use Docker: go to section 🐳 With Docker. 🚧 Docker support for the new release is currently untested; see Issue #26 for details.
- If you are not yet comfortable with Ebiose and want to understand the basics step by step, install the project dependencies and work through the quickstart.ipynb Jupyter notebook; follow the steps in 💻 Locally.
Ebiose uses uv as a packaging and dependency manager. See Astral's uv documentation to install it.
Once uv is installed, use it to install the project dependencies. In your project directory, run the following command to install all required dependencies:
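```bash
uv sync
```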
By default, Ebiose supports OpenAI models, but other major providers can also be used. Refer to 🤖 Model APIs support.
For more detailed instructions or troubleshooting tips, refer to the official uv documentation.
💡 If you don't want to use uv, you can still run pip install -r requirements.txt.
💡 Pro Tip: You may need to add the root of the repository to your PYTHONPATH environment variable. Alternatively, use a .env file to do so.
The Jupyter notebook quickstart.ipynb is the easiest way to understand the basics and start experimenting with Ebiose. This notebook lets you try out architect agents and forges on your very own challenges. 🤓
To go further, the examples/ directory features a complete forge example designed to optimize agents that solve math problems. Check out examples/math_forge/math_forge.py for the implementation of the MathLangGraphForge forge.
For demonstration purposes, the run.py script is configured to run a forge cycle with only two agents per generation, using a tiny budget of $0.02. The cycle should take 1 to 2 minutes to consume the budget with the default model endpoint, gpt-4o-mini. Each generated agent is evaluated on 5 math problems from the GSM8K test dataset.
To run a cycle of the Math forge, execute the following command in your project directory:
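Assuming the script sits next to the forge implementation at examples/math_forge/run.py (the exact path may differ):

```bash
uv run python examples/math_forge/run.py
```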
Kick off your journey by implementing your own forge with the accompanying compute_fitness method! 🎉
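As a starting point, here is a minimal, hypothetical sketch of a custom forge. Everything except the compute_fitness method name is an assumption; model a real implementation on examples/math_forge/math_forge.py.

```python
# Hypothetical sketch: the forge base class and the agent API are assumptions;
# see examples/math_forge/math_forge.py for the real interface.

class MyForge:  # in practice, subclass the same base class as MathLangGraphForge
    """A forge that evolves agents for a custom problem."""

    # A handful of (problem, expected_answer) pairs held out for scoring.
    evaluation_set = [
        ("What is 2 + 2?", "4"),
        ("What is 3 * 7?", "21"),
    ]

    async def compute_fitness(self, agent) -> float:
        """Score a candidate agent; higher means fitter."""
        correct = 0
        for problem, expected in self.evaluation_set:
            answer = await agent.run(problem)  # hypothetical agent call
            correct += int(str(answer).strip() == expected)
        return correct / len(self.evaluation_set)
```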
As of today, Ebiose uses LangChain/LangGraph to implement agents, and we have made using different LLM providers and ML models as easy as possible.
Since June 2025, Ebiose has been integrated with LiteLLM and now offers its own cloud — making model management even easier.
The fastest and easiest way to run your forge in just a few steps, with $10 in free credits.
Sign up at Ebiose Cloud.
Generate your Ebiose API key, add it to your model_endpoints.yml file, and specify the model to use by default. For example:
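A hypothetical sketch of the relevant entries; the exact field names are defined in model_endpoints_template.yml:

```yaml
# Field names are assumptions; copy the real structure from model_endpoints_template.yml.
ebiose:
  api_key: "eb-..."                  # your Ebiose API key
default_model: "azure/gpt-4o-mini"
```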
🚧 As of June 2025, the Ebiose web app only allows you to create an API key with $10 in free credits to experiment with running your own forges. More features coming soon.
🚨 To run a forge cycle with Ebiose cloud, be sure to set it up using the dedicated CloudForgeCycleConfig class.
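For illustration, a hypothetical configuration sketch; the import path and constructor arguments are assumptions, so check the CloudForgeCycleConfig definition in the source for the real API (its local counterpart, LocalForgeCycleConfig, is used the same way):

```python
# Hypothetical sketch: the import path and argument names are assumptions.
from ebiose.core.forge_cycle import CloudForgeCycleConfig

cycle_config = CloudForgeCycleConfig(
    n_agents_in_population=2,  # agents per generation, as in the math forge demo
    budget=0.02,               # total cycle budget in dollars
)
```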
Ebiose Cloud currently supports the following models:
- azure/gpt-4o-mini
- azure/gpt-4.1-mini
- azure/gpt-4.1-nano
- azure/gpt-4o
More models to come. Feel free to ask.
Ebiose integrates with LiteLLM, either through the cloud or a self-hosted proxy.
Refer to the LiteLLM documentation to get started and generate your LiteLLM API key.
Once you have your key, update the model_endpoints.yml file and define your LiteLLM endpoints using the appropriate model naming format. For example:
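A hypothetical sketch; field names may differ from model_endpoints_template.yml, and the endpoint names must match the models configured on your LiteLLM deployment:

```yaml
# Field names are assumptions; endpoint names follow LiteLLM's <provider>/<model> convention.
litellm:
  api_base: "https://your-litellm-proxy.example.com"  # cloud or self-hosted proxy URL
  api_key: "sk-..."                                   # your LiteLLM API key
endpoints:
  - name: "azure/gpt-4o-mini"
  - name: "azure/gpt-4.1-mini"
```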
🚨 To run a forge cycle without Ebiose cloud, be sure to set it up using the dedicated LocalForgeCycleConfig class.
🚨 The "local" mode for running forge cycles has not been fully tested. Use with caution and report any issues. See Issue #29 for details.
You may also use your own credentials without going through LiteLLM.
To do so, define the model endpoints you want to use in the model_endpoints.yml file located at the root of the project.
Fill in your secret credentials using the examples below.
For other providers not listed here, refer to LangChain's documentation and adapt the LangGraphLLMApi class as needed.
Issues and pull requests are welcome!
🚨 To run a forge cycle without Ebiose cloud, be sure to set it up using the dedicated LocalForgeCycleConfig class.
🚨 The "local" mode for running forge cycles has not been fully tested. Use with caution and report any issues. See Issue #29 for details.
To use OpenAI LLMs, fill the model_endpoints.yml file at the root of the project, with, for example:
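A hypothetical entry; the exact schema lives in model_endpoints_template.yml:

```yaml
endpoints:
  - name: "gpt-4o-mini"    # field names are assumptions
    provider: "openai"
    api_key: "sk-..."      # your OpenAI API key
```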
To use OpenAI LLMs on Azure, fill the model_endpoints.yml file at the root of the project, with, for example:
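Another hypothetical entry, this time for Azure OpenAI; azure_endpoint and api_version mirror the parameters LangChain's AzureChatOpenAI expects:

```yaml
endpoints:
  - name: "azure/gpt-4o-mini"    # field names are assumptions
    provider: "azure_openai"
    api_key: "..."               # your Azure OpenAI key
    azure_endpoint: "https://<resource>.openai.azure.com/"
    api_version: "2024-02-01"
```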
To use other LLMs hosted on Azure fill the model_endpoints.yml file at the root of the project, with, for example:
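A hypothetical entry for a non-OpenAI model served from Azure; the endpoint URL shape depends on your deployment:

```yaml
endpoints:
  - name: "mistral-large"    # field names are assumptions
    provider: "azure_ml"
    api_key: "..."           # key for your Azure-hosted endpoint
    endpoint_url: "https://<deployment>.<region>.inference.ml.azure.com"
```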
To use Anthropic LLMs, fill the model_endpoints.yml file at the root of the project, with, for example:
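A hypothetical Anthropic entry:

```yaml
endpoints:
  - name: "claude-3-5-sonnet-latest"   # field names are assumptions
    provider: "anthropic"
    api_key: "sk-ant-..."              # your Anthropic API key
```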
🚨 Don't forget to install LangChain's Anthropic library by running uv sync --extra anthropic or pip install -U langchain-anthropic.
To use HuggingFace LLMs, fill the model_endpoints.yml file at the root of the project, with, for example:
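A hypothetical Hugging Face entry:

```yaml
endpoints:
  - name: "mistralai/Mistral-7B-Instruct-v0.2"   # field names are assumptions
    provider: "huggingface"
    api_key: "hf_..."                            # your Hugging Face access token
```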
🚨 Don't forget to install LangChain's Hugging Face library by running uv sync --extra huggingface or pip install -U langchain-huggingface, then log in with the following:
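```bash
huggingface-cli login
```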
To use OpenRouter LLMs, fill the model_endpoints.yml file at the root of the project, with, for example:
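A hypothetical OpenRouter entry; the base_url is OpenRouter's OpenAI-compatible endpoint:

```yaml
endpoints:
  - name: "meta-llama/llama-3-70b-instruct"    # field names are assumptions
    provider: "openrouter"
    api_key: "sk-or-..."                       # your OpenRouter API key
    base_url: "https://openrouter.ai/api/v1"
```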
OpenRouter relies on the openai library, which is installed by default.
To use Google LLMs, fill the model_endpoints.yml file at the root of the project, with, for example:
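A hypothetical Google entry:

```yaml
endpoints:
  - name: "gemini-1.5-flash"   # field names are assumptions
    provider: "google_genai"
    api_key: "..."             # your Google AI Studio API key
```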
🚨 Don't forget to install LangChain's Google GenAI library by running uv sync --extra google or pip install langchain-google-genai.
To use Ollama LLMs, fill the model_endpoints.yml file at the root of the project, with, for example:
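A hypothetical Ollama entry; no API key is needed for a local server:

```yaml
endpoints:
  - name: "llama3"                       # field names are assumptions
    provider: "ollama"
    base_url: "http://localhost:11434"   # Ollama's default local address
```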
🚨 Don't forget to install LangChain's Ollama library by running uv sync --extra ollama or pip install langchain-ollama.
Again, we want to be compatible with every provider you are used to, so feel free to open an issue and contribute to expanding our LLM coverage. First, check whether LangChain supports your preferred provider here.
🚨 Langfuse Version Warning: Ebiose currently uses Langfuse version 2.x.x. Updating to Langfuse 3.x.x is planned but not yet implemented due to compatibility issues. See Issue #28 for details.
Ebiose uses Langfuse's @observe decorator to capture nested agent traces. Langfuse can easily be self-hosted; see Langfuse's documentation to do so. Once the Langfuse server is running, set your Langfuse credentials in your .env file by adding:
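These are Langfuse's standard environment variable names; adjust the host to your deployment:

```
LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_SECRET_KEY=sk-lf-...
LANGFUSE_HOST=http://localhost:3000
```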
Ebiose uses Loguru for logging. It works out of the box, and you can easily adapt the logs to your needs.
Here are some common issues users might face and their solutions:
Issue: the uv command fails or is not found. Solution: Ensure uv is installed correctly by following the official installation guide. Alternatively, use pip:
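```bash
pip install -r requirements.txt
```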
Issue: conflicting or broken dependencies. Solution: Use a virtual environment to isolate dependencies:
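```bash
uv venv && source .venv/bin/activate
# or, with the standard library:
python -m venv .venv && source .venv/bin/activate
```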
Issue: authentication errors when calling models. Solution: Ensure your API keys are set in the model_endpoints.yml file, for example:
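A hypothetical entry (field names as in the examples above):

```yaml
endpoints:
  - name: "gpt-4o-mini"
    provider: "openai"
    api_key: "sk-..."   # must not be left empty
```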
Issue: the quickstart notebook won't run or can't find its kernel. Solution: Ensure Jupyter is installed and the kernel is set correctly:
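```bash
uv pip install jupyter ipykernel
python -m ipykernel install --user --name ebiose
```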
Issue: ModuleNotFoundError when importing Ebiose. Solution: Set the PYTHONPATH variable in your .env file as shown in .env.example. Alternatively, add the project root to your PYTHONPATH:
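```bash
export PYTHONPATH="${PYTHONPATH}:/path/to/ebiose"
```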
We are committed to fostering a welcoming and inclusive community. Please read our Code of Conduct before participating.
We welcome contributions from the community! Here's how you can help:
- Report Bugs: Open an issue on GitHub with detailed steps to reproduce the problem.
- Suggest Features: Share your ideas for new features or improvements.
- Submit Pull Requests: Fork the repository, make your changes, and submit a PR. Please follow our contribution guidelines.
For more details, check out our Contribution Guide.
Ebiose is licensed under the MIT License. This means you're free to use, modify, and distribute the code, as long as you include the original license.
If you have any questions or need help, feel free to:
- Open an issue on GitHub.
- Join our Discord server.
- Reach out to the maintainers directly.
All feedback is highly appreciated. Thanks! 🎊