AutoAgents – a Rust-Based Multi-Agent Framework for LLM-Powered Intelligence
AutoAgents is a cutting-edge multi-agent framework built in Rust that enables the creation of intelligent, autonomous
agents powered by Large Language Models (LLMs) and Ractor. Designed for performance, safety, and scalability,
AutoAgents provides a robust foundation for building complex AI systems that can reason, act, and collaborate. With
AutoAgents you can create cloud-native agents, edge-native agents, and hybrid models. It is built on a modular
architecture with swappable components: memory layers, executors, and other pieces can be exchanged with little rework.
With native WASM compilation support, you can deploy agent orchestration directly to the web browser.
- Multiple Executors: ReAct (Reasoning + Acting) and Basic executors with streaming support
- Structured Outputs: Type-safe JSON schema validation and custom output types
- WASM Runtime for Tool Execution: Sandboxed tool execution
- Provider Agnostic: Support for OpenAI, Anthropic, Ollama, and local models
- Multi-Platform: Native Rust, WASM for browsers, and server deployments
- Multi-Agent: Type-safe pub/sub communication and agent orchestration
- Native: High-performance server and desktop applications
- Browser: Run agents directly in web browsers via WebAssembly (see the build sketch after this list)
- Edge: Local inference with ONNX models
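For the browser target, the repository's WASM examples cover the actual wiring; the toolchain steps themselves are plain Rust tooling. A rough sketch (your crate name and features may differ):

```bash
# Add the WebAssembly target once
rustup target add wasm32-unknown-unknown

# Build the agent crate for the browser target
cargo build --target wasm32-unknown-unknown --release
```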
🌐 Supported LLM Providers
AutoAgents supports a wide range of LLM providers, allowing you to choose the best fit for your use case:
| Provider | Status |
|--------------|--------|
| OpenAI | ✅ |
| OpenRouter | ✅ |
| Anthropic | ✅ |
| DeepSeek | ✅ |
| xAI | ✅ |
| Phind | ✅ |
| Groq | ✅ |
| Google | ✅ |
| Azure OpenAI | ✅ |

| Provider | Status |
|------------|----------------------|
| Mistral-rs | ⚠️ Under Development |
| Burn | ⚠️ Experimental |
| Onnx | ⚠️ Experimental |
| Ollama | ✅ |
Provider support is actively expanding based on community needs.
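Because `LLMBuilder` is generic over the backend type, switching providers is mostly a one-type change. A minimal sketch, assuming an `Anthropic` backend exists at a module path analogous to the OpenAI one used in the quick-start below (the exact import path and model id here are assumptions; check the crate docs):

```rust
use std::sync::Arc;
use autoagents::llm::builder::LLMBuilder;
// Hypothetical path, mirroring autoagents::llm::backends::openai::OpenAI;
// verify the actual module name in the autoagents-llm docs.
use autoagents::llm::backends::anthropic::Anthropic;

fn build_claude() -> Arc<Anthropic> {
    LLMBuilder::<Anthropic>::new()
        .api_key(std::env::var("ANTHROPIC_API_KEY").unwrap_or_default())
        .model("claude-3-5-sonnet-latest") // illustrative model id
        .max_tokens(512)
        .temperature(0.2)
        .build()
        .expect("Failed to build LLM")
}
```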
For contributing to AutoAgents or building from source:
- Rust (latest stable recommended)
- Cargo package manager
- LeftHook for Git hooks management
macOS (using Homebrew):
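```bash
brew install lefthook
```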
Linux/Windows:
```bash
# Using npm
npm install -g lefthook
```
```bash
# Clone the repository
git clone https://github.com/liquidos-ai/AutoAgents.git
cd AutoAgents

# Install Git hooks using lefthook
lefthook install

# Build the project
cargo build --release

# Run tests to verify setup
cargo test --all-features
```
The lefthook configuration will automatically:

- Format code with `cargo fmt`
- Run linting with `cargo clippy`
- Execute tests before commits
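For reference, a minimal `lefthook.yml` matching those hooks might look like this; the repository's actual configuration may differ:

```yaml
pre-commit:
  commands:
    fmt:
      run: cargo fmt --check
    clippy:
      run: cargo clippy -- -D warnings
    test:
      run: cargo test --all-features --workspace --exclude autoagents-burn
```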
The quick-start example below wires a custom addition tool into a ReAct agent with a structured output type:

```rust
use async_trait::async_trait; // required by #[async_trait] below
use autoagents::core::agent::memory::SlidingWindowMemory;
use autoagents::core::agent::prebuilt::executor::{ReActAgent, ReActAgentOutput};
use autoagents::core::agent::task::Task;
use autoagents::core::agent::{AgentBuilder, AgentDeriveT, AgentOutputT, DirectAgent};
use autoagents::core::error::Error;
use autoagents::core::tool::{ToolCallError, ToolInputT, ToolRuntime, ToolT};
use autoagents::llm::LLMProvider;
use autoagents::llm::backends::openai::OpenAI;
use autoagents::llm::builder::LLMBuilder;
use autoagents_derive::{agent, tool, AgentHooks, AgentOutput, ToolInput};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use std::sync::Arc;

#[derive(Serialize, Deserialize, ToolInput, Debug)]
pub struct AdditionArgs {
    #[input(description = "Left Operand for addition")]
    left: i64,
    #[input(description = "Right Operand for addition")]
    right: i64,
}

#[tool(
    name = "Addition",
    description = "Use this tool to Add two numbers",
    input = AdditionArgs,
)]
struct Addition {}

#[async_trait]
impl ToolRuntime for Addition {
    async fn execute(&self, args: Value) -> Result<Value, ToolCallError> {
        println!("execute tool: {:?}", args);
        let typed_args: AdditionArgs = serde_json::from_value(args)?;
        let result = typed_args.left + typed_args.right;
        Ok(result.into())
    }
}

/// Math agent output with Value and Explanation
#[derive(Debug, Serialize, Deserialize, AgentOutput)]
pub struct MathAgentOutput {
    #[output(description = "The addition result")]
    value: i64,
    #[output(description = "Explanation of the logic")]
    explanation: String,
    #[output(description = "If user asks other than math questions, use this to answer them.")]
    generic: Option<String>,
}

#[agent(
    name = "math_agent",
    description = "You are a Math agent",
    tools = [Addition],
    output = MathAgentOutput,
)]
#[derive(Default, Clone, AgentHooks)]
pub struct MathAgent {}

impl From<ReActAgentOutput> for MathAgentOutput {
    fn from(output: ReActAgentOutput) -> Self {
        let resp = output.response;
        if output.done && !resp.trim().is_empty() {
            // Try to parse as structured JSON first
            if let Ok(value) = serde_json::from_str::<MathAgentOutput>(&resp) {
                return value;
            }
        }
        // For streaming chunks or unparseable content, create a default response
        MathAgentOutput {
            value: 0,
            explanation: resp,
            generic: None,
        }
    }
}

pub async fn simple_agent(llm: Arc<dyn LLMProvider>) -> Result<(), Error> {
    let sliding_window_memory = Box::new(SlidingWindowMemory::new(10));

    let agent_handle = AgentBuilder::<_, DirectAgent>::new(ReActAgent::new(MathAgent {}))
        .llm(llm)
        .memory(sliding_window_memory)
        .build()
        .await?;

    println!("Running simple_agent with direct run method");
    let result = agent_handle.agent.run(Task::new("What is 1 + 1?")).await?;
    println!("Result: {:?}", result);
    Ok(())
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    // Check if API key is set
    let api_key = std::env::var("OPENAI_API_KEY").unwrap_or("".into());

    // Initialize and configure the LLM client
    let llm: Arc<OpenAI> = LLMBuilder::<OpenAI>::new()
        .api_key(api_key) // Set the API key
        .model("gpt-4o") // Use the GPT-4o model
        .max_tokens(512) // Limit response length
        .temperature(0.2) // Control response randomness (0.0-1.0)
        .build()
        .expect("Failed to build LLM");

    let _ = simple_agent(llm).await?;
    Ok(())
}
```
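To try it locally (assuming the snippet above is the `main.rs` of a binary crate with the required dependencies):

```bash
export OPENAI_API_KEY=sk-...   # placeholder key
cargo run --release
```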
Command-line interface for running and serving AutoAgents workflows from YAML.
```bash
cargo build --package autoagents-cli --release
```

The binary will be available at `target/release/autoagents`.
Execute a workflow from a YAML file:
```yaml
kind: Direct
name: ResearchAgent
stream: false
description: "A research agent designed to search, retrieve, and summarize information from the web."
workflow:
  agent:
    name: ResearchAgent
    description: "A deep research agent capable of gathering accurate information, summarizing sources, and providing references."
    instructions: |
      You are a research expert. Your task is to find accurate and up-to-date information related to the user's query.
      1. Search for relevant sources on the web.
      2. Extract key insights and summarize them concisely.
      3. Provide references and links to original sources.
      4. Make sure to cross-verify facts and avoid unverified information.
      5. Present the final answer in a structured and clear manner.
    executor: ReAct
    memory:
      kind: sliding_window
      parameters:
        window_size: 100
    model:
      kind: llm
      backend:
        kind: Cloud
        provider: OpenAI
        model_name: gpt-4o-mini
        parameters:
          temperature: 0.2
          max_tokens: 1500
    tools:
      - name: brave_search
    output:
      type: text
  output:
    type: text
```
```bash
autoagents run --workflow workflow.yaml --input "What is Rust?"
```
Serve Workflows over HTTP
Start an HTTP server to serve workflows via REST API:
- `--name <NAME>` - Custom name for the workflow (defaults to filename)
- `--host <HOST>` - Host to bind to (default: 127.0.0.1)
- `--port <PORT>` - Port to bind to (default: 8080)
```bash
# Run a direct workflow
autoagents run -w workflow.yaml -i "Tell me about AI"

# Serve a workflow on a custom port
autoagents serve -w workflow.yaml -p 9000 --name research

# Serve from a directory
autoagents serve --directory ./workflows

# Serve with a custom name
autoagents serve -w workflow.yaml --name my_agent --host 0.0.0.0 --port 3000
```
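Once the server is up, workflows are invoked over REST. The route and payload below are purely illustrative assumptions, not the documented API; check the server's startup output or the autoagents-serve docs for the real endpoint:

```bash
# Hypothetical endpoint and payload shape
curl -X POST http://127.0.0.1:9000/workflows/research \
  -H "Content-Type: application/json" \
  -d '{"input": "What is Rust?"}'
```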
Explore our comprehensive examples to get started quickly:
- Demonstrates various examples such as a simple agent with tools, a very basic agent, an edge agent, chaining, an actor-based model, streaming, and agent hooks.
- Demonstrates how to integrate AutoAgents with the Model Context Protocol (MCP).
- Demonstrates how to integrate AutoAgents with Mistral-rs for local models.
- Demonstrates various design patterns: chaining, planning, routing, parallelization, and reflection.
- Contains examples demonstrating how to use different LLM providers with AutoAgents.
- A simple agent that can run tools in a WASM runtime.
- A sophisticated ReAct-based coding agent with file-manipulation capabilities.
- Compiles the agent runtime into a WASM module and loads it in a browser web app.
AutoAgents is built with a modular architecture:
```
AutoAgents/
├── crates/
│   ├── autoagents/            # Main library entry point
│   ├── autoagents-core/       # Core agent framework
│   ├── autoagents-llm/        # LLM provider implementations
│   ├── autoagents-toolkit/    # Collection of ready-to-use tools
│   ├── autoagents-burn/       # LLM provider implementations using Burn
│   ├── autoagents-mistral-rs/ # LLM provider implementations using Mistral-rs
│   ├── autoagents-onnx/       # Edge runtime implementation using ONNX
│   ├── autoagents-derive/     # Procedural macros
│   ├── autoagents-cli/        # AutoAgents CLI
│   └── autoagents-serve/      # Running and serving YAML-based workflows
└── examples/                  # Example implementations
```
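To consume the framework from your own project, you would typically depend on the top-level crate plus the derive macros. A hypothetical Cargo.toml sketch; the version numbers are placeholders, so pin the latest releases from crates.io:

```toml
[dependencies]
# Placeholder versions -- use the latest published releases.
autoagents = "*"        # main library entry point
autoagents-derive = "*" # #[agent] / #[tool] procedural macros
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
async-trait = "0.1"
```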
- Agent: The fundamental unit of intelligence
- Environment: Manages agent lifecycle and communication
- Memory: Configurable memory systems
- Tools: External capability integration
- Executors: Different reasoning patterns (ReAct, Chain-of-Thought)
For development setup instructions, see the Installation section above.
```bash
# Run all tests
cargo test --all-features

# Run tests with coverage (requires cargo-tarpaulin)
cargo install cargo-tarpaulin
cargo tarpaulin --all-features --out html
```
This project uses LeftHook for Git hooks management. The hooks will automatically:

- Format code with `cargo fmt --check`
- Run linting with `cargo clippy -- -D warnings`
- Execute tests with `cargo test --all-features --workspace --exclude autoagents-burn`