Lattice helps you build AI agents in Go with clean abstractions for LLMs, tool calling, retrieval-augmented memory, and multi-agent coordination. Focus on your domain logic while Lattice handles the orchestration plumbing.
Building production AI agents requires more than just LLM calls. You need:
- Pluggable LLM providers that swap without rewriting logic
- Tool calling that works across different model APIs
- Memory systems that remember context across conversations
- Multi-agent coordination for complex workflows
- Testing infrastructure that doesn't hit external APIs
Lattice provides all of this with idiomatic Go interfaces and minimal dependencies.
- 🧩 Modular Architecture – Compose agents from reusable modules with declarative configuration
- 🤖 Multi-Agent Support – Coordinate specialist agents through a shared catalog and delegation system
- 🔧 Rich Tooling – Implement the Tool interface once, use everywhere automatically
- 🧠 Smart Memory – RAG-powered memory with importance scoring, MMR retrieval, and automatic pruning
- 🔌 Model Agnostic – Adapters for Gemini, Anthropic, Ollama, or bring your own
- 📡 UTCP Ready – First-class Universal Tool Calling Protocol support
| Variable | Description | Required |
|----------|-------------|----------|
| GOOGLE_API_KEY | Gemini API credentials | For Gemini models |
| GEMINI_API_KEY | Alternative to GOOGLE_API_KEY | For Gemini models |
| DATABASE_URL | PostgreSQL connection string | For persistent memory |
| ADK_EMBED_PROVIDER | Embedding provider override | No (defaults to Gemini) |
Lattice includes a sophisticated memory system with retrieval-augmented generation (RAG):
Features:
- Importance Scoring – Automatically weights memories by relevance
- MMR Retrieval – Maximal Marginal Relevance for diverse results
- Auto-Pruning – Removes stale or low-value memories
- Multiple Backends – In-memory, PostgreSQL + pgvector, MongoDB, Neo4j, or Qdrant
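MMR retrieval trades relevance against redundancy: each pick rewards similarity to the query and penalizes similarity to results already chosen. Lattice's internal implementation isn't reproduced in this README; the sketch below is an illustrative, self-contained version of the scoring loop.

```go
package main

import (
	"fmt"
	"math"
)

// cosine returns the cosine similarity of two equal-length vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	if na == 0 || nb == 0 {
		return 0
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// mmrSelect picks k candidate indices balancing relevance to the query
// against diversity from already-selected results:
//   score = λ·sim(query, c) − (1−λ)·max sim(c, selected)
func mmrSelect(query []float64, cands [][]float64, lambda float64, k int) []int {
	var selected []int
	used := make(map[int]bool)
	for len(selected) < k && len(selected) < len(cands) {
		best, bestScore := -1, math.Inf(-1)
		for i, c := range cands {
			if used[i] {
				continue
			}
			maxSim := 0.0
			for _, j := range selected {
				if s := cosine(c, cands[j]); s > maxSim {
					maxSim = s
				}
			}
			if score := lambda*cosine(query, c) - (1-lambda)*maxSim; score > bestScore {
				best, bestScore = i, score
			}
		}
		selected = append(selected, best)
		used[best] = true
	}
	return selected
}

func main() {
	query := []float64{1, 0}
	cands := [][]float64{{1, 0}, {0.99, 0.1}, {0, 1}}
	// With λ=0.3 diversity dominates, so the near-duplicate candidate 1
	// loses the second slot to the orthogonal candidate 2.
	fmt.Println(mmrSelect(query, cands, 0.3, 2)) // → [0 2]
}
```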
Create custom tools by implementing a simple interface:
Register tools with the module system and they're automatically available to all agents.
Use Shared Spaces to coordinate multiple agents through shared memory.
Perfect for:
- Team-based workflows where agents need shared context
- Complex tasks requiring specialist coordination
- Projects with explicit access control requirements
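As an illustrative sketch of the idea (not Lattice's actual Shared Spaces API), shared context can be modeled as a concurrency-safe store that several agents read and write:

```go
package main

import (
	"fmt"
	"sync"
)

// SharedSpace models shared agent memory as a concurrency-safe
// key/value store. This is a sketch; the real API may differ.
type SharedSpace struct {
	mu   sync.RWMutex
	data map[string]string
}

func NewSharedSpace() *SharedSpace {
	return &SharedSpace{data: make(map[string]string)}
}

func (s *SharedSpace) Put(key, val string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.data[key] = val
}

func (s *SharedSpace) Get(key string) (string, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.data[key]
	return v, ok
}

func main() {
	space := NewSharedSpace()
	// A "researcher" agent records a finding; a "writer" agent reads it.
	space.Put("finding", "pgvector requires the vector extension")
	if v, ok := space.Get("finding"); ok {
		fmt.Println(v)
	}
}
```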
Token-Oriented Object Notation (TOON) is integrated into Lattice to dramatically reduce token consumption when passing structured data to and from LLMs. This is especially critical for AI agent workflows where context windows are precious and API costs scale with token usage.
Traditional JSON is verbose and wastes tokens on repetitive syntax. Consider passing agent memory or tool responses:
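For illustration, here is a hypothetical tool response in plain JSON (field names and values are invented for this example):

```json
{
  "users": [
    { "id": 1, "name": "Alice", "role": "admin", "email": "alice@example.com" },
    { "id": 2, "name": "Bob", "role": "editor", "email": "bob@example.com" },
    { "id": 3, "name": "Carol", "role": "viewer", "email": "carol@example.com" }
  ]
}
```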
Token count: ~180 tokens
TOON compresses the same data by eliminating redundancy:
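An illustrative encoding using TOON's tabular array form (field names invented for this example; exact token counts depend on the tokenizer):

```
users[3]{id,name,role,email}:
  1,Alice,admin,alice@example.com
  2,Bob,editor,bob@example.com
  3,Carol,viewer,carol@example.com
```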
Token count: ~85 tokens
Savings: ~53% fewer tokens
- Larger Context Windows – Fit more memories, tool results, and conversation history into the same context limit
- Lower API Costs – Reduce your LLM API bills by roughly 50% on structured data
- Faster Processing – Fewer tokens mean faster inference times and lower latency
- Better Memory Systems – Store and retrieve more historical context without hitting token limits
- Multi-Agent Communication – Pass more information between coordinating agents efficiently
TOON is particularly effective for:
- Agent Memory Banks – Retrieving and formatting conversation history
- Tool Responses – Returning structured data from database queries or API calls
- Multi-Agent Coordination – Sharing state between specialist agents
- Batch Operations – Processing multiple similar records (users, tasks, logs)
- RAG Contexts – Injecting retrieved documents with metadata
When your agent queries its memory system, TOON can encode dozens of memories in the space where JSON would fit only a handful:
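As an illustrative sketch (the field names are assumptions, not Lattice's actual memory schema), a retrieved batch might be rendered as:

```
memories[3]{id,importance,text}:
  m1,0.91,User prefers PostgreSQL for persistent memory
  m2,0.84,Project targets Go 1.22 or newer
  m3,0.77,User asked about pgvector setup earlier
```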
Despite its compactness, TOON remains readable for debugging and development. The format explicitly declares its schema, making it self-documenting:
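For example (an illustrative snippet, not output from a specific Lattice run):

```
users[2]{id,name,role}:
  1,Alice,admin
  2,Bob,editor
```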
You can immediately see: 2 users, with fields id/name/role, followed by their values.
Lattice automatically uses TOON for internal data serialization. To use it in your custom tools or memory adapters:
For more details, see the TOON specification.
Bottom Line: TOON helps your agents do more with less, turning token budgets into a competitive advantage rather than a constraint.
The Tool Orchestrator is an intelligent decision engine that lets the LLM choose when and how to call UTCP tools. It analyzes user input, evaluates available tools, and returns a structured JSON plan describing the next action.
This brings Lattice to the same capability tier as OpenAI’s tool choice, but with fully pluggable UTCP backends and Go-native execution.
1. Interprets the user’s request
2. Loads and renders all available UTCP tools
3. Allows the LLM to reason using TOON-Go
4. Produces a strict JSON decision object:

   ```json
   {
     "use_tool": true,
     "tool_name": "search.files",
     "arguments": { "query": "config" },
     "reason": "User asked to look for configuration files"
   }
   ```

5. Executes the chosen tool deterministically
1. Collect Tool Definitions

   ```go
   rendered := a.renderUtcpToolsForPrompt()
   ```

2. Build the Orchestration Prompt

   ```go
   choicePrompt := fmt.Sprintf(`You are a UTCP tool selection engine.

   A user asked: %q

   You have access to these UTCP tools:
   %s

   You can also discover tools dynamically using:
   search_tools("<query>", <limit>)

   Return ONLY JSON:
   { "use_tool": ..., "tool_name": "...", "arguments": { }, "reason": "..." }
   `, userInput, rendered)
   ```

3. LLM Makes a Decision (via TOON)
   - Coordinator executes reasoning
   - Assistant returns the final JSON only

4. Agent Executes the Tool
   - CallTool
   - SearchTools
   - CallToolStream

5. The result becomes the agent’s final response
The orchestrator uses TOON as its structured reasoning layer:
- Coordinator → analyzes tool options
- Assistant → returns the strict JSON
- No hallucinated formatting
- Easy to debug via TOON traces
- Session memory stores the entire reasoning trajectory
This yields stable, deterministic tool choice behavior.
```
find all files containing "db connection" in the workspace
```

The `search.files` UTCP tool is invoked, and its direct output is returned to the user.
Any UTCP tool becomes instantly available to the orchestrator:
The orchestrator will automatically include it in the prompt and allow dynamic discovery via search_tools().
UTCP tool calls can run inside the Go DSL:
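The DSL surface itself isn't reproduced in this section, so the following is a hypothetical sketch of the chaining idea only, with every type and function name invented; Lattice's real Go DSL may look quite different.

```go
package main

import "fmt"

// Step is a hypothetical chain step: a named transformation over a
// string payload. Illustrative only; not Lattice's actual DSL types.
type Step struct {
	Name string
	Run  func(input string) string
}

// Chain runs steps in order, feeding each step's output to the next.
func Chain(input string, steps ...Step) string {
	for _, s := range steps {
		input = s.Run(input)
	}
	return input
}

func main() {
	callTool := Step{Name: "utcp.call", Run: func(in string) string {
		// Stand-in for a UTCP tool invocation such as search.files.
		return "results for " + in
	}}
	summarize := Step{Name: "summarize", Run: func(in string) string {
		return "summary: " + in
	}}
	fmt.Println(Chain("db connection", callTool, summarize))
	// → summary: results for db connection
}
```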
The orchestrator can:
- call a tool
- run a chain step
- discover tools dynamically
- combine reasoning + execution
This makes Lattice one of the first Go frameworks with multi-step, LLM-driven tool routing.
We follow standard Go conventions:
- Use gofmt for formatting
- Follow Effective Go guidelines
- Add tests for new features
- Update documentation when adding capabilities
New LLM Provider:
- Implement the models.LLM interface in pkg/models/
- Add provider-specific configuration
- Update documentation and examples
New Tool:
- Implement agent.Tool interface in pkg/tools/
- Register with the tool module system
- Add tests and usage examples
New Memory Backend:
- Implement memory.VectorStore interface
- Add migration scripts if needed
- Update configuration documentation
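The exact `memory.VectorStore` method set isn't reproduced in this README; the sketch below assumes a minimal Add/Search shape and implements it with a naive in-memory backend to show the pattern.

```go
package main

import (
	"fmt"
	"sort"
)

// VectorStore sketches a plausible shape for memory.VectorStore;
// the real interface may differ.
type VectorStore interface {
	Add(id string, vec []float64)
	Search(query []float64, k int) []string
}

// memStore is a naive in-memory backend using dot-product scoring.
type memStore struct {
	ids  []string
	vecs [][]float64
}

func (m *memStore) Add(id string, vec []float64) {
	m.ids = append(m.ids, id)
	m.vecs = append(m.vecs, vec)
}

func (m *memStore) Search(query []float64, k int) []string {
	type scored struct {
		id    string
		score float64
	}
	out := make([]scored, len(m.ids))
	for i, v := range m.vecs {
		var dot float64
		for j := range query {
			dot += query[j] * v[j]
		}
		out[i] = scored{m.ids[i], dot}
	}
	sort.Slice(out, func(a, b int) bool { return out[a].score > out[b].score })
	if k > len(out) {
		k = len(out)
	}
	ids := make([]string, k)
	for i := range ids {
		ids[i] = out[i].id
	}
	return ids
}

func main() {
	var s VectorStore = &memStore{}
	s.Add("a", []float64{1, 0})
	s.Add("b", []float64{0, 1})
	fmt.Println(s.Search([]float64{0.9, 0.1}, 1)) // → [a]
}
```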
- Go 1.22+ (1.25 recommended)
- PostgreSQL 15+ with pgvector extension (optional, for persistent memory)
- API Keys for your chosen LLM provider
For persistent memory with vector search:
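A typical setup, assuming a local PostgreSQL instance (the connection string below is an example, not a required value):

```shell
# Point Lattice at your database, then enable pgvector once per database.
export DATABASE_URL="postgres://user:pass@localhost:5432/agents?sslmode=disable"
psql "$DATABASE_URL" -c 'CREATE EXTENSION IF NOT EXISTS vector;'
```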
The memory module handles schema migrations automatically.
Missing pgvector extension
Solution: Run `CREATE EXTENSION vector;` in your PostgreSQL database.
API key errors
Solution: Verify your API key is correctly set in the environment where you run the application.
Tool not found
Solution: Ensure tool names are unique and properly registered in your tool catalog.
- Check existing GitHub Issues
- Review the examples for common patterns
- Join discussions in GitHub Discussions
We welcome contributions! Here's how to get started:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes with tests
- Update documentation
- Submit a pull request
Please ensure:
- Tests pass (`go test ./...`)
- Code is formatted (`gofmt`)
- Documentation is updated
- Commit messages are clear
This project is licensed under the Apache 2.0 License.
- Inspired by Google's Agent Development Kit (Python)
Star us on GitHub if you find Lattice useful! ⭐