VSM is a tiny, idiomatic Ruby runtime for building agentic systems with a clear spine: Operations, Coordination, Intelligence, Governance, and Identity.
Building agentic systems often leads to tangled callback spaghetti and unclear responsibilities. As you add tools, LLM providers, and coordination logic, the complexity explodes. You end up with:
- Callbacks nested in callbacks with no clear flow
- Tool execution mixed with business logic
- No clear separation between "what the agent does" vs "how it decides" vs "what rules it follows"
- Difficulty testing individual components
- Lock-in to specific LLM providers or frameworks
VSM solves this by providing a composable, testable architecture with named responsibilities (POODR/SOLID style). You get clear separation of concerns from day one, and can start with a single capsule and grow to a swarm—without changing your interface or core loop.
The Viable System Model gives you a proven organizational pattern: every autonomous system needs Operations (doing), Coordination (scheduling), Intelligence (deciding), Governance (rules), and Identity (purpose). VSM makes this concrete in Ruby.
Ruby developers building AI agents who want clean architecture over framework magic. If you've read Sandi Metz's POODR, appreciate small objects with single responsibilities, and want your agent code to be as clean as your Rails models, VSM is for you.
Teams scaling from prototype to production who need to start simple (one tool, one LLM call) but know they'll need multiple tools, streaming, confirmations, and policy enforcement later. VSM's recursive capsule design means your "hello world" agent uses the same architecture as your production swarm.
Developers who want provider independence. VSM doesn't lock you into OpenAI, Anthropic, or any specific provider. Your Intelligence component decides how to plan—whether that's calling an LLM, following a state machine, or using your own logic.
VSM is a Ruby gem that provides:
- Five named systems that every agent needs:
  - Operations — do the work (tools/skills)
  - Coordination — schedule, order, and arbitrate conversations (the "floor")
  - Intelligence — plan/decide (e.g., call an LLM driver, or your own logic)
  - Governance — enforce policy, safety, and budgets
  - Identity — define purpose and invariants
- Capsules — recursive building blocks. Every capsule has the five systems above plus a message bus. Capsules can contain child capsules, and "tools" are just capsules that opt in to a tool interface.
- Async-first architecture — powered by the async gem, VSM runs streaming, I/O, and multiple tool calls concurrently without blocking.
- Clean interfaces — Ports translate external events (CLI, HTTP, MCP) into messages. Tools expose JSON Schema descriptors that work with any LLM provider.
- Built-in observability — an append-only JSONL ledger of all events, ready to feed into a monitoring UI.
Ruby 3.2+ recommended.
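Add the gem to your Gemfile (assuming the published gem name is vsm):

```ruby
# Gemfile
gem "vsm"
```

Then run bundle install.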
Here's a minimal agent with one tool:
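The sketch below is illustrative: it assumes the DSL-style builder and default role classes referenced throughout this README, and the tool descriptor methods are assumptions; exact names may differ in your installed version.

```ruby
require "vsm"

# A tool is just a capsule that opts in to the tool interface.
class EchoTool < VSM::ToolCapsule
  # Descriptor DSL shown here is illustrative; VSM exposes JSON Schema
  # descriptors, but the exact method names may differ.
  tool_name        "echo"
  tool_description "Echo back whatever text it receives"
  tool_schema({ type: "object",
                properties: { text: { type: "string" } },
                required: ["text"] })

  def run(args)
    "you said: #{args["text"]}"
  end
end

agent = VSM::DSL.define(:hello_agent) do
  identity     klass: VSM::Identity, args: { identity: "hello_agent", invariants: [] }
  governance   klass: VSM::Governance
  coordination klass: VSM::Coordination
  intelligence klass: VSM::Intelligence
  operations do
    capsule :echo, klass: EchoTool
  end
end

# Runtime entry point shown here is an assumption; see a generated app for the exact call.
VSM::Runtime.start(agent, ports: [VSM::Ports::ChatTTY.new(capsule: agent)])
```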
Run it:
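```bash
ruby agent.rb   # assuming you saved the sketch above as agent.rb
```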
Scaffold a new VSM app with a ChatTTY interface:
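Assuming the gem installs a vsm executable, the invocation looks like:

```bash
vsm new my_agent --with-llm openai --bundle
```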
Options:
- --with-llm openai|anthropic|gemini — choose LLM provider (default: openai)
- --model <name> — default model
- --git — initialize git and commit
- --bundle — run bundle install
- --path <dir> — target directory (default: ./<name>)
- --force — overwrite an existing non-empty directory
The generated layout mirrors the airb example: an Organism.build that assembles the capsule, a default ChatTTY port, and a sample echo tool ready to extend.
For a real agent with LLM integration:
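The assembly is the same as the quick-start sketch; only the Intelligence role changes (DSL names are illustrative, as above):

```ruby
agent = VSM::DSL.define(:assistant) do
  identity     klass: VSM::Identity, args: { identity: "assistant", invariants: [] }
  governance   klass: VSM::Governance
  coordination klass: VSM::Coordination
  intelligence klass: MyLLMIntelligence, args: { driver: my_driver }  # your planning role
  operations do
    capsule :echo, klass: EchoTool  # plus any real tools you add
  end
end
```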
Your MyLLMIntelligence would:
- Maintain conversation history
- Call your LLM provider with available tools
- Emit :tool_call messages when the LLM wants to use tools
- Stream :assistant_delta tokens as they arrive
- Emit final :assistant message when done
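A rough shape for that role; the handler signature, Message constructor, and driver event names below are assumptions standing in for whatever provider adapter you use:

```ruby
class MyLLMIntelligence
  def initialize(driver:, tools: [])
    @driver  = driver   # your OpenAI/Anthropic/Gemini adapter
    @tools   = tools    # JSON Schema descriptors gathered from Operations
    @history = []
  end

  # Signature and event names are illustrative.
  def handle(message, bus:, **)
    return false unless %i[user tool_result].include?(message.kind)

    @history << message
    @driver.chat(@history, tools: @tools) do |event|
      case event.kind
      when :delta     then bus.emit VSM::Message.new(kind: :assistant_delta, payload: event.text)
      when :tool_call then bus.emit VSM::Message.new(kind: :tool_call,       payload: event.call)
      when :done      then bus.emit VSM::Message.new(kind: :assistant,       payload: event.text)
      end
    end
    true
  end
end
```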
- Features
- Core Concepts
- Tools as Capsules
- Async & Parallelism
- Ports (Interfaces)
- Observability
- Writing an Intelligence
- Testing
- Design Goals
- Roadmap
- FAQ
- API Overview
- License
- Contributing
- Named systems: Operations, Coordination, Intelligence, Governance, Identity
- Capsules: recursive building blocks (a capsule can contain more capsules)
- Async bus: non‑blocking message channel with fan‑out subscribers
- Structured concurrency: streaming + multiple tool calls in parallel
- Tools-as-capsules: opt‑in tool interface + JSON Schema descriptors
- Executors: run tools in the current fiber or a thread pool (Ractor/Subprocess planned)
- Ports: clean ingress/egress adapters for CLI/TUI/HTTP/MCP/etc.
- Observability: append‑only JSONL ledger you can feed into a UI later
- POODR/SOLID: small objects, high cohesion, low coupling
VSM includes a set of read‑only meta tools you can attach to any capsule to inspect its structure and code:
- meta_summarize_self — Summarize the current capsule including roles and tools
- meta_list_tools — List all tools available in the organism (descriptors and paths)
- meta_explain_tool — Show code and context for a specific tool
- meta_explain_role — Explain a role implementation for a capsule, with source snippets
Attach them when building your capsule:
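For example, inside the builder's operations block (the class names below are hypothetical; check the gem for the real attachment helper):

```ruby
agent = VSM::DSL.define(:my_agent) do
  # ... identity, governance, coordination, intelligence ...
  operations do
    # Hypothetical class names for the four read-only meta tools.
    capsule :meta_summarize_self, klass: VSM::Meta::SummarizeSelf
    capsule :meta_list_tools,     klass: VSM::Meta::ListTools
    capsule :meta_explain_tool,   klass: VSM::Meta::ExplainTool
    capsule :meta_explain_role,   klass: VSM::Meta::ExplainRole
  end
end
```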
Example calls:
- meta_summarize_self {} → high‑level snapshot and counts
- meta_list_tools {} → array of tools with descriptors
- meta_explain_tool { "tool": "some_tool" } → code snippet + descriptor
- meta_explain_role { "role": "coordination" } → role class, constructor args, source locations, and code blocks
A container with five named systems and a message bus:
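Conceptually (the constructor below is for illustration only; in practice you assemble capsules with the DSL):

```ruby
capsule = VSM::Capsule.new(
  name:  :example,
  roles: {
    identity:     VSM::Identity.new(identity: "example", invariants: []),
    governance:   VSM::Governance.new,
    coordination: VSM::Coordination.new,
    intelligence: VSM::Intelligence.new,
    operations:   VSM::Operations.new
  }
)
capsule.bus  # every capsule carries its own async message bus
```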
Capsules can contain child capsules. Recursion means a "tool" can itself be a full agent if you want.
A non‑blocking bus built on fibers (async). Emitting a message never blocks the emitter.
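Illustratively (the Message shape here is an assumption):

```ruby
require "async"

Async do
  capsule.bus.emit VSM::Message.new(kind: :user, payload: "hello", meta: { session_id: "s1" })
  # emit returns immediately; subscribers receive the message on their own fibers
end
```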
Any capsule can opt in to act as a "tool" by including VSM::ActsAsTool (already included in VSM::ToolCapsule).
VSM provides provider‑agnostic descriptors:
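A sketch of a tool with its descriptor; drivers translate the JSON Schema into each provider's native tool-call format (descriptor method names are illustrative):

```ruby
class SearchTool < VSM::ToolCapsule
  tool_name        "search"
  tool_description "Search Ruby files for a pattern"
  tool_schema({ type: "object",
                properties: { query: { type: "string" } },
                required: ["query"] })

  def run(args)
    pattern = Regexp.new(args["query"])
    Dir.glob("**/*.rb").select { |path| File.read(path).match?(pattern) }.join("\n")
  end
end
```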
Why opt‑in? Not every capsule should be callable as a tool. Opt‑in keeps coupling low. Later you can auto‑expose selected capsules as tools or via MCP.
VSM is async by default:
- The bus is fiber‑based and non‑blocking.
- The capsule loop drains messages without blocking emitters.
- Operations runs each tool call in its own task; tools can choose their execution mode:
  - :fiber (default) — I/O‑bound, non‑blocking work
  - :thread — CPU-bound work or blocking libraries
You can add Ractor/Subprocess executors later without changing the API.
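For example, a CPU-heavy tool can opt into the thread executor (descriptor methods omitted; execution_mode is the documented switch):

```ruby
class DigestTool < VSM::ToolCapsule
  def execution_mode = :thread  # blocking/CPU work; keeps the fiber scheduler free

  def run(args)
    require "digest"
    Digest::SHA256.hexdigest(File.read(args["path"]))
  end
end
```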
A Port translates external events into messages and renders outgoing messages. Examples: CLI chat, TUI, HTTP, MCP stdio, editor plugin.
Start everything:
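A sketch, assuming a VSM::Runtime entry point:

```ruby
VSM::Runtime.start(agent, ports: [VSM::Ports::ChatTTY.new(capsule: agent)])
```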
- VSM::Ports::ChatTTY — A generic, customizable chat terminal UI. Safe to run alongside MCP stdio; prefers IO.console so it won’t pollute stdout.
- VSM::Ports::MCP::ServerStdio — Exposes your capsule as an MCP server on stdio implementing tools/list and tools/call.
Enable them:
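Both ports can run in one process (runtime call as above):

```ruby
VSM::Runtime.start(agent, ports: [
  VSM::Ports::ChatTTY.new(capsule: agent),           # human UI on the terminal
  VSM::Ports::MCP::ServerStdio.new(capsule: agent)   # machine protocol on stdio
])
```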
Reflect tools from an external MCP server and expose them as local tools using the DSL. This uses a tiny stdio JSON‑RPC client under the hood.
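A hedged sketch of the idea; the real DSL helper lives in the examples referenced below:

```ruby
agent = VSM::DSL.define(:host) do
  # ... other roles ...
  operations do
    # Hypothetical helper name for reflecting a remote MCP server's tools.
    mount_mcp :external, command: ["some-mcp-server", "--stdio"]
  end
end
```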
See examples/06_mcp_mount_reflection.rb and examples/07_connect_claude_mcp.rb.
Note: Many MCP servers speak LSP-style Content-Length framing on stdio. The current minimal transport uses NDJSON for simplicity. If a server hangs or doesn't respond, switch the transport to LSP framing in lib/vsm/mcp/jsonrpc.rb.
You can customize ChatTTY via options or by subclassing to override only the banner and rendering methods, while keeping the input loop.
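For example (hook names below are illustrative; override whichever methods your version exposes):

```ruby
class FancyTTY < VSM::Ports::ChatTTY
  def banner(io)
    io.puts "my-agent: type a message and press Enter"
  end

  def render_out(message, io)
    io.puts "assistant> #{message.payload}"
  end
end
```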
See examples/08_custom_chattty.rb.
Use an LLM driver (e.g., OpenAI) to automatically call tools reflected from an MCP server:
See examples/09_mcp_with_llm_calls.rb.
VSM ships a tiny Monitoring role that writes an append‑only JSONL ledger:
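Each line of the ledger is one event serialized as JSON; reading it back is plain Ruby (the path and field names below are hypothetical):

```ruby
require "json"

File.foreach("log/vsm.jsonl") do |line|
  event = JSON.parse(line)
  puts "#{event["kind"]}: #{event["payload"].to_s[0, 80]}"
end
```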
Use it to power a TUI/HTTP "Lens" later. Because everything flows over the bus, you get consistent events across nested capsules and sub‑agents.
- MCP stdio port only reads stdin and writes strict JSON to stdout.
- ChatTTY prefers IO.console, falls back to stderr, and disables input when no TTY is available.
- You can run both in the same process: machine protocol on stdio, human UI on the terminal.
The Intelligence role is where you plan/decide. It might:
- forward a conversation to an LLM driver (OpenAI/Anthropic/Gemini),
- emit :tool_call messages when the model asks to use tools,
- stream :assistant_delta tokens and finish with :assistant.
Minimal example (no LLM, just logic):
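The handler signature and Message constructor below are assumptions, as in the earlier sketches:

```ruby
class RuleBasedIntelligence
  def handle(message, bus:, **)
    return false unless message.kind == :user

    reply = message.payload.to_s.end_with?("?") ? "Good question." : "Noted."
    bus.emit VSM::Message.new(kind: :assistant, payload: reply, meta: message.meta)
    true
  end
end
```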
In your application, you can plug in provider drivers that stream and support native tool calling; Intelligence remains the same.
VSM is designed for unit tests:
- Capsules: inject fake systems and assert dispatch.
- Intelligence: feed :user / :tool_result messages and assert emitted messages.
- Tools: call #run directly.
- Ports: treat like adapters; they're thin.
Quick smoke test:
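For instance, against the EchoTool from the quick start:

```ruby
# spec/echo_tool_spec.rb: tools are plain objects, so call #run directly.
RSpec.describe EchoTool do
  it "echoes its input" do
    expect(EchoTool.new.run({ "text" => "hi" })).to eq("you said: hi")
  end
end
```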
- Ergonomic Ruby (small objects, clear names, blocks/DSL where it helps)
- High cohesion, low coupling (roles are tiny; tools are self‑contained)
- Recursion by default (any capsule can contain more capsules)
- Async from day one (non‑blocking bus; concurrent tools)
- Portability (no hard dependency on a specific LLM vendor)
- Observability built‑in (event ledger everywhere)
- Executors: Ractor & Subprocess for heavy/risky tools
- Limiter: per‑tool semaphores and budgets (tokens/time/IO) in Governance
- Lens UI: terminal/HTTP viewer for plans, tools, and audits
- Drivers: optional vsm-openai, vsm-anthropic, vsm-gemini add‑ons for native tool‑calling + streaming
- MCP ports: stdio server/client to expose/consume MCP tools
Does every capsule have to be a tool?
No. Capsules opt in via VSM::ActsAsTool. Many capsules (planner, auditor, coordinator) shouldn't be callable as tools.
Can I run multiple interfaces at once (chat + HTTP + MCP)?
Yes. Start multiple ports; Coordination arbitrates the "floor" per session.
How do I isolate risky or CPU‑heavy tools?
Set execution_mode to :thread today. Ractor/Subprocess executors are planned and will use the same API.
What about streaming tokens?
Handled by your Intelligence implementation (e.g., your LLM driver). Emit :assistant_delta messages as tokens arrive; finish with a single :assistant.
Is VSM tied to any specific LLM?
No. Write a driver that conforms to your Intelligence's expectations (usually "yield deltas" + "yield tool_calls"). Keep the provider in your app gem.
MIT. See LICENSE.txt.
Issues and PRs are welcome! Please include:
- A failing spec (RSpec) for bug reports
- Minimal API additions
- Clear commit messages
Run tests with:
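```bash
bundle exec rspec
```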
Lint with:
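```bash
bundle exec rubocop   # assuming RuboCop is the configured linter
```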