VSM is a tiny, idiomatic Ruby runtime for building agentic systems



VSM is a tiny, idiomatic Ruby runtime for building agentic systems with a clear spine: Operations, Coordination, Intelligence, Governance, and Identity.

Building agentic systems often leads to tangled callback spaghetti and unclear responsibilities. As you add tools, LLM providers, and coordination logic, the complexity explodes. You end up with:

  • Callbacks nested in callbacks with no clear flow
  • Tool execution mixed with business logic
  • No clear separation between "what the agent does" vs "how it decides" vs "what rules it follows"
  • Difficulty testing individual components
  • Lock-in to specific LLM providers or frameworks

VSM solves this by providing a composable, testable architecture with named responsibilities (POODR/SOLID style). You get clear separation of concerns from day one, and can start with a single capsule and grow to a swarm—without changing your interface or core loop.

The Viable System Model gives you a proven organizational pattern: every autonomous system needs Operations (doing), Coordination (scheduling), Intelligence (deciding), Governance (rules), and Identity (purpose). VSM makes this concrete in Ruby.

VSM is for Ruby developers building AI agents who want clean architecture over framework magic. If you've read Sandi Metz's POODR, appreciate small objects with single responsibilities, and want your agent code to be as clean as your Rails models, you're the target audience.

It also fits teams scaling from prototype to production, who need to start simple (one tool, one LLM call) but know they'll need multiple tools, streaming, confirmations, and policy enforcement later. VSM's recursive capsule design means your "hello world" agent uses the same architecture as your production swarm.

Finally, it serves developers who want provider independence. VSM doesn't lock you into OpenAI, Anthropic, or any specific provider: your Intelligence component decides how to plan, whether that's calling an LLM, following a state machine, or using your own logic.

VSM is a Ruby gem that provides:

  1. Five named systems that every agent needs:

    • Operations — do the work (tools/skills)
    • Coordination — schedule, order, and arbitrate conversations (the "floor")
    • Intelligence — plan/decide (e.g., call an LLM driver, or your own logic)
    • Governance — enforce policy, safety, and budgets
    • Identity — define purpose and invariants
  2. Capsules — recursive building blocks. Every capsule has the five systems above plus a message bus. Capsules can contain child capsules, and "tools" are just capsules that opt in to a tool interface.

  3. Async-first architecture — powered by the async gem, VSM runs streaming, I/O, and multiple tool calls concurrently without blocking.

  4. Clean interfaces — Ports translate external events (CLI, HTTP, MCP) into messages. Tools expose JSON Schema descriptors that work with any LLM provider.

  5. Built-in observability — append-only JSONL ledger of all events, ready to feed into a monitoring UI.

Add the gem to your Gemfile:

# Gemfile
gem "vsm", "~> 0.0.1"

Ruby 3.2+ recommended.

Here's a minimal agent with one tool:

require "securerandom" require "vsm" # 1) Define a tool as a capsule class EchoTool < VSM::ToolCapsule tool_name "echo" tool_description "Echoes a message" tool_schema({ type: "object", properties: { text: { type: "string" } }, required: ["text"] }) def run(args) "you said: #{args["text"]}" end end # 2) Define your Intelligence (decides what to do) class DemoIntelligence < VSM::Intelligence def handle(message, bus:, **) return false unless message.kind == :user if message.payload =~ /\Aecho:\s*(.+)\z/ # User said "echo: something" - call the tool bus.emit VSM::Message.new( kind: :tool_call, payload: { tool: "echo", args: { "text" => $1 } }, corr_id: SecureRandom.uuid, meta: message.meta ) else # Just respond bus.emit VSM::Message.new( kind: :assistant, payload: "Try: echo: hello", meta: message.meta ) end true end end # 3) Build your agent using the DSL capsule = VSM::DSL.define(:demo) do identity klass: VSM::Identity, args: { identity: "demo", invariants: [] } governance klass: VSM::Governance coordination klass: VSM::Coordination intelligence klass: DemoIntelligence operations do capsule :echo, klass: EchoTool end end # 4) Add a simple CLI interface class StdinPort < VSM::Port def loop session = SecureRandom.uuid print "You: " while (line = $stdin.gets&.chomp) @capsule.bus.emit VSM::Message.new( kind: :user, payload: line, meta: { session_id: session } ) @capsule.roles[:coordination].wait_for_turn_end(session) print "You: " end end def render_out(msg) case msg.kind when :assistant puts "\nBot: #{msg.payload}" when :tool_result puts "\nTool> #{msg.payload}" @capsule.bus.emit VSM::Message.new( kind: :assistant, payload: "(done)", meta: msg.meta ) end end end # 5) Start the runtime VSM::Runtime.start(capsule, ports: [StdinPort.new(capsule:)])

Run it:

ruby quickstart.rb
# You: echo: hello
# Tool> you said: hello

Scaffold a new VSM app with a ChatTTY interface:

gem install vsm   # or build/install locally
vsm new my_agent
cd my_agent
bundle install
bundle exec exe/my-agent

Options:

  • --with-llm openai|anthropic|gemini — choose LLM provider (default: openai)
  • --model <name> — default model
  • --git — initialize git and commit
  • --bundle — run bundle install
  • --path <dir> — target directory (default: ./<name>)
  • --force — overwrite an existing non-empty directory

The generated layout mirrors the airb example: an Organism.build method to assemble the capsule, a default ChatTTY port, and a sample echo tool ready to extend.

For a real agent with LLM integration:

capsule = VSM::DSL.define(:my_agent) do
  identity     klass: VSM::Identity, args: { identity: "my_agent", invariants: ["stay in workspace"] }
  governance   klass: VSM::Governance
  coordination klass: VSM::Coordination
  intelligence klass: MyLLMIntelligence # Your class that calls OpenAI/Anthropic/etc
  monitoring   klass: VSM::Monitoring   # Optional: writes JSONL event log
  operations do
    capsule :list_files, klass: ListFilesTool
    capsule :read_file,  klass: ReadFileTool
    capsule :write_file, klass: WriteFileTool
  end
end

Your MyLLMIntelligence would:

  1. Maintain conversation history
  2. Call your LLM provider with available tools
  3. Emit :tool_call messages when the LLM wants to use tools
  4. Stream :assistant_delta tokens as they arrive
  5. Emit final :assistant message when done
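
A minimal sketch of an Intelligence with that shape, with comments keyed to the steps above. The driver interface used here (a chat method taking on_delta / on_tool_call / on_final callbacks) is an illustrative assumption, not VSM's actual driver API:

class MyLLMIntelligence < VSM::Intelligence
  def initialize(driver:, system_prompt: "")
    @driver = driver
    @system_prompt = system_prompt
    @history = Hash.new { |h, k| h[k] = [] } # 1) per-session conversation history
  end

  def handle(message, bus:, **)
    return false unless [:user, :tool_result].include?(message.kind)
    sid = message.meta&.dig(:session_id)
    @history[sid] << message

    # 2) call the provider with the conversation so far
    @driver.chat(
      system: @system_prompt,
      history: @history[sid],
      # 4) stream tokens as they arrive
      on_delta: ->(token) {
        bus.emit VSM::Message.new(kind: :assistant_delta, payload: token, meta: { session_id: sid })
      },
      # 3) the model asked to use a tool
      on_tool_call: ->(name, args) {
        bus.emit VSM::Message.new(
          kind: :tool_call,
          payload: { tool: name, args: args },
          corr_id: SecureRandom.uuid,
          meta: { session_id: sid }
        )
      },
      # 5) finish the turn with a single final message
      on_final: ->(text) {
        bus.emit VSM::Message.new(kind: :assistant, payload: text, meta: { session_id: sid })
      }
    )
    true
  end
end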
Core features at a glance:

  • Named systems: Operations, Coordination, Intelligence, Governance, Identity
  • Capsules: recursive building blocks (a capsule can contain more capsules)
  • Async bus: non‑blocking message channel with fan‑out subscribers
  • Structured concurrency: streaming + multiple tool calls in parallel
  • Tools-as-capsules: opt‑in tool interface + JSON Schema descriptors
  • Executors: run tools in the current fiber or a thread pool (Ractor/Subprocess future)
  • Ports: clean ingress/egress adapters for CLI/TUI/HTTP/MCP/etc.
  • Observability: append‑only JSONL ledger you can feed into a UI later
  • POODR/SOLID: small objects, high cohesion, low coupling

VSM includes a set of read‑only meta tools you can attach to any capsule to inspect its structure and code:

  • meta_summarize_self — Summarize the current capsule including roles and tools
  • meta_list_tools — List all tools available in the organism (descriptors and paths)
  • meta_explain_tool — Show code and context for a specific tool
  • meta_explain_role — Explain a role implementation for a capsule, with source snippets

Attach them when building your capsule:

capsule = VSM::DSL.define(:my_agent) do
  identity     klass: VSM::Identity, args: { identity: "my_agent" }
  governance   klass: VSM::Governance
  coordination klass: VSM::Coordination
  intelligence klass: VSM::Intelligence
  monitoring   klass: VSM::Monitoring
  operations do
    meta_tools # registers the four meta tools above on this capsule
  end
end

Example calls:

  • meta_summarize_self {} → high‑level snapshot and counts
  • meta_list_tools {} → array of tools with descriptors
  • meta_explain_tool { "tool": "some_tool" } → code snippet + descriptor
  • meta_explain_role { "role": "coordination" } → role class, constructor args, source locations, and code blocks
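
Since meta tools are ordinary tools, you can also invoke them over the bus like any other tool call (the corr_id and session values below are arbitrary):

require "securerandom"

capsule.bus.emit VSM::Message.new(
  kind: :tool_call,
  payload: { tool: "meta_list_tools", args: {} },
  corr_id: SecureRandom.uuid,
  meta: { session_id: "debug" }
)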

A capsule is a container with five named systems and a message bus:

Capsule(:name)
├─ Identity     (purpose & invariants)
├─ Governance   (safety & budgets)
├─ Coordination (scheduling & "floor")
├─ Intelligence (planning/deciding)
├─ Operations   (tools/skills)
└─ Monitoring   (event ledger; optional)

Capsules can contain child capsules. Recursion means a "tool" can itself be a full agent if you want.

Every message on the bus has the same shape:

VSM::Message.new(
  kind: :user | :assistant | :assistant_delta | :tool_call | :tool_result |
        :plan | :policy | :audit | :confirm_request | :confirm_response,
  payload: "any",
  path: [:airb, :operations, :fs],  # optional addressing
  corr_id: "uuid",                  # correlate tool_call ↔ tool_result
  meta: { session_id: "uuid", ... } # extra context
)

The bus is a non‑blocking channel built on fibers (the async gem). Emitting a message never blocks the emitter.
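
For example, with a built capsule like the one from the quickstart:

# Fan-out: every subscriber sees every message; emit returns immediately.
capsule.bus.subscribe { |m| warn "[observer] #{m.kind}" }
capsule.bus.emit VSM::Message.new(kind: :audit, payload: "boot", meta: {})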

Any capsule can opt in to acting as a "tool" by including VSM::ActsAsTool (already included in VSM::ToolCapsule).

class ReadFile < VSM::ToolCapsule
  tool_name "read_file"
  tool_description "Read the contents of a UTF-8 text file at relative path."
  tool_schema({
    type: "object",
    properties: { path: { type: "string" } },
    required: ["path"]
  })

  def run(args)
    path = governance_safe_path(args.fetch("path"))
    File.read(path, mode: "r:UTF-8")
  end

  # Optional: choose how this tool executes
  def execution_mode = :fiber # or :thread

  private

  # simple helper
  def governance_safe_path(rel) = governance.instance_eval {
    full = File.expand_path(File.join(Dir.pwd, rel))
    raise "outside workspace" unless full.start_with?(Dir.pwd)
    full
  }
end

VSM provides provider‑agnostic descriptors:

tool = instance.tool_descriptor
tool.to_openai_tool    # => { type: "function", function: { name, description, parameters } }
tool.to_anthropic_tool # => { name, description, input_schema }
tool.to_gemini_tool    # => { name, description, parameters }

Why opt‑in? Not every capsule should be callable as a tool. Opt‑in keeps coupling low. Later you can auto‑expose selected capsules as tools or via MCP.

VSM is async by default:

  • The bus is fiber‑based and non‑blocking.
  • The capsule loop drains messages without blocking emitters.
  • Operations runs each tool call in its own task; tools can choose their execution mode:
    • :fiber (default) — I/O‑bound, non‑blocking
    • :thread — CPU‑ish work or blocking libraries

You can add Ractor/Subprocess executors later without changing the API.
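
For instance, a tool that does blocking or CPU-heavy work can opt into the thread executor. HashFileTool below is a hypothetical example, not part of VSM:

require "digest"

class HashFileTool < VSM::ToolCapsule
  tool_name "hash_file"
  tool_description "Compute the SHA-256 of a file (blocking I/O plus CPU)."
  tool_schema({ type: "object", properties: { path: { type: "string" } }, required: ["path"] })

  # Run on the thread pool so we don't stall the fiber scheduler
  def execution_mode = :thread

  def run(args)
    Digest::SHA256.file(args.fetch("path")).hexdigest
  end
end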

A Port translates external events into messages and renders outgoing messages. Examples: CLI chat, TUI, HTTP, MCP stdio, editor plugin.

class MyPort < VSM::Port
  def loop
    session = SecureRandom.uuid
    while (line = $stdin.gets&.chomp)
      @capsule.bus.emit VSM::Message.new(kind: :user, payload: line, meta: { session_id: session })
      @capsule.roles[:coordination].wait_for_turn_end(session)
    end
  end

  def render_out(msg)
    case msg.kind
    when :assistant_delta then $stdout.print(msg.payload)
    when :assistant       then puts "\nBot: #{msg.payload}"
    when :confirm_request then confirm(msg)
    end
  end

  def confirm(msg)
    print "\nConfirm? #{msg.payload} [y/N] "
    ok = ($stdin.gets || "").strip.downcase.start_with?("y")
    @capsule.bus.emit VSM::Message.new(kind: :confirm_response, payload: { accepted: ok }, meta: msg.meta)
  end
end

Start everything:

VSM::Runtime.start(capsule, ports: [MyPort.new(capsule:)])
VSM ships two built-in ports:

  • VSM::Ports::ChatTTY — A generic, customizable chat terminal UI. Safe to run alongside MCP stdio; prefers IO.console so it won't pollute stdout.
  • VSM::Ports::MCP::ServerStdio — Exposes your capsule as an MCP server on stdio, implementing tools/list and tools/call.

Enable them:

require "vsm/ports/chat_tty" require "vsm/ports/mcp/server_stdio" ports = [ VSM::Ports::MCP::ServerStdio.new(capsule: capsule), # machine IO (stdio) VSM::Ports::ChatTTY.new(capsule: capsule) # human IO (terminal) ] VSM::Runtime.start(capsule, ports: ports)

MCP Client (reflect and wrap tools)

Reflect tools from an external MCP server and expose them as local tools using the DSL. This uses a tiny stdio JSON‑RPC client under the hood.

require "vsm/dsl_mcp" cap = VSM::DSL.define(:mcp_client) do identity klass: VSM::Identity, args: { identity: "mcp_client", invariants: [] } governance klass: VSM::Governance coordination klass: VSM::Coordination intelligence klass: VSM::Intelligence # or your own monitoring klass: VSM::Monitoring operations do # Prefix helps avoid name collisions mcp_server :smith, cmd: "smith-server --stdio", prefix: "smith_", include: %w[search read] end end

See examples/06_mcp_mount_reflection.rb and examples/07_connect_claude_mcp.rb.

Note: Many MCP servers speak LSP-style Content-Length framing on stdio. The current minimal transport uses NDJSON for simplicity. If a server hangs or doesn't respond, switch the transport to LSP framing in lib/vsm/mcp/jsonrpc.rb.
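
For reference, reading one LSP-framed JSON-RPC message looks roughly like this (a sketch under that framing assumption, not the code in lib/vsm/mcp/jsonrpc.rb):

require "json"

# Read one Content-Length framed JSON-RPC message from an IO.
def read_framed_message(io)
  headers = {}
  while (line = io.gets("\r\n")&.chomp("\r\n")) && !line.empty?
    key, value = line.split(": ", 2)
    headers[key] = value
  end
  body = io.read(Integer(headers.fetch("Content-Length")))
  JSON.parse(body)
end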

You can customize ChatTTY via options or by subclassing to override only the banner and rendering methods, while keeping the input loop.

class FancyTTY < VSM::Ports::ChatTTY
  def banner(io)
    io.puts "\e[95m\n ███ CUSTOM CHAT ███\n\e[0m"
  end

  def render_out(m)
    super # or implement your own formatting
  end
end

VSM::Runtime.start(capsule, ports: [FancyTTY.new(capsule: capsule, prompt: "Me> ")])

See examples/08_custom_chattty.rb.

Use an LLM driver (e.g., OpenAI) to automatically call tools reflected from an MCP server:

driver = VSM::Drivers::OpenAI::AsyncDriver.new(
  api_key: ENV.fetch("OPENAI_API_KEY"),
  model: ENV["AIRB_MODEL"] || "gpt-4o-mini"
)

cap = VSM::DSL.define(:mcp_with_llm) do
  identity     klass: VSM::Identity, args: { identity: "mcp_with_llm", invariants: [] }
  governance   klass: VSM::Governance
  coordination klass: VSM::Coordination
  intelligence klass: VSM::Intelligence, args: { driver: driver, system_prompt: "Use tools when helpful." }
  monitoring   klass: VSM::Monitoring
  operations do
    mcp_server :server, cmd: ["claude", "mcp", "serve"] # reflect tools
  end
end

VSM::Runtime.start(cap, ports: [VSM::Ports::ChatTTY.new(capsule: cap)])

See examples/09_mcp_with_llm_calls.rb.

VSM ships a tiny Monitoring role that writes an append‑only JSONL ledger:

# .vsm.log.jsonl
{"ts":"2025-08-14T12:00:00Z","kind":"user","path":null,"corr_id":null,"meta":{"session_id":"..."}}
{"ts":"...","kind":"tool_call", ...}
{"ts":"...","kind":"tool_result", ...}
{"ts":"...","kind":"assistant", ...}

Use it to power a TUI/HTTP "Lens" later. Because everything flows over the bus, you get consistent events across nested capsules and sub‑agents.
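
A minimal consumer sketch, assuming the .vsm.log.jsonl file shown above:

require "json"

# Count ledger events by kind - the sort of aggregate a Lens UI would show.
counts = Hash.new(0)
File.foreach(".vsm.log.jsonl") do |line|
  counts[JSON.parse(line)["kind"]] += 1
end
p counts # => {"user"=>3, "tool_call"=>2, ...}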

MCP and ChatTTY Coexistence

  • MCP stdio port only reads stdin and writes strict JSON to stdout.
  • ChatTTY prefers IO.console or falls back to stderr and disables input if no TTY.
  • You can run both in the same process: machine protocol on stdio, human UI on the terminal.

The Intelligence role is where you plan/decide. It might:

  • forward a conversation to an LLM driver (OpenAI/Anthropic/Gemini),
  • emit :tool_call messages when the model asks to use tools,
  • stream :assistant_delta tokens and finish with :assistant.

Minimal example (no LLM, just logic):

class MyIntelligence < VSM::Intelligence
  def initialize
    @history = Hash.new { |h, k| h[k] = [] }
  end

  def handle(message, bus:, **)
    return false unless [:user, :tool_result].include?(message.kind)

    sid = message.meta&.dig(:session_id)
    @history[sid] << message

    if message.kind == :user && message.payload =~ /read (.+)/
      bus.emit VSM::Message.new(
        kind: :tool_call,
        payload: { tool: "read_file", args: { "path" => $1 } },
        corr_id: SecureRandom.uuid,
        meta: { session_id: sid }
      )
    else
      bus.emit VSM::Message.new(kind: :assistant, payload: "ok", meta: { session_id: sid })
    end
    true
  end
end

In your application, you can plug in provider drivers that stream and support native tool calling; Intelligence remains the same.

VSM is designed for unit tests:

  • Capsules: inject fake systems and assert dispatch.
  • Intelligence: feed :user / :tool_result messages and assert emitted messages.
  • Tools: call #run directly.
  • Ports: treat like adapters; they're thin.

Quick smoke test:

require "vsm" RSpec.describe "tool dispatch" do class T < VSM::ToolCapsule tool_name "t"; tool_description "d"; tool_schema({ type: "object", properties: {}, required: [] }) def run(_args) = "ok" end it "routes tool_call to tool_result" do cap = VSM::DSL.define(:test) do identity klass: VSM::Identity, args: { identity: "t", invariants: [] } governance klass: VSM::Governance coordination klass: VSM::Coordination intelligence klass: VSM::Intelligence operations { capsule :t, klass: T } end q = Queue.new cap.bus.subscribe { |m| q << m if m.kind == :tool_result } cap.run cap.bus.emit VSM::Message.new(kind: :tool_call, payload: { tool: "t", args: {} }, corr_id: "1") expect(q.pop.payload).to eq("ok") end end
Design Principles

  • Ergonomic Ruby (small objects, clear names, blocks/DSL where it helps)
  • High cohesion, low coupling (roles are tiny; tools are self‑contained)
  • Recursion by default (any capsule can contain more capsules)
  • Async from day one (non‑blocking bus; concurrent tools)
  • Portability (no hard dependency on a specific LLM vendor)
  • Observability built‑in (event ledger everywhere)
Roadmap

  • Executors: Ractor & Subprocess for heavy/risky tools
  • Limiter: per‑tool semaphores and budgets (tokens/time/IO) in Governance
  • Lens UI: terminal/HTTP viewer for plans, tools, and audits
  • Drivers: optional vsm-openai, vsm-anthropic, vsm-gemini add‑ons for native tool‑calling + streaming
  • MCP ports: stdio server/client to expose/consume MCP tools

FAQ

Does every capsule have to be a tool?
No. Opt in via VSM::ActsAsTool. Many capsules (planner, auditor, coordinator) shouldn't be callable as tools.

Can I run multiple interfaces at once (chat + HTTP + MCP)?
Yes. Start multiple ports; Coordination arbitrates the "floor" per session.

How do I isolate risky or CPU‑heavy tools?
Set execution_mode to :thread today. Ractor/Subprocess executors are planned and will use the same API.

What about streaming tokens?
Handled by your Intelligence implementation (e.g., your LLM driver). Emit :assistant_delta messages as tokens arrive; finish with a single :assistant.

Is VSM tied to any specific LLM?
No. Write a driver that conforms to your Intelligence's expectations (usually "yield deltas" + "yield tool_calls"). Keep the provider in your app gem.
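
For tests or offline use, a driver can be as small as this (a canned stub matching the hypothetical on_delta / on_final interface sketched earlier, not a real VSM class):

class CannedDriver
  def chat(system:, history:, on_delta:, on_tool_call:, on_final:)
    "pong".each_char { |c| on_delta.call(c) } # stream deltas...
    on_final.call("pong")                     # ...then finish the turn
  end
end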

API Overview

A condensed sketch of the public surface (not the full source):

module VSM
  # Messages
  Message(kind:, payload:, path: nil, corr_id: nil, meta: {})

  # Bus
  class AsyncChannel
    def emit(message); end
    def pop; end
    def subscribe(&block); end
    attr_reader :context
  end

  # Roles (named systems)
  class Operations; end   # routes tool_call -> children
  class Coordination; end # scheduling, floor, turn-end
  class Intelligence; end # your planning/LLM driver
  class Governance; end   # policy/safety/budgets
  class Identity; end     # invariants & escalation
  class Monitoring; end   # JSONL ledger (observe)

  # Capsules & DSL
  class Capsule; end
  module DSL
    def self.define(name, &block); end # -> Capsule
  end

  # Tools
  module Tool
    # Descriptor#to_openai_tool / #to_anthropic_tool / #to_gemini_tool
  end
  module ActsAsTool; end
  class ToolCapsule
    # include ActsAsTool
    # def run(args) ...
    # def execution_mode = :fiber | :thread
  end

  # Ports & runtime
  class Port
    def initialize(capsule:); end
    def loop; end # optional
    def render_out(msg); end
    def egress_subscribe; end
  end
  module Runtime
    def self.start(capsule, ports: []); end
  end
end

MIT. See LICENSE.txt.

Issues and PRs are welcome! Please include:

  • A failing spec (RSpec) for bug reports
  • Minimal API additions
  • Clear commit messages

Run tests with bundle exec rspec, and lint with the linter configured in the repository.
