In 1973, Unix revolutionized computing with a simple insight: complex systems should emerge from simple parts connected by clean interfaces. The power wasn't in the Unix commands themselves, but in how they enabled developers to build things its creators never imagined. All because every tool spoke the same language: text streams.
Today's AI industry is pouring billions into agent frameworks, orchestration platforms, and error recovery systems. Yet the most sophisticated AI applications still fail in production for a surprisingly mundane reason: we can't guarantee what comes out of our language models.
Every integration is held together with parsing logic and prayer. Every deployment requires layers of error handling. Every scale-up reveals new failure modes. We're building skyscrapers on quicksand.
There's a different path. One where language models honor strict contracts - guaranteed output formats that make them as reliable as any function call. When every component produces exactly what it promises, complex orchestration becomes unnecessary. The components simply work.
The principles that made Unix endure for fifty years apply just as much to AI systems. It's time we learned from them.
1. Small Parts, Big Dreams
Scalable software systems generally exhibit the following properties:
- They are composable.
- They are easy to understand.
- They are easy to debug.
- They are easy to extend.
Unix is a perfect example. Its success stems from one of its core tenets: tools must conform to a simple, clear protocol (byte streams) that enables composition. That single principle lets developers build complex workflows from simple tools.
Consider the following command that finds the five most common values in the second column of a CSV file:
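```bash
# data.csv and the comma-delimited format are placeholders for your input.
# cut extracts column 2, uniq -c counts duplicates, sort -rn ranks by count.
cat data.csv | cut -d',' -f2 | sort | uniq -c | sort -rn | head -n 5
```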
Each of cat, cut, sort, uniq, and head does one thing, is stateless, is easy to understand, and is well-tested. This allows us to build a complex pipeline from these simple parts confidently.
The same applies to AI systems. When agents conform to clear contracts with well-defined output types, they can be composed into robust systems. Complex behavior emerges from the composition of simple parts, rather than from the complexity of individual components.
Enforcing output types for language models is not just about robustness: when each developer's work becomes a reliable foundation for the next, progress compounds. Clear contracts between components enable reliable composition and innovation at higher levels of abstraction. Imagine AI systems that grow more capable with each new element, where a thousand developers can build on each other's work without coordination, where complex behaviors emerge as naturally as piping Unix commands. That's the big dream.
2. No Bad Output
The dream of composable AI becomes a reality when we can guarantee the structure of the output of our language models. That’s what structured generation does.
Structured generation operates at the logits level to constrain the model’s output to match a predefined schema, regular expression, or grammar. This ensures that results are always well-formed and machine-readable at virtually no additional cost (with our technology).
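To make "operates at the logits level" concrete, here is a minimal conceptual sketch of the core operation. The set of allowed tokens is assumed to come from a precomputed index over the schema, regex, or grammar; computing that index efficiently is where the real engineering lives, and it is elided here.

```python
import torch

def mask_logits(logits: torch.Tensor, allowed_token_ids: list[int]) -> torch.Tensor:
    """Suppress every token that would violate the constraint at this step.

    Conceptual sketch only: real implementations precompute, for each state
    of the constraint automaton, which tokens are allowed, so this masking
    adds virtually nothing to the cost of a decoding step.
    """
    mask = torch.full_like(logits, float("-inf"))
    mask[allowed_token_ids] = 0.0
    return logits + mask  # disallowed tokens now have probability zero
```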
Consider these two approaches to code review. First, an unstructured approach:
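A minimal sketch of what this usually looks like; the prompt, the `llm` callable, and the expected "SEVERITY: description" line format are all illustrative:

```python
import re

def review_code_unstructured(code: str, llm) -> list[dict]:
    """`llm` is a hypothetical stand-in for any text-completion API call."""
    response = llm(
        "Review the following code. List each issue on its own line "
        f"as 'SEVERITY: description', where SEVERITY is LOW, MEDIUM, or HIGH.\n\n{code}"
    )
    issues = []
    for line in response.splitlines():
        # Hope the model actually used the requested format...
        match = re.match(r"^(LOW|MEDIUM|HIGH):\s*(.+)$", line.strip())
        if match:
            issues.append({"severity": match.group(1), "description": match.group(2)})
    return issues  # silently empty if the model phrased things differently
```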
We have to parse free text and hope the formatting stays consistent. Error handling is complex, the parsing code is brittle, the output structure is never guaranteed, and it will break the moment you switch to a different model.
With a structured approach we can guarantee a clear protocol, whatever the model:
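Here is a sketch of the structured version, using a Pydantic schema with a structured-generation library such as Outlines. The model name and prompt are illustrative, and the interface shown follows the Outlines 0.x API, which may differ in newer releases:

```python
from enum import Enum
from pydantic import BaseModel

import outlines

class Severity(str, Enum):
    low = "LOW"
    medium = "MEDIUM"
    high = "HIGH"

class Issue(BaseModel):
    severity: Severity
    description: str

class CodeReview(BaseModel):
    issues: list[Issue]
    summary: str

# Outlines 0.x-style interface; the model name is just an example.
model = outlines.models.transformers("microsoft/Phi-3-mini-4k-instruct")
generator = outlines.generate.json(model, CodeReview)

code = "def add(a, b): return a - b"
review = generator(f"Review this code:\n\n{code}")
# `review` is a validated CodeReview instance; no parsing, no surprises.
```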
The difference between the two approaches is fundamental:
- Contract Enforcement: Structured generation enforces a clear contract between the language model and the rest of the code, be it another agent, a standard function, or a database. The language model must produce output matching your schema.
- Self-Documenting Interfaces: The schema itself documents exactly what data flows between agents. New team members can understand the system by reading the type definitions instead of digging through execution traces.
Structured generation moves failure detection from runtime to type checking time, from implicit to explicit, and from "hopefully correct" to "provably correct" within the bounds of your schema.
2.1. Beyond JSON
While we've used JSON schemas in our example, structured generation encompasses a much richer type system. Regular expressions ensure formats like emails, URLs, or IDs are always valid. Context-free grammars guarantee syntactic correctness for DSLs and query languages. Dynamic constraints will soon enable truly adaptive, context-aware generation, where what's already been generated shapes what can come next.
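As a sketch of the regex case, again using an Outlines 0.x-style interface (the pattern is deliberately simplified; a production pattern would be stricter):

```python
import outlines

model = outlines.models.transformers("microsoft/Phi-3-mini-4k-instruct")

# Deliberately simplified email pattern, for illustration only.
email_pattern = r"[a-z0-9._%+-]+@[a-z0-9.-]+\.[a-z]{2,}"
generate_email = outlines.generate.regex(model, email_pattern)

email = generate_email("Contact email for the support team: ")
# Whatever the model, `email` matches `email_pattern` by construction.
```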
2.2. A Note on Content Quality
Structured generation guarantees format, not facts. A model might output perfectly structured but incorrect data. The content quality still depends on the model's capabilities and the quality of your prompts.
But when you can trust the structure, you can focus entirely on improving content quality. Choose your model based on accuracy. Test what needs testing. Build systems that gracefully handle imperfect content while maintaining structural integrity.
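One pattern this enables (a sketch; the schema and the bound are illustrative): let structured generation guarantee the shape, and layer content checks on top as ordinary validation code, so a failure signals a content problem rather than a parsing problem and can trigger a retry or fallback at the right layer.

```python
from pydantic import BaseModel, field_validator

class Quote(BaseModel):
    price: float
    currency: str

    # Structure is guaranteed at generation time; content checks remain
    # ordinary application logic.
    @field_validator("price")
    @classmethod
    def price_must_be_positive(cls, value: float) -> float:
        if value <= 0:
            raise ValueError("model produced a non-positive price")
        return value
```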
3. The Foundation for Composable AI
The evolution of software systems teaches us that lasting innovation comes from reliable building blocks. Unix succeeded because its tools had predictable interfaces. The web exploded because APIs returned what they promised. Complex systems emerged from simple, deterministic components.
AI systems have lacked this foundation. Today's agent frameworks build elaborate orchestration layers, trying to manage the chaos of unpredictable outputs. Structured generation takes the opposite approach: instead of controlling chaos, prevent it. When every model call returns exactly what you expect, complexity vanishes.
At .txt, we're making this primitive so fast, so reliable, and so accessible that it becomes invisible - just another assumption developers can build upon. When structured generation becomes standard, AI development will resemble modern software engineering: reliable components, clear interfaces, and systems that compound rather than break.
Without this foundation, AI systems remain fragile experiments. With it, they become infrastructure. The future of AI isn't more complex frameworks, but simpler building blocks that enable infinite complexity. Do one thing exceptionally well, and everything else becomes possible.