For decades, “fast, cheap, or good: choose two” has been a foundational mantra of software development. This trade-off created a sharp dichotomy between “durable” code — reliable, maintainable, and expensive — and “disposable” code, which was fast, cheap, and often brittle.
AI is about to demolish that dichotomy. The most successful companies will be those that understand that the rules have changed.
Honeycomb’s Charity Majors makes a compelling argument that AI is currently accelerating the split between durable and disposable software. The core of her argument revolves around trust and observability (o11y): durable code is code you can trust because you can observe it, measure it, and understand its behavior in production. Everything else is disposable.
As Charity says:
> Anything that can be done with disposable code probably will be, because as we all know, durable software is expensive and hard. But disposable software is a skill set; durable code is a profession.

Historically, we decided whether to build disposable or durable software based on development cost and speed. The problem is that these factors are rapidly becoming uncorrelated with trust and reliability.
Those of us who learned to code when “good” meant “expensive” are now wired to make assumptions that are no longer valid. We instinctively believe a quick prototype is inherently disposable and expensive to convert.
In my experience, disagreeing with Charity is a sucker’s game, but I believe this thinking, while correct for the last 30 years, is about to become wrong very quickly. And not because of magical AI or partnered IDEs from The Future (tm) but because of technologies we all have at our fingertips today.
Given proper tooling, AI is already good at both the rapid prototyping that has traditionally created disposable code and the tasks required to make code durable.
On the prototyping side, we have the YOLO world of vibe coding:
- Write code from descriptions or examples: Wheeeee!
- Rapidly prototype: Wheeeee plus GPUs go brrrrrr!
- Refactor and convert solutions: Get this into a language the other developers on the team are already using
On the durability side, AI can:
- Add o11y, test, and validation code: Can we trust this piece of code?
- Document and explain existing code: What the hell did my LLM just write?
- Discover and explore existing solutions: Is the de facto standard solution actually good enough?
Taken together — especially with AI’s superhuman speed — this means that disposable code is no longer a dead end. It’s a step on the path to a durable solution.
Of course, “proper tooling” is doing a lot of work here, but it’s not magic. It’s the tooling and methodologies we already have: robust testing frameworks and a commitment to observability. If you build a foundation of good tests and o11y around your code — even a “disposable” prototype — you create the necessary guardrails for AI to perform powerful transformations.
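As a minimal sketch of what those guardrails look like in practice: the prototype function below (a hypothetical `dedupe_emails`, invented here for illustration) is throwaway vibe-code, but the log line and the pinned-behavior test around it are the durable part. They give you — or an AI — something to verify against when refactoring or rewriting the prototype. This uses only the Python standard library; any real project would swap in its own o11y stack.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("prototype")

def dedupe_emails(records):
    """'Disposable' prototype: keep the first record per normalized email."""
    start = time.perf_counter()
    seen, out = set(), []
    for rec in records:
        key = rec["email"].strip().lower()
        if key not in seen:
            seen.add(key)
            out.append(rec)
    # Minimal o11y: emit the numbers you'd need to trust this in production.
    log.info("dedupe_emails in=%d out=%d elapsed_ms=%.2f",
             len(records), len(out), (time.perf_counter() - start) * 1000)
    return out

# Guardrail test: pins current behavior so any AI-driven rewrite of the
# prototype can be checked mechanically rather than trusted on vibes.
def test_dedupe_keeps_first_occurrence():
    records = [{"email": "A@x.com"}, {"email": "a@x.com "}, {"email": "b@x.com"}]
    result = dedupe_emails(records)
    assert [r["email"] for r in result] == ["A@x.com", "b@x.com"]
```

The point isn’t the deduplication logic; it’s that the test and the log line survive even if the function is thrown away and regenerated.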
Teams that build for o11y and LLM assistance from the start will have a spectacular, almost unfair, advantage. Given fast, cheap, or good, they’re going to choose all three.