What Would AI Architect?

I’ve been thinking so much about the role of the human in the age of AI. The nature of craft and care and quality, and our place in a shifting landscape.

Therme Vals, Switzerland, Peter Zumthor

One night I stayed a little later in the studio. I was sitting at a desk my friend and I built many years ago, inspired by the work of Donald Judd. It’s worn around the edges, and that night it wobbled a little and let me know it. I had started thinking about tightening the screws when a series of relatively obscure interviews with Christopher Alexander jumped to mind.

I’d been searching for ways to think about the intersection of craft and AI. Essentially, whether algorithmic systems could grasp what Alexander meant by “the quality without a name”: the way certain places feel alive, whole, and deeply right, yet resist being pinned down by explicit language or computation. Instead, I found a passage about language itself, about how the words we use can trap us into seeing things backwards.

Alexander was quoting Benjamin Lee Whorf, a linguist whose name I had never even heard before. As it was recounted in the exchange, Whorf had been investigating a fire for an insurance company. Someone had thrown a match into an empty gasoline drum. “Empty” has the normal connotation of safe, Alexander wrote. “Full” suggests danger, something pregnant with volatile possibility. But actually, a full drum, sealed and intact, is relatively safe. An empty drum, however, its interior thick with gasoline vapor, is lethally dangerous. The words give you a 180-degree misreading of the situation.

I read the passage twice. Then I read it again.

It wasn’t just about gasoline drums. Alexander pushed further: “Because of this sort of Whorfian effect, most of the major concepts that have to do with the environment, such as the word ‘planning,’ the word ‘architect,’ the word ‘budget,’ even possibly the word ‘building,’ the word ‘bylaws,’ the word ‘zoning,’ every single one of these words, certainly including ‘design’ and, as we have discussed, the word ‘beauty’—is actually one of these explosive little traps.”

Explosive little traps. I paused.

The problem with language, as Whorf understood it, is that it doesn’t just describe reality. It shapes how we perceive it. We don’t see the gasoline vapor and then search for a word. We hear “empty” and experience safety. We feel it, even as we stand next to something that could harm us. The word arrives first, carrying its connotations, its histories, its 180-degree misreadings.

And this is what we train our language models on. Not gasoline drums, not the acrid smell of vapor, not the temperature differential between sealed and unsealed containers—just the words. “Empty” appears in context with “safe” often enough, and the statistical pattern is accepted as knowledge. The LLM learns how humans talk about gasoline drums without ever encountering one.
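
Here is a toy sketch of that flattening. It is nothing like how a production model actually works; the corpus, the sentence-level window, and the raw counting are all invented for illustration. It only shows how “empty” and “safe” can end up statistically bound together while the vapor never enters the data.

```python
from collections import Counter
from itertools import combinations

# A tiny invented corpus: drums as people tend to write about them.
corpus = [
    "the empty drum is safe to store in the shed",
    "the full drum is heavy and dangerous to move",
    "an empty drum is safe enough to leave by the door",
    "a full drum of gasoline is dangerous",
]

# Count which words appear together in the same sentence.
cooccurrence = Counter()
for sentence in corpus:
    for a, b in combinations(set(sentence.split()), 2):
        cooccurrence[frozenset((a, b))] += 1

print(cooccurrence[frozenset(("empty", "safe"))])       # 2
print(cooccurrence[frozenset(("empty", "dangerous"))])  # 0
# "Empty" keeps company with "safe"; the vapor inside the drum never appears in the data.
```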

This passage made me think about the question I’d been circling toward: what would AI architect?

Not: could AI design buildings. That’s already happening. But what would it actually create if it had only words—only the patterns of how we describe space, structure, and shelter—without ever having walked through a doorway, felt the weight of a material, or discovered that a room described as “open” might feel oppressively vast?

I came to learn of the work of Francisco Varela.

Francisco Varela spent much of his career trying to understand how living systems maintain themselves. The key insight is what Varela called “structural coupling.” An organism doesn’t adapt to its environment as if the environment were a fixed stage and adaptation were a one-way process. Instead, an organism and environment are locked in a kind of dance. Each shapes the other. The organism responds to the environment. In responding, it changes the environment, which then presents new conditions to respond to. Neither is stable. Both are always becoming.

Always becoming.

One of Varela’s favorite examples was walking. When you walk, you don’t have a complete plan in your head, one that you execute step by step. Instead, you respond to the ground as it meets your foot. The terrain slopes unexpectedly. Your body adjusts your weight distribution before you consciously register the change. A rock shifts under your heel. Your ankle flexes. Your arms swing slightly to rebalance. Your next step compensates. The path you walk is neither purely your intention nor purely the terrain’s constraint. It emerges from the coupling.

This is what Varela meant by “enaction”—the idea that cognition isn’t representation (building accurate internal models of an external world) but rather skillful engagement with an environment that essentially talks back. Knowledge, then, isn’t stored images or propositions. It’s the capacity to respond appropriately when the world does something unexpected.
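
If I were to sketch that difference in code, it might look something like the toy below. The numbers are invented and it models nothing about real walking; it only shows the shape of the coupling: a fixed plan executed blindly versus a step that answers the step before it.

```python
import random

def terrain_slope():
    """The ground 'talks back': each step meets an unpredictable slope."""
    return random.uniform(-0.3, 0.3)

def open_loop_walk(steps=200):
    """Execute a fixed plan: the same stride no matter what the ground does."""
    imbalance = 0.0
    for _ in range(steps):
        imbalance += terrain_slope()   # disturbances accumulate, uncorrected
    return imbalance

def closed_loop_walk(steps=200):
    """Enaction, roughly: each step responds to the previous step's feedback."""
    imbalance = 0.0
    for _ in range(steps):
        imbalance += terrain_slope()
        imbalance -= 0.8 * imbalance   # the next stride partly corrects the last
    return imbalance

random.seed(0)
print(f"open-loop imbalance:   {abs(open_loop_walk()):.2f}")
print(f"closed-loop imbalance: {abs(closed_loop_walk()):.2f}")
# Without feedback the disturbances pile up; with the loop closed,
# each step absorbs most of what the terrain just did.
```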

I think built environments work this way too. They’re not containers that we simply occupy. They are participants in an ongoing, active, never fully settled coupling.

A doorway that’s too narrow makes you turn sideways. You feel the constraint, you adjust your body. A room with too much echo changes how you speak; you modulate your voice without thinking about it, and the room in turn shapes what conversations are possible there. A staircase with irregular treads makes you pay attention and notice your assumptions about rhythm and height. My desk teaches me things I might not expect.

These aren’t failures of design. They are the design: the way spaces naturally teach through feedback rather than instruction.

Christopher Alexander understood this.

His pattern language was an attempt to articulate how structures and inhabitants co-evolve, not a set of rules to impose on buildings or environments. “Light on Two Sides of Every Room” is not just an aesthetic preference but a response to how humans move through space, seeking orientation, avoiding the oppressive feeling of deep enclosure. The pattern emerges from thousands of structural couplings between bodies and buildings, refined over generations of people discovering what works by living in what doesn’t.

The environment has to push back. Structural coupling requires two structures, both capable of being changed by the encounter.

An LLM doesn’t walk through spaces. It doesn’t feel the difference between a room described as “intimate” and one that actually makes you hunch your shoulders because the ceiling is too low. It processes the word “door” without ever having to orient its body to pass through one.

But the danger, I think, is precisely what Whorf identified: we can mistake the map for the territory. We acclimate to the words as if they were the surroundings themselves, and stay unaware of the 180-degree misreading until something explodes.

When an AI system trained purely on linguistic patterns encounters words like “planning,” “design,” “architect,” “beauty” (what Alexander called explosive little traps), it inherits all our embedded misreadings without the material feedback that might correct them. It knows “open plan” correlates with “contemporary” and “collaborative” in the corpus, but it has never experienced how sound travels in such a space, how the lack of walls eliminates the possibility of concentration, how “open” can mean “exposed.”

Yes, it can generate a floor plan. But it can’t (yet) discover that the plan is not quite right by walking through it. There’s no structural coupling, only the first half of the dance, the gesture without the response.

In Varela’s framework, an organism cut off from its environment’s responses loses the capacity for knowledge itself. Without the feedback loop, without the environment pushing back, there’s no enaction. There’s only the simulation of action, the appearance of knowing.

I keep thinking about what it would mean to design systems where this coupling isn’t severed.

Maybe this means built environments that insist on being walked through, touched, tested—spaces that preserve what Varela called “structural plasticity,” the capacity to respond to how people actually use them. Buildings that remain dangerous in the way an empty gasoline drum is dangerous: capable of teaching through error before the error becomes catastrophic.

“What would AI architect?” I’d written the question at the top of my notes. The studio was still. I added another line beneath it: “What would catch the mistakes?”

I thought about Whorf’s investigator standing in front of that empty drum, match in hand. The drum itself knew. The vapor knew. Reality pushed back, but too late.

We build environments all the time that teach us things language can’t convey. A hot stove. A steep drop. A beam that won’t support the weight we’ve placed on it.

I noticed the desk again. It was still wobbling beneath my hands. Tomorrow I’d call my friend, come back with tools, tighten the screws, and feel the joints become solid again. The desk would stop responding to my body in that way, at least for a while. And maybe that was the lesson: you keep working and tinkering until the thing stops letting you know, until the coupling finds its equilibrium.

That is, until something shifts again, and the learning begins anew.
