When the world zigs, zag: Chris Lattner, Jeremy Howard on craftsmanship and AI


Today, I watched a brilliant conversation between Chris Lattner and Jeremy Howard about software engineering craftsmanship and the challenges that AI-generated code and tools bring to its future.

I viewed it through the lens of my own day-to-day development experience. AI tools constantly leave me astonished by their capabilities, yet somewhat underwhelmed at the same time, especially compared to the workflows some developers on X showcase, orchestrating multiple agents and dramatically boosting their productivity. Watching Chris and Jeremy talk about craftsmanship clarified something I had been feeling: the tools often amaze me, yet I keep wondering whether the hype I see online tells the full story.

Craftsmanship

Chris and Jeremy emphasized the importance of craftsmanship in software development. Building something that lasts requires competence and care: the architecture of the system needs to support growth, and the engineering culture must value technical excellence. They also agreed on the value of development environments and tooling that enable quick feedback loops and tight iteration cycles.

I was quite surprised at how quickly you had a full VS Code development environment with all the niceties, but then I kind of thought, well, of course Chris would focus on … because without tools that let you create quick iterations, all of your work is going to be slower and more annoying, and more wrong.

Jeremy Howard reflects on how Chris Lattner set up a fast, productive environment when starting his project.

In the video, they also touched on writing meaningful tests and dogfooding their own products. These practices continue to serve them well when building great software, and that mindset stands in contrast to the way agentic AI tools have recently reshaped development.

AI in practice: promise vs. reality

Jeremy mentioned that although their company had a clear vision and understood how they wanted to operate, agentic workflows became so popular that the team could not ignore them. It was oddly reassuring to hear that even people deeply involved in AI are still unsure which parts of the hype truly matter.

I’d think, wow, maybe I’m missing something. They seemed so confident <…> this is the future, just describe what you want, and it builds it. And I started thinking, maybe I’m the old man yelling at clouds here, maybe I’m the one not getting it.

Jeremy Howard

They took it seriously, building real systems around agents and giving them meaningful problems. Yet the results were disappointing. Worst of all, productivity and team morale fell off a cliff.

What caught my attention was how Jeremy explained the reasons for that failure. The agents broke the already productive workflow that had helped them produce great software. Instead of supporting quick, focused iteration cycles, the current set of LLM-driven tools often behaves like a slot machine: you prompt and hope for the right outcome.

I have felt the same tension in my own workflow. Sometimes agents produce at least a great start to a solution; other times you end up waiting and chasing the result you wanted. More than once I realized afterward that designing and developing it myself would have been quicker.

Jeremy Howard summarized the rollercoaster of feelings well:

At first it felt exciting, like magic, you give it a goal, it comes back with code. But after a few days you realize you’re no longer thinking deeply about your own architecture, you’re just waiting for the next turn of the crank.

Jeremy Howard

Chris Lattner also shared how his team experimented with letting agentic tools fix bugs autonomously; the tools often ended up patching symptoms while creating deeper issues. His overarching concern is delegating the very act of thinking, which can lead to intellectual laziness.

I caught myself going in that direction as well. Even after setting up a seemingly thoughtful agentic loop, with sub-steps for analyzing and questioning different solutions, I was giving up the opportunity to think deeply about the problem first. I’d end up critiquing the agent’s choices instead of dissecting the problem itself.

Where AI truly helps

Chris Lattner still sees clear advantages in today’s AI tools and estimates a 10-20% improvement in his own productivity. AI lets him get familiar with a new codebase quickly, sparing him from memorizing APIs and writing boilerplate. It accelerates his development without replacing his workflow, which lets him preserve understanding.

However, he also sees a potential 10x improvement in some areas, especially around building prototypes or assisting new developers. He believes the benefits may follow a bimodal distribution: modest in everyday work but transformative in early-stage exploration.

Maybe what we’ll find out is that there’s a very bimodal distribution, like the people building prototypes <…> or the people who don’t know code being able to get something done that they otherwise couldn’t do. That is actually transformative.

But for other kinds of programming, maybe it’s not actually the thing that you should be aspiring to.

Chris Lattner

What I find revealing is that two things can be true at the same time: we’re dealing with a transformative technology that drastically improves some areas of software engineering while offering only moderate gains in others, where results still depend on the engineer’s craftsmanship.

Another key idea from their discussion was to treat AI not as a junior engineer cranking out code, but as a senior advisor who teaches and helps improve the codebase. It can analyze, explain, or fix, but it should not replace a developer’s judgment.

When the world zigs, why mastery still matters

Throughout the conversation, the recurring theme was that continuing to build skill and mastery still makes the most sense. We do not know when, or whether, the tools will be good enough to replace every part of software engineering, and getting fixated on or frozen by that question is counterproductive.

Chris Lattner suggests that as everybody adopts these tools, the strongest differentiator is mastery: understanding things to a depth that others do not. Hype cycles are constantly happening, and when this one resets, you will want to come out of it with stronger expertise and a deeper understanding.

There’s this whole cultural thing happening right now where people are saying, Okay, the LLMs can code, they can design, they can do everything, so why should I even go and learn how to do these things myself?

The problem with that is, you’re now optimizing for being the person sitting there asking questions rather than understanding how the system works or how to make it better.

If I were coming out of school, my advice is don’t pursue that path. Particularly if everybody is zigging, it’s time to zag.

Chris Lattner

Neither of them is against growth or experimentation with these tools; in fact, they are fascinated by them and continue to experiment. But there’s little value in either extreme, ignoring new tools or believing they’ll handle everything: both lead to stagnation, and either way you risk falling behind in the value you create.

In a moment when many zig by freezing or outsourcing their thinking to AI, zagging means investing in mastery and understanding.
