Note from Jeremy: We’re teaching a course starting Nov 3rd on how to build towards software mastery and craftsmanship whilst leveraging AI effectively. Have a look at solve.it.com if you’re interested.
I’ve spent decades teaching people to code, building tools that help developers work more effectively, and championing the idea that programming should be accessible to everyone. Through fast.ai, I’ve helped millions learn not just to use AI, but to understand it deeply enough to build things that matter.
But lately, I’ve been deeply concerned. The AI agent revolution promises to make everyone more productive, yet what I’m seeing is something different: developers abandoning the very practices that lead to understanding, mastery, and software that lasts. When CEOs brag about their teams generating 10,000 lines of AI-written code per day, when junior engineers tell me they’re “vibe-coding” their way through problems without understanding the solutions, are we racing toward a future where no one understands how anything works, and competence craters?
I needed to talk to someone who embodies the opposite approach: someone whose code continues to run the world decades after he created it. That’s why I called Chris Lattner.
Chris and I chatted on Oct 5th, 2025, and he kindly let me record the conversation. I’m glad I did, because it turned out to be thoughtful and inspiring. Check out the video for the full interview, or read on for my summary of what I learned.
Talking with Chris Lattner
Chris Lattner builds infrastructure that becomes invisible through ubiquity.
Twenty-five years ago, as a PhD student, he created LLVM: the most fundamental system for translating human-written code into instructions computers can execute. In 2025, LLVM sits at the foundation of most major programming languages and compilers: the Rust that powers Firefox, the Swift running on your iPhone, and Clang, the C/C++ compiler Chris created that Google and Apple now use to build their most critical software. He describes the Swift programming language he created as “syntax sugar for LLVM”. Today it powers the entire iPhone/iPad ecosystem.
When you need something to last not just years but decades, to be flexible enough that people you’ll never meet can build things you never imagined on top of it, you build it the way Chris built LLVM, Clang, and Swift.
I first met Chris when he arrived at Google in 2017 to help them with TensorFlow. Instead of just tweaking it, he did what he always does: he rebuilt from first principles. He created MLIR (think of it as LLVM for modern hardware and AI), and then left Google to create Mojo: a programming language designed to finally give AI developers the kind of foundation that could last.
Chris architects systems that become the bedrock others build on for decades, by being a true craftsman. He cares deeply about the craft of software development.
I told Chris about my concerns, and the pressures I was feeling as both a coder and a CEO:
“Everybody else around the world is doing this, ‘AGI is around the corner. If you’re not doing everything with AI, you’re an idiot.’ And honestly, Chris, it does get to me. I question myself… I’m feeling this pressure to say, ‘Screw craftsmanship, screw caring.’ We hear VCs say, ‘My founders are telling me they’re getting out 10,000 lines of code a day.’ Are we crazy, Chris? Are we old men yelling at the clouds, being like, ‘Back in my day, we cared about craftsmanship’? Or what’s going on?”
Chris told me he shares my concerns:
“A lot of people are saying, ‘My gosh, tomorrow all programmers are going to be replaced by AGI, and therefore we might as well give up and go home. Why are we doing any of this anymore? If you’re learning how to code or taking pride in what you’re building, then you’re not doing it right.’ This is something I’m pretty concerned about…
But the question of the day is: how do you build a system that can actually last more than six months?”
He showed me that the answer to that question is timeless, and actually has very little to do with AI.
Design from First Principles
Chris’s approach has always been to ask fundamental questions. “For me, my journey has always been about trying to understand the fundamentals of what makes something work,” he told me. “And when you do that, you start to realize that a lot of the existing systems are actually not that great.”
When Chris started LLVM over Christmas break in 2000, he was asking: what does a compiler infrastructure need to be, fundamentally, to support languages that don’t exist yet? When he came into the AI world, he was eager to learn about the problems I saw with TensorFlow and other systems. He then dug into what AI infrastructure should look like from the ground up. Chris explained:
“The reason that those systems were fundamental, scalable, successful, and didn’t crumble under their own weight is because the architecture of those systems actually worked well. They were well-designed, they were scalable. The people that worked on them had an engineering culture that they rallied behind because they wanted to make them technically excellent.
In the case of LLVM, for example, it was never designed to support the Rust programming language or Julia or even Swift. But because it was designed and architected for that, you could build programming languages, Snowflake could go build a database optimizer—which is really cool—and a whole bunch of other applications of the technology came out of that architecture.”
Chris pointed out that he and I have a certain interest in common: “We like to build things, and we like to build things from the fundamentals. We like to understand them. We like to ask questions.” He has found (as have I!) that this is critical if you want your work to matter, and to last.
Of course, building things from the fundamentals doesn’t always work. But as Chris said, “if we’re going to make a mistake, let’s make a new mistake.” Doing the same thing as everyone else, in the same way as everyone else, isn’t likely to produce work that matters.
Craftsmanship and architecture
Chris pointed out that software engineering isn’t just about an individual churning out code: “A lot of evolving a product is not just about getting the results; it’s about the team understanding the architecture of the code.” And in fact it’s not even just about understanding; he’s looking for something much more than that: “For people to actually give a damn. For people to care about what they’re doing, to be proud of their work.”
I’ve seen that it’s possible for teams that care and build thoughtfully to achieve something special. I pointed out to him that “software engineering has always been about trying to get a product that gets better and better, and your ability to work on that product gets better and better. Things get easier and faster because you’re building better and better abstractions and better and better understandings in your head.”
Chris agreed. He again stressed the importance of thinking longer term:
“Fundamentally, with most kinds of software projects, the software lives for more than six months or a year. The kinds of things I work on, and the kinds of systems you like to build, are things that you continue to evolve. Look at the Linux kernel. The Linux kernel has existed for decades with tons of different people working on it. That is made possible by an architect, Linus, who is driving consistency, abstractions, and improvement in lots of different directions. That longevity is made possible by that architectural focus.”
This kind of deep work doesn’t just benefit the organization; it benefits every individual too. Chris said:
“I think the question is really about progress. It’s about you as an engineer. What are you learning? How are you getting better? How much mastery do you develop? Why is it that you’re able to solve problems that other people can’t?… The people that I see doing really well in their careers, their lives, and their development are the people that are pushing. They’re not complacent. They’re not just doing what everybody tells them to do. They’re actually asking hard questions, and they want to get better. So investing in yourself, investing in your tools and techniques, and really pushing hard so that you can understand things at a deeper level—I think that’s really what enables people to grow and achieve things that they maybe didn’t think were possible a few years before.”
This is what I tell my team too. The thing I care most about is whether they’re always improving their ability to solve hard problems.
Building Despite Doubt
Building something that lasts 25 years, like LLVM, requires more than architectural thinking. It requires the willingness to pursue your vision even when others don’t see it yet.
When I asked Chris about Mojo’s journey, his passion and dedication couldn’t have been clearer:
“This is my life’s work. This is what I was put here to do: Mojo and MAX, solving AI compute, making it so people can program all the chips and have choice. This is what I’m about, and I haven’t been too shy about that. But on the other hand, that doesn’t mean everybody believes. That’s okay with me. A lot of people haven’t believed for various reasons, whether it be the angry person on Hacker News shaking their fist and asking, ‘Why didn’t I just use Julia?’ or whether it be an employee that’s like, ‘Yeah, actually, I don’t believe in X, Y, and Z.’ It’s like, ‘Okay, cool, this is the wrong place.’…
Trying to make everybody happy is how you get watered-down, committee-driven things, and you can’t really make a big, bold bet if you do that. You have to have a hypothesis, a core belief. It may be directional, may not have all the digits of precision, but you have to stick to it, otherwise you won’t get there.”
Dogfooding
But caring deeply and thinking architecturally isn’t enough if you’re building in a vacuum.
I’m not sure it’s really possible to create great software if you’re not using it yourself, or working right next to your users. When Chris and his team were building the Swift language, they had to do so in a vacuum of Apple secrecy. He shared:
“The using your own product piece is really important. One of the big things that caused the IDE features and many other things to be a problem with Swift is that we didn’t really have a user. We were building it, but before we launched, we had one test app that was kind of ‘dogfooded’ in air quotes, but not really. We weren’t actually using it in production at all. And by the time it launched, you could tell. The tools didn’t work, it was slow to compile, crashed all the time, lots of missing features.”
His new Mojo project is taking a very different direction:
“With Mojo, we consider ourselves to be the first customer. We have hundreds of thousands of lines of Mojo code, and it’s all open source… That approach is very different. It’s a product of experience, but it’s also a product of building Mojo to solve our own problems. We’re learning from the past, taking best principles in.”
The result is evident. Already at this early stage, models built on Mojo are getting state-of-the-art results. Most of Mojo is written in Mojo, so if something isn’t working well, they are the first ones to notice.
We had a similar goal at fast.ai with our Solveit platform: we wanted to reach a point where most of our staff chose to do most of their work in Solveit, because they preferred it. (Indeed, I’m writing this article in Solveit right now!) Before we reached that point, I often had to force myself to use Solveit in order to experience firsthand the shortcomings of those early versions, so that I could deeply understand the issues. Having done so, I now appreciate even more how smoothly everything works!
But this kind of deep, experiential understanding is exactly what we risk losing when we delegate too much to AI.
AI, craftsmanship, and learning
Chris uses AI: “I think it’s a very important tool. I feel like I get a 10 to 20% improvement—some really fancy code completion and autocomplete.” But given Chris’s focus on the importance of craftsmanship and continual learning and improvement, I wondered if heavy AI (and particularly agent) use (“vibe coding”) might negatively impact organizations and individuals.
Chris: When you’re vibe-coding things, suddenly… another thing I’ve seen is that people say, ‘Okay, well maybe it’ll work.’ It’s almost like a test. You go off and say, ‘Maybe the agentic thing will go crank out some code,’ and you spend all this time waiting on it and coaching it. Then, it doesn’t work.
Jeremy: It’s like a gambling machine, right? Pull the lever again, try again, just try again.
Chris: Exactly. And again, I’m not saying the tools are useless or bad, but when you take a step back and you look at where it’s adding value and how, I think there’s a little bit too much enthusiasm of, ‘Well, when AGI happens, it’s going to solve the problem. I’m just waiting and seeing…’ Here’s another aspect of it: the anxiety piece. I see a lot of junior engineers coming out of school, and they’re very worried about whether they’ll be able to get a job. A lot of things are changing, and I don’t really know what’s going to happen. But to your point earlier, a lot of them say, ‘Okay, well, I’m just going to vibe-code everything,’ because this is ‘productivity’ in air quotes. I think that’s also a significant problem.
Jeremy: Seems like a career killer to me.
Chris: …If you get sucked into, ‘Okay, well I need to figure out how to make this thing make me a 10x programmer,’ it may be a path that doesn’t bring you to developing at all. It may actually mean that you’re throwing away your own time, because we only have so much time to live on this earth. It can end up retarding your development and preventing you from growing and actually getting stuff done.
At its heart, Chris’s concern is that AI-heavy coding and craftsmanship just don’t appear to be compatible:
“Software craftsmanship is the thing that AI code threatens. Not because it’s impossible to use properly—again, I use it, and I feel like I’m doing it well because I care a lot about the quality of the code. But because it encourages folks to not take the craftsmanship, design, and architecture seriously. Instead, you just devolve to getting your bug queue to be shallower and making the symptoms go away. I think that’s the thing that I find concerning.”
“What you want to get to, particularly as your career evolves, is mastery. That’s how you kind of escape the thing that everybody can do and get more differentiation… The concern I have is this culture of, ‘Well, I’m not even going to try to understand what’s going on. I’m just going to spend some tokens, and maybe it’ll be great.’”
I asked if he had some specific examples where he’s seen things go awry.
“I’ve seen a senior engineer, when a bug gets reported, let the agentic loop rip, go spend some tokens, and maybe it’ll come up with a bug fix and create a PR. This PR, however, was completely wrong. It made the symptom go away, so it ‘fixed’ the bug in air quotes, but it was so wrong that if it had been merged, it would have just made the product way worse. You’re replacing one bug with a whole bunch of other bugs that are harder to understand, and a ton of code that’s just in the wrong place doing the wrong thing. That is deeply concerning. The actual concern is not this particular engineer because, fortunately, they’re a senior engineer and smart enough not to just say, ‘Okay, pass this test, merge.’ We also do code review, which is a very important thing. But the concern I have is this culture of, ‘Well, I’m not even going to try to understand what’s going on. I’m just going to spend some tokens, and maybe it’ll be great. Now I don’t have to think about it.’ This is a huge concern because a lot of evolving a product is not just about getting the results; it’s about the team understanding the architecture of the code. If you’re delegating knowledge to an AI, and you’re just reviewing the code without thinking about what you want to achieve, I think that’s very, very concerning.”
Some folks have told me they think that unit tests are a particularly good place to look at using AI more heavily. Chris urges caution, however:
“AI is really great at writing unit tests. This is one of the things that nobody likes to do. It feels super productive to say, ‘Just crank out a whole bunch of tests,’ and look, I’ve got all this code, amazing. But there’s a problem, because unit tests are their own potential tech debt. The test may not be testing the right thing, or they might be testing a detail of the thing rather than the real idea of the thing… And if you’re using mocking, now you get all these super tightly bound implementation details in your tests, which make it very difficult to change the architecture of your product as things evolve. Tests are just like the code in your main application—you should think about them. Also, lots of tests take a long time to run, and so they impact your future development velocity.”
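To make the mocking concern concrete, here’s a hypothetical sketch (every name in it is invented for illustration) of the difference between a test that pins implementation details and one that pins behavior:

```python
import unittest
from unittest.mock import MagicMock

# Hypothetical code under test: sums an order's line items, in cents.
def order_total(db, order_id):
    items = db.get_items(order_id)  # the dependency we might want to fake
    return sum(item["cents"] for item in items)

class ImplementationBoundTest(unittest.TestCase):
    # Pins *how* the function talks to its dependency. Rename get_items,
    # batch the query, or add a cache, and this test breaks even though
    # the observable behavior is unchanged.
    def test_calls_get_items_once(self):
        db = MagicMock()
        db.get_items.return_value = [{"cents": 199}, {"cents": 250}]
        order_total(db, 42)
        db.get_items.assert_called_once_with(42)

class BehaviorTest(unittest.TestCase):
    # Pins *what* the function computes. Any refactor that preserves the
    # contract keeps this test green.
    def test_sums_line_items(self):
        db = MagicMock()
        db.get_items.return_value = [{"cents": 199}, {"cents": 250}]
        self.assertEqual(order_total(db, 42), 449)

if __name__ == "__main__":
    unittest.main()
```

An agent told to crank out tests will happily produce lots of the first kind: they boost the line count while quietly cementing today’s implementation in place.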
Part of the problem, Chris noted, is that many people cite a high count of lines of code written as evidence that AI is making a positive impact.
“To me, the question is not how do you get the most code. I’m not a CEO bragging about the number of lines of code written by AI; I think that’s a completely useless metric. I don’t measure progress based on the number of lines of code written. In fact, I see verbose, redundant, not well-factored code as a huge liability… The question is: how productive are people at getting stuff done and making the product better? This is what I care about.”
Underlying all of these concerns is the belief that AGI is imminent, and therefore traditional approaches to software development are obsolete. Chris has seen this movie before. “In 2017, I was at Tesla working on self-driving cars, leading the Autopilot software team. I was convinced that in 2020, autonomous cars would be everywhere and would be solved. It was this desperate race to go solve autonomy… But at the time, nobody even knew how hard that was. But what was in the air was: trillions of dollars are at stake, job replacement, transforming transportation… I think today, exactly the same thing is happening. It’s not about self-driving, although that is making progress, just a little bit less gloriously and immediately than people thought. But now it’s about programming.”
Chris thinks that, as with every previous technology wave, AI progress isn’t actually exponential. “I believe that progress looks like S-curves. Pre-training was a big deal. It seemed exponential, but it actually S-curved out and got flat as things went on. I think that we have a number of piled-up S-curves that are all driving forward amazing progress, but I at least have not seen that spark.”
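There’s a simple mathematical reason the two are so easy to confuse (my gloss, not Chris’s): in its early phase, a logistic S-curve is essentially indistinguishable from an exponential, and the flattening only reveals itself later:

$$f(t) = \frac{L}{1 + e^{-k(t - t_0)}} \approx L\,e^{k(t - t_0)} \quad \text{for } t \ll t_0.$$

You only find out you were on an S-curve once $t$ approaches $t_0$ and growth starts to saturate.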
The danger isn’t just that people might be wrong about AGI’s timeline - it’s what happens to their careers and codebases while they’re waiting. “Technology waves cause massive hype cycles, overdrama, and overselling,” Chris noted. “Whether it be object-oriented programming in the ’80s where everything’s an object, or the internet wave in the 2000s where everything has to be online otherwise you can’t buy a shirt or dog food. There’s truth to the technology, but what ends up happening is things settle out, and it’s less dramatic than initially promised. The question is, when things settle out, where do you as a programmer stand? Have you lost years of your own development because you’ve been spending it the wrong way?”
Chris is careful to clarify that he’s not anti-AI – far from it. “I am a maximalist. I want AI in all of our lives,” he told me. “However, the thing I don’t like is the people that are making decisions as though AGI or ASI were here tomorrow… Being paranoid, being anxious, being afraid of living your life and of building a better world seems like a very silly and not very pragmatic thing to do.”
Software craftsmanship with AI
Chris sees the key as understanding the difference between using AI as a crutch versus using it as a tool that enhances your craftsmanship. He finds AI particularly valuable for exploration and learning:
“It’s amazing for learning a codebase you’re not familiar with, so it’s great for discovery. The automation features of AI are super important. Getting us out of writing boilerplate, getting us out of memorizing APIs, getting us out of looking up that thing from Stack Overflow; I think this is really profound. This is a good use. The thing that I get concerned about is if you go so far as to not care about what you’re looking up on Stack Overflow and why it works that way and not learning from it.”
One principle Chris and I share is the critical importance of tight iteration loops. For Chris, working on systems programming, this means “edit the code, compile, run it, get a test that fails, and then debug it and iterate on that loop… Running tests should take less than a minute, ideally less than 30 seconds.” He told me that when working on Mojo, one of the first priorities was “building VS Code support early because without tools that let you create quick iterations, all of your work is going to be slower, more annoying, and more wrong.”
My background is different - I am a fan of the Smalltalk, Lisp, and APL tradition, where you have a live workspace and every line of code manipulates objects in that environment. When Chris and I first worked together on Swift for TensorFlow, the first thing I told him was “I’m going to need a notebook.” Within a week, he had built me complete Swift support for Jupyter. I could type something, see the result immediately, and watch my data transform step-by-step through the process. This is the Bret Victor “Inventing on Principle” style of being close to what you’re crafting.
If you want to maintain craftsmanship while using AI, you need tight iteration loops so you can see what’s happening. You need a live workspace where you (and the AI) are manipulating actual state, not just writing text files.
At fast.ai, we’ve been working to put this philosophy into practice with our Solveit platform. We discovered a key principle: the AI should be able to see exactly what the human sees, and the human should be able to see exactly what the AI sees at all times. No separate instruction files, no context windows that don’t match your actual workspace - the AI is right there with you, supporting you as you work.
This creates what I think of as “a third participant in this dialogue” - previously I had a conversation with my computer through a REPL, typing commands and seeing results. Now the AI is in that conversation too, able to see my code, my data, my outputs, and my thought process as I work through problems. When I ask “does this align with what we discussed earlier?” or “have we handled this edge case?”, the AI doesn’t need me to copy-paste context - it’s already there.
One of our team members, Nate, built something called ShellSage that demonstrates this beautifully. He realized that tmux already shows everything that’s happened in your shell session, so he just added a command that talks to an LLM. That’s it - about 100 lines of code (a minimal sketch of the idea follows below). The LLM can see all your previous commands, questions, and output. By the next day, all of us were using it constantly.

Another team member, Eric, built our Discord Buddy bot using this same approach - he didn’t write code in an editor and deploy it. He typed commands one at a time in a live symbol table, manipulating state directly. When it worked, he wrapped those steps into functions. No deployment, no build process - just iterative refinement of a running system.
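Here’s that ShellSage-style sketch - not the real ShellSage, just the core trick, assuming the Anthropic Python SDK (the model name is a placeholder you’d swap for your own):

```python
# Sketch of a ShellSage-style helper: tmux already holds the session history,
# so the assistant is little more than capture-pane + a prompt. Run inside tmux.
import subprocess, sys
import anthropic  # assumed SDK; the real ShellSage's stack may differ

def pane_history(lines=200):
    "Return the last `lines` lines of the current tmux pane as plain text."
    out = subprocess.run(["tmux", "capture-pane", "-p", "-S", f"-{lines}"],
                         capture_output=True, text=True, check=True)
    return out.stdout

def ask(question):
    "Send the pane history plus a question to the model, and print the answer."
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=1024,
        messages=[{"role": "user", "content":
                   f"Recent terminal session:\n{pane_history()}\n\n{question}"}])
    print(msg.content[0].text)

if __name__ == "__main__":
    ask(" ".join(sys.argv[1:]) or "What went wrong with my last command?")
```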
Eric Ries has been writing his new book in Solveit, and the AI can see exactly what he writes. He asks questions like “does this paragraph align with the mission we stated earlier?” or “have we discussed this case study before?” or “can you check my editor’s notes for comments on this?” The AI doesn’t need special instructions or context management - it’s in the trenches with him, watching the work unfold. (I’m writing this article in Solveit right now, for the same reasons.)
I asked Chris how he thinks about the approach we’re taking with Solveit: “instead of bringing in a junior engineer that can just crank out code, you’re bringing in a senior expert, a senior engineer, an advisor—somebody that can actually help you make better code and teach you things.”
How do we do something meaningful?
Chris and I both see a bifurcation coming. “It feels like we’re going to have a bifurcation of skills,” I told him, “because people who use AI the wrong way are going to get worse and worse. And the people who use it to learn more and learn faster are going to outpace the speed of growth of AI capabilities because they’re human with the benefit of that… There’s going to be this group of people that have learned helplessness and this maybe smaller group of people that everybody’s like, ‘How does this person know everything? They’re so good.’”
The principles that allowed LLVM to last 25 years – architecture, understanding, craftsmanship – haven’t changed. “The question is, when things settle out, where do you as a programmer stand?” Chris asked. “Have you lost years of your own development because you’ve been spending it the wrong way? And now suddenly everybody else is much further ahead of you in terms of being able to create productive value for the world.”
His advice is clear, especially for those just starting out: “If I were coming out of school, my advice would be don’t pursue that path. Particularly if everybody is zigging, it’s time to zag. What you want to get to, particularly as your career evolves, is mastery. So you can be the senior engineer. So you can actually understand things to a depth that other people don’t. That’s how you escape the thing that everybody can do and get more differentiation.”
The hype will settle. The tools will improve. But the question Chris poses remains: “How do we actually add value to the world? How do we do something meaningful? How do we move the world forward?” For both of us, the answer involves caring deeply about our craft, understanding what we’re building, and using AI not as a replacement for thinking but as a tool to think more effectively. If the goal is to build things that last, you’re not going to be able to outsource that to AI. You’ll need to invest deeply in yourself.