Chess engines didn't replace Magnus Carlsen, and AI won't replace you

October 22, 2025

I've been thinking about how Magnus Carlsen talks about using chess engines to train. He doesn't let them play for him during the game itself (that would be cheating), but after the game? That's when the real learning happens: he reviews his games with an engine, finds mistakes, discovers better moves he didn't see.

Lately I've realized that's exactly the relationship I've developed with coding assistants. I don't let them commit straight to main (that would be reckless). But after they generate code? That's when the learning happens. Code review becomes like chess post-game analysis where I'm dissecting what the LLM produced, finding the subtle mistakes, learning new patterns I hadn't considered.

How Magnus uses chess engines (and what developers can learn)

After DeepMind's AlphaZero beat Stockfish, Carlsen studied those games deeply. In his own words:

"I have become a very different player in terms of style than I was a bit earlier, and it has been a great ride." (Magnus Carlsen, interview on learning from AlphaZero, Chess24, 2020)

According to his coach, the change came from wild ideas AlphaZero uncovered: sacrificing pieces for long-term advantage, pushing the rook pawn aggressively, using the king as an active fighter (Peter Heine Nielsen, "Carlsen's Coach on AlphaZero", ChessBase, 2020). Things human experts thought were unsound, but the engine showed could work.

The key thing: Carlsen isn't blindly memorising engine moves. He's learning why those moves work by reviewing them. He still plays the game himself, but with a broader vision. The engine is a coach, not an autopilot.

That's the parallel I see with coding assistants. They can crank out solutions in seconds and show me approaches I didn't consider. But if I just accept suggestions without thought, I'm basically hoping Stockfish in my ass will make me a winner. Spoiler: I'll mishear move 37 and blunder catastrophically. (If that reference is lost on you: in 2022, after 19-year-old Hans Niemann defeated Magnus Carlsen at the Sinquefield Cup, wild theories emerged that he had used vibrating anal beads to receive chess moves. Chess commentator Eric Hansen joked about the theory, Elon Musk amplified it, and Niemann offered to play nude to prove he wasn't cheating. Chess.com later found "no determinative evidence" of in-person cheating.)

The sweet spot is using the LLM to augment my judgment. Let it show me options, then I decide what to do with that information.

Code review: the developer's post-game analysis

Code review becomes your post-game analysis. Magnus reviews his games with engines to learn from their superior analysis. You review LLM code to ensure it's actually correct. Both demand expertise: Magnus needs it to understand why the engine's moves work; you need it to distinguish code that looks good from code that actually makes sense.

When I have an LLM generate code, I don't merge it right away. I review it with the same healthy skepticism I'd apply to a human contributor. More, actually, because humans inventing imaginary API endpoints is much rarer. (Though I did it myself before the LLM era more times than I'm ready to admit, confidently coding against an endpoint I was sure existed.)
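
To show what I mean, here's that failure mode in miniature. The hallucinated call below is invented for illustration; the corrected one is real Python standard library:

```python
from datetime import datetime

# Plausible-but-imaginary: an LLM can emit a method that merely looks
# like it should exist. This line would raise AttributeError:
# ts = datetime.from_iso("2025-10-22T09:30:00")

# What review (or a quick trip to the docs) confirms actually exists:
ts = datetime.fromisoformat("2025-10-22T09:30:00")
print(ts.year)  # 2025
```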

LLM code needs a human eye for things machines aren't good at. Does this actually fit our requirements? Is it idiomatic? Did it consider the edge cases?
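
Here's a minimal, invented sketch of that difference (the function and the requirements question are mine, not from any real PR): code that passes a skim but not a review.

```python
# Hypothetical LLM output: clean, typed, and "looks good" at a glance.
def p95_latency(samples_ms: list[float]) -> float:
    ordered = sorted(samples_ms)
    return ordered[int(len(ordered) * 0.95)]

# What a human review catches:
#   - an empty samples_ms crashes with IndexError
#   - int(n * 0.95) is a floor-based index; whether our dashboards expect
#     nearest-rank or interpolated p95 is a requirements question that no
#     linter will answer for us
def p95_latency_reviewed(samples_ms: list[float]) -> float:
    if not samples_ms:
        raise ValueError("p95 of an empty sample set is undefined")
    ordered = sorted(samples_ms)
    idx = min(int(len(ordered) * 0.95), len(ordered) - 1)  # clamp defensively
    return ordered[idx]
```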

Code review is the gate to the codebase: nothing ships until a human is willing to take responsibility for it. You catch the mistakes, ensure consistency, sanity-check the diff. Same things you'd do for a junior developer's big PR.

[Chart: the work shifted. Time estimates based on personal experience, as of October 2025.]

Using review as opportunity for learning

Magnus once said he doesn't fear computers because he uses them to train, which makes him even stronger against human opponents. I'm not worried about coding assistants replacing me. I'm using them to level myself up.

The option to copy-paste without understanding has always existed (StackOverflow, now LLMs). The tool doesn't decide whether I learn or not. I do. Will I lose problem-solving skills by blindly accepting every suggestion? Absolutely. But that's on me.

From "looks good" to "makes sense"

After months of working with AI coding tools, I've noticed a shift in how I approach problems. I'm less afraid to attempt ambitious things. Not because the LLM will magically do it for me, but because I know I have a sparring partner that will catch my blunders and occasionally point out a good shortcut.

Right now, I can talk to a computer and it talks back (see my post "Coding with LLMs: We can talk to computers now and we're upset about it"). And yeah, it drives me a little nuts at times. But like any good sparring partner, it's making me better.

Chess engines changed how the game is played and studied. People worried they would "solve" chess and kill the game. Instead, chess is more popular than ever: Chess.com membership doubled from 100 million (December 2022) to 200 million (April 2025). Engines didn't replace players; they made the best players even better and made high-level play more accessible to everyone.

The same pattern repeats with coding assistants. People worry that autonomous agents will take our jobs. But what we're actually seeing is how they augment us and change the way we approach software development, though mainly for those whose stack is widely covered by LLM training data. True AGI is still far away (Andrej Karpathy, interview with Dwarkesh Patel, October 2025: "It will take about a decade to work through all of those issues," speaking of agents), but these tools are changing the game now. And for those willing to put in the review time and keep learning, they're changing it for the better.
