Having kids is weird. Here are these human beings, with their own interests, likes and dislikes, and absolutely no knowledge. I mean yes, over time they build up experiences, but they start out with literally nothing. You have to teach them how to go to the bathroom, how to use a knife and fork, how to sit in a seat, etc., etc. All the things that you’ve forgotten there was any need to learn in the first place.
And they learn! It’s so much fun to watch, and continuously surprising, especially when they pick up skills you never learned. My kids are into crafts, and are constantly creating friendship bracelets, constructing incredible origami sculptures, and crocheting cute animals. As an adult with a lifetime of experiences, I try to think about how I can help them learn faster, or learn to enjoy the things I enjoy. This is mostly a fool’s errand – generally speaking, they like what they like, and are completely uninterested in my perspective. #parenting
A couple of days ago, I had an interesting conversation with one of my kids. They had been creating two different friendship bracelets, and had run out of beads for both. So they asked AI to create a pattern that combined the two, and voilà! They had a working design.
And it was interesting – I could see the logic of what they’d done. They didn’t want to have to do something difficult and confusing, where they had no confidence. So they asked the magic box to do it for them. All parents should recognize this situation – enough kids are using the magic box to write their essays that schools are now requiring students to write all essays in class. A Spanish teacher gave F’s to a bunch of kids who used Google Translate to do their homework (pro tip: if you haven’t learned the preterite yet, don’t use the past tense in your answers). Graduate students – who have famously bad writing skills – are suddenly turning in grammatically correct papers. (note: the only one of these I’m making up is none of them)
But in the alternate universe without AI, my kid would have had to figure out for themselves how to merge the two bracelets. It might not have been perfect, but the next time it would have been a little better, and the next time after that it would have been a little better, and so on. That’s how skill acquisition works. But that didn’t happen.
So what I told them was: “You can use AI to do something for you, and it may do it faster. It will also make you dumber.”
There’s a well-known model of skill acquisition (the Dreyfus model), in which a learner progresses through multiple stages – novice, advanced beginner, competent, and so on – until they achieve mastery. The problem with this model, and with skill acquisition in general (again – ask any parent), is that learning deep skills is hard, and takes thousands of hours of directed practice. Much of that practice, especially in the beginning, is tedious, difficult, and emotionally punishing. It challenges our idea of ourselves as capable, intelligent, and special. But if you want to master a skill, there’s no other way.
Having a bodyguard doesn’t make you a martial arts master. Using Google Translate doesn’t make you fluent in another language. Pushing a button and generating an image doesn’t make you an artist. Knowing how to use your music app doesn’t make you a rock star.
“But Dan, in the old days people had to know how to take care of horses, and now we can all drive our cars to the mechanic when there’s a problem.” That’s true. But driving to the mechanic isn’t a skill – you’ve delegated that skill to an external agent, whom you expect to have actual skills, not just a fast search engine.
“But Dan, I’m just using AI to do things that aren’t core to my job, like unit tests.” Unit tests aren’t core to your job? And even if that were the case, there’s going to come a time when you need to understand why those unit tests are failing, but you won’t know how they work or what they’re supposed to do – any more than you know what the mechanic is doing in the garage.
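To make that concrete, here’s a minimal, hypothetical sketch – the apply_discount function and its test are invented for illustration, not taken from any real codebase. It shows the kind of generated test that passes today because it pins incidental behavior (exact float rounding) rather than the business rule, and the comments note what happens when it eventually breaks:

```python
import unittest

def apply_discount(price: float, rate: float) -> float:
    """Return the price after applying a fractional discount rate."""
    return round(price * (1 - rate), 2)

class TestApplyDiscount(unittest.TestCase):
    # A plausibly AI-generated test: it asserts the exact rounded float,
    # pinning today's rounding behavior instead of the business rule.
    # If the implementation later moves to decimal.Decimal or a different
    # rounding mode, this test fails – and if you never wrote or read it,
    # you can't tell whether the bug is in the code or in the test.
    def test_ten_percent_discount(self):
        self.assertEqual(apply_discount(19.99, 0.1), 17.99)

if __name__ == "__main__":
    unittest.main()
```

The point isn’t that generated tests are always wrong. It’s that a test you didn’t write encodes assumptions you never examined, and the day it fails is the day you need those assumptions most.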
“But Dan, I spent years learning the skill. Now I can use higher-order tools to generate business value, and use the skills to make sure the AI gets things right.” Except it doesn’t work that way, does it? Just like lifting weights at the gym, when you complete a task, you aren’t just completing a task – you’re building and reinforcing a skill, and building and reinforcing knowledge about the code base. And just like sitting on the couch eating potato chips, when you ask someone else (a person, a technology) to do something instead of doing it yourself, your mental muscles atrophy when you stop using them. Just ask any manager.
So. You can use AI, and get to an answer faster, but it will make you dumber. That’s the choice. You can’t both use AI to complete a task and get better at the core skill behind it. Using AI is an active choice to let the deep skills you’ve developed for years atrophy, in favor of a hoped-for productivity gain.
As I was finishing up this blog post, I ran across a study of AI productivity gains in which experienced open-source developers were randomly assigned to complete tasks with or without AI. It’s definitely worth a read.
Methodology
To directly measure the real-world impact of AI tools on software development, we recruited 16 experienced developers from large open-source repositories (averaging 22k+ stars and 1M+ lines of code) that they’ve contributed to for multiple years. Developers provide lists of real issues (246 total) that would be valuable to the repository—bug fixes, features, and refactors that would normally be part of their regular work. Then, we randomly assign each issue to either allow or disallow use of AI while working on the issue. When AI is allowed, developers can use any tools they choose (primarily Cursor Pro with Claude 3.5/3.7 Sonnet—frontier models at the time of the study); when disallowed, they work without generative AI assistance. Developers complete these tasks (which average two hours each) while recording their screens, then self-report the total implementation time they needed. We pay developers $150/hr as compensation for their participation in the study.
Core Result
When developers are allowed to use AI tools, they take 19% longer to complete issues—a significant slowdown that goes against developer beliefs and expert forecasts. This gap between perception and reality is striking: developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%.
The whole premise of AI tools is that they provide a productivity improvement – that is, that engineers who use AI will handily outperform engineers who don’t. While this is just one study, and it’s hard to know how generalizable its results are, it’s worth recognizing that your own perception of your productivity gains might be completely off base.