I want to be clear: I’m a software engineer who uses LLMs ‘heavily’ in my daily work. They have undeniably been a good productivity tool, helping me solve problems and tackle projects faster. This post isn’t a call to reject LLMs or the progress they represent; rather, it’s my reflection on what we might be losing in our haste to embrace them.
The rise of AI coding assistants has ushered in what many call a new age of productivity. LLMs excel at several tasks that genuinely improve developer workflows: writing isolated functions; scaffolding boilerplate such as test cases and configuration files; explaining unfamiliar code or complex algorithms; generating documentation and comments; and helping with syntax in unfamiliar languages or frameworks. These capabilities allow us to work ‘faster’.
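To make the boilerplate point concrete, here is a rough sketch of the kind of test scaffold an assistant produces well. The `slugify` function and its module path are invented for illustration, not taken from any real project:

```python
# Hypothetical example: the sort of test scaffold an assistant generates quickly.
# `slugify` and `myapp.text_utils` are invented names for illustration.
import pytest

from myapp.text_utils import slugify


@pytest.mark.parametrize(
    ("raw", "expected"),
    [
        ("Hello World", "hello-world"),
        ("  Trim  me  ", "trim-me"),
        ("Symbols & such!", "symbols-such"),
    ],
)
def test_slugify_basic_cases(raw, expected):
    assert slugify(raw) == expected


def test_slugify_empty_string_returns_empty():
    assert slugify("") == ""
```

None of this is hard to write by hand; the point is that it is tedious, and tedium is exactly where these tools shine.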
But beneath this image of enhanced efficiency, I find myself wondering if there’s a more troubling effect: Are we trading our hard-earned intelligence for short-term convenience?
What the studies show
Research consistently points to concerning trends in how AI usage affects our cognitive abilities. Studies using brain imaging technology found that ChatGPT users had the lowest brain engagement and “consistently underperformed at neural, linguistic, and behavioral levels” [Study]. A comprehensive survey of 319 knowledge workers revealed that higher confidence in AI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking [Study]. Meanwhile, research involving 666 participants found a significant negative correlation between AI tool usage and critical thinking scores [Study], with cognitive offloading identified as a primary driver of this decline. These studies collectively suggest that while AI can boost immediate productivity, frequent use may reduce our inclination to engage in deep, reflective thinking, particularly among users who show higher dependence on AI tools and lower critical thinking scores.
What does this look like in practice for developers? Addy Osmani details it in ‘Avoiding Skill Atrophy in the Age of AI’: one engineer with 12 years of experience confessed that AI’s instant help made him “worse at his own craft.” First, he stopped reading documentation. Why bother when an LLM can explain it instantly? Then his debugging skills waned: stack traces and error messages felt daunting, so he just copy-pasted them into the AI for a fix. “I’ve become a human clipboard,” blindly shuttling errors to the AI and solutions back to code.
The hallucination problem
Beyond skill atrophy lies another critical issue: AI reliability. LLMs hallucinate frequently, producing confident-sounding but incorrect information. They generate plausible-looking code that contains subtle bugs, suggest outdated practices, or make security-compromising recommendations.
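To illustrate (with invented code, not drawn from any real incident), here are two snippets of the sort an assistant might confidently offer. Each reads fine at a glance, yet one hides a subtle logic bug and the other an outdated, security-compromising practice:

```python
import hashlib


def paginate(items, page_size):
    # Looks plausible, but the stop bound of len(items) - page_size silently
    # drops the final page; the range should run up to len(items).
    return [items[i:i + page_size]
            for i in range(0, len(items) - page_size, page_size)]


def hash_password(password: str, salt: str) -> str:
    # Outdated, insecure advice: MD5 is a fast, broken hash. Password storage
    # should use a slow, salted KDF such as bcrypt, scrypt, or Argon2.
    return hashlib.md5((salt + password).encode("utf-8")).hexdigest()
```

Both would likely pass a hurried review, and a hurried review is exactly what heavy AI reliance encourages.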
When we blindly trust AI output without verification, we’re not just risking immediate bugs; we’re systematically degrading our ability to catch these errors. The very skills we need to validate AI-generated code are the ones that atrophy from disuse.
The shifting expectation landscape
Expectations are evolving rapidly under AI’s influence. Smaller teams are now responsible for broader scopes of work, with the implicit assumption that AI will handle much of the heavy lifting. This could create a dangerous feedback loop: as teams become more dependent on AI to meet these expanded expectations, they have even less time to develop and maintain core skills.
The pressure to ship faster with AI assistance can lead to a culture where understanding code becomes secondary to producing it. Developers now find themselves in environments where asking AI is not just acceptable but expected, potentially stunting their growth trajectory.
Shortcuts, not breakthroughs
When we consistently choose the path of least resistance offered by AI, we miss opportunities to discover novel approaches or develop the kind of deep expertise that leads to breakthrough innovations.
We’re not becoming 10× developers with AI; we’re becoming 10× dependent on AI. Every time we let AI solve a problem we could have solved ourselves, we’re trading long-term understanding for short-term productivity.
Critical questions
This brings me to the fundamental questions I have found myself asking:
Are we using AI to our own detriment? When our intelligence regresses as we become more dependent on these tools, are we ultimately making ourselves less valuable and less capable?
Is our growing reliance on AI coming at the expense of investing in team longevity? Engineers thrive in environments that promote learning and growth. If we’re optimizing for short-term productivity gains while systematically undermining the conditions that create truly skilled developers, what does our industry look like in 5-10 years?
I currently feel that the choices we make about AI reliance, individually and collectively, will determine whether it becomes a tool that elevates our profession and our individual purpose and growth, or a crutch that ultimately diminishes them.