AI Can Help You Code Faster – But at What Cost?

I had a pretty frightening experience the other day. One that forced me to rethink how I code.

I’ve been writing software in one form or another for about five years now. I started as a data analyst, moved into computer-vision research, and eventually landed in full-stack engineering.

I don’t pretend to be a super-star programmer. I’ve learned through grit: teaching myself the basics, applying them on the job, and hacking away on side projects. I’ve built both back- and front-end stacks and feel comfortable shipping product in either environment.

“Image of a spacious office in the style of a 90s arcade game” - generated by Sora in 2025.

However, when tools like ChatGPT and Claude burst onto the scene two years ago, everything changed for me. Until then, I’d always felt a step behind more seasoned devs - spending large amounts of time in technical docs and on Stack Overflow. Suddenly every answer I needed was just a prompt away.

My productivity skyrocketed. I combined my natural curiosity with ChatGPT’s instant feedback to tackle in hours problems that had previously taken me days to figure out. More recently, I was tasked with building a cryptocurrency-payments gateway that integrated three different exchange APIs, all in Golang. My Go skills were in their infancy at the time, but with an on-demand mentor in my editor, I shipped a scalable service that now handles thousands of transactions a month.

My process looked like this:

  1. Outline the feature and ask for candidate best-practice solutions.

  2. Feed ChatGPT detailed context about my requirements and use cases.

  3. Iterate on its suggestions, validating everything against docs and community best practices.

Most of the time its guidance checked out; when it was outdated, I caught it in review.

Fast-forward to 2025. Fully agentic IDEs like Cursor can generate or refactor multiple files at once from a single prompt. In Agent Mode I write far less code yet close far more tickets. Cursor Tab watches my keystrokes and uses my codebase for context. It offers whole-file transformations - hit Tab a few times and the refactor is done. Worse (or better) yet: it’s rarely wrong.

That reliability lulled me into complacency. I stopped scrutinising the output the way I would have if I’d typed every line myself. The solutions were usually good enough to pass reviews and tests, bar one or two nitpicks on style.

I had become the kind of engineer who used to drive me nuts: the PR reviewer who skims, leaves superficial comments, and rubber-stamps anything that isn’t obviously broken. When enough of us rely on generative tooling, the collective ability to critique software erodes. That’s exactly what happened to me.

A few days ago I attempted a coding challenge on a site similar to LeetCode that banned the use of generative AI tooling. The task - an order-book price calculation - was something I’d solved countless times back when I was a quant trader at Invictus Capital. Easy, I thought.

Except I couldn’t recall basic TypeScript syntax for a loop. Thirty stressful minutes and many open Google and Stack Overflow tabs later, I’d hacked together a clumsy solution to a problem that used to take me five. Worse still, the code was awful: I’d forgotten JavaScript’s built-in array helpers entirely. If you don’t use it, you lose it.
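For a sense of the task, here’s a hypothetical sketch (not the actual challenge code): pricing a market buy against sorted ask levels needs nothing more than a plain loop and the arithmetic I once did daily.

    // Hypothetical sketch of an order-book price calculation, assuming
    // ask levels sorted from best (lowest) price upward.
    interface Level {
      price: number;
      size: number;
    }

    // Average fill price for a market buy of `quantity` units.
    function averageFillPrice(asks: Level[], quantity: number): number {
      let remaining = quantity;
      let cost = 0;
      for (const { price, size } of asks) {
        if (remaining <= 0) break;
        const take = Math.min(remaining, size); // take what this level offers
        cost += take * price;
        remaining -= take;
      }
      if (remaining > 0) throw new Error("not enough liquidity to fill");
      return cost / quantity;
    }

    // averageFillPrice([{ price: 100, size: 2 }, { price: 101, size: 5 }], 4)
    // fills 2 @ 100 and 2 @ 101, so (200 + 202) / 4 = 100.5

Much of that loop could collapse into a single reduce over the levels - exactly the kind of array helper I couldn’t summon that day.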

I failed the challenge. The shock sent me into AI rehab: no autocomplete, no auto-code, just me and the keyboard. Within hours I noticed duplicated logic, shaky error handling, and needless abstractions in my own side-project codebases - issues I’d glossed over when a machine wrote the boilerplate.

So is strict control worth it when AI can do a “good enough” job? I think the more cognition you outsource for short-term velocity, the more your raw skill atrophies in the long run.

Maybe that’s fine. When farmers swapped hand-plows for tractors, it made sense to master the tractor, not preserve hand-plowing technique. For society - and for most companies - focusing on the productivity-boosting skill is what keeps you employable.

But what happens when you hit a problem your toolset doesn’t cover? Generative AI will still reply - its training rewards confident answers - but in the dark corners, the high-level abstractions, the long-horizon design decisions, someone has to think. I want that someone to still be me.

I’m not abandoning generative AI. I still lean on it for brainstorming and for reviewing code after I’ve written the first draft. I’ve also adopted CodeRabbit to great effect: it reviews local commits and flags issues before I even open a PR - a perfect fit for solo projects where you’re the only one pushing and little mistakes can have big implications.

What I am saying is this: don’t hand over so much brainpower today that you can’t think crisply tomorrow. Even a short break from autocomplete has cleared my head and reminded me why I fell in love with software engineering in the first place - continuous learning and the thrill of wrestling with tough problems.

Maybe AI can make you a 10× coder. Just make sure it doesn’t make you a 0× thinker.
