I've been obsessed with large language models for a year and a half now. But something changed in the last few weeks. With the release of Claude Code and a few of these other agent updates, I'm changing the way I code in a way that I haven't for the last 10 years.
Until recently, my Cursor Tab and Cursor Agent mode usage was through the roof. Now it's almost zero; the only Cursor feature I still use is tab auto-complete, and the rest of my time is spent inside Claude Code.
This in itself is another mindset shift, although this one is less radical and more of an iterative update.
This isn't just me talking back and forth all day, doing the same thing as other agent modes. The last year has been an evolution, with Claude Code being the latest shift in how I do my job.
I first have Claude Code pull an issue from Linear through its MCP server, grab all of the details, and attempt the task start to finish.
When I have a task that isn't well-defined, I'll quickly think: instead of editing 4 things by hand, I can just tell Claude "hey, add x everywhere it's required," and it's done immediately, faster than I could have found everything myself.
Most recently, I was manually creating an SVG used inside a picture-in-picture player that regenerates on a loop every few seconds.
This is one of the only ways to build a picture-in-picture view (think of the YouTube video pop-out) in React Native.
To match the design, I had to manually move X and Y values around and re-test afterwards, since the video player regenerates the SVG on that loop.
At one point, I needed to add a brand new element to the very top of the SVG and push everything down. This would've been annoying to find and replace because every Y value is slightly different for every element in this SVG.
Thankfully, I can just tell the AI to move everything down 40 pixels, and every element is instantly updated.
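To make concrete what that kind of request amounts to, here's a minimal sketch in TypeScript of the transform Claude is effectively doing for me, assuming the SVG lives in a string and only numeric `y`/`cy` attributes need shifting (the function name and the 40-pixel offset are just illustrative):

```typescript
// Shift every y-coordinate in an SVG string down by a fixed offset.
// Assumes y values are plain numeric attributes (y="12.5", cy="30"),
// which matches my SVG but wouldn't cover transforms or path data.
function shiftSvgDown(svg: string, offset: number): string {
  return svg.replace(
    /\b(y|cy)="(-?\d+(?:\.\d+)?)"/g,
    (_match, attr, value) => `${attr}="${parseFloat(value) + offset}"`
  );
}

// Push everything down 40px to make room for a new element at the top.
const original = '<svg><rect y="10" width="50" height="20" /><circle cy="60" r="8" /></svg>';
const shifted = shiftSvgDown(original, 40);
// -> '<svg><rect y="50" width="50" height="20" /><circle cy="100" r="8" /></svg>'
```

The point isn't that this script is hard to write; it's that I don't have to stop and write it (or do the find-and-replace by hand) at all.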
Between this mindset shift and constantly asking myself "how can I have an AI do this task? Am I slower than an AI would be?", AI is on my mind the whole time I'm coding.
Well-defined tasks inside our Linear task management system also lead to me starting and finishing work much more quickly.
Random parts of my day always used to involve checking GitHub PRs across multiple repos (which ones were active, which ones needed review) and writing well-defined PR descriptions.
Now, whenever I'm done making changes, all I have to do is say, "Push this up for me." It immediately grabs the change list, writes a description that matches everything I changed, and creates the pull request for me. It will also move my task to "In review" inside our task manager.
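Under the hood this is nothing exotic; it's Claude running ordinary git and GitHub CLI commands on my behalf, roughly in this order (the branch name, commit message, and title below are placeholders):

```bash
git add -A
git commit -m "Add overlay element and shift SVG layout down"   # message written from the diff
git push -u origin feature/pip-overlay                           # placeholder branch name
gh pr create --title "Add overlay element" --body "..."          # description generated from the change list
```

The "In review" update goes through the Linear MCP integration rather than a shell command.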
I think the most powerful part is that Claude Code's agent loop is incredibly smart. Every time it makes a mistake or gets stuck, it tries something else until it figures it out.
I have everything set up inside Dev Containers so I can run Claude in dangerous mode. If you've never used them, a Dev Container is essentially just a Docker container mounted to your IDE, which means a bad command like `rm -rf /` can't do anything to hurt your main computer.
That way I can run everything without having to confirm it every step of the way. Normally, Claude Code wants you to confirm each step it takes (Cursor Agent and Cline work the same way). Inside a container with limited network access, enabling auto-approve speeds things up dramatically.
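If you want to try the same setup, here's a minimal sketch of a devcontainer config; the image and post-create command are placeholders, since the only idea that matters is that the workspace runs inside a container:

```jsonc
// .devcontainer/devcontainer.json (minimal sketch; values are placeholders)
{
  "name": "app-dev",
  "image": "mcr.microsoft.com/devcontainers/typescript-node:20",
  "postCreateCommand": "npm install"
}
```

From a terminal inside that container I can start Claude Code with permission prompts turned off, i.e. `claude --dangerously-skip-permissions`; the scary flag name is exactly why I only do this inside a container.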
I also have custom prompts inside of Claude Code. For one of them, I just paste an issue number and it takes the task to completion and pushes up the pull request. Another one takes the issue and gathers all the context I might need to complete it. If something is complicated enough that I may not want the AI to attempt it, maybe because it can be done many different ways, I can start there.
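Claude Code supports these as project slash commands: markdown files checked into the repo under `.claude/commands/`, with `$ARGUMENTS` standing in for whatever you type after the command name. Here's a sketch of my issue-to-PR command (the `/complete-issue` name and the prompt wording are my own; treat the details as illustrative):

```markdown
<!-- .claude/commands/complete-issue.md: run with the issue number as the argument -->
Pull issue $ARGUMENTS from Linear using the Linear MCP server and read the full
description and comments. Create a branch named after the issue, implement the
change, and run the test suite. When everything passes, commit with a message that
references the issue, push the branch, open a pull request, and move the Linear
issue to "In Review".
```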
Instead of having to search files and try to remember how something is built or learn the APIs, I can have Claude gather all that information for me in a fraction of the time.
I also have another common prompt that pulls everything from our task manager, plus everything I've recently opened or closed on GitHub, and compiles a stand-up message for me.
This is something else that used to take small parts of my day: keeping bullet points of what I'm doing. The things I end up having to do aren't always in the issue management system, but they usually at least go through GitHub, so now it can compile from both of those sources.
One command and I get a stand-up message that I can paste into Slack. Now imagine if Slack had an official MCP server; I could automate everything. You may be wondering, "What if it makes a slight mistake?" Even then, I could have it send the message to myself, or send one that I have to approve before the team sees it. There are a lot of useful ways I can see these things going in the near future.
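The stand-up generator is just another command file in the same folder; a sketch, with the `/standup` name and sectioning being my own choices:

```markdown
<!-- .claude/commands/standup.md: compile a stand-up message from Linear and GitHub -->
Look at my Linear issues that changed status since the last working day, plus the
GitHub pull requests I opened, reviewed, or merged in the same window. Summarize
them as a short stand-up with "Yesterday", "Today", and "Blockers" sections,
formatted so I can paste it straight into Slack.
```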
This is one of the more powerful parts: I can write shortcut prompts for Claude Code that say things like "pull everything from Linear, start the task, check out the issue branch, write a git commit."
I'm no longer spending time in these platforms in my web browser. As the integrations get better, I can take action in all of them without opening them myself.
Combine this with GitHub's VS Code extensions and I can see how everything looks fully inside my editor. All of these things start to add up.
With the changes I've described above, I've realized that I code a lot less. When I do code, it's small changes here and there, because I have the AI do most of the bigger, structured work as step-by-step tasks.
I don't give it massive projects unless I'm trying to just have fun with it. If I have a large task, I will give it one step of the task to do, and then I will review and iterate in case it isn't correct.
For a lot of feature work, including adding new things that are CRUD related, this is perfect.
I also take a bigger step back when architecting things, asking "how can I set up a foundation that is very, very easy to extend?" The kind of thing that might show up in Django's or React's documentation as the correct, standardized approach.
When Claude Opus struggles with certain things, it's often because our codebase has poor code quality in that area.
Yes, all this stuff is incredibly awesome, but it's not perfect either. One of the biggest things I'm struggling with is how to manage my time between these tasks.
I've tried having multiple terminal windows (4 on one screen) across multiple repos. When I do this, I try to fire off a task on all 4 and jump between them.
I find that this actually doesn't work as well as I thought it would because I'm having to context switch a little too much.
What would make more sense is 4 windows all on the same repo, doing 4 tasks in that one repo, taking a bit more time before switching between them and fully validating each one first.
It can be difficult to pace yourself when you realize how fast you can do certain tasks, especially easy tasks that come in well-described by project management.
And it can be a tricky balance to make sure you don't burn out trying to do too many at once or switch between things too fast.
It can't be overstated how much more time is spent reviewing things. For simple tasks I really don't have to review much, but for medium-sized ones you need to review more and possibly iterate further with the AI, at which point you might be wondering, "Hmm, should I have done this myself? Would that have been faster?"
But I think this is a solvable problem. Most of it comes down to having a better test setup for your projects, so you have more confidence when code changes are made, and good abstractions, so that adding on with AI is straightforward and doesn't create a mess of code.
A lot of the negativity I see about AI slop is really because people let AI start and finish everything from the beginning to the end of their app. If they actually had good foundations developed by a senior engineer who understands how to architect things, I don't think this is as much of an issue.
Overall, I'm just amazed that for most of the random tasks I get, especially bug fixes, I can have Claude pull more details through Sentry's MCP server, start the issue, and take it all the way to completion. Even when it doesn't finish everything perfectly, it has done all of the project-management-related work for me in a fraction of the time.
I know some people are going to say either that I'm full of crap or that I don't do any "real work."
Lately I have had some straightforward app development work.
I still help make the decisions on which libraries and frameworks to initially set up in our applications, but right now we are going through a feature and bug-fix cycle without adding new systems to our stack.
When you’re in this zone, this new way of working can be a lot faster.
I'm not having to iterate and change things as much as you might expect, especially if you're somebody who hasn't tried Claude Code yet.
There seems to be a misconception among people who maybe last used AI months ago, or only use an agent mode here and there, that it can't possibly take things end to end yet.
Try Claude Code, you may be proven wrong 😊