A lot has been written about Claude Code in recent weeks when it comes to AI coding agents. It’s understandable why: it's the most efficient agent currently available.
While a lot of people share their very complex setups to maximise its true potential, I argue that adopting (and keeping) a beginner’s mindset and setup when configuring your agent is probably the hidden productivity hack most people are missing. However, some disclaimers are needed before we start.
I can’t show you; you have to embrace it and experience it for yourself. But I can tell you my experience: working in a modern, well-structured Spring Boot Java monorepo has been 100% solved.
Note it’s not 80, 90, 95 or 99% but 100% solved. This doesn’t mean that the model solves tickets with 100% accuracy in one shot, but rather that the environment setup, the overhead, and the going totally off the rails that were common last year are no longer problems. The tool just works like a charm.
In terms of the code quality the model outputs, there’s not a lot to say: these agents have the whole codebase and its files as context, and they excel at pattern matching and recognition, so the generated code will be virtually indistinguishable from any other code in the codebase, provided you follow industry standards and have a sensible structure in place (see “modern and well-structured” above).
Simple is best. Accurate context is king.
You’ll see a lot of people on X and LinkedIn going crazy and doing insane things with CC, from running up to 6-8 parallel instances to creating endless MCP servers, hooks and agents. I think these people are missing a fundamental piece of the puzzle of working with agents: they’re still LLMs under the hood. This means that limited context windows, hallucinations and context rot are real issues. Don’t let a fancy interface fool you: an agent is still an LLM, so all the rules that were important before become even more critical now. Essentially, in my view so far, setting yourself up for success means keeping things simple and straightforward. This is easier said than done, especially when customisation is so appealing and when creating a subagent or hook to do just that one thing really well is so tempting. Resist that urge. Keep it simple.
Every person will want to get different results from using coding agents, so the first thing to do is decide why you want to use them and what you want to get out of them. Some people want to write zero lines of code manually. Some want to use it for reviews, others as a sparring partner for insights, and yet others to write documentation. The way you want to leverage the tool will dictate how you approach it and how you should set it up.
I want to use it to accelerate my own productivity and translate the ideas in my head into reality with as little friction and effort as possible. YMMV. It just turns out that a task in a ticket can be treated as my own idea that I want to make into reality, so this works extremely well for crunching through tickets. Go figure.
So, how do I set myself up for success, where success is defined as completing real-world tasks as efficiently as possible? I’ve been a big fan of what I like to call two modes:
An execution mode: in this mode the goal is to get Claude to actually work through a Merge Request end-to-end, just like I would manually.
A critique/planner mode: in this mode, the goal is either to plan a task, where the output is a markdown file laying out a detailed step-by-step plan for implementing something, or to evaluate or critique an existing change on aspects like code reusability, maintainability, or whether the current abstractions are well defined or can be improved.
Each mode has a different purpose: one needs to really emulate the real engineering process of understanding the existing environment and code, grasping the scope of the requested change, and knowing where and how to make it in the codebase.
The other mode is different in the sense that the model is free to explore the codebase at will, write draft diffs, try different approaches to a particular problem, and be more in “thinking mode”. I’ve found this particularly helpful for gauging how solid the current abstractions in the codebase are, or when something “doesn’t quite fit” and you can’t really put your finger on it.
For both modes, my secret sauce is context engineering, which in my particular case is a very fancy way of saying: leverage all the built-in tools that Claude Code already has and give it the context of your particular codebase and business (high level context) as well as the context of the actual task you want to solve (low level context). This context level distinction merits a small subsection of its own.
Daniel Kahneman wrote the excellent book “Thinking, Fast and Slow” about how there are essentially two distinct modes of thought: a faster, more intuitive one and a slower, deliberate, carefully planned one. The analogy is slightly far-fetched, but it works very well in my experience. Run the /init slash command in the root of your repository to give Claude Code a high-level mental model of “the world”. The command does a great job by itself, but you might want to review the result just to make sure that critical knowledge, such as how to compile the code and run the unit tests, is embedded there. The command generates a file called CLAUDE.md in the directory where you run it. The contents of this file are the analogue of “System 1”: a high-level view meant to ground the model in your world, with very little deliberation or technical detail about the task you actually want to accomplish while using these agents.
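For illustration, here’s a minimal sketch of what a reviewed CLAUDE.md might contain for a Spring Boot monorepo. The module paths, Maven commands and conventions are hypothetical placeholders; your generated file should reflect your own repository:

```markdown
# CLAUDE.md

## Project overview
Spring Boot monorepo. Services live under `services/`, shared libraries under `libs/`.

## Build & test (verify these are correct after /init!)
- Compile everything: `./mvnw clean compile`
- Run unit tests for one module: `./mvnw test -pl services/orders`
- Full verification before finishing a task: `./mvnw verify`

## Conventions
- Constructor injection only; no field-level `@Autowired`.
- Controllers stay thin; business logic lives in `*Service` classes.
- Every new endpoint gets a `@WebMvcTest` slice test.
```

The build and test commands are the part most worth double-checking by hand, since the agent will lean on them constantly during execution mode.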
For those details, we must switch into “System 2” mode. This means you’ll create a file at the “package level” of the task you want to accomplish (if you need an API endpoint, maybe this is an endpoints package, etc.) called CONTEXT.md, and this will be your most important file.
In there you’ll want to add details almost as if you were writing a ticket for a junior developer to pick up. When you think you have the right level of detail, go one level deeper to increase your chances of success. Rely heavily on todo lists and bullet points (more on this later), be as specific as possible, refer to files you want to bring into context using the @ symbol, and describe the high-level goal of the task to ground the model even further. It’s extra work upfront that you gain back when the code writes itself!
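As a concrete example, a CONTEXT.md for a hypothetical “add an order cancellation endpoint” task might look like this. All file paths, class names and requirements below are invented for illustration:

```markdown
# CONTEXT.md — Add order cancellation endpoint

## Goal
Allow clients to cancel an order that has not yet shipped; return 409 otherwise.

## Relevant files
- @src/main/java/com/example/orders/OrderController.java — add the new endpoint here
- @src/main/java/com/example/orders/OrderService.java — follow the pattern of the existing update method
- @src/test/java/com/example/orders/OrderControllerTest.java — mirror the existing test style

## Todo
- [ ] Add `POST /orders/{id}/cancel` to the controller
- [ ] Add `cancelOrder(UUID id)` to the service; throw `OrderAlreadyShippedException` if shipped
- [ ] Map the exception to a 409 in the existing `@ControllerAdvice`
- [ ] Write tests for the happy path and the 409 case
```

Note how the todo list doubles as an acceptance checklist: the agent’s built-in todo tool maps naturally onto it.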
With the context in place, the different modes are easy to setup and rely on very simple guiding prompts:
For the execution mode: “Read the contents of the @CLAUDE.md and @CONTEXT.md files and let’s work on the task described there. Focus on writing simple code that uses the right abstraction level and follows the existing codebase patterns. Work TDD style, by writing and executing tests as you go.”
For the planner/critique mode: “Read the contents of the @CLAUDE.md and @CONTEXT.md files and evaluate the current implementation that can be seen on files @… and @… DO NOT WRITE ANY CODE. Considering the current implementation and codebase structure, write a detailed markdown report called ANALYSIS.md where you add snippets of a potentially better architecture and a table detailing pros and cons of the current implementation. Focus on abstraction levels and implementation details such as design patterns and structure.”
This is all you need. If you want to get fancy, you can encapsulate these two prompts as custom slash commands, and you have a super-efficient workflow that uses Claude Code in a one-shot fashion to plan and/or work through tasks of any complexity.
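For the curious, Claude Code picks up custom slash commands from markdown files in your project’s `.claude/commands/` directory, where the file body becomes the prompt. A sketch of the execution-mode prompt saved as `.claude/commands/execute.md` (the filename, and hence the `/execute` name, is your choice) could be:

```markdown
Read the contents of the @CLAUDE.md and @CONTEXT.md files and let's work on the
task described there. Focus on writing simple code that uses the right abstraction
level and follows the existing codebase patterns. Work TDD style, by writing and
executing tests as you go.
```

The planner/critique prompt can be saved the same way, e.g. as `.claude/commands/plan.md`, giving you two one-word entry points into the two modes.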
The goal is to keep the context as high-value as possible and rely on the underlying models to do the heavy lifting!
This field moves at breakneck speed, and it’s hard to even keep pace with Anthropic’s shipping cadence.
However, while I haven’t used any of these newer features and will definitely experiment with them, I’m very wary of context rot: the more leeway you give the agent, the more room there is for error. That, plus the fact that Claude Code already has MANY built-in tools (bash commands, file reading and writing, a dedicated todo-writing tool, which is why you should write a lot of todo lists, and potentially more), makes exploring them less urgent. But I definitely will, and I’ll report back!
Vibe Responsibly!!


