When coding entire projects with LLMs, have you found any coding techniques that improved the quality of LLM output?
I've coded a dozen micro projects using Claude models, and I'm honestly mind-blown by how quickly you can test your ideas. The output stays very usable, but quality seems to fall off after ~10 KLOC (much earlier if the entire project is a single-file JS app). At that point I have to start paying closer attention.
So it got me thinking: should I structure the code the same way I would on any codebase shared with a team, or are there techniques that help LLMs scale while keeping the codebase grokkable?
