One interesting take I've heard a few times from engineers on YouTube, Hacker News, and Reddit is that AI agents have taken the joy out of software engineering for them. Among other things, people mention feeling disconnected from their output and missing the pleasure of problem solving. The first is legitimate, the second is a side effect of other problems we should fix, and both are exacerbated by the way software engineering organizations plan and distribute work.
Being Disconnected From Your Labor
I totally get this, and it's not great. It's a weird feeling to spelunk through what you view as "your" codebase. Thankfully (at least for me), I've found that if you stick with a codebase and refine it for long enough, that feeling goes away over time. I liken it to collage art: if you just cut a page out of a magazine, you have no ownership, but as you assemble a new image from the parts, your intention and vision emerge and the creation becomes uniquely yours.
Unfortunately, for those of you dealing with this at work, you probably aren't able to build that level of ownership with the code. You're probably being shuffled between tasks, trying to comprehend a codebase that's growing faster than you're comfortable with, never able to put enough time into one piece of code to really feel that creative connection.
The culprit here is outdated engineering organizational patterns. AI works best when engineers have "ownership" over slices of code, for a number of reasons:
- It gives them time to build a sense of authorship and familiarity with the code. Because of that sense of ownership, they'll be stricter about what gets merged and how things are done, and if something breaks they'll be better able to fix it.
- AI magnifies the velocity drag from communication latency. I've found you can often build things and test them faster than you can have a conversation with your team about whether you should build and test something. If decisions need to go through a committee, you lose most of the velocity you gained from using AI in the first place.
Missing the Pleasure of Problem Solving
As I mentioned, this is basically an engineering organization culture problem. If AI is solving small problems for you, in theory that should free you to think about more, and bigger, problems. While there might be some people who just lack the imagination to come up with new problems to solve, I'm sure the vast majority of engineers are experiencing this because their organizations have constrained which problems they're allowed to solve. I know many organizations that are actively hostile to people doing work that hasn't made its way through a committee and been planned and assigned by a PM. In that case, if AI handles your weekly work, you're stuck getting handed awful stuff like assisting with QA or grinding on test coverage.
It doesn't have to be like this, though. AI is really good at empowering individuals to solve problems end to end. Imagine, instead of a "factory floor" model of a software engineering team where team members are treated as replaceable cogs, an artisanal model where people have real ownership over the things they create. In this scenario, if the AI solves your mundane problems, that frees you to put more energy into hard technical problems, domain details, or responding to user feedback. Because you're empowered, the AI becomes an accelerant rather than a competitor.
To me, this is the real irony of developers rebelling against generative AI. The end result of this technology is going to be tremendous empowerment for the people who learn to use these tools as part of a vertically integrated skill set. This is definitely going to happen, because letting high-agency people run free with AI is going to be an order of magnitude more productive than trying to march everyone forward in lockstep consensus, and orgs that adopt this model are going to stomp their competitors. This isn't a free lunch, though: I expect the number of positions for developers without some useful domain expertise or good people skills to drop to near zero.
Calligraphers and Storytellers
You might be wondering: what's the deal with the title? How are calligraphers and storytellers relevant? It all came about as I tried to understand the class of programmers who say AI removes all the joy from their work. This suggests these programmers don't derive any satisfaction from solving problems at all, but rather are motivated solely by the enjoyment of immersion in code. For these programmers, even using AI to accelerate a side project is a violation. The analogy of calligraphers and storytellers helped me understand the psychology of this vocal anti-AI segment.
Calligraphers, as a rule, aren't concerned with the meaning of what they're creating; their art is characters and words. Whether it's the Magna Carta or a McDonald's menu, they derive a similar level of pleasure from the result of their craft. Calligraphers live in a world of craft; if you gave a calligrapher a printer and a laptop with a folder full of script fonts and told them to get cracking, it would destroy their soul even as it multiplied their output.
Storytellers care about content. They could tell their story aloud, write it down, create a video, a comic, etc.; the medium is mostly immaterial. A true storyteller will create a story from words clipped out of magazines if they have to. This isn't conjecture: Kurt Vonnegut created a novel from notes on scraps of paper reconstructed out of order (so it goes). Storytellers live in a world of imagination; to the extent that you constrain their imagination (say, by forcing them to ghostwrite someone else's story), that is also soul-destroying.
It's interesting to note that most engineering organizations are set up with a few storytellers at the top and a lot of calligraphers under them. This makes sense in the old scheme of things, where executing on ideas is expensive and calligraphers have to be coordinated to avoid waste. The scheme always had problems, though; storytellers don't make good calligraphers, and thus don't tend to get promoted, even though they're much more impactful than calligraphers in higher-level positions. The negatives are magnified when AI enters the picture, for the reasons discussed above.
To continue the literary analogy, AI is like a printer driving the value of calligraphers to near zero. The best calligraphers have cachet and can survive on marquee commissions, but the typical calligrapher has to subsist on poverty wages or get a day job.
This analogy serves a second purpose. Just like calligraphy, software engineering as a career path will be consigned to the annals of history, and it's going to happen sooner than you think. You might be able to buy a few more years by getting very good at working with an arcane piece of code, hardware, or a language that isn't well represented in internet training data, but that only forestalls the inevitable.
How to Stay Relevant
If you're reading this and thinking "okay, so what do I actually do?", the good news is that the skills you need to develop are probably more accessible than learning to code was in the first place. Domain expertise is the most valuable thing you can cultivate. Pick a problem space (fintech, healthcare, logistics, science, developer tools, whatever interests you) and become genuinely knowledgeable about it. Understand the regulations, the workflows, the pain points, the economics. The ability to translate messy real-world problems into technical solutions becomes dramatically more valuable as the actual implementation gets faster.
Community building and facility with human interaction are another high-value skill set you can cultivate. Humans will always be the fundamental drivers of resource allocation, and we're hard-wired for interaction with other humans, so in any domain where trust matters, human-to-human interaction will remain the primary motive force. There are a lot of forms this could take; many engineers balk at taking on sales or marketing responsibilities, but you could build community currency through videos and a social media presence, take on more product duties, or work with users on experience.
Architectural wisdom is also a mini-moat. Understanding how systems fit together, which patterns scale, and which trade-offs matter will only become more important as more systems are built. This area is resistant to full automation because good architecture depends on a tremendous amount of real-world context, and, unlike code, architecture is compact: making that context explicit isn't fundamentally less work than designing the architecture itself, and it can often be more work.
The pattern here is that you want to be "vertically integrated": a self-starter who can find problems, prototype solutions, and work with users to validate those solutions. Communication and collaboration skills matter more, not less. If you're operating with more autonomy, you need to be able to articulate what you're building and why, coordinate with stakeholders, and bring people along. The "lone wolf who just wants to code" archetype has no place in this future.
The irony is that many engineers already have the seeds of these skills but have been discouraged from developing them by organizations that wanted them to "stay in their lane." If AI frees you from implementation constraints, the question becomes: what problem do you want to solve?