Back in the 2020s, programming was truly rewarding. Creating fancy GitHub repositories that people could use as tools to make their lives easier was a thing. But now, AI writes most of the code, and the thrill of crafting something from scratch has quietly faded.
I often wondered whether humanity's next mission was creating a decentralized AI. But then I realized that, in companies' eyes, developing AI has become its own reward, like the rush we feel when we buy a new phone.
In 2025 alone, AI-related companies garnered $5.7 billion in January, accounting for 22% of overall venture funding, and by October, venture capitalists had poured $192.7 billion into AI startups.
Back in the 2020s, creating a tool felt like holding a magic wand. Now the wand isn't ours alone; we share it with AI, and the same kinds of projects often lead to less original outcomes.
Recent research echoes this concern. A large-scale bibliometric study (Park et al., Nature, 2023) found that papers and patents have become less disruptive over time — less likely to push science or technology in new directions. The trend is debated, but the decline appears consistent across fields.
This Nature research was about papers and patents, not code. But watching AI produce the same program in minutes that took you days? Same hollow feeling. It’s not that originality vanished — it just moved upstream. The value shifted from writing elegant functions to knowing which functions matter. From syntax to judgment. From implementation to asking why we’re building this at all.
The good news is that there are still meaningful ways to share work between humans and AI, ways that remind us why we started building in the first place. Think of it as an evolution:
<developer_evolution version="2.0">
<phase id="1" action="identify_broken_systems" reward="market_gap"/>
<phase id="2" action="validate_with_humans" reward="real_problem"/>
<phase id="3" action="prototype_with_ai" reward="10x_speed"/>
<phase id="4" action="test_edge_cases" reward="reliability"/>
<phase id="5" action="ship_iterate_measure" reward="user_trust"/>
<skill multiplier="true">taste > implementation</skill>
<skill multiplier="true">system_thinking > syntax</skill>
<truth>ai_handles_code || you_handle_decisions = competitive_edge</truth>
</developer_evolution>
It may look like alien code, but I’m not here to scare you. Let me walk you through what it means, step by step.
Phase 1–2: Find What’s Actually Broken
Stop building solutions looking for problems. Start with broken systems that frustrate real people. Talk to them. Not through surveys — actual conversations. AI can’t do this part. It doesn’t know what keeps people up at night.
Phase 3: Let AI Do the Heavy Lifting
Once you know the problem, use AI to prototype fast. What took weeks now takes days. Don’t feel guilty about it — feel smart. You’re not cheating; you’re focusing on what matters.
Phase 4: Break Things (Before Users Do)
AI-generated code looks perfect until it meets reality. Feed it weird inputs. Test the edges where AI assumptions fail. This is where your human paranoia pays off.
Phase 5: Ship, Watch, Learn
Launch small. Measure everything. User trust is earned in iterations, not grand releases. AI can help you build faster, but only humans know when something feels right.
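If it helps to see the five phases as something executable, here's a minimal Python sketch of the same loop. Everything here is illustrative: the `Phase` class, the phase data, and `run_cycle` are my own naming, not an API from anywhere, mirroring the `<developer_evolution>` block above.

```python
from dataclasses import dataclass

@dataclass
class Phase:
    number: int
    action: str
    reward: str

# The five phases from the <developer_evolution> sketch, as plain data.
PHASES = [
    Phase(1, "identify_broken_systems", "market_gap"),
    Phase(2, "validate_with_humans", "real_problem"),
    Phase(3, "prototype_with_ai", "10x_speed"),
    Phase(4, "test_edge_cases", "reliability"),
    Phase(5, "ship_iterate_measure", "user_trust"),
]

def run_cycle(phases):
    """Walk the phases in order; each action earns its reward
    before you move on. Skipping ahead is how AI-assisted
    projects end up as solutions looking for problems."""
    return [f"Phase {p.number}: {p.action} -> {p.reward}" for p in phases]

for line in run_cycle(PHASES):
    print(line)
```

The point of the structure, not the syntax: phases 1 and 2 are human-only (AI can't interview your users), phase 3 is where AI accelerates you, and phases 4 and 5 are where human judgment re-enters.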
The New Skills That Matter
Taste beats implementation now. Knowing what not to build is worth more than knowing every framework. System thinking — seeing how pieces connect — matters more than perfect syntax.
The Bottom Line
AI handles the code. You handle the decisions. That’s not a demotion; it’s a promotion. You’ve moved from typist to architect. The code was never the point anyway — solving problems was.
In 2025 we faced a clear reality: AI isn't replacing most people outright, but those who learn to work with it will outpace those who don't (Noy & Zhang, 2023; Brynjolfsson et al., 2025). Rigorous studies show that generative AI can boost productivity and output quality in knowledge work, with especially large gains for less-experienced workers and in open-ended tasks like writing and ideation (Noy & Zhang, 2023; Brynjolfsson et al., 2025; Doshi et al., 2024). It's no surprise, then, that managers will prize teams who leverage AI well, an inference consistent with these findings.

Still, collaboration doesn't eliminate the need for human judgment and taste: people choose the right problems, sense when solutions "feel" right, and earn trust through iteration. And creativity research shows that human oversight helps counter AI's tendency toward sameness (Doshi et al., 2024). The future isn't humans versus AI; it's humans deciding, AI accelerating, and craft setting the bar.