November 3, 2025 · 3740 words
I’ll start off by saying that I am not at all an AI doomer - by any stretch. I don’t believe AI will completely wipe out the need for skilled product managers, engineers, data scientists, UX designers, or many, many other positions. It also won’t wipe out the need for actually caring about craft. You know, the opposite of whatever this is:
Amazon Chime somehow left an impression on me as the worst software I’ve ever used. Go figure.
I see the stuff Large Language Models (LLMs) generate daily, and boy do they still need a lot of intelligent human interfacing in the process. This might, of course, be a point-in-time constraint. Model capabilities can evolve non-linearly, and we might just see one or more variants that can perform a range of tech-adjacent tasks completely independently, especially if guided properly. Today’s not that day.
But you know what - even if and when that day comes, I still see LLMs as idea implementation vehicles, not a replacement for creativity, agency, and taste. They are not a substitute for craft and for actually knowing what you’re talking about - and that’s what I wanted to write down some ideas about.
If I wanted you to have one key takeaway from this entire essay, it’s this: AI slop is not a replacement for the trifecta of domain knowledge, product sense, and engineering skills. Congratulations, you can stop reading this post. Or read on to learn more about my rationale.
Oh, and while I don’t believe that LLMs will replace us wholesale, all signs point in the direction of a major role expectation re-shuffle. That transition won’t take weeks, but it’s happening already and my goal is to get you prepared for it.
Role Flattening #
In my more-than-a-decade career, I’ve spent some time thinking about career moats, and even interviewed Cedric Chin, the guy who coined the term, about it.
The idea behind a career moat is simple - it’s a unique combination of hard-to-acquire skills and talents that set you up for long-term career security and growth. This combination does not answer the “How easy will it be for me to keep my current job?” question and instead focuses on “How easy will it be for me to find my next job?” The former can set up perverse incentives ("Nobody on the team knows how this works other than me.") while the latter aligns them with the market ("Nobody in the world does this set of things better than me.").
So, naturally you want to establish a career moat for yourself if you want to accelerate your career growth and build in some resilience. The problem is that career moats are hard to build - they require time and intentional focus. The challenge is amplified by current LLM advancements, where some pieces of your existing tech-related moat might not be as durable as they once were.
Let’s put this into practical terms. If you’re someone who is absolutely killing it at UX design, can easily put that design into code with a prototype, and can then clearly outline the customer scenarios you need to tackle before working with engineering teams - you have a pretty good collection of skills that can set you apart in a sea of candidates. It’s a nice combo, but it’s not exactly a moat anymore. Putting designs into code is now easier than ever - in the past few weeks alone, I managed to translate several Figma designs into working web-based prototypes in hours, not days or weeks. Best of all, I didn’t need to dive deep into any of the web frameworks that underpinned them. I had a job to do and I did it.
A lot of skills that previously required a specialized person (e.g., the ability to put designs into prototypes) have become, and will continue to become, more and more commoditized. Computers keep getting better than us at many jobs (remember accounting before Excel?), but believe me - it’s OK.
Isn’t skill commoditization a bad thing? Wouldn’t this mean that what previously required a few folks on a team now requires one person?
Not necessarily. This idea does get to the crux of my thesis, though - while skill commoditization is more common, it’s not a replacement for a few timeless traits that will be relevant regardless of AI advancements, and these can actually strengthen the aforementioned career moat.
With a lot of CRUD-like development capabilities now within reach of anyone with a keyboard and $20 to toss at a monthly subscription, anyone can start bringing their software ideas to life. But just like the invention of the 3D printer didn’t magically destroy the manufacturing market, LLMs won’t destroy the tech one. Just because we have robotic surgery machines doesn’t mean we no longer need surgeons who understand the inner workings of the human body. They can just do things differently with their expertise, skill, and experience. Ring a bell?
I talked about this before, by the way - the right lens to look through here is that AI agents and LLM-based systems are merely augmentations that allow you to reach for more. While they are getting better quickly, they still need to lean on your skills to build things the right way. If you have no clue about authentication systems, an LLM isn’t magically going to create intrusion-safe code for you that you can ship to production. It might work, but that’s way different from working securely and not exposing your data.
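To make that gap concrete, here’s a rough sketch of what the difference looks like for something as basic as password verification. The commented-out one-liner is the kind of thing that “works”; the rest is what working securely actually takes - and you only catch the difference if you understand the fundamentals:

```python
import hashlib
import hmac
import os

# The "it works" version a model might happily produce:
#   return USERS[user]["password"] == password  # plaintext storage, timing-unsafe comparison

def hash_password(password: str, salt: bytes | None = None) -> tuple[bytes, bytes]:
    """Derive a key from the password using PBKDF2 with a per-user random salt."""
    salt = salt or os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, key

def verify_password(password: str, salt: bytes, expected_key: bytes) -> bool:
    """Re-derive the key and compare in constant time to avoid timing side channels."""
    _, key = hash_password(password, salt)
    return hmac.compare_digest(key, expected_key)
```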
All this is to say that expertise and an array of “augmentation” skills are still relevant. The specific spectrum of skills that will make you successful, however, is different than what we saw in the past decade. This will be uncomfortable if you haven’t thought too much about it before.
The toolbox of capabilities that democratizes the end-to-end product development process is significantly more accessible than it was before. There are tools that can help you think through specifications for projects, start implementing the code, automate the testing and validation process, and then push and monitor things in production. Not bad for the past year alone, right?
What’s also changing is the set of well-defined roles in the industry. I see a trend towards role flattening, a phenomenon where well-defined role responsibilities become significantly blurrier. What was once a clear-cut set of priorities for a discipline becomes a much broader set of responsibilities, and nothing showcases that better than the emergence (or reemergence, depending on the field) of the “product engineer” role.
> A product engineer, at its most basic, is a software engineer building products. They do similar work to software engineers: writing code and shipping features. Usually, they write fullstack code with a focus on the frontend. What makes them unique is their focus on creating a product for users. They care about building a solution to problems that provides value to users. They must be empathetic to users, and this means caring about user feedback and usage data.

A few years back, “product engineer” would’ve gotten a confused look from me. Today? I am one. Not kidding, by the way - it’s what I do at Microsoft.
It does actually say Principal Product Engineer on my employee card.
The shift is quite significant compared to traditional product and product-adjacent roles. I don’t wait for engineering resources - I start building. I don’t wait for engineers to implement spec changes - I open the PR myself. I don’t wait for designers to tweak Figma mocks - I spin up a branch, pull the style guidelines via MCP, generate component variations with Claude Code, and push it for review to GitHub, where Copilot Coding Agent does a first pass. Need a prototype in a framework I’ve never touched? Done in hours. A/B test analysis with anomaly detection? Handled. Pattern recognition across thousands of feedback entries? Easy.
The lifehack here is to use all the AI tools to scale myself across what used to be hard role boundaries. And best of all, I’m not special here in any way - anyone with enough agency and curiosity can do this now.
If you’re stuck in the old ways of waterfall processes, fixed role responsibilities, waiting for permission to execute, or slinging code over the wall just because it compiled on the first run - I’ve got bad news for you. There’s a reason that with the advent of vibe coding we have yet to see a Cambrian explosion of software businesses. As it turns out, writing code is not the bottleneck - there’s more to software than just auto-generating buckets of Python files.
The New Skill Stack #
Let’s revisit the topic of skill commoditization. If many tech-related capabilities become easily available via LLMs, what does it mean for all of us who work in the field? If writing code becomes easier without prior knowledge, like that of web frameworks, what should one lean into studying and applying?
First of all - nothing will change when it comes to understanding technology from first principles. Literally zero. Don’t let YouTube gurus fool you, because they are full of it. AI doesn’t obviate the need to actually understand how things work.
There is no automated replacement for deep knowledge of the fundamentals. You need to be able to spot that the produced implementation is bullshit because it copied seventeen CSS files into different folders when one would do. In the same vein, you should be able to spot that returning the expected two-factor authentication code in the JSON body of the response to the request that issues said code is an asinine approach. So - learn how the things you care about actually work; that ain’t going away.
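And because that 2FA example keeps coming up, here’s a stripped-down sketch of the anti-pattern next to the sane alternative (deliver_out_of_band is a hypothetical stand-in for SMS or email delivery, not a real library call):

```python
import secrets

PENDING_CODES: dict[str, str] = {}  # user -> code; a real system also needs expiry and attempt limits

def request_code_wrong(user: str) -> dict:
    """The asinine version: the response hands the secret straight back to the caller."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    PENDING_CODES[user] = code
    return {"status": "sent", "code": code}  # anyone who can call this can bypass 2FA

def request_code_right(user: str) -> dict:
    """The sane version: the code travels out-of-band and never appears in the response."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    PENDING_CODES[user] = code
    deliver_out_of_band(user, code)  # hypothetical delivery helper
    return {"status": "sent"}

def verify_code(user: str, submitted: str) -> bool:
    """Single-use check; pop() makes sure a code can't be replayed."""
    return secrets.compare_digest(PENDING_CODES.pop(user, ""), submitted)
```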
Second, you need to build or reinforce a new set of skills. Just like in Mortal Kombat, where you win by using combos, your career moat depends on your ability to wield skill combos. These are applicable to any career track in tech. Being proficient in all of them is what I call being a full-stack person. Treat them as complementary to the deep expertise you should develop first.
```mermaid
%%{init: {'theme':'dark'}}%%
mindmap
  root(("Skill Differentiators"))
    Creativity and Taste
    Critical Thinking
    Communications
    Cross-Domain Knowledge
    AI Augmentation
    Product Sense
    Execution Speed
    Learning Agility
    Systems Thinking
    Agency
```
You might not at all be surprised to learn that you should be developing these skills regardless of LLM advancements. They are entirely company- and team-agnostic and just as relevant if you’re starting your own SaaS business or working for a mega-corp.
For every skill, I also include a sample set of questions you can ask about whatever you’re building. It’s not exhaustive - treat it as a starting point.
Creativity and Taste #
LLMs can generate thousands of idea variations, but they can’t tell you which one is right for a given scenario. Not only that, but because LLMs are trained on vast amounts of existing content, the bulk of the output will just be a rehashing of what already exists. That’s why all the AI slop looks the same and every single vibe-coded website has the same rounded corners and purple-blue hues.
LLMs are predictive text on steroids, not a replacement for a human brain. What will set you apart is developing an eye for good design and quality code, understanding what resonates with actual humans, and knowing when something just feels off.
It’s the difference between technically correct and genuinely compelling. Smartphones existed before the iPhone, and yet the iPhone easily became the go-to choice for a massive part of the population because of the choices Apple made. LLM-generated code might work, but only someone with creativity and taste can determine whether it addresses a scenario in a way that delights and solves the underlying problem.
Questions to ask #
- Does the solution feel intuitive and delightful to use?
- What makes this approach more elegant than alternatives?
- How does this implementation align with user expectations?
- Is the design tasteful?
- Does the implementation reflect strong opinions about the product?
- Is the code intuitive and maintainable?
- Is the code structured in a way that allows easy iteration?
Critical Thinking #
LLMs excel at pattern matching. They’re really good at looking at a massive existing corpus of data and then creating “cookie cutters” for similar data.
They’re not omnipotent, though. Even the most advanced models struggle big time with context-dependent judgment. Critical thinking means knowing which problems to solve, understanding trade-offs, and recognizing when conventional wisdom doesn’t apply. It’s all about asking better questions, not just finding faster answers. Accuracy is much more important than the speed of the output.
Questions to ask #
- What problem are we actually trying to solve?
- Did we specify the problem in enough detail for the LLM to dissect it?
- What are the second-order effects of a decision or implementation?
- What assumptions am I making that could be wrong about the architecture or user flow?
- Are we painting ourselves into a corner with this design down the line?
Communications #
The ability to translate complexity into clarity becomes more valuable as LLMs handle the bulk of the very boring, monotonous, and repetitive tasks. You already know that model output is highly dependent on the quality of the input - if you provide crisper requirements, you will get better results. All those writing courses and nagging teachers who emphasized clarity of thought will finally be paying off.
In a very concrete example, GitHub Spec Kit is a project my team built that introduces the concept of Spec-Driven Development (SDD). Its entire premise is “better specifications yield better products.” The cost of upfront planning pays off in LLMs being able to better “understand” the intent and then generate the code that best reflects it. It’s not vibe coding, because you can’t vibe code your way to a good product.
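To illustrate the premise - and this is a made-up excerpt, not Spec Kit’s actual template - compare “build me a login page” with a specification that pins down intent:

```markdown
## User story
As a returning user, I want to sign in with email and password
so that I can get back to my saved dashboards.

## Acceptance criteria
- Invalid credentials show a generic error (no hint about whether the email exists).
- Five failed attempts within ten minutes lock the account for fifteen minutes.
- Successful sign-in redirects to the last-visited dashboard.

## Out of scope
- SSO and passwordless flows.
```

Every line above removes a guess the model would otherwise have to make on your behalf.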
Questions to ask #
- Can I explain my concept in enough detail to avoid assumptions?
- Am I able to document the user stories and scenarios that the product will solve?
- Am I able to ask the questions that will avoid ambiguity and confusion about the project?
- Are there unknown unknowns that I haven’t discovered, and how do I go about finding them?
Cross-Domain Knowledge #
You don’t need to be an expert in everything, but understanding how the pieces fit together matters today more than ever. You should know enough about frontend, backend, design principles, infrastructure, and data analysis to have informed conversations and make sound decisions. That’s the new baseline.
When I talk about this point, people tend to get all up in arms - “But I am not a frontend engineer, why do I care?” Because someone who is not a frontend engineer but knows just enough about frontend development and knows how to set up a REST API with the help of Claude Code is going to be running laps around you.
Questions to ask #
- Do I understand the constraints and trade-offs across the stack?
- Can I have a productive conversation with specialists in adjacent domains?
- Where are the integration points that typically cause friction?
- What are the components that I need for this project to begin with?
- What are the state-of-the-art tools that I need to be using to be productive?
- Are there any architectural choices I should avoid because they would paint me into a corner in the future?
- Is the design I am building actually good?
AI Augmentation #
This isn’t about mastering every single tidbit about AI, ML, and transformers. It’s not about reading every paper on arXiv either. You must know when and how to leverage existing AI tools effectively.
In practice, having this skill means being able to understand what LLMs do well, where they fall short, and how to chain them together to improve your output. It’s easier said than done, because we’re in a period of AI development where it’s either all doom-and-gloom or “AGI is right around the corner,” when the reality is much more nuanced. Shoving an LLM into the process is not a panacea.
Questions to ask #
- Which parts of this work can this model handle well enough?
- Which model is appropriate for the task that I am trying to accomplish?
- Am I replacing creative work or routine work with this LLM workflow?
- How do I validate that the LLM output is right?
- Are there agents I can delegate tasks to that will make me more efficient?
- What’s the right level of human oversight for this task?
Product Sense #
Understanding what to build and why separates tinkerers from product experts. I’ve seen people who call themselves product managers yet can’t piece two bits of customer feedback together - don’t be that person. Product sense means having empathy for users, recognizing patterns in feedback, and ruthlessly prioritizing the things that will actually move the needle for the product.
To put it bluntly, it’s the skill that prevents you from building technically impressive things that nobody wants.
Questions to ask #
- What’s the actual customer problem here?
- Am I synthesizing feedback or just following it blindly?
- Am I asking users for feature wishlists, or for patterns I can use to build the actual solution?
- Do I understand the target audience?
- What’s the smallest version that delivers real value?
- How does the work ladder up to the broader strategy?
Execution Speed #
Ideas are cheap; shipping is hard. Yes, even with vibe coding. Again - look around. Where are the thousands of new SaaS businesses that vibe coding supposedly unlocked? There are not nearly as many as you’d think, and that’s because doing things end-to-end is hard. And yet - those who can use modern AI tools to move faster will be at an advantage.
Execution speed is about maintaining velocity while keeping in mind why the hell you’re doing this to begin with - knowing when to move fast and ship an MVP versus slow down and do it right. Bias for action compounds over time, and you will need to develop a sense for when to push the pedal to the metal.
Questions to ask #
- Am I bikeshedding on irrelevant things?
- If this ships today, what’s missing?
- Is the decision a one-way or two-way door?
- What do I need to learn to make better future decisions?
- Will my customers be happy with what I delivered?
- Is this incrementally moving me toward the end goal?
Learning Agility #
The half-life of technical knowledge keeps shrinking - in every sense. Web development frameworks become lost in the sands of time faster than you can say “I want pizza.” Learning agility means being able to quickly absorb new concepts, recognize patterns from other domains that apply to new tools, and adapt your mental models to things that will never stay constant. Forget about memorizing frameworks. You need transferable understanding.
Questions to ask #
- What’s the underlying principle I can apply elsewhere?
- How does this relate to patterns I’ve seen before?
- What’s the fastest way to validate my understanding?
- Are there blind spots in my knowledge that, once filled, will unlock new insights?
- What am I missing, and who do I talk to to discover it?
Systems Thinking #
As AI handles more isolated tasks, I want to see more people capable of seeing the bigger picture. Too many people are so focused on a narrow set of tasks that they forget there’s more to product work than building one subsystem.
Systems thinking means understanding how components interact, anticipating cascading effects, and recognizing feedback loops. It’s the skill that prevents local optimizations that hurt global outcomes.
Questions to ask #
- How does this change ripple through the system?
- What are the dependencies I’m not seeing?
- Where might this create unintended consequences?
- What dependencies will potentially derail my product down the line?
- Is it possible that my target audience will be different once I ship?
Agency #
Read up on high agency. No, really - stop reading this blog post, click the link, and spend thirty minutes reading through George’s writing. To me, high agency boils down to a “We’ll figure this out - let’s get to work” attitude. No excuses, no waiting for permission, no futzing around with things that don’t actually have any material impact on the work. High agency people get shit done, and low agency people always find a reason why things don’t work.
Learn to be the bulldozer - no matter the obstacle, you can plow through and pave a way for others.
Questions to ask #
- Do I just start doing things or do I wait for someone to bless my aspirations?
- Do I dwell on the past or do I move forward?
- Do I assume that I will figure things out or do I look for reasons things won’t work?
- Am I waiting for others to teach me or do I learn things myself?
T-Shaped Is Dead, Long Live Pi-Shaped #
T-shaped used to mean one deep spike plus a thin layer of everything else. That was perfectly fine when roles were somewhat rigid and the handoffs were clean. PMs do one thing, engineers another, and so on.
Today, work is much more… messy. Loops are tight, and the fastest path from idea to impact crosses multiple disciplines without the bandwidth to allocate a professional from each discipline to the task. I bet that the durable advantage is now much more π-shaped: two deep spikes sitting on a broad base. Allow me to elaborate on this totally-not-obvious explanation.
- Broad base: cross-domain fluency — you can speak frontend, backend, data, design, and delivery well enough to make decisions and ship.
- Spike 1: product sense — you have taste, judgment, and the ability to turn fuzzy customer problems into clear, valuable outcomes, while having a deep attention to detail.
- Spike 2: engineering craft — the skill to make it real, safely and simply, under production constraints.
AI lowers the cost of “type it and it runs.” It does not lower the cost of choosing the right thing, shaping it well, and operating it in the wild. That’s on you. Treat models as multipliers, not managers. They accelerate execution while you own intent, architecture, and accountability.
The future is full-stack. Get on board.