In the years between the two World Wars, France built the Maginot Line - a line of fortifications stretching along its eastern border.
The Maginot Line was an unshakable defence: a masterpiece of military design comprising underground bunkers and armored gun turrets, with rail lines connecting garrisons.
It was built to prevent the kind of trench warfare that had devastated Europe barely a generation earlier.
It was, by all accounts, a perfect answer to the wrong question.
Because in May 1940, German forces launched a blitzkrieg that completely avoided the fortified zones along the Maginot Line, advancing instead through the lightly defended Ardennes Forest, which the French had considered impassable.
The Maginot Line stood tall through the attack. Unconquered, but also utterly useless.
The Maginot Line had succeeded as an engineering solution - it was impenetrable. But it had failed as a strategic response to a reconfigured system of warfare. The perfect answer to the wrong question.
A solution designed to delay an enemy advance was no longer relevant in a war where coordination, mobility, and surprise had replaced static trench lines.
The Germans had built a new system of warfare around the technological developments that had happened since WW1. Every vehicle had two-way radio communication, enabling coordination between armored columns, dive bombers, and advancing infantry. Blitzkrieg was an entirely new system of coordination.
And France, staring proudly at its defenses, missed the shift entirely.
The Maginot Line, to put it lightly, was…
True, but utterly useless!
AI won’t take your job but someone using AI will.
It’s the kind of line you could drop in a LinkedIn post, or worse still, on a conference panel, and get immediate zombie nods of agreement.
Technically, it’s true.
But, like the Maginot Line, it’s also utterly useless!
It doesn’t clarify anything. Which job? Does this apply to all jobs? And what type of AI? What will the someone using AI do differently apart from just using AI? What form of usage will matter vs not?
This kind of truth is seductive precisely because it feels empowering. It makes you feel like you’ve figured something out. You conclude that if you just ‘use AI,’ you’ll be safe.
In fact, it gives you just enough conceptual clarity to stop asking the harder questions that really matter:
How does AI change the structure of work?
How does it restructure workflows?
How does it alter the very logic by which organizations function?
And, eventually, what do future jobs look like in that new reconfigured system?
The problem with ‘AI won’t take your job but someone using AI will’ isn’t that it’s just a harmless simplification.
The real issue is that it’s a framing error.
It directs your attention to the wrong level of the problem, while creating consensus theatre.
It directs your attention to the individual task level - automation vs augmentation of the tasks you perform - when the real shift is happening at the level of the entire system of work.
The problem with consensus theatre is that the conversation ends right there. Everyone leaves the room feeling smart, yet not a single person has a clue how to apply this newly acquired insight the right way.
True, but utterly useless!
Like the Maginot Line, it gives you a false assurance that you’ve figured out the solution. But you’re stuck looking at the problem through the wrong frame.
The solution may work in that old frame.
But when the frame shifts - when AI changes the whole landscape of knowledge work, much like the German blitzkrieg - your answer, even if true in the older frame, is now utterly useless.
The short comic strip below should help summarise:
True, but utterly useless!
Let’s unpack this in more detail by looking at 8 fallacies that are inherent to this statement and its interpretation of the future of work.
Here’s a summary of the 8 fallacies that explain why this rather simplistic, meme-worthy line might be true in one frame, but utterly useless in reality:
Fallacy #1: The automation vs augmentation fallacy
Fallacy #2: The productivity gains fallacy
Fallacy #3: The static jobs fallacy
Fallacy #4: The ‘me vs someone-using-AI’ competition fallacy
Fallacy #5: The workflow continuity fallacy
Fallacy #6: The neutral tools fallacy
Fallacy #7: The stable salary fallacy
Fallacy #8: The stable firm fallacy
Let’s dig in and explore these fallacies, with parallels from the transformation of basketball and cricket, as well as interesting job dilemmas faced by dockworkers, session musicians, and typists!
“Just learn the right tools, and you’ll stay ahead.”
The entire idea of ‘AI won’t take your job but someone using AI will’ is based on the fallacy of task-based thinking.
The assumption of task-based thinking is that AI will affect your job in one of two ways: automation or augmentation. Automation is AI performing the task for you. Augmentation is you performing the task better with AI. If you want to avoid the first, be prepared to take up the latter.
AI (automation) won't take your job, but someone using AI (augmentation) will.
It’s a way of thinking that stays stuck in the wrong frame of local optimization: Do this task better using AI.
What it misses is that in most systems, the value of the task itself is about to change. When the system evolves, tasks that were once critical may no longer be relevant. Not because they are done poorly, but because they no longer create any advantage.
Think of the impact of containerization on ports. The arrival of the intermodal shipping container did not simply automate loading and unloading of ships. It restructured the economic logic of trade, making some ports like Liverpool irrelevant and changing the fortunes of others like Singapore.
A dockworker who might have worried about cranes taking his job suddenly realised that the entire port had lost out. What really played out was much larger than just port automation. It was a new logic of which ports made sense in the new system and which ones didn’t.
The automation vs augmentation fallacy leaves us stuck analysing our jobs when the entire system around it is changing (as we will explore further through the other fallacies below).
AI doesn’t simply replace tasks. It reshapes the architecture of the system in which those tasks used to make sense.
The same thing happened with the German blitzkrieg. The French believed the task of military defense required fortifications. The Germans saw that victory now required a new form of coordination, between tanks and air support using radio communication, to disorient the enemy.
The problem had been reframed.
How well AI performs the task doesn’t matter as much as how the system gets restructured because of AI. With that, you now need to re-evaluate when that task is needed, who does it, and in what sequence.
AI adoption in organizations doesn’t deliver results,
not because people and organizations fail to adopt AI,
but because they use it solely to optimize within an older frame
that is rapidly becoming irrelevant.
AI is doing to knowledge systems what the container did to logistics and what blitzkrieg did to warfare: it’s shifting the architecture of advantage.
And yet, the dominant response remains task-oriented.
Because, on LinkedIn and in conference panel discussions, the world is described in binary terms, without further nuance - those using AI vs those not using AI.
“AI helps me get more done, so I’ll be more valuable.”
If a tool helps you do more work in less time, that sounds like progress.
That’s only true if the system of work - the workflow, the organization, the business model - remains stable.
Ironically, in the midst of structural uncertainty, productivity gains are often redistributed in unexpected ways. And in some cases, they produce the opposite of their intended effect.
This is because value capture can easily be
decoupled from value creation
during periods of system redesign.
In the apparel industry, factory productivity has increased dramatically over the past two decades, particularly in fast fashion. Automation, better forecasting, and integrated logistics - all work together to allow companies to compress cycle times and push more units through the system. But the value of higher productivity hasn’t flowed to the workers or even to the factories.
Value creation has been decoupled from value capture.
Instead, the value created by faster workers is captured almost entirely by aggregators who define this new system of work - firms like Shein - that manage coordination across the system. They decide what gets made, when, and in what volume. As labor productivity rises with each new improvement, the bargaining power of labor falls further through task reallocation and system redesign.
Companies that adopt AI for task acceleration will soon realize (if they haven’t already) that when tools are widely available and easily replicated, productivity becomes a commodity.
And in a commoditized environment, surplus doesn’t flow to the worker, it flows to the coordinating layer that determines the logic of the system. As more value shifts to coordinating work, the workers using these tools become even more interchangeable and commoditized.
Economists have long understood this through the lens of Ricardian rents. In any system, value accrues not to the most productive participant, but to the one controlling the scarcest complementary asset. In agriculture, it was land. In tech, it’s often those who control coordination or distribution, typically by using data strategically to allocate work or to match supply with demand.
The pie grows as productivity increases, but it doesn’t get sliced evenly.
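To make the Ricardian logic concrete, here’s a deliberately stylized back-of-the-envelope illustration (the numbers are assumptions chosen for the example, not figures from any study): suppose an AI-augmented worker produces output worth 150 a day, up from 100. If the same AI tools are available to every other worker, competition bids the wage down toward the next-best alternative, say 90. The layer that controls the scarce complementary asset - coordination, distribution, data - captures the residual:

$$\text{rent} = \text{output} - \text{wage} \approx 150 - 90 = 60$$

Productivity rose by 50, but the gain accrued to the rent, not the wage. The pie grew; the worker’s slice didn’t.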
If you’d like to read more about the productivity gains paradox, have a look at the article below:
“Jobs are fixed constructs. AI just changes how we do them.”
Basketball used to be organized around five fixed positions: point guard, shooting guard, small forward, power forward, and center.
Each role was a predictable bundle of tasks: the point guard managed the offense and set up plays, the center patrolled the paint and protected the rim, and so on. These roles signified where you stood and what zone you operated in. They were well-positioned components in a tightly organised system. Success came from mastering your slice of the court.
Once analytics was introduced to basketball, it changed the entire logic of play. It became clear that three-pointers are more efficient than mid-range shots, and that spacing creates better offense.
With this new evidence, the old roles didn’t map well onto the new priorities. The game began to reward capabilities that cut across the old positions.
In short, traditional position-based roles got unbundled, and players took on more fluid roles - the same skills, rebundled around the team’s style of play.
Much like the shift in basketball, most jobs are not fixed units. They are artifacts of organizational design, built around coordination problems. A job exists because a system needs someone to manage a specific set of interactions, decisions, or dependencies. When the system changes, the logic of the job can collapse completely, even if the individual tasks remain intact.
The fallacy of static jobs persists because it’s cognitively efficient. It offers a clear anchor in a shifting environment. Job titles serve as focal points. They make organizational complexity manageable.
But it’s grossly misleading. It encourages workers to optimize for role continuity (you doing your job vs someone using AI doing your job) when they should be preparing for role redefinition.
“Will my job be automated?” is the wrong question.
“Will this role still exist in the new system?” is the question that really matters.
To understand how rebundling plays out with AI, here’s a more detailed post:
This post is based on ideas from my upcoming book Reshuffle.
Reshuffle is now available for pre-orders. All pre-orders leading up to the launch date are at 70% off. (Launching June 2025)
Pre-orders are Kindle only.
Hardcover, paperback, and audiobook versions will be available at launch.
“I’m racing my peers to master AI.”
Consider what happened to typists in the late twentieth century. Typists were dedicated specialists who competed on the basis of accuracy, speed, and formatting skill. Errors required retyping entire documents, and every revision was costly in time and labor.
When word processors first came out, typists started believing in the equivalent of “Word processors won’t take your job, but someone using a word processor will.”
And that was true, but not at all in the way that they had expected.
Typists had assumed reskilling would solve the problem.
But the real problem was that the core constraint - expensive document editing - had been removed. Document editing was suddenly cheap. And with that, the basis of competition - the main skill for which they were paid - was suddenly irrelevant.
Typing became embedded across all workflows. From a specialized task requiring specialized skills, it became a basic task that everyone could perform.
The typists weren’t outcompeted by better typists. They were displaced by a new system design in which typing no longer justified a full-time role.
This is the hallmark of a frame shift: the logic of competition itself is restructured. You don’t lose because you were replaced by the new technology or even by someone using the new technology. Both automation and augmentation frames become irrelevant.
You lose because the environment stopped rewarding the thing you had been racing all along to perfect.
This happens whenever the value of a skill collapses. The English longbowman was once the decisive military asset. Training took years, and the skill was rare and lethal. But when gunpowder entered the battlefield, skill mattered less than the ability to reload and fire en masse.
Archers didn’t lose to other archers using the latest technology; they lost to an entirely different model of warfare.
Understanding the constraint that makes you valuable is key. In the case of typists, it was the high cost of document editing. When that constraint is removed, so is your value.
Conversely, as you apply your skills to new workflows that emerge, new constraints can also prevent you from getting the benefits of those skills.
Some of the fastest sprinters in the world often struggle in the 4x100m relay. The issue isn’t task performance - these athletes are the fastest in the world. The issue is workflow constraints. They are just not good at passing the baton and lose vital fractions of a second in those three transitions.
In the end, the event wasn’t won by the fastest team. It was won by a team of slightly slower sprinters who moved the baton smoothly.
“AI will make our process faster.”
If you watch cricket, you’ll have noticed how technology has shifted decision making, as more and more decisions move from on-field umpires to the TV umpire (or third umpire). This shift in the locus of decision-making changes two important things: (1) who holds power and (2) the sequence of play. Players gain more power to appeal decisions, and play gets interrupted in new ways to make way for better decisions.
The someone-using-AI fallacy suggests that your job in some form (or at least your skills today) will remain relevant because the overall workflow will remain stable. Automation would have taken your place in the workflow. Augmentation helps you retain it.
This is the fallacy of workflow continuity: the belief that the steps will remain the same, even as the system evolves.
This fallacy ignores the possibility that in many cases, workflows get reimagined. And when they do, the new workflows may never require the skills you bring to the table, whether you use AI alongside them or not.
Beyond individuals, even firms fall for this fallacy and focus on applying AI to optimize their old processes into irrelevance. They invest in tools that reduce headcount per workflow stage, but miss the larger opportunity: to restructure the system entirely.
Improving a process that AI will soon eliminate is a misallocation of resources.
The real advantage is not in making existing workflows faster, but in being first to build the new ones that won’t need those steps at all.
If you’d like to read more on this, have a look at this post:
“AI is just a tool, how we use it is up to us.”
Tools are rarely neutral. They contain default settings and preferences, and often redistribute organizational power.
Microsoft Excel is a great example. People who mastered Excel inevitably began to hold disproportionate influence through the 1990s, a period when process optimization was all the rage. Decisions that were previously made on gut instinct were now modeled and simulated.
This shifted organizational gravity toward those who controlled the cells in Excel. They held decision rights in the organization.
Any tool that shifts the locus of decision support ends up doing this.
Langdon Winner’s famous question, “Do artifacts have politics?”, was aimed precisely at this kind of illusion. When New York’s parkway overpasses were designed too low for buses to pass under, the result was a de facto exclusion of low-income residents (many of whom relied on public transit) from accessing certain beaches.
Was that political? Winner argued yes, because the design encoded a social consequence.
As AI gets adopted, it affects organizational power in two ways.
The first is by shifting decisions.
AI can be framed as an assistant, a co-pilot, a helper. It’s sold as augmentation, not reallocation. But it often restructures how decisions are made, and by whom.
In any organization, different groups compete for control over key decisions. When a new tool changes who gets to inform and make decisions, it shifts the internal power map. And many jobs lose power through this process.
The second is by shifting execution.
Again, AI is somewhat unique here. Unlike most other technologies, which are primarily assistive, AI is also agentic: it can make choices and allocate resources towards achieving a specific goal. As a result, AI may work alongside you (augmentation) but still take over important parts of workflow execution from you. Gradually, your position in the team weakens as agentic execution substitutes for more of what you used to provide.
So the fallacy of neutral tools - that tools only serve their users - doesn’t stand.
“I still have a job, so I’m fine.”
More work doesn’t always mean more pay.
In fact, very often, it means the opposite.
Alongside the rise of digital music streaming, the volume of music production has exploded. And more tracks mean more recording sessions, which should mean more work for session musicians.
Yet, the economics don’t follow.
Streaming brought down revenue per play. Music was already an industry with power-law outcomes, and streaming skewed those royalty structures further, favouring a few headline artists.
Session musicians, ironically, now have more work than ever, but the rates no longer justify the effort. And the recognition doesn’t scale with output either, as more songs get lost in the big Spotify algorithmic sea. Music also becomes more formulaic as artists rush to capture the listener’s interest in the first 15 seconds of a stream.
So yes, people are listening to more music than ever, and the experts on LinkedIn are talking about Jevons Paradox, but none of that money flows back to the session musicians.
People assume that as long as their role remains intact, their relevance, income, and career trajectory are safe.
The problem, often, is not so much that the job disappears as that its value goes down.
The session musician example is interesting because the system changed in a way that devalued the job. Superior musical talent could no longer easily be linked to better royalties.
The link between higher expertise and higher pay had been decoupled.
We see the same effects play out with AI where the link between higher expertise and higher pay breaks down.
But in this case, it is an outcome of tool augmentation. In general, tools that augment average-skilled workers to perform on par with highly skilled workers have a flattening effect. Expertise and pay get decoupled.
This problem is further exacerbated with AI because of the learning advantage of AI. The more you use AI, the more you train it to become capable of doing things that you get paid to do today. As AI becomes more capable, your own job fragments further and what remains of it may increasingly not justify the pay you used to command.
This is a case of augmentation (someone using AI) leading to an adverse outcome where you continue to retain the job but no longer command the skill premium.
Economic indicators show nothing has changed
as most of them measure continuity, not commoditization.
And organizations, eager to avoid disruption, often reinforce this illusion by keeping roles in place even as they start changing what those roles mean and how much they get paid.
If you’d like to read the detailed teardown of this argument, have a look at this post:
“We’re integrating AI into our strategy.”
Most companies today talk about technology adoption in the language of addition. AI will be bolted on top. An integration here, a pilot somewhere else. With little thought given to the strategy, the structure, or the organizational workflows.
This is the fallacy that intelligent firms - firms looking to use intelligence in the existing frame - typically make. Same as before, just more intelligent.
This framing misunderstands the nature of architectural change. It assumes the organization is a container that can absorb transformation without being reshaped by it.
Such firms treat AI like a feature upgrade, instead of a new operating system.
In reality, when a technology changes how decisions are made and how coordination happens, it reorganizes the firm.
This is the final fallacy.
The fallacy of the stable firm is appealing because it promises transformation without re-organization. It suggests you can keep your structure, your culture, your business model, and simply inject a new capability.
A useless truth is a statement that feels right, sounds intelligent, and circulates easily (especially on LinkedIn).
The problem, though, is that it gives you just enough reassurance to make you stop asking deeper questions. It acts like curiosity anaesthesia - you were probably struggling with a hairy problem, to which you now have what looks like a clean answer.
In many ways, a useless truth is worse than something that’s clearly false. It lacks nuance and is dangerously easy to misinterpret.
Once repeated enough, a useless truth becomes a mechanism for easy consensus - that sea of nodding heads at a panel discussion.
Think of the last time you talked about AI and someone responded with “AI won’t take your job but someone using AI will.”
Because its meaning is vague, people gravitate toward the most obvious interpretation: that if you start using AI, you’ll ensure that the ‘someone using AI’ who takes your job is actually you.
The danger of a useless truth is this form of false closure. It lets people feel they’ve solved something important, when in fact, it locks them into - what is most likely - error-prone execution.
These useless truths create what we might call passive consensus, particularly in organizations that face pressure to act amid uncertainty. Everyone agrees, no one disagrees. It preserves ambiguity while preventing thoughtful design of the right solutions.
The history of military power is often told through its weapons: the shift from bows to muskets, cavalry to tanks, machine guns to nuclear warheads. And so is the story of technology’s impact on work.
But the ‘better tools’ framing of history is a fallacy.
What matters in each instance is better alignment between
(1) the design of the new system (the game) and
(2) the environment in which that system works (the playing field).
The tools are a mechanism for designing better systems.
Shifts in advantage are rarely about better tools, they’re about who understands the new logic of coordination first.
Battles shifted from close-range engagements to long-range coordination, and from individual skill to scalable execution.
And the victors rarely won because they knew how to use the new weapons well; they won because they understood how to re-organize the system around them to create a new logic of combat.
The ‘someone using AI’ fallacy often leaves us with little more than the next Maginot Line - true, but utterly useless!
If you’ve made it all the way through, now is a good time to share this further!
This post is based on ideas from my upcoming book Reshuffle. (Launching June 2025)