Against the Protection of Stocking Frames


I think it’s long past time I start discussing “artificial intelligence” (“AI”) as a failed technology. Specifically, that large language models (LLMs) have repeatedly and consistently failed to demonstrate value to anyone other than their investors and shareholders. The technology is a failure, and I’d like to invite you to join me in treating it as such.

I’m not the first one to land here, of course; the likes of Karen Hao, Alex Hanna, Emily Bender, and more have been on this beat longer than I have. And just to be clear, describing “AI” as a failure doesn’t mean it doesn’t have useful, individual applications; it’s possible you’re already thinking of some that matter to you. But I think it’s important to see those as exceptions to the technology’s overwhelming bias toward failure. In fact, I think describing the technology as a thing that has failed can be helpful in elevating what does actually work about it. Heck, maybe it’ll even help us build a better alternative to it.

In other words, approaching “AI” as failure opens up some really useful lines of thinking and criticism. I want to spend more time with them.


Right, so: why do I think it’s a failure? Well, there are a few reasons.

The first is that as a product class, “AI” is a failed technology. I don’t think it’s controversial to suggest that LLMs haven’t measured up to any of the lofty promises made by their vendors. But in more concrete terms, consumers dislike “AI” when it shows up in products, and it makes them actively mistrust the brands that employ it. In other words, we’re some three years into the hype cycle, and LLMs haven’t met any markers of success we’d apply to, well, literally any other technology.

This failure can’t be separated from the staggering social, cultural, and ecological costs associated with simply using these services: the environmental harms baked into these platforms; the violent disregard for copyright that brought them into being; the real-world deaths they’ve potentially caused; the workforce of underpaid and traumatized contractors who are quite literally building these platforms; and many, many more. I mention these costs because this isn’t a case of a well-built technology failing to find its market. As a force for devastation and harm, “AI” is a wild success; but as a viable product it is, again, a failure.

And yet despite all of this, “AI” feels like it’s just, like, everywhere. Consumers may not like or even trust “AI” features, but that hasn’t stopped product companies from shipping them. Corporations are constantly launching new LLM initiatives, often simply because of “the risk of falling behind” their competitors. What’s more, according to a recent MIT report, very nearly all corporate “AI” pilots fail.

I want to suggest that the ubiquity of LLMs is another sign of the technology’s failure. It is not succeeding on its own merits. Rather, it’s being propped up by terrifying amounts of investment capital, not to mention a recent glut of government contracts. Without that fiscal support, I very much doubt LLMs would even exist at the scale they currently do.

So. The technology doesn’t deliver consistent results, much less desirable ones; what’s more, it exacts terrible costs while failing to reliably produce anything of value. It is fundamentally a failure. And yet, private companies and public institutions alike keep adopting it. Why is that?

From where I sit, the most consistent application of LLMs at work has been through top-down corporate mandate: a company’s leadership will suggest, urge, or outright require employees to incorporate “AI” in their work. Zapier’s “AI-first” mandate is one recent example. At some point, the company decided to require “AI” usage across its organization, joining such august brands as Shopify, Duolingo, and Taco Bell. But in this post from the summer, Zapier’s global head of talent talks about how the company’s expanding the size and scope of that initial mandate. Here’s the intro:

Recently, we shared our AI adoption playbook, which showed that 89% of the Zapier team is already using AI in their daily work. But to make AI transformation truly sustainable, we have to start at the beginning: how we hire and onboard people into Zapier to build this future with us.

I’ve written before about the problems with “adoption” as a success metric: that “usage of a thing” doesn’t communicate anything about the quality of that usage, or about the health of the system overall. But despite that, Zapier’s moved beyond mandated adoption, and has begun changing its hiring and onboarding practices — including how it evaluates employee performance. How does an “AI” mandate show up in a performance review? I’m so glad you asked:

Starting immediately, all new Zapier hires are expected to meet a minimum standard for AI fluency. That doesn’t mean deep technical expertise in every case — but it does mean showing a mindset of curiosity toward AI, a demonstrated willingness to experiment with it, and an ability to think strategically about how AI can amplify their work.

[…]

We map skills across four levels, keeping in mind that AI skills vary and are heavily role-specific.

  • Unacceptable: Resistant to AI tools and skeptical of their value
  • Capable: Using the most popular tools, with likely under three months of hands-on experience
  • Adoptive: Embedding AI in personal workflows, tuning prompts, chaining models, and automating tasks to boost efficiency
  • Transformative: Uses AI not just as a tool but to rethink strategy and deliver user-facing value that wasn’t possible a couple years ago

There’s an insidious thing nestled in here.

Andy Bell and Brian Merchant have both documented tech workers’ reactions to “AI” mandates: what it feels like to have parts of your job outsourced to automation, and how that changes the experience of showing up for work. I’d recommend reading both posts in full; it’s possible you’ll see something of your own feelings mirrored in those testimonials. And those stories track with my own conversations with tech workers, who’ve shared how difficult it is to talk openly about their concerns at work. I’ve heard repeatedly about a kind of stifling social pressure: an unstated expectation that “AI” has to be seen as good and useful; pointing out limitations or raising questions feels difficult, if not dangerous.

But this Zapier post is the first example I’ve seen of a company making that implicit expectation into an explicit one. Here, the official policy is that attitude toward a technology should be used as a quantifiable measurement of how well a person aligns with the company’s goals: what the industry has historically (and euphemistically) referred to as culture fit. At this company, you could receive a negative performance review for being perceived as “resistant” or “skeptical” of LLMs. You’d be labeled as unacceptable.

I mean, look: on the face of it, that’s absurd. That is absurd behavior. Imagine screening prospective hires by asking for their opinions about your company’s hosting provider, or evaluating employees on how they feel about Microsoft Teams. Just to be clear, I fully believe evaluations like these have happened in the industry — hiring and performance reviews are both riddled with bias, especially in tech. But this is the first time I’ve seen a company policy explicitly state that acceptance of “AI” is a matter of cultural compliance. That you’re either on board with “artificial intelligence,” or you’re not one of us.

This is where I think approaching “AI” as a failure becomes useful, even vital. It underscores that the technology’s real value isn’t in improving productivity, or even in improving products. Rather, it’s a social mechanism employed to ensure compliance in the workplace, and to weaken worker power. Stories like Zapier’s are becoming more common: executive fiat deployed to force employees to use a technology that could deskill them, and make them more replaceable. Arguably, this is the one use case where “artificial intelligence” has delivered some measure of consistent, provable results.

But here’s the thing: this is a success only if tech workers allow it to be. I’m convinced we can turn this into a failure, too. And we do that by getting organized.

— okay, yes, I know. I am the person who thinks you deserve a union. But it’s not just me: from game studios to newsrooms, many workers are unionizing specifically because they want contractual protections from “artificial intelligence.” Heck, the twin strikes in Hollywood weren’t about banning “AI,” but about giving workers control over how and when the technology was employed. I think, at minimum, we deserve that level of control over our work.

With all that said, you don’t have to be unionized to start organizing: to have conversations with your coworkers, to share how you’re feeling about these changes at work, and to start talking about what you’d like to do about those changes, together. It really is that simple.

That isn’t to say organizing is easy, mind: it involves having many, many conversations with your coworkers, and looking for shared concerns about issues in the workplace. And, look: I’m writing this post at a time when the labor market’s tight, when there’s so much pressure to not just adopt LLMs but to accept them unquestioningly. In that context, I realize that inviting coworkers to share some thoughts about automation can feel difficult, if not dangerous. But it’s only by organizing — by talking and listening to each other, and acting together in solidarity — that we have a chance at building a better, safer version of the tech industry.

“Artificial intelligence” is a failure. Let’s you and I make sure it stays that way.

