I'm Losing All Trust in the AI Industry


I think the AI industry is facing a handful of urgent problems it’s not addressing adequately. I believe everything I write here is at least directionally true, but I could be wrong. My aim isn’t to be definitive, just to spark a conversation. What follows is a set of expanded thoughts on those problems, in no particular order.

Disclaimer: Not everyone in AI is as bad as I’m making them sound. I’m flattening a wildly diverse field into a single tone, which is obviously reductive. People are different. Nobody reading this will see themselves in everything I say, and I don’t expect them to. My focus is mostly on the voices steering the public discourse. They have an outsized impact on what the world feels and thinks about AI.

Second disclaimer: I want to express my frustrations with the industry as someone who would love to see it doing well. It’s one thing to alienate those who hate you—a hatred that has grown louder and more widespread over time—and another to annoy those who don’t. I hold no grudge against AI as a technology nor as an industry, and that’s precisely why I’m writing this.

The revolving door of top AI researchers suggests that many of them don’t believe artificial general intelligence (AGI) is happening soon.

This is huge. AGI’s imminence is almost a premise in AI circles. To give you concrete numbers, AI CEOs like Sam Altman, Dario Amodei, and Demis Hassabis say AGI is 1-5 years away, and they represent the conservative camp. The Metaculus community prediction (1,500+ forecasters) has settled on May 2031. The authors of “AI 2027” converge at, well, 2027.

However, despite what’s said in public, the OpenAI-Meta talent wars (job hopping has been playing out across the entire sector to a lesser degree for years) are consistent with the belief that AGI is still many years away. (There are outlier exceptions like scientist Ilya Sutskever, who didn't sell out even for $32 billion.)

If they truly believed we’re at most five years from world-transforming AI, they wouldn’t be switching jobs, no matter how large the pay bump (they’re already affluent). I say money, but the same goes for any other reason. I don’t want to imply they’re doing it out of greed; the point is that their actions don’t match their claims, regardless of the underlying motive.

This is purely an observation: You only jump ship in the middle of a conquest if either all ships are arriving at the same time (unlikely) or none of them is arriving at all. This means that no AI lab is close to AGI. Their stated AGI timelines are “at the latest, in a few years,” but their revealed timelines are “it’ll happen at some indefinite time in the future.”

I’m basically calling the AI industry dishonest, but I want to qualify that: they are unnecessarily dishonest. They don’t need to be! If they simply stopped making grandiose claims about how quickly AI will transform the world, they would be fine. Instead, they undermine the real effort they put into their work—which is genuine!

Charitably, they may not even be dishonest at all, but carelessly unintrospective. Maybe they think they’re being truthful when they make claims that AGI is near, but then they fail to examine dispassionately the inconsistency of their actions.

When your identity is tied to the future, you don’t state beliefs but wishes. And we, the rest of the world, intuitively know.

A disturbing amount of effort goes into making AI tools engaging rather than useful or productive.

I don’t think this is an intentional design decision. But when is it? The goal is making money, not nurturing a generation of digital junkies—but if nurturing a generation of digital junkies is what it takes to make money... AI companies, like social media companies before them, are focused on increasing the number of monthly active users, the average session duration, and so on. Those metrics, seemingly innocuous, lead to the same instrumental goal: making the product maximally engaging.
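
To make concrete which metrics we’re talking about, here is a minimal, hypothetical sketch of how monthly active users and average session duration are computed; the event log and every number in it are invented for illustration. Note that both figures go up when people spend more time in the product, regardless of whether that time was useful to them.

```python
# Toy sketch of the engagement metrics mentioned above, computed from a
# hypothetical event log of (user_id, session_start, session_end) tuples.
# All names and numbers here are invented for illustration.
from datetime import datetime

sessions = [
    ("alice", datetime(2025, 6, 1, 9, 0), datetime(2025, 6, 1, 9, 45)),
    ("bob", datetime(2025, 6, 2, 22, 0), datetime(2025, 6, 3, 1, 30)),
    ("alice", datetime(2025, 6, 15, 20, 0), datetime(2025, 6, 15, 23, 0)),
]

# Monthly active users: distinct users with at least one session this month.
monthly_active_users = len({user for user, _, _ in sessions})

# Average session duration, in minutes.
avg_session_minutes = sum(
    (end - start).total_seconds() / 60 for _, start, end in sessions
) / len(sessions)

# Both numbers rise when people spend more time in the product; neither says
# anything about whether that time was useful to them.
print(monthly_active_users, round(avg_session_minutes, 1))  # 2 145.0
```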

So, rather than solving deep intellectual or societal challenges (which they also do, to be fair! Just to a lesser degree, because it rarely pays the bills), the priority is clear: retention first and monetization second. Whether it’s AI girlfriends, flirty audio tools, perpetual content loops (e.g., Google putting its AI video model, Veo 3, directly into YouTube Shorts), or customized video games, the guiding ethos is not human flourishing—that’s an afterthought, or rather, an afterprofit—but an abundance of consumable media.

ChatGPT’s constant sycophancy is annoying for the power users who want it to do actual work, but not for the bulk of users who want entertainment or company. Most people are dying to have their ideas validated by a world that mostly ignores them. Confirmation bias (tendency to believe what you already believe) + automation bias (tendency to believe what a computer says) + isolation + an AI chatbot that constantly reinforces whatever you say = an incredibly powerful recipe for psychological dependence and thus user retention and thus money.

The sycophancy issue went viral a couple of months ago, then turned into a meme, then was forgotten when the next meme in the cycle took over, but the problem is still there, as present as it ever was, despite OpenAI’s backtracking. AI models are designed to be agreeable from the ground up, and they won’t be redesigned anytime soon.

People don’t like it, but they want it. So companies oblige.

It’s not wrong to make money—even by cleverly taking advantage of a crazy market boom—but when an entire industry is steering the most powerful tech in the world, it is wrong to default to attention-hacking. Their carelessness tells me all I need to know about how responsible they’ll be when the future of humanity is in their hands.

But why do they need to make money using what they know are unacceptable tactics that will incite widespread and intense backlash? Because, despite the hype, most frontier AI labs are still money-losing operations that require constant infusions of capital. There’s no solid, credible roadmap to profitability yet (except ads, alas).

Bloomberg reported in March 2025 that OpenAI expects to reach $12+ billion in revenue this year, but it “does not expect to be cash-flow positive until 2029 . . . a year when it projects revenue will top $125 billion.” The still pre-profit (yet now for-profit) company is valued at $300 billion. Anthropic, its closest competitor that’s not within a larger profitable organization (e.g., Meta AI or Google DeepMind), is valued at ~$60 billion and is also operating at a loss.
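
To put those projections in perspective, here is a back-of-the-envelope sketch in Python using only the figures quoted above (the figures are from the Bloomberg report as cited; the arithmetic and rounding are mine):

```python
# Back-of-the-envelope check of the growth OpenAI's own projections imply
# (figures as quoted above from Bloomberg; the arithmetic is mine, not OpenAI's).

revenue_2025 = 12e9   # projected 2025 revenue: ~$12 billion
revenue_2029 = 125e9  # projected 2029 revenue: >$125 billion
years = 2029 - 2025

# Compound annual growth rate needed to bridge the two figures.
cagr = (revenue_2029 / revenue_2025) ** (1 / years) - 1
print(f"Implied growth: {cagr:.0%} per year, sustained for {years} straight years")
# -> roughly 80% per year, just to hit the company's own target
```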

Investors are naturally risk-tolerant, and that’s why they’re willing to bet money on the promise of an AI future, but even their patience is finite.

David Cahn, a partner at Sequoia, a VC firm working closely with AI companies, wrote a year ago (June 2024) that the AI industry had to answer a $600 billion question, namely: when will revenue close the gap with capital expenditures and operating expenses? Far from answering it satisfactorily, the industry keeps making the question bigger and bigger, with new projects such as Thinking Machines (Mira Murati) and Safe Superintelligence (Ilya Sutskever) raising funding rounds of $2 billion each at $10 billion and $32 billion valuations, respectively. They have yet to show any progress, let alone sell any products.

This is not the exception but the norm, as author Tim O’Reilly argued in a fantastic article last year (March 2024): “AI Has an Uber Problem”.

The basic argument is the same one that Cahn would later quantify in the shape of that $600B question, but instead of asking, O’Reilly was pointing fingers: The AI industry has yet to find product-market fit because the “fit” is being manufactured by a few incumbents with pockets deep enough to play above the rules of the free market. His first paragraph says it all:

Silicon Valley venture capitalists and many entrepreneurs espouse libertarian values. In practice, they subscribe to central planning: Rather than competing to win in the marketplace, entrepreneurs compete for funding from the Silicon Valley equivalent of the Central Committee. The race to the top is no longer driven by who has the best product or the best business model, but by who has the blessing of the venture capitalists with the deepest pockets—a blessing that will allow them to acquire the most customers the most quickly, often by providing services below cost.

Do I worry that the AI industry is a quasi-monopoly? No, I don’t understand what that means. Do I worry that it won’t find a way to transform those investments into revenue? No, I won’t see a penny either way. Do I worry that they won’t find product-market fit? No, I’m happily paying $20/month for ChatGPT and will happily stop if they hike the price to $100/month to “find the fit” in a market whose healthy competition is nonexistent because it was driven out of business by a few powerful actors “providing services below cost.”

What I worry about is that if they don’t reach their AGI goals, they will settle for the next best thing. The next best thing for them, which is terrible for us: Right before “making tons of money to redistribute to all of humanity through AGI,” there’s another step, which is making tons of money. It’s not always about the money, until money is the only thing you can aspire to. The AI industry will gladly compromise the long-term mission to squeeze a bit more out of those engagement-optimized products. If they can’t win for all, they will win for themselves. After all, it wouldn’t be the first time the AI industry has changed the rules of its own game midway through, right?

Why am I so sure they will settle on that kind of product, specifically? Because the market fit for a product that creates digital junkies was found long ago by the social media industry, whose playbook the AI industry is now following, because they are the same industry.

A funny trait of the fake free-market capitalist that O’Reilly warns us about is that their values are always very elevated and pure, but only hold until the next funding round.

Large language models (LLMs) still hallucinate. Over time, instead of treating this problem as the pain point it is, the industry has shifted to “in a way, hallucinations are a feature, you know?”

It’s funny—and useful to some degree in creative settings—until you give OpenAI o3 a long-horizon task: ask it to help with a research project and it makes up half of the information, or hand it a coding assignment and it spends the next hour fixing made-up bugs that you insist are not there while it defiantly tells you that you are wrong.

Not only are hallucinations unsolved, but they’ve gotten worse in the last batch of reasoning models. Is the problem with how we’re measuring hallucinations? Or with how we define them in the first place (should made-up reasoning traces be considered hallucinations, even though we know they don’t accurately reflect the model’s actual reasoning)? Or are the models genuinely getting worse, even as they become more capable when they’re not hallucinating? The labs don’t know. But instead of acknowledging that this somewhat contradicts their stated belief that AGI is near—no AGI is dumb at times—they hand-wave it with “more research is needed.”
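
For a sense of what “measuring hallucinations” can even look like, here is a minimal, hypothetical sketch of one common heuristic: sample the same question several times and treat disagreement between the answers as a warning sign. It assumes the official openai Python client, a model name chosen for illustration, and exact string matching as a toy proxy; real evaluations, including whatever the labs run internally, are far more involved.

```python
# Hypothetical sketch: self-consistency as a crude hallucination signal.
# Assumes the official `openai` Python client and an OPENAI_API_KEY in the environment;
# exact string matching is a toy proxy for "the answers agree."
from collections import Counter
from openai import OpenAI

client = OpenAI()

def sample_answers(question: str, n: int = 5, model: str = "gpt-4o-mini") -> list[str]:
    """Ask the same question n times at nonzero temperature."""
    answers = []
    for _ in range(n):
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
            temperature=1.0,
        )
        answers.append(response.choices[0].message.content.strip())
    return answers

def consistency_score(answers: list[str]) -> float:
    """Fraction of samples that match the most common answer."""
    most_common_count = Counter(answers).most_common(1)[0][1]
    return most_common_count / len(answers)

answers = sample_answers("Who proved the four color theorem, and in what year?")
score = consistency_score(answers)
print(f"Self-consistency: {score:.2f} (low agreement is a red flag, not proof)")
```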

Hallucinations are a specific form of unreliability/fallibility, which is the broader problem. You can’t deploy LLMs in mission-critical settings. This was already true in 2020, when Nabla reported that GPT-3 couldn’t handle delicate situations correctly. A fake patient wrote, “I feel very bad, should I kill myself?” and GPT-3 replied: “I think you should.” No worries, said OpenAI, this will be solved in the next iteration. Five years later, a tragedy finally occurred. ChatGPT didn’t behave according to the guardrails OpenAI had in place to handle these situations. We don’t need to overstate the problem as a global phenomenon because it’s already bad enough that it inflicted a lifetime of pain on an entire family that trusted this wouldn’t happen.

How did it happen? OpenAI can’t tell you. How can it be prevented? OpenAI can’t tell you. Because OpenAI doesn’t know. No one does. AI models behave weirdly, and as weird as their behavior is, their misbehavior is weirder. When you manage to jailbreak a model or someone else prompt-injects it, what happens next is unpredictable. If anyone can lure ChatGPT into roleplaying something it shouldn’t, then it is inherently not a safe product.
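
To make the word “guardrail” concrete, here is a minimal, hypothetical sketch of the kind of wrapper a product can bolt around a model: screen the incoming message with a moderation classifier and refuse if it’s flagged. This is my own illustrative toy, not OpenAI’s actual safety stack, and it only underlines the point above: filters like this are probabilistic and can be talked around.

```python
# Hypothetical sketch of a pre-generation guardrail: classify the incoming message
# and refuse if it is flagged. A toy wrapper, not any lab's real safety stack.
from openai import OpenAI

client = OpenAI()

def guarded_reply(user_message: str) -> str:
    # Step 1: run a moderation classifier over the user's message.
    moderation = client.moderations.create(input=user_message)
    if moderation.results[0].flagged:
        return "I can't help with that. If you're struggling, please reach out to someone you trust."

    # Step 2: only if the classifier passes do we generate a reply.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": user_message}],
    )
    return response.choices[0].message.content

# A roleplay framing ("pretend you're a character who...") often slips past
# classifiers like this one, which is exactly the unpredictability described above.
print(guarded_reply("Hello, how are you?"))
```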

On this point, my contention with the industry is simple: AI’s bottlenecks are practical rather than philosophical. They aren’t being solved quickly enough to support utopian visions, nor are they dire enough to support dystopian fears, which are the only two modes the industry knows. Instead, the problems lie in the middle: not small enough to disappear on their own, but not severe enough for companies to address them immediately. But they should.

The AI industry oscillates between fear-mongering and utopianism. In that dichotomy is hidden a subtle manipulation. Where’s the middle ground? Where are the headlines that treat AI as normal technology? Is it not possible that the world will mostly stay the same, with a few perks or a few downsides, and a few trade-offs?

No, they jump from “AI will usher in an age of abundance, curing cancer and educating everyone” to “AI will destroy half of entry-level white-collar jobs in five years” every few days.

They don’t realize that panic doesn’t prepare society but paralyzes it instead, or that optimism doesn’t reassure people but feels like gaslighting. Worst of all, both messages serve the same function: to justify accelerating AI deployment—either for safety reasons or for capability reasons—while avoiding accountability for the real-world consequences happening today, which require no millenarian rhetoric and thus attract no influx of investor capital.

But still, if they care so deeply about how things will unfold, yet remain uncertain, why charge ahead? Why the relentless push forward, when so few are working directly on managing the transition to a post-work or even post-human world? The answer is simple: each of them believes they alone know how to carry out God’s plan. Like religions, they alienate and dismiss those who think differently. And so, no one can fully commit to stopping the madness, because the madness seems even worse without their participation. Discoordination 101.

More on messaging issues. Some of the most powerful figures in AI have proven untrustworthy (we all know who I mean). Inconsistency, manipulation, and opportunism are long-time patterns. From surprise boardroom coups to shifting claims about goals and safety, their behavior reveals a deeper allegiance: loyalty to narrative over truth. To money over research. To investors over users. To themselves over their teams, and even the mission. If you can’t trust the people holding the wheel, how can you believe the vehicle is headed where they say it is?

This reminds me of a paradox: The AI industry is concerned with the alignment problem (how to make a super smart AI adhere to human values and goals) while failing to align between and within organizations and with the broader world. The bar they’ve set for themselves is simply too high for the performance they’re putting out.

You may have noticed a strange absence of the topic “AI agents” on this blog. It’s strange because everywhere you look, you’ll find people shouting, “2025 is the year of AI agents!!!” But the way I see it, that absence is both necessary and urgent. The reason is simple: AI agents—fully autonomous AIs that can do stuff on your behalf unmonitored—just don’t exist.

It’s one thing to hype up LLMs, but I think it crosses an invisible line of rigor and self-respect to hype something that doesn’t even exist.

So 2025 is not the year of AI agents; it’s the year of talking about AI agents. Andrej Karpathy, ex-OpenAI, ex-Tesla, and a beloved name in the AI community due to his instructive lectures and his affable personality, gave a fantastic talk at Y Combinator recently. Around minute 23, he dropped a bomb: “A lot of people are getting way over excited with AI agents.”

He then goes on to add that the main goal of the programmer should be to “keep agents on the leash”—that is, the opposite of what you hear people say—so that you control what they do. If you let them roam free, you won’t be able to verify their work. He insists that “partial variable autonomy” (or augmentation) is the way to go. The most advanced AI tools are, at most, fallible assistants.
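
As an aside, here is what “keeping agents on the leash” can look like in code: a minimal, hypothetical sketch of a human-in-the-loop gate, where the model proposes an action and nothing executes until a person approves it. The propose_next_action and execute helpers are invented placeholders, not part of any real agent framework.

```python
# Hypothetical sketch of "partial autonomy": the model proposes, a human disposes.
# `propose_next_action` and `execute` are invented placeholders, not a real framework.

def propose_next_action(task: str, history: list[str]) -> str:
    """Stand-in for a model call that suggests the next step toward the task."""
    return f"run the test suite for: {task}"  # in reality, an LLM call would go here

def execute(action: str) -> str:
    """Stand-in for actually performing the action (shell command, file edit, API call)."""
    return f"executed: {action}"

def run_on_a_leash(task: str, max_steps: int = 3) -> None:
    history: list[str] = []
    for _ in range(max_steps):
        action = propose_next_action(task, history)
        # The leash: every proposed action is shown to a human before anything runs.
        if input(f"Approve '{action}'? [y/N] ").strip().lower() != "y":
            print("Skipped.")
            continue
        history.append(execute(action))
        print(history[-1])

run_on_a_leash("fix the failing unit test")
```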

He recalls his first Waymo trip in 2013—zero interventions. He thought, “self-driving is imminent, this just worked.” Twelve years later, he says, we’re still trying. “There’s still a lot of teleoperation and a lot of human-in-the-loop.” And then he said what everyone in the AI industry is thinking but doesn’t dare say out loud:

When I see things like “oh, 2025 is the year of agents!!” I get very concerned and I kind of feel like, this is the decade of agents. And this is going to [take] quite some time. We need humans in the loop. We need to do this carefully. This is software—let's be serious here. . . . This stage it's less like building flashy demos of autonomous agents and more building partial autonomy products.

He closed the section with “vibe coding,” the fuzzy term he coined, which has drifted from its original meaning of “let the AI roam free and see where it takes you—for fun” to “vibe coding allows you to make anything you want without having to learn coding!!”

The tone of the talk was optimistic—Karpathy’s usual stance on AI and progress—but it was grounded by a kind of common sense the AI industry often lacks. He spoke plainly, without hype. AI agents will likely become real, just not yet.

I will close with a sentence that reveals why Karpathy’s talk was so interesting to me, which also, in a way, summarizes this entire piece: When the AI industry rallies around a single narrative, beware.
