Character.ai Gave Up on AGI. Now It's Selling Stories


After school, Karandeep Anand often finds his 6-year-old daughter deep in conversation with an AI chatbot as she eats snacks at their kitchen counter. She’s too young to type—let alone have her own account on Character.AI—but that hasn’t stopped her from nabbing his phone to have voice conversations with a Sherlock Holmes bot, which she uses to build her own mystery stories.

Character.AI is an AI companion startup (though Anand likes to say it's an AI role-play startup, which we’ll get into later). He took over as CEO in June, in the midst of a potentially devastating lawsuit against its parent company and looming questions about child safety. When I ask if he’s concerned about his daughter connecting with an AI chatbot rather than a real human, he’s quick to say no.

“It is very rarely, in any of these scenarios, a true replacement for any human,” Anand told me during a video call late last week. “It's very clearly noted in the app that, hey, this is a role-play and an entertainment, so you will never start going deep into that conversation, assuming that it is your actual companion.”

It's a delicate moment for Character.AI.

Last August, Google swooped in with a roughly $2.7 billion deal to license Character.AI’s technology. As part of the agreement, Character.AI’s two cofounders left for Google’s AI division.

Anand, who previously worked as the VP of business products at Meta, was tasked with picking up the pieces—which he did in part by leaving behind the founding mission of delivering personalized superintelligence to focus on AI entertainment.

“What we gave up was this aspiration that the founders had of building AGI models—we are no longer doing that. That is the hundreds of billions of dollars investment fight, which Big Tech is fighting,” Anand says. “What we got in return was clarity and focus, being able to singularly pursue the AI entertainment vision.”

As part of this change in strategy, Character.AI is no longer trying to build its own frontier models. “The last six months, we've done a lot of work to get off of our proprietary models on text and start using open source models,” Anand says. The company has tested a few: Meta’s Llama, Alibaba’s Qwen, and DeepSeek. “The open source models are beating any proprietary model hands down,” Anand claims.

Running an AI startup without billions of dollars in revenue can be a brutal equation, and Character.AI is still figuring out how to make the math work. The company told me it's generating revenue at a run rate of more than $30 million and is on track to reach $50 million in revenue by the end of the year. When I asked Anand how many users pay for the $10 monthly subscription, he didn’t give a number but noted “monetization wasn't a focus till four or five months ago.”

“Since I've been on board, it's very clear we do need to monetize. And we've had, I think, almost 250 percent subscriber growth in the last six months. So the paid user base is growing quite, quite well,” Anand says. Character.AI recently introduced advertisements, including reward ads (where users can choose to watch an ad to get access to on-platform incentives), to help monetize in countries where subscriptions aren’t feasible, he tells me.

“AI is expensive. Let's be honest about that,” Anand says.

Growth vs. Safety

In October 2024, the mother of a teen who died by suicide filed a wrongful death suit against Character Technologies, its founders, Google, and Alphabet, alleging the company targeted her son with “anthropomorphic, hypersexualized, and frighteningly realistic experiences, while programming [the chatbot] to misrepresent itself as a real person, a licensed psychotherapist, and an adult lover.” At the time, a Character.AI spokesperson told CNBC that the company was “heartbroken by the tragic loss” and took “the safety of our users very seriously.”

The tragic incident put Character.AI under intense scrutiny. Earlier this year, US senators Alex Padilla and Peter Welch wrote a letter to several AI companionship platforms, including Character.AI, highlighting concerns about “the mental health and safety risks posed to young users” of the platforms.

“The team has been taking this very responsibly for almost a year now,” Anand tells me. “AI is stochastic, it's kind of hard to always understand what's coming. So it's not a one time investment.”

That’s critically important because Character.AI is growing. The startup has 20 million monthly active users who spend, on average, 75 minutes a day chatting with a bot (a “character” in Character.AI parlance). The company’s user base is 55 percent female. More than 50 percent of its users are Gen Z or Gen Alpha. With that growth comes real risk—what is Anand doing to keep his users safe?

“[In] the last six months, we've invested a disproportionate amount of resources in being able to serve under 18 differently than over 18, which was not the case last year,” Anand says. “I can't say, ‘Oh, I can slap an 18+ label on my app and say use it for NSFW.’ You end up creating a very different app and a different small-scale platform.”

More than 10 of the company’s 70 employees work full-time on trust and safety, Anand tells me. They’re responsible for building safeguards like age verification, separate models for users under 18, and new features such as parental insights, which allow parents to see how their teens are using the app.

The under-18 model launched last December. It includes “a narrower set of searchable Characters on the platform,” according to company spokesperson Kathryn Kelly. “Filters have been applied to this set to remove Characters related to sensitive or mature topics.”

But Anand says AI safety will take more than just technical tweaks. “Making this platform safe is a partnership between regulators, us, and parents,” Anand says. That’s what makes watching his daughter chat with a Character so important. “This has to stay safe for her.”

Beyond Companionship

The AI companionship market is booming. Consumers worldwide spent $68 million on AI companionship in the first half of this year, a 200 percent increase from last year, according to an estimate cited by CNBC. AI startups are gunning for a slice of the market: xAI released a creepy, pornified companion in July, and even Microsoft bills its Copilot chatbot as an AI companion.

So how does Character.AI stand out in a crowded market? It takes itself out of it entirely.

“AI companionship is the wrong way to look at what people do with Character. What people are doing with Character is actually role-play. And it sounds interchangeable, but it isn't,” Anand tells me, adding that less than 20 percent of the app gets used for companionship (that’s according to an internal research report of data self-reported by users). It doesn’t appear to be totally out of the simulated relationship game, though—it took me all of a few minutes to find an AI boyfriend to engage in graphic sexual role-play with.

“People want to role-play situations. People want to role-play fiction … They want to live in alternate realities. They want to unplug from their day-to-day stuff,” Anand says.

I, personally, unplug from my day through a different kind of virtual world. I am totally addicted to the video game Stardew Valley. I run Huckleberry Farm like the damn Marines. To Anand, the video game is more of a competitor than Grok. “It became very clear that we're an entertainment company,” Anand says.

Musk and Bezos Roast Battle

When it comes to role-playing, the Seattle-based CEO says he’s mostly into using Characters for vampire fan fiction. The problem, he says, is that when the vampire bot talks about blood, it gets censored. “The context needs to be understood, so we dial back on the filters by being a lot more precise with the context,” Anand tells me.

This level of content moderation is one of the many changes Anand has been working on since taking over the company in June. The company also redesigned the app with a more modern, Gen Z–friendly look and added new tools for the platform’s creators, who generate more than 9 million Characters per month. These updates, he says, mark a shift from Character.AI being seen as just a chatbot company to something more ambitious: an entertainment engine where users can consume and create stories, remix content, and experiment with new formats like audiobooks.

“Every story can actually have a billion endings,” Anand says. A user could even stage a roast battle between Elon Musk and Jeff Bezos, he adds. “You can prompt that and output something pretty fun.”

I’m not sure the litany of lawyers employed by those tech CEOs would be as entertained. That’s not to mention the people who may not be able to afford an army of staff to defend their personhood. I immediately thought of a WIRED story about the family of an 18-year-old who was killed in 2006, who later found their daughter’s likeness re-created on Character.AI. In that same story, an editor at a gaming publication found she had been re-created on the platform following a harassment campaign involving her coverage.

When I bring this up to Anand, he explains that when users create Characters modeled after public figures like Musk or Bezos, the system is designed to clearly signal that these are parodies, not attempts at deepfakes or impersonation. (One Elon Musk chatbot page doesn’t show such warnings. Neither do the Dr. Phil or Joe Rogan chatbot pages.) Though, there’s a disclaimer below each chat: “This is an A.I. and not a real person. Treat everything it says as fiction.”

Anand says Character.AI has also imposed strict limitations on the company’s video generation tool, AvatarFX, to prevent misuse. Users shouldn’t be able to generate realistic deepfakes even if they try, and specific voices or topics are outright restricted.

“We’re very, very clear that we're staying in the entertainment territory. We're not into the general purpose video generation territory at all. We're not a Google Veo 3. We're not a Runway,” Anand says. “It's a very, very important line.”

Anand contrasts this with platforms like Meta, where he claims content is often uploaded first and moderated after the fact. At Character.AI, he says, content guardrails are baked into the creation pipeline itself. “Our reactive takedowns are a very, very small percentage,” Anand tells me.

I worry that as these tools grow more convincing, loneliness will deepen, not disappear. Anand understands. But he also has something to sell.

“I'm very passionate about this topic myself, and it's on us to go shape the dialog around this in the best, healthy way possible, because Gen Z is AI-native,” Anand says. “The question is, how do we build this in a way where it's safe and trustworthy and engaging in the right ways with the right incentives? That’s on us.”

Sources Say

Last week, I reported that Elon Musk held an all-hands meeting for X and xAI employees. I’ve since obtained another screenshot from that meeting showing xAI’s revenue over the past seven months.

In January, Grok on X brought in just under $150 million, with other services like enterprise API usage adding another $28 million. According to the chart, revenue has grown tenfold since the start of 2025, reaching just south of $500 million in July—driven by Grok on X and the $30 a month SuperGrok subscription. A smaller fraction of revenue is generated by the newly released SuperGrok Heavy subscription, which costs $300 a month. xAI did not respond to WIRED’s request for comment.


This is an edition of Kylie Robison’s Model Behavior newsletter.
