The case for moving AI down the stack

Over on the Nieman Lab site, Amy Ross Arguedas reports on the various ways news publishers are using AI to personalize their offerings:
> Some newsrooms have integrated AI tools into their websites that allow audiences to automatically summarize news articles (e.g. Aftonbladet in Sweden) or change news text to audio (e.g. the Miami Herald in the U.S.). Argentinian newspaper Clarín uses a tool called UalterAI to offer a variety of additional analyses, including key quotes and figures, as well as a glossary for technical terms. Beyond personalized selection and formats, newspapers like The Washington Post have been trialing tools that can answer complex queries based on their own archives.

Although I’m always in favor of experimentation, I don’t think most of these ideas will succeed as long-term products. I’m particularly bearish on interfaces that embed chatbots (whether they’re called that or not) in news websites. I’d like to explore why I think that — even in a world where AI becomes a core part of how we use computers, which is far from a given.
Many publishers hope that we’ll visit their websites or open their apps and interact with their AI-powered interfaces. This mirrors the existing mindset in which they hope people will visit their homepages and read a bunch of their articles, much as a person might once have read through a magazine or a newspaper. In this world, the publisher is a destination in itself.
In reality, of course, that’s largely not how people consume information. Instead, they have some aggregator that serves as the backbone for how they learn about the world. That might be a dedicated app like Apple News, but it’s more likely to be social media:
- You open your social app (or apps) of choice
- You’re presented with a feed that surfaces information relevant to you through your connections to, or algorithmic recommendations of, people you find interesting and trustworthy
- Sometimes you click through to a single article to read more
- Then you return to your feed
If a publisher is particularly interesting to you, you might follow them so you can discover more of their stuff through your feed. Almost nobody is visiting homepage after homepage. Readers almost universally read content from a central feed of information.
Email newsletters are, at their heart, another version of this model. By subscribing to a publisher’s newsletter, you’re adding their content to your reverse-chronological feed of information. Sometimes you’ll open a newsletter message — and then you’ll return to your inbox feed. Newsletters are also not destinations.
And apps? Most people use them for the notifications, which are feeds on your mobile device that you dip into and then dip out of.
Publisher websites and apps are not destinations in themselves, and no amount of AI will make them so.
But also: AI that lives at the app or website level is inherently siloed — and, as such, is not very useful. Instead, it’s worth considering what happens when you move AI down the stack.
If AI lives in the browser, as it does in products like Dia (and soon in Chrome), you can query not just one information source, but every information source you visit through that browser. If I read ProPublica and The Washington Post and half a dozen blogs, I can ask my browser how a story is reported across all of them, and have it summarize the story based on information it gleans from each one. Dia can already do this, as well as a bunch of other stuff that some publishers are trying to bake into their own websites.
If we go down a level and AI lives in the operating system, as it’s beginning to in the Microsoft, Google, and Apple ecosystems, you can query everything that’s happening not just in your browser, but in every app you use. In the case of Apple products, this will largely happen on-device, allowing you to query privately. It can be localized and more personalized. And you’ll interact with it in one place, rather than having to visit destination after destination. It’s both easier and more powerful for the user.
Given this, I don’t think it makes sense to embed these features at the publisher level. Instead, publishers are better off considering how they might incorporate emerging standards like the Model Context Protocol (MCP) into their offerings so that their information can be consumed wherever readers’ AI lives. They might also think about charging for access to premium information, and about talking to The Browser Company, Anthropic, Google, OpenAI, et al. about how those payments could be deeply embedded in the places where people will use AI. There are real opportunities for monetization here — but again, not on your homepage, and not with your website as the central destination.
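To make the MCP idea concrete: MCP is built on JSON-RPC 2.0, and a publisher’s server essentially advertises tools (like archive search) that any AI client can discover and call. The sketch below is purely illustrative, not a real implementation — the tool name `search_archive`, the toy archive, and the simplified dispatcher are all invented here, and an actual server would use an MCP SDK and implement the full protocol handshake rather than a single function.

```python
import json

# Toy archive standing in for a publisher's article index (invented data).
ARCHIVE = {
    "Climate report 2024": "Full article text...",
    "Local election results": "Full article text...",
}

def handle_request(request: dict) -> dict:
    """Handle a minimal subset of MCP methods over JSON-RPC 2.0 (sketch only)."""
    if request["method"] == "tools/list":
        # Advertise what this publisher exposes to AI clients.
        result = {
            "tools": [{
                "name": "search_archive",
                "description": "Search the publisher's article archive by keyword.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            }]
        }
    elif request["method"] == "tools/call":
        # A client (browser- or OS-level AI) invokes the tool with arguments.
        query = request["params"]["arguments"]["query"].lower()
        hits = [title for title in ARCHIVE if query in title.lower()]
        result = {"content": [{"type": "text", "text": json.dumps(hits)}]}
    else:
        result = {"error": "method not supported in this sketch"}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

# Example call, as an MCP client might issue it:
response = handle_request({
    "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": {"name": "search_archive", "arguments": {"query": "climate"}},
})
```

The point of the sketch is the shape of the arrangement: the publisher describes its content once, in a standard way, and any agent — in a browser, an OS, or elsewhere — can query it, which is also where metered or premium access could naturally hook in.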
What’s neat about this model is that it re-establishes a kind of open web ethos that we’ve been missing for decades — which will enable much more than AI. As Anil Dash recently pointed out:
> The rise of MCP gives hope that the popularity of AI amongst coders might pry open all these other platforms to make them programmable for any purpose, not just so that LLMs can control them.

Like Anil, I’m not saying that AI is inherently great, or that it works as advertised, or that the ethical considerations that underlie everything from its training sets to its resource use aren’t real. This is not, in other words, an endorsement of AI-everywhere. But if you’re accepting all that and making a bet that AI will be somewhere, I wouldn’t put money on it being on your website, and I certainly wouldn’t put money on it being something that magically makes people want to visit your website every day. On the other hand, there may be a shift towards publishers contributing to open feeds of data that we interact with using agents and prompts.
My proposal is this: consider what’s actually the most useful experience for the user, rather than what furthers your own interests, and make a bet on that instead.