There’s a fundamentally important, and profoundly worrying, video featuring Matthew Prince posted to Twitter/X. Prince’s company Cloudflare handles around 20% of all web traffic, which lends considerable weight to what he stresses.
There’s a machine-generated transcript of Prince’s comments, which I created, available here.
Key points in Prince’s submission
Content crawling to traffic ratios
- Prince presents quantitative evidence demonstrating the systematic degradation of value exchange between search platforms and content creators over the past decade. The data reveals that Google’s scraping-to-visitor ratio deteriorated from 2:1 ten years ago to 18:1 currently, with the most dramatic acceleration occurring in the six months following AI Overview implementation. This represents a measurable erosion of the foundational economic relationship that sustained web-based content creation for decades.
- Cross-platform analysis reveals consistent patterns of extraction without compensation across major AI systems. OpenAI’s ratio expanded from 250:1 to 1,500:1 within six months, whilst Anthropic’s ratio grew from 6,000:1 to 60,000:1 over the same period. These metrics demonstrate that increased user trust in AI responses correlates directly with reduced engagement with original source material, creating a systematic displacement of traditional content consumption patterns.
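To make the metric concrete, here is a minimal sketch of the crawl-to-visitor arithmetic, using the figures Prince cites purely as illustrative inputs; how Cloudflare actually measures crawler requests and referred visits is not something this sketch attempts to reproduce.

```python
# Minimal sketch of the crawl-to-visitor ratio Prince describes.
# The figures below are the ones cited in the talk, used purely as
# illustrative inputs; the measurement methodology is not reproduced here.

def crawl_to_visitor_ratio(pages_crawled: int, visitors_referred: int) -> float:
    """Pages a platform crawls for every visitor it sends back to the source."""
    if visitors_referred == 0:
        return float("inf")  # content is extracted, but no traffic comes back
    return pages_crawled / visitors_referred

# Ratios cited by Prince, expressed as (pages crawled, visitors referred).
cited = {
    "Google, ten years ago": (2, 1),
    "Google, now": (18, 1),
    "OpenAI, six months ago": (250, 1),
    "OpenAI, now": (1_500, 1),
    "Anthropic, six months ago": (6_000, 1),
    "Anthropic, now": (60_000, 1),
}

for label, (crawled, referred) in cited.items():
    ratio = crawl_to_visitor_ratio(crawled, referred)
    print(f"{label}: {ratio:,.0f} pages crawled per visitor referred")
```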
Unprecedented disruption
- Prince identifies the collapse of three primary revenue mechanisms that historically sustained content creation: subscription sales, advertising revenue, and reputation-based value creation. The transition from search-based traffic generation to AI-mediated content summarisation eliminates the click-through behaviour that enabled these monetisation strategies. This represents a fundamental restructuring of digital information economics rather than merely a technological evolution.
- He documents how current AI query resolution rates approach 90% without generating any source traffic, effectively severing the connection between content creation and economic reward. This disconnection threatens the incentive structure that motivates original content production, potentially creating a systemic collapse in information generation across the digital ecosystem.
Perils, and problems of agreements with AI companies
- Prince critiques existing licensing arrangements between publishers and AI companies as strategically flawed due to their non-exclusive nature. Publishers who charge some AI companies whilst permitting free access to others undermine their collective bargaining position and ensure deteriorating terms in future negotiations. This analysis reveals how individual publisher strategies inadvertently weaken the entire sector’s economic leverage.
- The proposed solution emphasises creating artificial scarcity through coordinated content access restrictions. Prince advocates using infrastructure providers’ position to implement collective action strategies that would force AI companies into more equitable compensation arrangements. This approach leverages Cloudflare’s network architecture, and presents the possibility of gatekeeping at scale (against AI bots, and LLM crawlers) that can be deployed strategically to restore some balance in content extraction relationships.
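As a rough illustration of what gatekeeping against AI bots and LLM crawlers can look like, the sketch below filters requests by User-Agent against a small blocklist. The crawler tokens listed are ones the respective companies publicly document, but the helper functions, the licensing flag, and the policy itself are illustrative assumptions on my part, not Cloudflare’s actual bot-management rules.

```python
# Illustrative sketch of User-Agent gatekeeping against AI/LLM crawlers.
# This is NOT Cloudflare's implementation: the blocklist, is_ai_crawler(),
# and the licensing check are assumptions made for illustration only.

AI_CRAWLER_TOKENS = (
    "GPTBot",         # OpenAI's web crawler
    "ClaudeBot",      # Anthropic's web crawler
    "CCBot",          # Common Crawl
    "PerplexityBot",  # Perplexity
)

def is_ai_crawler(user_agent: str) -> bool:
    """Return True if the request's User-Agent matches a known AI crawler token."""
    ua = user_agent.lower()
    return any(token.lower() in ua for token in AI_CRAWLER_TOKENS)

def decide(user_agent: str, has_licensing_deal: bool) -> str:
    """Toy policy: ordinary readers pass, licensed AI crawlers pass, the rest are blocked."""
    if not is_ai_crawler(user_agent):
        return "allow"
    return "allow" if has_licensing_deal else "block"

print(decide("Mozilla/5.0 (compatible; GPTBot/1.0)", has_licensing_deal=False))       # block
print(decide("Mozilla/5.0 (Windows NT 10.0; Win64; x64)", has_licensing_deal=False))  # allow
```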
Knowledge as value
- Prince proposes transitioning from attention-driven metrics to knowledge-advancement-based compensation models. This framework would differentiate payment based on content’s contribution to specific AI knowledge domains rather than generic traffic generation. The model draws inspiration from historical patronage systems whilst incorporating contemporary digital distribution efficiencies demonstrated by platforms like Spotify.
- He envisions specialised AI systems that would evaluate and compensate content based on its value to particular knowledge applications. This approach would incentivise targeted content creation that addresses genuine information gaps rather than competing for undifferentiated audience attention. Such a system could potentially redirect creative energy towards substantive knowledge production rather than sensationalist content optimised for emotional engagement, which connects to the points below – and what he ends by noting.
Pitfalls of an attention economy
- Prince connects the current attention-economy model to broader patterns of political polarisation, and democratic dysfunction. He argues that compensation structures based on emotional engagement rather than information quality contribute to the amplification of populist rhetoric and conspiratorial thinking. This analysis positions content economics as a contributing factor to contemporary challenges in democratic discourse and social cohesion.
- The proposed alternative framework aims to address systemic problems beyond digital content economics by realigning incentive structures with knowledge advancement rather than emotional manipulation. Prince suggests that compensating creators for genuine information contributions could help counter the proliferation of misinformation and polarising content that currently dominates attention-driven platforms.
Implications for civic media, and citizen journalism
Matthew Prince and I first exchanged emails in 2014 – around the time Project Galileo was launched. Project Galileo is Cloudflare’s defensive infrastructure initiative providing enterprise-grade DDoS protection to civil society organisations, independent media outlets, and free expression advocates facing targeted cyber-attacks designed to suppress content dissemination. In a blog post penned almost exactly 11 years ago, I noted that the Centre for Policy Alternatives was a global launch partner. I positioned Project Galileo – in a context where a culture of impunity around the murder of, and threats against, journalists was entrenched, and the risk to freedom of expression extremely high – as a strategic intervention against network attacks (like DDoS) that attempted to censor, silence, and erase critical discourse, and investigative reporting. As I told Dan Gillmor for an article he wrote for Slate, with Project Galileo came “the peace of mind that comes from knowing no matter what content goes up on the site, those who may find it inconvenient for wider, public scrutiny can’t now easily lean on DDoS attacks as a means of censorship or blackmail.”
Prince’s sobering analysis presents particularly grave implications, and unprecedented challenges, for civic media, and human rights journalism, in a context where the shuttering of USAID has had existential consequences for critical media platforms operating in austere, violent contexts, reliant on precarious funding models, and with limited commercial viability. I know this from the experience of having started, and curated, Groundviews for around 15 years. The dramatic shift in content-to-traffic ratios fundamentally undermines the already fragile economics of public interest journalism. For civic media initiatives, and non-profit newsrooms that depend on audience engagement metrics to demonstrate impact to donors and foundations, this rapid collapse in organic traffic threatens their ability to justify continued funding.
The situation becomes even more dire when considering that, as Prince stresses, AI platforms (including Google’s own AI Overviews in search) extract value from these organisations’ content whilst providing minimal attribution or compensation, effectively strip-mining the knowledge they produce without contributing to their sustainability. This is a new digital colonialism that rapaciously extracts the most valuable content, and commentary, without any compensation, or appreciation of the risk, time, effort, and money involved in its production.
Citizen journalism, and civic media platforms also face unique vulnerabilities in this new landscape, as they typically lack the institutional resources to negotiate compensation agreements with AI companies or implement sophisticated technical barriers against content scraping. These platforms often rely on volunteer contributors who are motivated by the prospect of bearing witness to what would otherwise be forgotten or erased, reaching critical audiences, and creating impact – which is precisely the incentive structure that Prince identifies as completely, and rapidly collapsing. When AI systems synthesise, and present information without directing users to original sources, citizen journalists lose both the audience connection that validates their work, and any potential pathway to sustainable income (including through traffic to accounts, and websites). This erosion is particularly damaging for platforms like Groundviews, Vikalpa or Maatram in Sri Lanka that continue to document human rights violations, corruption, and other issues where the act of bearing witness to inconvenient truths depends on knowing that testimony reaches, and influences public discourse.
For subscription-based investigative reporting organisations working in austere or violent contexts, the implications extend beyond economics to fundamental questions of safety and operational viability. These outlets often depend on international readership and support, as local audiences may lack either the means to pay or safe methods to access content. When AI systems aggregate and present their investigations without context or proper attribution, it not only undermines their subscription model but potentially endangers sources and journalists by stripping away careful editorial decisions about how information is presented and contextualised.
Furthermore, Matthew Prince’s proposed solution of creating scarcity through blocking AI scrapers presents a dilemma: whilst it might preserve some economic value, it could also limit the reach of crucial human rights documentation and investigative work that serves the public interest. The challenge becomes how to preserve the economic foundations that enable this vital journalism whilst ensuring that evidence of atrocities and corruption remains accessible to those who need it most.
To this end, Prince communicates a really interesting idea around how domain-specific LLMs could be served content that’s of significant value to their users, with adequate compensation for, and attribution of, the original content producers. This sounds great, but making archival material (like the millions of words on Groundviews that bear witness to two decades of sociopolitical developments in Sri Lanka in a manner no other website, domestically or internationally, did) available only to specific LLMs isn’t trivial, and risks adding even more work to individuals already burdened with managing journalism ventures that are being driven to closure.
Ideas to help civic media in the age of AI
I’ve thought of a few ideas for Cloudflare and Matthew Prince to consider, based on the comments made, my experience of actively creating, and curating, civic media over decades, and the recognition that a world defined by volatility, and violence, will need much more by way of bearing witness through public interest, and investigative journalism. I’ve also kept in mind the sheer scale of Cloudflare’s network, and operations, including, but not limited to, what it already does for entities at risk through Project Galileo.
- Selective AI access tiers: Cloudflare could create a “public interest content” designation allowing civic media to block commercial AI scrapers whilst permitting access to verified human rights organisations, academic researchers, and international courts—preserving economic value whilst ensuring critical documentation remains accessible for justice and accountability purposes (a rough sketch of how such tiered access might compose with the ideas below follows this list).
- Granular, geo-differentiated scraping controls: Implement region-specific AI blocking that protects revenue in markets where subscriptions are viable, whilst allowing broader AI access in authoritarian contexts where direct readership might endanger users. This can enable critical information to spread through AI channels where traditional access poses risks or structural impediments like blocks or throttling.
- Attribution architectures for AI: Develop a mechanism that tracks when AI systems use investigative journalism to answer queries about corruption, human rights violations, or governance issues, then channels micropayments back to originating outlets based on the civic value of information rather than raw traffic metrics.
- Protected source documentation: Create encrypted pathways for citizen journalists to upload evidence and testimony that remains invisible to AI scrapers but accessible to verified news organisations and human rights groups. This can help preserve the economic value of exclusive content whilst protecting contributor safety.
- Crisis response exemptions: Establish rapid-response protocols – leveraging the trusted, global network that Project Galileo already enjoys – to temporarily lift AI restrictions during humanitarian emergencies, protests, or conflicts, allowing critical citizen journalism to achieve maximum reach when public interest outweighs economic considerations, with automatic compensation mechanisms activated post-crisis.
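A conceptual sketch of how the public interest designation, the geo-differentiated controls, and the crisis exemption above might compose into a single access decision. Every name here (the tiers, the Requester fields, the CRISIS_MODE flag, the example country codes) is hypothetical, and nothing reflects an existing Cloudflare feature.

```python
# Conceptual sketch only: composing a "public interest" tier, geo-differentiated
# blocking, and a crisis exemption into one access decision. All names, tiers,
# and rules below are hypothetical and do not describe any existing Cloudflare feature.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Requester:
    user_agent: str
    region: str                  # stand-in for whatever regional signal a rule keys on
    verified_as: Optional[str]   # e.g. "human_rights_org", "academic", "court", or None

COMMERCIAL_AI_TOKENS = ("GPTBot", "ClaudeBot", "CCBot")        # illustrative blocklist
PUBLIC_INTEREST_TIERS = {"human_rights_org", "academic", "court"}
SUBSCRIPTION_MARKETS = {"US", "GB", "AU"}                      # where paywalls are viable
CRISIS_MODE = False                                            # flipped during emergencies

def allow_access(req: Requester) -> bool:
    is_ai = any(t.lower() in req.user_agent.lower() for t in COMMERCIAL_AI_TOKENS)
    if not is_ai:
        return True                              # ordinary readers always pass
    if CRISIS_MODE:
        return True                              # crisis exemption: maximum reach
    if req.verified_as in PUBLIC_INTEREST_TIERS:
        return True                              # verified public-interest access
    # Commercial AI crawlers: block where subscriptions fund the outlet,
    # allow elsewhere so information still circulates via AI channels.
    return req.region not in SUBSCRIPTION_MARKETS

print(allow_access(Requester("GPTBot/1.0", "GB", None)))       # False: blocked in a subscription market
print(allow_access(Requester("GPTBot/1.0", "GB", "court")))    # True: verified tier
print(allow_access(Requester("Mozilla/5.0", "LK", None)))      # True: ordinary reader
```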
Published by Sanjana
I study the causes, effects, and impact of information disorders on democracy, institutions, and social cohesion. With over 20 years of experience in peacebuilding, civic media, and digital security, from the Global South and across five continents, I have a deep understanding of how social media and politics interact and influence each other, especially in conflict-affected contexts and those with a democratic deficit (which now include countries in the Global North). My work is driven by an interest in promoting democratic governance, human rights, and media freedom, as well as a curiosity for exploring the potential and challenges of new technologies for social change. I have a PhD in Social Media and Politics from the University of Otago, New Zealand. I am Sri Lanka's first TED Fellow, and have also held fellowships from Ashoka, and Rotary World Peace. I am the founder, and former editor, of Groundviews, Sri Lanka's first, and award-winning, citizen journalism website. Additionally, I am a Special Advisor at the ICT4Peace Foundation, where I study the use, and abuse, of technologies in peacekeeping and peacebuilding. My doctoral research looked at the symbiotic relationship between offline unrest and the online instigation of hate and harm in Sri Lanka and, in the aftermath of the Christchurch massacre in 2019, was facilitated by leading research based on New Zealand's first ever Data for Good grant by Twitter.
Published 28/06/2025