"Algocracy" and Democracy: Questions


Credits

Andrew Sorota works on democratic innovation and AI and is the head of research at the Office of Eric Schmidt.

In 1950, the same year Alan Turing unveiled his famous test for machine intelligence, Isaac Asimov imagined something even more unsettling than a robot that could pass for human. In his story “The Evitable Conflict,” four vast super-computers known as “the Machines” silently steer the planet’s economy through an era of unprecedented peace and prosperity.

When they appear to make costly blunders, sabotaging the plans of a few powerful conspirators working to undermine their authority, World Co-ordinator Stephen Byerley learns the truth: the “errors” are no errors at all, but deliberate, tidy sacrifices meant to preserve global stability. The Machines have concluded that the surest way to keep humanity from harm is to keep humanity from making certain decisions.

Byerley grasps this truth with a blend of relief and dread, knowing that while the Machines will continue to stave off conflict, the affected citizens will never learn why their schemes failed or how they might seek redress. The Machines will keep their motives secret; transparency, too, is a risk to be managed.

Asimov cast the scene as a distant prophecy, yet the future he sketched is already seeping into the present. We often talk about artificial intelligence as a looming catastrophe or an ingenious convenience, oscillating between apocalyptic nightmares of runaway superintelligences and glittering futures of frictionless efficiency. Deep-fake propaganda, economic displacement, even the possibility of existential doom: these capture headlines because they are dramatic, cinematic, visceral.

But a quieter danger lies in wait, one that may ultimately prove more corrosive to the human spirit than any killer robot or bioweapon. The risk is that we will come to rely on AI not merely to assist us but to decide for us, surrendering ever larger portions of collective judgment to systems that, by design, cannot acknowledge our dignity.

The tragedy is that we are culturally prepared for such abdication. Our political institutions already depend on what might be called a “paradigm of deference,” in which ordinary citizens are invited to voice preferences episodically — through ballots every few years — while day-to-day decisions are made by elected officials, regulators and technical experts.

Many citizens have even come to defer their civic role entirely by abstaining from voting, whether as a symbolic statement or out of sheer apathy. AI slots neatly into this architecture, promising to supercharge the convenience of deference while further distancing individuals from the levers of power.

Modern representative democracy itself emerged in the 18th century as a solution to the logistical impossibility of assembling the entire citizenry in one place; it scaled the ancient city-state to the continental republic. That solution carried a price: The experience of direct civic agency was replaced by periodic, symbolic acts of consent. Between elections, citizens mostly observe from the sidelines. Legislative committees craft statutes, administrative agencies draft rules, central banks decide the price of money — all with limited direct public involvement.

This arrangement has normalized an expectation that complex questions belong to specialists. In many domains, that reflex is sensible — neurosurgeons really should make neurosurgical calls. But it also primes us to cede judgment even where the stakes are fundamentally moral or distributive. The democratic story we tell ourselves — that sovereignty rests with the people — persists, but the lived reality is an elaborate hierarchy of custodians. Many citizens have internalized that gap as inevitable.

Enter machine learning. Algorithms excel precisely at tasks the layperson finds forbidding: sorting mountains of data, detecting patterns no human eye can see, quantifying risk in probabilistic terms. They arrive bearing the shimmering promise of neutrality; a model is statistical, so it feels less biased than a human. The seduction is powerful across domains, from credit scoring to determining who gets access to public services.

In the Netherlands, for instance, an early use case saw the government deploying automated systems to track welfare benefits with minimal human intervention. (Notably, this experiment led to more than 20,000 families being falsely accused of fraud and contributed to the resignation of the entire Dutch government in 2021.) Faced with backlogs and budget constraints, officials grasp for anything that looks objective and efficient. Soon, the algorithm’s recommendation becomes the default, then the rule. Over time, the human intermediary becomes an impotent clerk who seldom overrides the machine, partly because the institution discourages deviation and partly because the clerk has forgotten how.


What vanishes in these moments is more than discretion; it is the encounter in which one person acknowledges another as a decision-worthy being. In the late 20th century, Francis Fukuyama revived an argument ultimately owed to Hegel: Liberal democracy is the most stable form of government because it satisfies the fundamental human thirst for recognition — the desire to be seen and affirmed as free and equal.

Whether or not history truly “ended” with the fall of the Berlin Wall, the insight about recognition remains profound. People do not demand merely material comfort or security; they demand that the social order look them in the eye and admit: “Your voice counts.” When that recognition fails to materialize — when individuals perceive that their fates are determined elsewhere, by elites who will never sit across from them — resentment grows. Contemporary populism is the political face of that resentment. It rails against distant technocrats, against faceless bureaucracy, against any system that patronizes rather than engages. It depicts electoral democracy, with its long channels of mediation that seldom reach the average citizen, as an empty ritual.

AI threatens to deepen this very wound. If the elected official is distant, the algorithm is an abyss. You cannot argue with a neural network’s hidden layers or cross-examine a random forest. Decisions that shape your life — how resources are allocated, which priorities are funded — become technical outputs optimized for efficiency, not political choices settled through public debate.

Even if we could make AI systems perfectly transparent, capable of explaining their reasoning in lucid prose, this does not cure the underlying democratic deficit; a decision explained is still a decision imposed. Without a clear path for recourse, human agency dissolves into statistical abstraction. For the citizen seeking recognition, there is no one to confront, no accountable face on the other side of the counter.

Even the possibility of reciprocity disappears, because the system is constitutionally incapable of respecting or disrespecting anyone; it simply optimizes. In this vacuum, anger can only turn outward indiscriminately, feeding conspiracy theories and demagogic narratives that blame shadowy technocrats, ethnic minorities or transnational plots.

The relationship between AI and democracy, however, is not fated to be antagonistic. Whether algorithms shrink or expand the public’s role depends less on the code itself than on the social contracts wrapped around it. Our existing social contracts were forged on the heels of the Enlightenment, as thinkers sought to erect constitutional and normative scaffolding to civilize raw power and align it with collective reason.

Hobbes’ fear of unfettered, natural chaos yielded to Locke’s primacy of the consent of the governed, Montesquieu’s framework of separation of powers, and Rousseau’s notion that legitimate authority must always remain answerable to the general will. These arrangements were designed to restrain the worst impulses of human governors while still harvesting the best of deliberation. They came into being at a time when the power of human reason to perfect society and nature seemed nearly limitless.

Now, however, for the first time in human history, we face the existence of a non-human cognitive actor whose speed, scale and analytical capacities already outstrip our own in narrow fields and will only continue to improve in the years ahead. The shift to a world with superhuman intelligence demands something different from reactive attempts to impede AI progress; it calls for a deeper rethinking of how power and authority operate, where algorithmic systems should make decisions, where they shouldn’t and what mechanisms should exist to help people understand, challenge and override those decisions when necessary.

The central guiding question must be whether we treat AI as a substitute for collective judgment or as an instrument that enlarges the scope for human deliberation. At stake is nothing less than whether human judgment and human dignity retain operational value in the very systems that govern us.

Used well, AI can slash the logistical costs that once confined serious deliberation to narrow circles. Automatic translation, live transcription and real-time summarization enable diverse groups of citizens to debate common problems without sharing a room or the same native language. LLMs can transform technical briefings into plainer prose and cluster thousands of comments from virtual town halls into intelligible and actionable themes. AI facilitators can help forge consensus among polarized groups online, equalizing speaking times and surfacing overlooked voices before the discussion closes. In other words, the same machinery that powers prediction markets can be repurposed to make deliberation scalable, searchable and understandable to the broader public, transforming the way governments make decisions and the role of the average citizen.
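The comment-clustering idea above can be sketched in miniature. What follows is a toy illustration, not any real platform’s method: it groups comments by simple word overlap (Jaccard similarity), where production systems would use language-model embeddings and far more data. All comments, the stopword list and the similarity threshold are invented for the example.

```python
# Toy sketch of grouping public comments into themes by word
# overlap (Jaccard similarity). Real deliberation platforms use
# embeddings and clustering models; this only illustrates the idea.
STOPWORDS = {"the", "a", "for", "and", "should", "be", "to",
             "of", "need", "needs", "must"}

def words(text):
    """Lowercased content words of a comment, minus stopwords."""
    return {w.strip(".,").lower() for w in text.split()} - STOPWORDS

def jaccard(a, b):
    """Similarity of two word sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster(comments, threshold=0.2):
    """Greedy clustering: attach each comment to the first cluster
    whose seed comment is similar enough, else start a new one."""
    clusters = []
    for c in comments:
        for cl in clusters:
            if jaccard(words(c), words(cl[0])) >= threshold:
                cl.append(c)
                break
        else:
            clusters.append([c])
    return clusters

comments = [
    "Ride-sharing drivers should carry commercial insurance.",
    "Insurance rules for ride-sharing drivers should match taxis.",
    "The city needs more shelters for homeless youth.",
    "Homeless youth need shelters and counseling services.",
]

for i, theme in enumerate(cluster(comments)):
    print(f"Theme {i}: {theme}")
```

On these four invented comments, the sketch yields one insurance-regulation theme and one youth-homelessness theme; a facilitator would then name each theme and feed it back into discussion, which is the human step the essay argues must not be automated away.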


Taiwan offers a glimpse of this future. The open source vTaiwan platform uses machine learning to analyze thousands of public comments on policy proposals, identifying areas of consensus and highlighting remaining disagreements. Rather than generating its own policy recommendations, the AI helps citizens and policymakers understand the structure of public opinion and focus discussion on genuinely contested issues. The platform has facilitated successful policymaking on contentious topics in Taiwan, like ride-sharing regulation and digital rights, enabling outcomes that enjoy broad public support, largely because citizens participated meaningfully in their creation.

A less techno-centric model of democratic innovation can be seen in Ireland, where citizens’ assemblies are convened regularly but have yet to harness the power of AI. These bodies bring together groups of randomly selected citizens to deliberate on complex issues and make recommendations to the government. Participants receive expert briefings, listen to stakeholders and engage in structured deliberation to reach consensus. The process is slow and sometimes onerous, but it has produced thoughtful policy outcomes that were unlikely to be achieved through traditional political channels, most notably a referendum repealing the Eighth Amendment of the Irish Constitution that previously limited access to abortion in the country. These assemblies also tend to be expensive to run and confined to relatively small groups, which has so far kept them on the periphery of the democratic landscape.

AI could help change that by scaling the number of citizens’ assemblies and connecting them to the broader public, thereby bolstering their legitimacy and reach. We saw one nascent version of this last year in Deschutes County, Oregon, where AI was used to record, synthesize and analyze the deliberations of a typically closed-door civic assembly on youth homelessness. With the consent of assembly members, highlights from these small-group recordings can be shared with the public, adding a new layer of transparency to the process and allowing outside citizens to see more clearly what the deliberations actually involved.

AI could also help improve the quality of deliberation itself. For example, DeepMind’s Habermas Machine demonstrated in a 2024 Science study that an LLM could more effectively find common ground among divided groups than human mediators. Crucially, such tools aim to augment collective decision-making without replacing the essential human work of judgment and compromise.

These scenarios of governing with AI instead of being governed by it may sound cumbersome precisely because they are designed to reinsert friction where technocracy — or “algocracy,” government by algorithm — chases it away. But friction is not inherently bad. In politics, it is often the handrail that prevents a stumble into passivity.

Liberal democracy’s original genius was not merely the ballot box; it was the creation of multiple forums — town meetings, juries, local councils, civic associations — where citizens encountered each other as equals capable of persuasion and compromise. Many of those forums have withered under the pressures of mass society, mass media and now mass data, and with them, so too has the fabric of liberal democracy.

Obeying In Advance

In his well-known book “Bowling Alone,” published in 2000, Robert Putnam began charting the arc of withering American forums, noting declines in league sports, union membership and civic clubs throughout the late 20th century as globalization kicked into overdrive. A quarter-century later, the metrics have only worsened: local newspapers shutter weekly, worship attendance continues to thin, and the archetypal “meet-cute” has been replaced by a seemingly endless number of online dating apps.

The attrition of face-to-face venues for collective life did not start with the microchip, but algorithms and digital networks have accelerated the erosion. The result is a surplus of individual exposure to information and a deficit of shared context or mutual understanding of how decisions are made and who makes them. Reviving those shared spaces, even in digital form, would be messy, slower than letting code decide everything on its own. But it is also the only path that preserves the promise Fukuyama celebrated: that each person can be both author and audience of the laws that govern them.

None of this denies AI’s power to make certain decisions more efficient and less biased, improving the way government functions and potentially even saving lives. Nor does it trivialize the more dramatic, headline-grabbing risks of AI. It is entirely possible that future systems might acquire capabilities hazardous to humanity, that autonomous weapons could proliferate unchecked or that deep-faked misinformation could destabilize elections.


Indeed, some of those things are already happening. But if the subtler problem of deference is left unaddressed, societies will grow ill-equipped to confront those larger perils — the muscle of civic agency will have already atrophied. People habituated to letting machines decide the mundane will struggle to reassert control when the stakes turn existential.

In the 20th century, Hannah Arendt’s renowned writing on the “banality of evil” revealed how Nazi administrative machinery depended not on ideological fervor but on bureaucratic compliance — civil servants like Adolf Eichmann who processed deportation orders with the same dutiful efficiency they brought to tax collection or municipal planning.

The system’s horror lay partly in how it transformed moral choices into technical procedures, making collaboration feel like competent administration rather than complicity in genocide. According to Arendt, in fact, Eichmann’s gravest crime was his failure to think for himself.

The Soviet Union, Arendt’s other locus of totalitarian analysis, followed a similar trajectory, albeit one that extended through the end of the Cold War. By the 1970s, many Soviet citizens had developed what psychologists call “learned helplessness” in the face of bureaucratic systems that rendered individual agency meaningless. This was a new, deeper form of political repression. It represented the internalization of procedural thinking that made independent judgment feel impossible or irrelevant. When Mikhail Gorbachev initiated a more open, consultative government in 1985 with glasnost, many citizens struggled to engage constructively, having lost familiarity with democratic deliberation and compromise.

The historian Timothy Snyder has argued that the path to tyranny is often paved by individuals who “obey in advance,” anticipating what authoritarian leaders want and preemptively meeting them halfway to avoid conflict. In the age of AI, this phenomenon appears poised to occur at an algorithmic scale, as individuals modify their behavior for a world shaped by omniscient machines.

We are already seeing preliminary signs of this impulse. A July Pew Research Center study found that when Google precedes its search results with AI‑generated summaries, users open far fewer links and often end their search right there. While convenient, accepting the first synthesized response also means implicitly accepting what the algorithm has deemed important. Rather than our abruptly waking one day to a world of AI rule, the danger of deference by design is that we will keep streamlining our habits of inquiry and judgment to suit the technology’s parameters, until the habits themselves are emptied of agency. In that sense, AI is less a sudden usurper than the logical culmination of a political culture that has been hollowing out democratic publics and handing off judgment, piece by piece, for decades.

The road ahead, therefore, forks. Down one path lies the continuing consolidation of decision-making power in algorithmic platforms owned by corporations or agencies whose internal logics are obscure to the public. Citizens, numbed by convenience and resigned to complexity, perform citizenship as a spectator sport, casting ballots that merely reshuffle the supervisory committee overseeing an automated empire.

Down the other path lies a conscious effort to embed participation and contestability into every major system that touches communal life, accepting slower throughput and periodic gridlock as the price of freedom. The first path is lubricated by efficiency and the myth of objective expertise; the second is rocky, contentious and labor-intensive — yet it is the only route that keeps alive the foundational democratic claim that the governed never surrender the right to govern.

AI will not, by itself, extinguish or redeem democracy. It will elevate whichever habits we choose to cultivate. If we preserve the paradigm of deference, AI will become the ultimate bureaucrat, inscrutable and unanswerable. If we cultivate habits of shared judgment, AI can become an extraordinarily powerful amplifier of human insight, a tool that frees time for deliberation rather than replacing it. The decision between those futures cannot be delegated; it belongs to us as humans. How we make it may be the most important act of civic recognition we can offer one another in this new age of thinking machines.
