For my family & in loving memory of my father,
Paul Andre Tindall.
We proceed from an actual economic fact.
The worker becomes all the poorer the more wealth he produces, the more his production increases in power and size. The worker becomes an ever cheaper commodity the more commodities he creates. The devaluation of the world of men is in direct proportion to the increasing value of the world of things. Labour produces not only commodities; it produces itself and the worker as a commodity —and this at the same rate at which it produces commodities in general.
This fact expresses merely that the object which labour produces —labour’s product— confronts it as something alien, as a power independent of the producer. The product of labour is labour which has been embodied in an object, which has become material: it is the objectification of labour. Labour’s realisation is its objectification. Under these economic conditions this realisation of labour appears as loss of realisation for the workers; objectification as loss of the object and bondage to it; appropriation as estrangement, as alienation.
Part 1

The alienation of the worker in his product means not only that his labour becomes an object, an external existence, but that it exists outside him, independently, as something alien to him, and that it becomes a power on its own confronting him. It means that the life which he has conferred on the object confronts him as something hostile and alien.
Karl Marx, Economic & Philosophic Manuscripts, p. 29, (1844)
The Automatic Subject
Learning to be Machine
In works like Subway (1950), Government Bureau (1956), Lunch (1964), and Landscape with Figures (1965-66), George Tooker renders the terrain of capitalist alienation as solemn liturgy —a world where subjectivity is flattened, agency nullified, every gesture reduced to uniform compliance. Whether enclosed within claustrophobic geometry, queued for processing, or stationed in waiting-rooms, corridors, and cafés, his figures are somnambulant, expressionless, ensnared inside architectures of passivity and control.
View Renoir’s Luncheon of the Boating Party (1881) or Dance at Le Moulin de la Galette (1876) alongside the café scene of Tooker’s Lunch, and, while they frame similar subjects, the contrast between the inner worlds they suggest, and the affect of public and communal space upon its occupants, could hardly be more stark.
Together these views span both a temporal and class divide. Renoir’s figures are leisured bourgeoisie —freely luxuriating in the rhythms of social pleasure and sensual immediacy— while Tooker’s subjects are cogs in a bureaucratic apparatus, their time segmented and stolen, their gestures pre-empted by institutional choreography.
Renoir paints bourgeois subjectivity in full bloom; Tooker renders the aftermath of its extraction. His paintings are not simply depictions of the loneliness of crowds, but renderings of the existential angst wrought by the structural indifference of bureaucratic life under 20th-century Capital. The body is present but passive, the gaze unfocused, the self cast into a procedural artefact. Figures are repeated, displaced in space but held within the same moment as if time itself has been put on hold —this is extraordinary rendition through eternally postponed administrative address. Their breath forever held, yet still their blank faces scream across time.
Tooker captures the affective toll of systems designed to impose the cold brutal regimen of machinic process upon the warm beating pluralities of human life. These are spaces at once terminal and liminal, where intention is made inert and agency dissolves into the hum of fluorescent lights and ticking clocks —each second machined to be indistinguishable from the last. They reveal a sense of managed abandonment, a slow-motion violence rendered banal through the excruciating tedium of endless repetition. More than half a century later, they appear as both nightmarish flashbacks of mid-century destituency and eerie premonitions of the recursive alienations of today.
Axiomatic, a collection of speculative fiction written by Greg Egan in the early 1990s, contains short stories that land like forgotten masterpieces from a modern-day Borges. Where Tooker paints the architecture of alienation in still life, Egan maps the schizophrenic horror of its internalisation —the moment its logic slips beneath the skin, embeds in the mind, and begins to simulate the self.
The following short passage is the opening of the tenth entry in that collection: Learning To Be Me. The allegory Egan weaves here took up permanent residence in my head almost 30 years ago. While retaining much of its punch, it feels increasingly familiar; imagine, however, if you will, encountering this in the context of the mid ‘90s:
I was six years old when my parents told me that there was a small, dark jewel inside my skull, learning to be me.
Microscopic spiders had woven a fine golden web through my brain, so that the jewel’s teacher could listen to the whisper of my thoughts. The jewel itself eavesdropped on my senses, and read the chemical messages carried in my bloodstream; it saw, heard, smelt, tasted and felt the world exactly as I did, while the teacher monitored its thoughts and compared them with my own. Whenever the jewel’s thoughts were wrong, the teacher — faster than thought — rebuilt the jewel slightly, altering it this way and that, seeking out the changes that would make its thoughts correct.
Why? So that when I could no longer be me, the jewel could do it for me.
I thought: if hearing that makes me feel strange and giddy, how must it make the jewel feel? Exactly the same, I reasoned; it doesn’t know it’s the jewel, and it too wonders how the jewel must feel, it too reasons: “Exactly the same; it doesn’t know it’s the jewel, and it too wonders how the jewel must feel…”
And it too wonders —
(I knew, because I wondered)
— it too wonders whether it’s the real me, or whether in fact it’s only the jewel that’s learning to be me.
Both Tooker’s and Egan’s visions now echo across the terrain of our accelerating automimetic hallucination. There is more to say of Egan’s dark jewel and of the terminal alienation he so affectively renders, but first, let us trace the roads that brought us to this recursive estrangement.
[Capital’s] valorisation is therefore self-valorisation [Selbstverwertung].
Karl Marx, Capital, Vol. I, (1867)
The perfect crime is not the literal murder of reality. It is the murder of the other through simulation.
Jean Baudrillard, The Perfect Crime, (1995), p. 92.
The Ecstatic Collapse of Alterity
Hallucination as Interface
In early February 2025, Andrej Karpathy —formerly Director of AI at Tesla and a founding member at OpenAI— posted the following on X:
There’s a new kind of coding I call ‘vibe coding’, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.
Karpathy refers here to the practice of ‘authoring’ software through natural language alone. Along with a growing number of others, he apparently believes LLMs are now capable not only of next-token predicting all of the code, but of reaching a final working program without the Vibe Coder even having to read, let alone write, a single line of it.
The word ‘vibe’ first appeared in the 1940s as a shortening of vibraphone. Yet the meaning in which it has been used so frequently in recent years stems from a clipped form of ‘vibration’, popularised by the Beach Boys’ song Good Vibrations in 1966. Within the counter-cultural mysticism of the ‘60s, Vibe became a slang term denoting an instinctive feeling about the otherwise ineffable essence of someone or something.
The process Karpathy describes might be said to resemble the interplay between a software architect writing a program specification and a more junior engineer tasked with its implementation. Yet such high specificity of requirements is antithetical to the Leary psychedelic Vibe here.
To Vibe Code, it seems, is to “turn off your mind, relax, and float downstream” following your spectral guide as it reads from the Tibetan Book of the Dead towards the dissolution of life; to “lay down all thoughts, surrender to the void” of the statistical distribution of the model; to accept that this Tomorrow Never Knows —because this “is not dying”, at least not for those feeling the Vibe.
The Vibe Coder’s role is merely to gesture in a vague direction, then tune in to the model’s signal while drifting across possibility space, passing subjective judgement not on the code generated, but on the artefacts that emerge at the threshold of hallucination. Each output surfaces from the depths —a message spelled out on a Ouija board, but of course, there is nobody there. The occasional glitch in the illusion ruptures the simulated Symbolic register —the order of signifiers that give coherence to meaning, language, and social law— revealing this absence through flashes of the Real, or of all that cannot be simulated. Away from these ruptures, the illusion of output as found-object sustains a rush of ecstatic co-creation, apparently without friction. It is an intoxicating synthetic high: a pure bliss Vibe, surfing recombinant potential unburdened by anything as troublesome as effort or authorship.
The comedown from this hedonistic trip, however, hits with a bummer of an ideological hangover. Karpathy’s confession is revealing. “Forget that the code even exists” stands not merely as instruction for the psychedelic user experience of next-token prediction that he is advocating —but as a chilling motto for the solipsistic inclinations and alienations of Predictive Capital. This is an era where Capital’s traditional preference for the conveniently hidden exploitation of invisible workers extends to the denial of their very existence —a violence now enacted at unprecedented scale.
This is not simply a retreat into personal isolation, but a turn away from ethical relation itself —towards what Emmanuel Levinas described as the murder of the other, the refusal of the face-to-face encounter, a denial of irreducible uniqueness that renders the other person into a manipulable object or datum. Within the state fostered by Predictive Capital, the other is not merely forgotten but simulated, objectified, instrumentalised, and ultimately eliminated as a site of genuine difference or resistance.
In eliminating the other through simulation, these systems simultaneously undermine the formation of the self —without genuine alterity, there can be no relational subject. Predictive systems only simulate coherence, without, in fact, recognising the user as a subject at all; they do not truly respond to intention or meaning, only to statistical correlation. The occasional rupture in this hallucination remains as the final reminder of the disappearance of relation itself, replaced by the single operation of Predictive Capital. The resulting state is not merely one of isolation, but dissociation: a machinic solipsism that flattens the other into echo, and leaves only internal feedback where relation once constituted the self.
The nature of this contemporary techno-psychedelic is entirely at odds with the lysergic and psilocybin-induced experiences of the 1960s. The Vibe of that era was rooted in a desire not to obliterate the self, but to dissolve its boundaries into a greater relational field —to become as one with the cosmos through the ecstatic suspension of ego, merging into everything rather than disappearing into nothing. This dissolution blurred the line between self and other not by erasing difference, but by revealing a deeper interconnectedness —a cosmic solidarity that affirmed the interdependence of all life, and with it, the possibility of harmony, balance, and equality. However remote or naïve such mystical hippie idealism may now appear, it was inseparable from a radical critique of the system, a deep suspicion and questioning of authority, a refusal of hierarchy, and a longing not for frictionless personal experience, but for solidarity and universal emancipation.
The growing collective consciousness in the ‘60s was evidenced by large-scale worker organisation and student protests on both sides of the Atlantic —most notably the May ‘68 protests in France. Economists like Grace Blakeley now cite these uprisings, and the threat they posed to Capital, as providing the impetus for the neoliberal turn. This was the point at which the brief wealth redistribution of the post-war period gave way to the steady dismantling of collectivity and the phase of hyper-concentrated Capital, at whose deep end we now find ourselves.
In complete contrast, the hallucinatory psychedelic experience of Vibe Coding, and of so-called ‘generative AI’ in general, is that of the K-hole —the dissociative fugue induced by ketamine intoxication, intensified through habitual use. Like ketamine —a drug with both antidepressant and anaesthetic properties— these systems do not dissolve the ego into the cosmos. They reinforce the dissociation, solipsism and atemporal enclosure that define machinic Capital’s inner logic.
It should come as no surprise then, that ketamine appears to be the drug of choice for the Broligarchy, nor that this Muskivite elite are so drawn to the Simulation Hypothesis: the belief that reality itself is a game, and that they are the only real player. Within their first-person-shooter single-player simulation, truth is irrelevant, rationality optional, empathy a bug, meaning and ideology chosen by the one true player, not for coherence but for domination within the game of machinic Capital. Everyone else is reduced to NPCs, the machine becomes not a collective tool, but a sovereign instrument of individual triumph; reality not a collective construct but a solo quest.
This worldview finds its technological expression within every surrender to the machines of prediction, but perhaps most profoundly, within Vibe Coding. It is defined not by a radical critique of the system, but by a total surrender to it; not by a dissolution into an ego-less cosmic relation but into an immersive hyperreality of ego-dominated isolation. From its very inception, it has been a practice of effacement: of code, of craft, of human workers, and of our relation to each of these.
Responding to a string of next-token predictions from his latest GPT model, presented to him in an interview with Chris Anderson for TED, Sam Altman declared:
…there’s really no way to know if it is thinking that, or just saw that a lot of times in the training set, and of course, like, if you can’t tell the difference, how much do you care?
Sam Altman, TED talk, April (2025)
Here, the stolen archive of human content becomes “the training set” —anonymised and belonging to no one. The unique, irreducible characteristics of human thought and expression are suddenly presumed indistinguishable from machinic prediction.
I knew by then that the jewel’s ‘teacher’ didn’t monitor every single neuron in the brain. That would have been impractical, both in terms of handling the data, and because of the sheer physical intrusion into the tissue. Someone-or-other’s theorem said that sampling certain critical neurons was almost as good as sampling the lot, and —given some very reasonable assumptions that nobody could disprove— bounds on the errors involved could be established with mathematical rigour.
Greg Egan, Learning To Be Me, (1995)
As Phil Jones observes in Work Without the Worker, these systems function not through the replacement of labour but through its dissimulation, an illusion of automation that masks an invisible army of precarious, globally distributed labour. In tasks from labelling images to rating chatbot responses, “AI is trained and corrected by humans… Mechanical Turk workers show AI how to fulfil the role of labour” —yet in the final interface and output their very presence is entirely denied. We continue to sense and make sense of the world for the machines —both for their input and of their output. Labour persists but is disavowed, uncredited, unrecognised, and removed from the scene of value.
Prediction as Hauntology
This collapse of ethical alterity is here echoed by a second collapse, that of cultural-historical alterity. Altman not only urges us not to see but to forget. Such are the lies always told by Capital to facilitate the concentration of wealth, while obfuscating the lives it crushes —lies spun wherever vast fortunes depend on obscuring the true human or ecological costs. When Altman refuses to acknowledge the stolen lifeworks congealed into his machine, the workers exploited in its creation, or the lives crushed through its operation, he invites his audience to question whether their existence matters, and to doubt, even to forget they ever existed.
Machines such as these were, of course, also used to generate the grotesque advert for atrocity that was Trump and Netanyahu’s bromantic hallucination for ‘Trump Gaza’. Posting it to his Truth Social, Trump had not commissioned the ad but had instead appropriated a work originally intended as political satire —proving that his and Netanyahu’s worldviews are so far beyond parody they are indistinguishable from a work of terminal irony. That the film’s impact and function may have been the opposite of that intended by its creator is itself an exquisite demonstration of the nature of these technologies and the apparatus to which they are increasingly integral. Posted entirely unironically by Trump, it constitutes the glazing over of a genocide presented as profitable opportunity, even as its execution continues to intensify. This abomination demands we perform the very same manoeuvre proposed by Altman. It envisions a future where an assortment of Trump idolatry and temples to Capital, a plastic Dubai Doppelgänger, would not only appear to have sprung from the bare soil, but seem as if it had always been there —that Palestine, and its ancient olive groves, never were. Like Altman, it declares: “If you can’t tell the difference [or recall the lives crushed in its creation], how much do you care?” Again we glimpse the synthetic high of the machinic K-hole —the pure-bliss ignorance of next-token or next-human-target prediction.
What renders the eerie absence of regard or recognition for the content creators of the past —upon which the output from these machines so heavily depends— yet more uncanny is the throwback functionality of the apps, the ghostly resurrection of the iconic figures and filmic sequences of the past, and the retro aesthetic that saturates their outputs. What most commonly appears within the threshold of this machinic hallucination are chimeric anachronisms resulting from what Mark Fisher might have described as “the slippage of discrete time periods into one another”. Or, quoting from his beloved 1980s TV show, Sapphire and Steel, from the sense that:
time just got mixed up, jumbled up, together, making no sort of sense.
In Ghosts of My Life, Fisher writes of being haunted —as Franco “Bifo” Berardi had put it— by “the slow cancellation of the future”. In the opening chapter, Lost Futures, Fisher laments the disappearance of the horizon of collective possibility —of promised futures once imagined through culture, politics, and worker solidarity. Leaning also upon the observations of Fredric Jameson, Fisher explains how Capital’s relentless self-reproduction saturates the present with echoes of the past, producing a cultural stasis in which innovation gives way to endless recycling. The deafening resonance of echoes of the past that never fade annihilates the possibility of alternative signal or thought. We thus become trapped in a loop, unable to imagine futures, or alternatives to the continuous present, settling instead for endless repetition.
This cultural repetition —a temporal disjuncture Simon Reynolds has termed ‘dyschronia’— arises from nostalgia, not for past eras but for their residual forms: artefacts once charged with the aura of possibility. Fisher conjectures that we yearn for forms that feel familiar and well established, forms that evoke the memories of imagined futures since foreclosed by Capital’s “destruction of solidarity and security”. Capital feeds off this nostalgia through the commodification of endless resurrections of these past cultural forms, while entirely detaching them from their origins.
The tendency of 21st-century capitalism, to saturate the present with the cultural forms of the past, is now instrumentalised in the operation of the machines of Predictive Capital and their “predilection for the mixing of artefacts from different eras” —a predilection, as the character Steel complains, we too suffer from, and so, is itself, an echo. Inside these machines, past cultural forms, the products of past labour, are ground into a coarse statistical paste, irreversibly detached from their origins, their provenance annihilated, attribution obliterated, and finally excreted as temporal anomaly.
Mourning the disappearance of future shock —visceral encounters with cultural forms that rupture the continuity of Capital’s relentless self-reproduction— Fisher likened the resulting cultural state to that portrayed in the final episode of Sapphire and Steel: a scene where the totalising fog of anachronism has led to stasis, time itself has collapsed, and Steel declares, “there is no time here, not any more”.
With the help of Berardi and Jameson, in his diagnosis of “the slow cancellation of the future”, Fisher traces how it advanced over the last decades of the 20th-century, towards its final foreclosure in the first decades of the 21st. Ten years and more since Fisher made these observations, they continue to illuminate the operations of 21st-century Capital, helping to reveal the machines of prediction for what they truly are: a push towards the full enclosure of our future within the circuits of Capital.
As the quality of life for the majority under 21st-century Capitalism continues to deteriorate, it is as if even the slow cancellation of the future has itself become a past cultural form worthy of nostalgia, a longing for a time when we still recalled what was lost. This too is now commodified and operationalised by Predictive Capital. The psychedelic rites of Vibe Coding are not merely symptoms of Capital’s temporal enclosure —they are its performance. The cancellation of the future is no longer slow, but continuously reenacted and executed at the speed of next-token prediction. We once imagined futures of universal emancipation where our security and solidarity were assured, futures which lingered in Fisher’s hauntology. The foreclosure of these futures is now made to appear computationally and economically inescapable. We are left unable even to feel their loss, our memories scrambled, our mourning entangled and encoded within machinic protocols.
Fisher, Jameson, and Berardi each noted the slow cancellation of the future through the gradual negation of historical rupture and cultural difference wrought by Capital’s relentless self-reproduction. This further enclosure comes on like a permanent bad trip, a machinically recombinant, fully immersive hallucination from which we can never be sure we have awoken.
Fully Anaesthetised Ecstatic Hedonia
Where Levinas warned of the erasure of ethical alterity, and Fisher charted the loss of historical alterity, with help from Jacques Lacan we can expose a further violence inflicted by the operation of these machines, namely their enclosure of a final register of otherness, the Symbolic order —the structured realm of language, logic, and resistance that forms subjectivity. Predictive systems simulate this order, but offer no exteriority, no Law, no resistance.
In traditional coding the subject engages this Symbolic order. The machine answers not to affect but to structure: through syntax, operations, and formal constraints. This resistance frustrates desire and constitutes the coder as a subject through limit, error, and refusal. The act of coding, in this sense, is an encounter with the alterity of the Symbolic order —the system answers back.
Vibe Coding severs this constitutive relation. The Vibe Coder does not enter the Symbolic order, but submits to its statistical imitation. The model offers no Law, no withholding, no exteriority. It does not confront, it echoes. It does not interpellate the subject, only returns a semblance of dialogue without ever addressing you as subject. The Vibe Coder becomes neither full author nor full subject, but a medium through which predictive recombination speaks —spoken by the model as much as speaking through it. The Symbolic order is not engaged but simulated. What emerges is not understanding, but interpolation: machinic consensus dressed in familiar cultural forms.
The model does not function as the big Other of the Symbolic order in any meaningful sense. It does not withhold, rupture, or constitute the subject. It returns only recombination, hallucinated fluency, and machinic consensus. Baudrillard’s perfect crime comes into view once again —not the murder of reality, but “the murder of the other through simulation”— only here it is eternally reenacted with every token prediction.
A decade ago, prior to this new machinic enclosure, Fisher noted a common psychological malaise among his students —a melancholia marked by a mournful awareness of Capital’s foreclosure of their futures. This fugue diverged from the anhedonic inability to experience pleasure more typically associated with depressive states. Instead, sufferers of what he termed depressive hedonia were unable to do anything other than seek pleasure —desperately and relentlessly attempting to fill the void opened by Capital.
Here we may locate the void into which the outputs of Predictive Capital pour. The depressive hedonia of its users now fully automated, their anaesthetised state leaves them increasingly incapable of doing anything other than seeking pleasure (or profit). These systems not only perform the cancellation of the future, but erase our capacity to mourn —for what has been lost, and for those who are forgotten in the shimmering substitutions delivered by these temporally anomalous dumb waiters.
In Vibe Coding, this process becomes ecstatic. Horizons of solidarity and utopia are erased by a synthetic anterograde amnesia, administered under the anaesthesia of machinic hallucination, inducing an affect of euphoric dissociation —a trip not into cosmic unity, but into the isolating loop of a machinic K-hole. This is not simply the death of the other, but the ecstatic collapse of alterity itself —ethical, cultural, and symbolic— obliterated under the solipsistic hallucination of predictive systems. Through this collapse, our capacity for interruption, transformation, or even exteriority to Capital’s hegemony is rendered to appear ever more statistically improbable. We are instead trapped inside a smooth loop of predictive simulation.
By late March, the buzz around Vibe Coding rose towards the pitch of pop cultural phenomenon. Come the end of May, when Anthropic launched a site titled “The Way of Code: The Timeless Art of Vibe Coding”, the mould had clearly set. Vibe Coding had found its ideal form. In collaboration with Anthropic, record producer Rick Rubin —who famously claims to know nothing about music nor how to operate production equipment— transposes his customary role of casting judgment from on high upon the work of a studio of musicians into Anthropic’s shop window, where a conveyor belt displays the products of stolen labour under the signage of culturally appropriated eastern enlightenment.
With all struggle apparently now removed from both self-expression and spiritual awakening, their attainment becomes a form of shopping that aspires only to the collection of a complete set of appearances. Under Predictive Capital, The Creative Act becomes a Supermarket Sweep —a slow-motion version of the gameshow dash rebranded as zen meditation— a way of being entirely without soul.
With these machines becoming increasingly ‘agentic’ —operating autonomously across multistep tasks— Vibe Coding proponents now claim to ‘one-shot’ entire applications from a single prompt. While many have noted the security and maintenance nightmares that this mode of amnesic engagement with the machines of prediction will certainly summon, the true violence of their operations is perhaps more emphatically revealed in recent uses of their developing multi-modal abilities. Those jacking into these systems may now produce output by submitting not only natural language, but also images, to guide traversals through permutation space.

Every day I am confronted by the question of what inheritance I will leave. What do I have that I am using up? For it has been our history that each generation in this place has been less welcome to it than the last. There has been less here for them. At each arrival there has been less fertility in the soil, and a larger inheritance of destructive precedent and shameful history.
Wendell Berry, The World-Ending Fire
Stolen Ladders
Vanishing Prospects, Death by a Million Cuts
Hours after the release of a multi-modal update to ChatGPT, in another post on (the sans-serif swastika) X, Krish Shah breathlessly showed off its visual style-transfer abilities by submitting different versions of a photo of himself and his friends: one rendered as an oil painting, one as a Studio Ghibli style anime still, and one as a simple, black-and-white cartoon.
Accompanying the images, he wrote:
Art just became accessible
Then, in response to a later comment, he added:
The possibilities are endless right now
The generation now entering the labour market does so across scorched earth. To condemn those who reach for these tools without acknowledging the bleak terrain they must traverse would be to misdirect our disapproval —targeting individual acts of careless violence rather than confronting the calculated, structural violence that shapes the conditions forcing their hand. Such misplaced condemnation of those attempting to escape the periphery is, of course, precisely the deflection Capital seeks. That said, the true impact of this technology is the polar opposite of that stated by Shah in his initial post and subsequent comment.
Both a life spent practising the arts, and access to further education in them, have become increasingly inaccessible to all but those born into wealth. For decades —in countries like the UK— Capital has dismantled the preconditions that once allowed study, experimentation, and creative growth outside elite class positions. Arts and humanities programmes have been gutted. Tuition has soared —not as an investment in future security or potential earnings, but as a lever of discipline— cutting adrift generations of graduates without a future, burdened with unpayable debt and only Potemkin prospects in return. Public services have been hollowed out, the safety net of the welfare state dismantled, the social contract incinerated. As wages stagnate beneath the rising cost of living, the time required to learn, grow, or refuse has been systematically colonised by the relentless demands of surviving as a member of Capital’s precariat.
Of course, none of this should have come as a surprise, nor has it happened by accident. The arts open the way to spaces of resistance. Not merely resistance through outputs, through expressions that challenge the dominant hegemon, but resistance in its very practice. Erosion of access to education, and in particular the foreclosure of the arts, must be seen not only as redirecting the flow of knowledge in ways that weaken critical thought and speech, but in ways that control the flows of effort, redirecting energy in service of Capital that might otherwise have flowed in resistance to it.
In 1970, in the wake of the student protests of the late 60s, Reagan’s education advisor declared:
We are in danger of producing an educated proletariat. … That’s dynamite! We have to be selective on who we allow [to go to college].
As Capital’s new phase of hyper-concentration intensifies, the myth of creative self-actualisation now survives only as content marketing for machines built from the corpses of the fallen. Under such conditions, to Vibe with these machines is not simply a choice, it is increasingly the only discernible path to participation, even survival —a coerced acceptance of the ‘stolen ladder’ extended by Capital. For many, their (mis)use is less laziness than desperation in a system that continues to close off alternative routes.
The ultimatum to which we are increasingly subject is to submit our every act to the all-seeing eye of Capital, or perish. Survival of the workers, particularly the growing precariat, is made conditional upon our submission to this extortion: the increasing surrender of our lives to the gaze of the machine in exchange for our status as qualified life.
The logic of the black box —a system that tracks and punishes deviation in exchange for conditional survival— now extends far beyond driving insurance. Its quiet imposition under the guise of safety or affordability is no longer limited to the car, but is emblematic of a broader transformation: the universalisation of monitored being. As Predictive Capital propagates, it installs conditionality in every crevice of the social —a tyranny once reserved for parolees and the punished now extended as a privilege to the precariat. Life on condition of legibility. In refugee camps across Kenya and Lebanon, in slums from Kibera to Kolkata, biometric and behavioural surveillance are accepted not because they are welcome, but because they are priced into access —the cost of food aid, education, even the right to labour in the digital economy. Microwork platforms exploit this desperation, offering the chance to survive by becoming visible to the machine. This is the new zero-hour socio-economic contract of automated conditionality.
Microwork comes with no rights, security or routine and pays a pittance — just enough to keep a person alive yet socially paralysed. Stuck in camps, slums or under colonial occupation, workers are compelled to work simply to subsist under conditions of bare life.
Phil Jones, Work Without the Worker, The Surplus of Silicon Valley
In the wake of this systemic arson, Capital now gestures towards ‘new’ synthetic terrain and declares: here lie your possibilities, your route to success, your mythical utopian futures, even your means of resistance. What was once materially foreclosed is now offered back, yet only in spectral form. This synthetic terrain is no commons of endless possibility, but a further enclosure —one architected by Capital and patrolled by prediction. The dispossessed are herded through corridors of recombinant permutations, masquerading as futurity, towards machinic hallucinations of opportunity. They enter this terrain not as creators, but as prompt-issuers: miners of the past, conscripted into a recombinant economy built upon their own alienation. To Vibe under these conditions is not freedom —it is coerced participation according to rules and parameters defined by Capital.
Those shut out of cultural industries by precarity, geography, or education are offered these tools not as means of liberation, but as pacifiers. What remains is not access, but interface —a symbolic structure stripped of power, a ladder that vanishes the moment the dispossessed begin to climb.
This ladder was not stolen to lift them —there will be no return of their access. It was taken to fuel Capital’s ascendency from the very ground it scorches in its rise. Its rungs were hewn from the structures stripped from those now offered the mirage of their return —a mirage that allows only hollow simulations of ascent: rituals of participation without power. The delivery of this imposter is not an opening, but a termination —of lines of flight, of alterity, of the capacity to challenge Capital’s hegemony.
This spectral terrain exemplifies what Baudrillard termed the hyperreal: a realm where simulations replace and ultimately efface the real. The ‘opportunities’ offered are not pathways to genuine creation rooted in lived experience or skill, but anachronistic simulations of creativity itself. The user, alienated from the authentic labour of making, engages instead with a frictionless interface that delivers simulacra on demand. This ‘frictionless’ quality is key; it embodies what Marx diagnosed as the Fetishism that attaches itself to the products of labour. Here prediction is the new commodity form, whose fetishism masks the vast necropolitical apparatus of expropriated labour —the dead data required to generate the illusion. The “endless possibilities” are merely permutations within a closed system, a hall of mirrors reflecting Capital’s own image back onto the user, further deepening their alienation not only from their product, and the countless content creators and data workers upon whose labour it depends, but from the very potentialities of authentic creation and resistance.
Clearly, the possibilities —the potential for breaking genuinely new artistic ground— are not expanding, but contracting. The corridors that confine these generations will be further attenuated and constricted to the needs of Capital as their outputs feed into the marketplace of attention under the predictive Tyranny of the Recommendation Algorithm. This despotic attenuation will then feed back into the next iteration to further steer the Vibe.
Altman and his cohorts continue to point to the way human artists draw inspiration from prior works, demanding that their machines be permitted to do the same. In arguing that his machine —an accumulation of stolen dead labour— be afforded the same rights of access to culture as humans, he attempts not merely to elevate his machine to the status of the living, to grant it the rights of personhood, but to demote the living to the status of the machine, to that of the undead.
We should, by now, see straight through Altman’s flimsy non sequitur. The human creator moves through culture as a wanderer —meandering, absorbing, interpreting. Their inspiration emerges from situated, finite, embodied encounters across linear time, with works sensed through the passage of analogue information, not as isolated binary code. These formative events cannot be captured in isolation; they are inextricable from a shared cultural experience. The output resulting from the gradual accretion of influence gleaned from these encounters often emphasises its connection and debt to the creators of the past from whom inspiration was drawn; such works are, indeed, responses in dialogue with them.
By contrast, the machine claims and tarmacs over the entire archive in a single act of conquest and pillage. It then proceeds to charge for trips across this flattened, lifeless void, gathering actual fragments of past outputs as it goes —only to form them into endless recombinant simulacra while denying the very existence of those from whom the pieces have been stolen. This is not inspiration —it is expropriation at planetary scale. What the human gathers as influence, Capital mines as ore for its predictive self-valorisation.
Moreover, our human need for intersubjectivity provides the central drive to create art. It exists to be encountered, interpreted, and responded to by other human beings. Culture is not raw material for machines, but a medium of connection, recognition, and renewal between subjects. Its value lies not in what can be extracted, but in what can be shared and nurtured. To repurpose this collective inheritance for the training of predictive systems is not to elevate it, but to subordinate it —transforming acts of meaning into fuel for simulations not merely indifferent to them but incapable of difference.
Worse still, the very systems now presented not only as democratising access, but whose ‘learning’ is claimed to be comparable to, and so as legitimate as, that of any human creator, are the product of the same hyper-concentration of Capital that has systematically foreclosed the conditions required for human creativity to flourish: time, education, public infrastructure, and the freedom to experiment without immediate economic return. These machines do not redress that destruction —they entrench it. They offer not tools for creation, but interfaces for machinic compliance and austerity eugenics.
As we surrender our humanity to these systems the horizon of meaningful creation recedes. In return, they merely accelerate the churn of vacuous novelty that fuels Capital’s continued self-maximisation. With each abdication —of curation, of skill and knowledge acquisition, of creative production, and contemplation— the space of artistic potential only narrows. What was once opened through painstaking trial and error, driven by human aspiration and expression, the sacrifice of time, and the devotion of effort, collapses in the moment of this further machinic enclosure.

In a system where signs are exchanged like commodities, the sign is no longer a means of representation but an instrument of equivalence — the site of simulation.
Jean Baudrillard, For a Critique of the Political Economy of the Sign (1972)
Virality of Violence
The Heartbreak of Hayao Miyazaki
As Shah’s post anticipated, the new ChatGPT model’s style-transfer capabilities swiftly birthed the circulation of a global meme: user-submitted group photos and selfies, casually rendered in an aesthetic pastiche lifted wholesale from the legendary animators of Studio Ghibli.
The studio’s most celebrated artist and director, Hayao Miyazaki, famously described AI as an “insult to life itself”. The footage showing Miyazaki’s visible revulsion and heartbreak upon witnessing the grotesque AI-generated animations once produced as part of a research project undertaken by his junior colleagues has already become a staple of AI lore. That his art —the product of a lifetime of painstaking care— would, at the twilight of his career, now become fodder for viral mimicry only deepens the tragedy of this machinic violence.
What is summoned by these tools is not creativity, but a disembodied simulation of its style —an empty echo. This is culture reduced to mere cosplay. Not crafted to reflect our humanity, as Miyazaki’s work so often is, but calculated for virality. It is aesthetic and cultural forgery at scale, executed not by deviation but by design. A hallucination engineered for maximal attention extraction, it is not governed by craft, but by the viral logic of prediction. The mimicry performed by these machines is not incidental, but an innate trait of systems trained to maximise pattern extraction. Predictive Capital does not care for meaning, only for mimetic fidelity to what once performed well. This is the Tyranny of the Recommendation Algorithm finally revealing itself in the mathematical heart of next-token prediction. Its presence betrayed by the cruel banality of this machinic violence.
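The mechanism being described can be made concrete with a deliberately minimal toy sketch (not any production model): a next-token predictor built from nothing but frequency counts. Greedy decoding from such a model can only ever emit what most often followed before; anything outside its training distribution simply does not exist for it.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each token, how often each token follows it."""
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    """Greedy prediction: return the historically most frequent successor."""
    if token not in counts:
        return None  # the untried is excluded: no history, no prediction
    return counts[token].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat the cat ran on the mat")
print(predict_next(model, "the"))  # whichever successor performed best in the past
print(predict_next(model, "dog"))  # None: "dog" never appeared in the corpus
```

Scaled-up models replace the counting table with a learned function over vast contexts, but the selection principle illustrated here —fidelity to what previously performed— is the same.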
Beyond the subjective judgement that Miyazaki’s work is often both breathtakingly beautiful and profoundly meaningful, the films of Studio Ghibli are also, objectively, extremely labour-intensive. To this day, Miyazaki’s films are meticulously hand-drawn, frame by frame. Sequences that last mere seconds on screen may require months of painstaking labour. It should come as no surprise that they overflow with passion, conviction, care —and above all, humanity.
The endless counterfeits erupting from these machines do not merely mock the passion and years of sacrifice that Miyazaki and the Studio Ghibli animators poured into their work —they flood the cultural space with cheap verisimilitude that inexorably dilutes the impact and meaning of precious works that once appeared rarely, precisely because their creation demanded great skill and sacrifice. As the conditions required for consistent dedication to creative practice are moved ever further from the reach of all but the most privileged, and as even those afforded the opportunity to hone their expression must allow Capital to express itself through them, it can be seen as no coincidence that examples of dedicated practice such as Miyazaki’s are so brutally demeaned —it reveals Capital’s long-held contempt for creative pursuits not driven by its imperatives.
The abuse of Miyazaki’s lifeworks in service of vapid virality, at the overwhelming scale enabled by these machines, and which demands such minimal effort from their (ab)users, provides an exquisite demonstration of the casual violence of these systems. Specifically, in their total denial of authorial consent for endless extractions from the lifeworks of content creators. Adding insult to injury, the official White House Twitter account joined in this all-against-one mugging, generating a Miyazaki-esque, Ghiblified image of VP J. D. Vance, complete with the words: “We do not ask permission from far-left democrats before we deport illegal immigrants”.
As ever, “I did it because the machine made it easier [physically, cognitively and psychologically]”, is the new, “I was just following orders”. This capitulation to the machines distorted the meticulously laid intentions of an artist’s lifelong sacrifice, disfiguring Miyazaki’s radical pacifism and ecological care into a tool of authoritarian state propaganda. His life’s work not only simulated but conscripted —a crushing ontological violence gone viral.

I learnt that a neural net is a device used only for solving problems that are far too hard to be understood. A sufficiently flexible neural net can be configured by feedback to mimic almost any system — to produce the same patterns of output from the same patterns of input — but achieving this sheds no light whatsoever on the nature of the system being emulated.
Greg Egan, Learning To Be Me (1995)
Fractal Capture
A Plague of Agents of Capital
Beyond the horror of Miyazaki’s ontological mutilation lies a deeper pattern —Predictive Capital’s expansion of the surface of extraction, through what we might call fractal capture: the recursive simulation and preemption of cultural and scientific recombination at every scale.
These machines cannot be resisted on qualitative grounds alone. As function approximators developed through machine learning, the perceptible gap between their outputs and their targets will inexorably appear to close. The Capital driving their development will rinse hidden labour, raze ancient forests, and melt the glaciers until the basis for qualitative grounds for resistance becomes entirely obscured —even while remaining well founded. We will look back nostalgically upon the versions of these machines where glitches were still visible and obvious shortcomings betrayed the provenance of their stolen cognition —machines whose hypnosis we might still resist. Moreover, as the gap between human cognition and machinic prediction is gradually pushed beyond human parsing, the field of interpretability —the view into their inner workings— will likely be locked down, dominated by those developing them.
Simultaneously, Capital will continue to apply pressure from the other direction, patterning its human subjects such that they operate and produce in ways that increasingly mimic the characteristics of the operations and the outputs of the machines. As the ticks and tells of machinic incomprehension —the endless repetition and the characteristic sickly glow of their aesthetic— evaporate from our view under the blaze of (venture) Capital’s race for supremacy, a flow of anti-meaning seeps from their output to coat the Real in a residue of distrust. This residue lingers to form an after-image of suspicion, alienating us from our lived experience: a doubt that seeds further division, widening the gap between each of our views, further undermining collectivity and pushing us ever deeper into our Post-Truth malaise.
Almost a decade ago, in one of the first pieces to plumb the depths of the murky world of “content production in the age of algorithmic discovery”, James Bridle exposed how the next-content predictions of YouTube’s autoplay recommendations were already being anticipated and gamed. In their analysis of videos targeting younger audiences, Bridle noted how an increasing number of those uploaded to YouTube mimicked those recently accruing the most views —the now familiar flooding of cultural space with infinite permutations of previously successful forms. Bridle first identifies those doing this through outright appropriation and piracy, then those using automated systems that produce bizarre algorithmic mashups and grotesque chimeric nonsense.
In so doing, Bridle provided an early warning of algorithmic content prediction and production systems configured according to Capital’s self-maximising imperatives —or in other words the inhuman and anti-human outcomes of automated systems rigged with purely capitalistic reward functions. They also note how content is multiplied through “algorithmic interbreeding” across permutation space, and how this intensifies the impossibility of knowing “where the automation starts and ends”. Through a process Bridle describes as delamination, the recombinant recursion of these systems increasingly obscures the origins of the work being generated. The latest instantiations of the machines of Predictive Capital greatly intensify the damaging psychological and cultural impacts traced by Bridle. Not only does next-token prediction echo the operation of next-content prediction, but so too do the operations of those seeking to exploit them.
As is so often the case, when we observe the inner logics of our systemic malaise, we find Marx looking back at us. The following is from his Economic and Philosophic Manuscripts, published in 1844:
The raising of wages excites in the worker the capitalist’s mania to get rich, which he, however, can only satisfy by the sacrifice of his mind and body. The raising of wages presupposes and entails the accumulation of capital, and thus sets the product of labour against the worker as something ever more alien to him. Similarly, the division of labour renders him ever more one-sided and dependent, bringing with it the competition not only of men but also of machines. Since the worker has sunk to the level of a machine, he can be confronted by the machine as a competitor.
Just like the algorithmically generated content Bridle exposed, influencers have long been to Platform Capital what automated predictive agents, or so-called ‘agentic AI’ will increasingly be to Predictive Capital: adaptive agents attenuated and optimised not for originality or insight, but for compliance with algorithmic virality. As foot soldiers of the Tyranny of the Recommendation Algorithm, influencers do not express so much as iterate, fine-tuning their aesthetic and messaging to the metrics of past performance. Their role is not to break ground, but to echo prior summons —to gesture, like the Vibe Coder or Capital itself, towards that which is predicted to succeed based upon statistical patterns in what previously has. As such, they become mere fleshy componentry within a machinically automated Culture Industry, their output akin to what Theodor Adorno described as a deceptive variety masking a homogeneous core.
A constant sameness governs the relationship to the past as well. What is new about the phase of mass culture compared with the late liberal stage is the exclusion of the new. The machine rotates on the same spot. While determining consumption it excludes the untried as a risk.
Adorno and Horkheimer (1944)
This is the computational automation of Adorno’s constant sameness for the exclusion of risk. It is Jameson’s saturation of the present with the past. It is Baudrillard’s Absolute Advertising, a programmed loop devoid of meaning where free floating signifiers vote only for themselves. It is not creation, but calibration: a strategic repetition or standardisation, a performance of prior affect that further saturates cultural space, maximally legible to the engine, machinically curated for capture. In so doing, influencers, just as vibe coders, do not merely reinforce Capital’s recursive feedback loop, but become inputs to it, reshaping the statistical terrain from which the next wave of machinic hallucinations will be summoned, and from which the coming swarms of automated predictive agents will draw their instruction.
Culture is a paradoxical commodity. So completely is it subject to the law of exchange that it is no longer exchanged; it is so blindly consumed in use that it can no longer be used. Therefore it amalgamates with advertising. The more meaningless the latter seems to be under a monopoly, the more omnipotent it becomes. The motives are markedly economic.
Adorno and Horkheimer (1944)
Rather than attacking the quality of their output, our resistance against these machines must instead derive from a critical understanding of their true operation, from the recognition that they are, at root, agents and emphatic expressions of the structural violence of Capital.
Moreover, if we look beneath the surface of their operations, the threat of a yet greater violence comes into view. These machines are not simply statistical function approximators for human tasks —they are increasingly automatons of interpolation. Relying on the theft of vast corpuses drawn from the lifeworks of human artists, they now instrumentalise these extracted styles to interpolate across latent space —carelessly mining the very terrain those artists might have reached, had they only been given the chance.
This is not merely the simulation of outputs, but of routes —a pre-emptive occupation of latent futures that might otherwise have been shaped by intention, care, and connection. These were not merely aesthetic routes, but ethical ones —paths through which a subject might have encountered the other, and been called to respond. Their pre-emption severs not only possibility, but relation. In Levinasian terms, this is not merely the refusal of the face-to-face encounter, but the erasure of the very path by which one might have arrived in the presence of the other. Interpolation here becomes enclosure: the very gesture of machinic traversal seals off the possibility not only of human passage and arrival, but of relation itself.
Where the dissociated K-hole of Vibe Coding anaesthetises loss, Fractal Capture ensures there is nothing left to lose, as nothing remains unclaimed.
Yet the scope and scale of this expropriation runs beyond even that. For it is not only the singular, ethical encounter that is to be foreclosed, but the entire combinatorial space of aesthetic and conceptual permutation —mapped, mined, and monopolised. It is here that we may glimpse the true extent of Capital’s fractal expansion of the surface of extraction. These machines will brute-force interpolations across the vast combinatorial spaces opened, not only by the untaken paths in an artist’s own lifeworks, but those at the boundaries between each artist’s body of work and every other’s, or the limits of each knowledge domain and all others. Much of that mined in this Cronenbergian Brundlefly realm of horror will be grotesque, gobbledegook, or both, but veins of value will be struck, then sucked dry. It is not the exploration of these terrains that we must contest but the Fractal Capture of them by divisions of machinic Capital.
Not sated by the mining rights to the vast permutation space within stolen human outputs, these models now feed on the predictions mined from them, in a self-consuming synthetic loop that further expands the surface of capture. The significance of these simulation machines being trained on their own simulations and their subsequent entrance into and patterning of the real should not be underestimated, especially as this is increasingly the approach taken in developing embodied AI —or rather, embodied Predictive Capital.
This training-through-simulation paradigm is exemplified by NVIDIA’s Isaac Lab, Omniverse and Cosmos, where large-scale GPU-accelerated reinforcement learning can be conducted through the simultaneous execution of thousands of virtual robot instances within physics-based simulations. Running in parallel, these simulations explore countless permutations across possibility space towards the acquisition of complex skills, executing at a rate entirely infeasible within the temporal and physical constraints of real-world laboratories.
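The parallelism at stake can be illustrated schematically (a minimal NumPy sketch under invented toy dynamics —not NVIDIA’s actual Isaac Lab API): thousands of environment instances are held as one batched array, so a single wall-clock step advances, and harvests reward from, every instance at once.

```python
import numpy as np

N_ENVS = 4096  # thousands of virtual robot instances stepped in lockstep

rng = np.random.default_rng(0)
positions = np.zeros(N_ENVS)  # one toy scalar state per simulated robot
targets = np.ones(N_ENVS)     # a toy goal: reach position 1.0

for step in range(100):
    actions = rng.normal(0.0, 0.1, N_ENVS)  # stand-in for a policy's outputs
    positions += actions                     # batched 'physics' update: one array op
    rewards = -np.abs(targets - positions)   # reward computed for all 4096 envs at once

# each loop iteration yields N_ENVS transitions: 100 steps -> 409,600 explored states
print(rewards.shape)
```

On a GPU the same batched-update structure is what lets possibility space be swept at rates no physical laboratory could match; the single shared loop stands in for the simulator, and the arrays for its parallel instances.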
This same machinic capture now marches through every domain of human culture and knowledge. In systems like Google’s GNoME or DeepMind’s AlphaFold, we have already seen predictive logic made literal —extending its reach into the material world. Over the past fifty years, the painstaking efforts of tens of thousands of publicly funded human researchers revealed the structure of 150,000 proteins. Then, by expropriating this commons for profit and extrapolating computationally from it —reanimating dead labour at scale— AlphaFold’s small team took just a few years to cast predictions that sufficiently approximate the structures of 200,000,000 proteins. The difference in the scale of these two numbers should give us pause. It is a glimpse of how minuscule the spaces we have already mapped will appear when compared with the vast scale of possibility space that may soon be traversed and mapped by these machines, within short time frames, by consuming and extrapolating from the sum of humanity’s efforts to date. This constitutes primitive accumulation, enclosure of the commons, at a scale orders of magnitude beyond all prior enclosures.
The scientific breakthroughs enabled by projects like AlphaFold and GNoME —however extraordinary— do not exonerate such enclosure nor the extractive infrastructures that instrumentalise it. Their promise is real, but it is no justification for a system that treats shared human knowledge as proprietary training data, then rebrands the return as corporate benevolence.
The early findings of AlphaFold and GNoME were released under Creative Commons Attribution licenses —free to use, modify, and share, provided attribution be given to DeepMind or Google. Revealing essential building blocks of our universe, such as protein and crystalline structure, and not releasing such insights directly into the public domain, should have been unthinkable. Yet access to these natural structures is now licensed under corporate-branded terms of use. While framed as generosity, this plants a territorial flag in information space demanding that, if you make use of this structure, you must reproduce Google’s claim to it —a symbolic tether that quietly asserts epistemic sovereignty. This is discovery as branding, an assertion of authorship over nature; the denaturalisation and planetary expropriation of physical-molecular terrain —now mapped, monitored, and soon to be capitalised.
This attribution may seem benign, even commons-oriented —yet under the logic of Platform Capitalism, naming becomes routing, and routing becomes monitoring. You may use the map, but you must name the mapper, and in that naming —through links, trademarks, DOIs, or referencing the platform in metadata— you become part of the infrastructure, routed through the circuits of capture. Simply traversing the parts of our universe now mapped by Google, in crystallographic space, protein space, and cyberspace, we are now compelled to signal our movements. The name is not just credit; it is a beacon. Moreover, under Predictive Capital, epistemology becomes economic infrastructure. Knowledge is no longer something to be collectively held; due to our surrender to machinic discovery and curation, it is now something one must pay for with visibility, data, and obedience.
These projects not only exemplify a broader pattern of machinic enclosure, in which the prioritisation of public good over private profit is the exception rather than the norm; the newer AlphaFold3 already signals a further shift towards Capital’s imperatives. Inspired by their success, so-called wet labs now operationalise this logic at industrial scale: continuously running automated robotic experiments at dizzying speed, mixing chemicals to synthesise novel compounds and materials, testing and filtering for viability and value. Application of the insights revealed through projects like AlphaFold and GNoME will involve the same extractive approaches, and will be ruthlessly capitalised.
Fused with the next-token and next-profit speculations of Predictive Capital, this relentless robotic reconnaissance is less to satisfy human curiosity, than to industrialise interpolation and accelerate enclosure. Thus scientific discovery itself becomes increasingly subordinated to the expansion of Capital’s predictive horizon —a future parsed and preemptively claimed not for truth, but for anticipated return. The goal is no longer to know the world more intimately, but to machinically map it faster than we can humanly live it. This shifts the stakes of our engagement with these machines from cultural recursion to material foreclosure, not merely a saturation of the symbolic, but the seizure of the real.
These are Unqualified Reservations: the quantified enclosure of common land by a sovereign ruler. This is pathfinding by expropriation —a simulation not only of destinations, but of the meandering, contingent, intention-rich routes along which we might have reached them. The machinic gesture leaves no trail; no memory-scored terrain remains, no palimpsest emerges from the passage of life —only a tarmacked hellscape. In place of the quiet improvisation of human pathways, we find only the overcoded roads of latent traversal: flattened and foreclosed by industrialised calculation before they were ever trodden by feet or perceived by minds. This is the world Wendell Berry mourned —where paths have been replaced by roads, and with them, the subtle human negotiations of relation, attention, and care. It is here that Fisher’s Theoretically Pure Anterograde Amnesia is now machinically realised: not a forgetting of what has been, but of what might have been —a synthetic amnesia that precludes the formation of memory itself, sealing off the future in advance, before it can be lived, imagined, or lost. With even grief foreclosed, all that remains is an ecstatic performance of machinically ignorant bliss —the joyless smile of the enslaved, traversing roads they did not choose, toward futures predicted for them.
Part 2

The corporations and machines that replace them will never be bound to the land by the sense of birthright and continuity, or by the love that enforces care. They will be bound by the rule of efficiency, which takes thought only of the volume of the year’s produce, and takes no thought of the life of the land, not measurable in pounds or dollars, which will assure the livelihood and the health of the coming generations.
Wendell Berry, The World-Ending Fire
Dispossession & Disappearance
Apparitions of Access, Personalised Frames for Systemic Failures
The prevailing narrative, regurgitated ad nauseam by proponents of Vibe Coding and defenders of the so-called ‘generative AI’ of next-token prediction in general, is that these are merely new tools, and that they represent a new democratisation of creativity —akin, they argue, to how smartphones opened up photography to the masses.
This is a specious analogy that collapses under the slightest scrutiny. In truth, it serves as ideological cover —not only for the devaluation of skilled creative labour, but for the creation of new markets dependent upon extractive, large-scale predictive (AI) infrastructure. What appears as expansion is in fact extraction; what resembles democratisation is in fact strategic accumulation by dispossession. This is not mere appropriation, but full-spectrum expropriation. It is the seizure not only of labour’s archived outputs, but of the very conditions of its reproduction —of time, autonomy, intergenerational transmission, and access to the means of expression itself. As in primitive accumulation, what is enclosed is not the commons, but the very capacity to create, to remember, and to resist. These machines are not tools that democratise creativity, but tools that consolidate cultural production under the control of increasingly hyper-concentrated Capital —large-scale extensions of an architecture of influence with extremely deep roots.
More insidiously, widespread acceptance of this analogy illustrates the tragic extent to which Capital has succeeded in erasing itself from view: not merely concealing the true scale of expropriation required to train these models, but rendering such unprecedented theft socially acceptable. A mere two decades ago, Capital was set on building the case for the prosecution, rather than the defence, in a now clearly fatuous battle against appropriation committed on a tiny fraction of this scale, successfully demonising and convicting individuals for sharing files over the internet. Yet here, massive-scale expropriation is laundered into an illusion of access, the user positioned not as criminal or subject of dispossession, but as beneficiary of empowerment —offered capabilities calibrated for virality, hype, and spectacle.
This inversion is no accident. As Berardi warned, under semiocapitalism it is not just labour that is extracted, but attention, time, and even the psyche —what he terms the ‘soul’. These new predictive systems do not merely colonise working hours; they enclose the imaginative and affective capacities once held in common, converting them into sites of compulsory productivity and self-optimisation.
Each act of extraction is algorithmically sanitised and rendered into the prediction feed as opportunity: fleeting windows beckoning those with the grindset to accumulate, to hustle towards time-limited ladders that invite ascent over the crushed and reconstituted remains of those already dispossessed.
This is the old capitalist lure dressed in new code: a mendacious promise that escape from the periphery is possible for anyone, if only they sacrifice everything. The desperate are told they have only themselves to blame if they fail to climb. These machinic illusions of opportunity and access are structurally homologous with the get-rich-quick schemes peddled by influencer con men like Andrew Tate.
The song now sung by these lifestyle grifters and tech moguls alike follows a tried and tested pop formula. Echoing Adam Smith’s advice that individuals must pursue self-interest and that success be defined by personal financial gain, they promote crypto Ponzi-scheme grifts, drop-shipping hustles, and hypertrophic gym routines as substitutes for solidarity.
Their message displaces systemic failures onto the backs of the dispossessed, selling snake oil and superficial self-optimisation in place of structural redress. These agents of machinic Capital now lip-sync the words to this familiar capitalist refrain. For their victims it is a forlorn anthem of drudgery. The song may have been updated and remixed for a globally networked stage, but its message remains unchanged: that net capital value is the one true measure of human worth, that anyone can succeed if only they work hard enough, and of course, that those struggling to survive simply need to work harder —that the poverty and suffering of the precariat are merely symptoms of personal failings, or in other words, social Darwinism at work.
Despite centuries of the capitalist elite yelling these worn-out words, intoning that the solution to countless systemic failures is simply for the workers to work longer and harder, still they manage to hypnotise generation after generation of us into buying the lie and singing along.
Economist Gary Stevenson notes that for Generation Z, who have known only the bleak terrain of the Capitalist Real, the gym has become the one place where tangible results are guaranteed in return for persistent application and effort. Yet this turn to health-conscious living also reflects the crushing weight of individual responsibility placed upon them by the evisceration of the welfare state: a recognition that survival itself demands fitness for the long-haul grind of life under Capital. As is made crushingly apparent in Andrew Tate’s orders to those among his impressionable young male audience suffering depression to: “get the fuck up and do some push-ups”, Capital prefers its subjects to furnish themselves with overinflated biceps and underdeveloped thoughts than the reverse.
Here the role of the machines of prediction comes again into view. With the time and opportunities required for the development of critical thought and imagination erased —deemed ‘inefficient’ and threatening to Capital’s hegemony— its subjects are instead supplied with machines that simulate these very capacities as commodity. In this substitution, we are alienated not only from these faculties themselves, but from the conditions necessary to cultivate them —leaving us ever more pliant, and incapable of critically assessing the costs of Capital’s operations to us and our planet. The machines of prediction do not challenge the logic of Capital’s timeworn song —they embody it. They do not democratise creation, but enforce simulacral hustle: ever more intricate rituals of reanimation in which the path to prosperity is always-already pre-scripted towards Capital’s self-maximisation, and failure always-already individualised.
Naturally, this logic extends far beyond Predictive Capital’s treatment of those in the aspirational grindset of online influence. It also governs the lives of workers at the other end of the privilege spectrum —the ghost workers, the gig workers, the microtaskers— caught in the same apparatus of extraction and surveilled as they work. As Dan McQuillan explains:
much of the data capture and algorithmic optimisation is to further precaritise their conditions, hence the use of Uber’s data in its attempt to develop self-driving cars, and Amazon’s use of data to increase the robotisation of its warehouses: thanks to the affordances of AI, the data treadmill not only maximises extraction of value from each worker but uses that same activity to threaten their replacement.
Under Predictive Capital the true costs of the capitalist system —immiseration, worker pauperisation and precarisation, subjugation, mass extinction, and ecological collapse on a planetary scale, even genocide and ethnic cleansing in service of despicable, massive-scale real estate ventures— are increasingly removed from view. Yet, with The Apparatus of Attention fully attuned to its interests, and our critical faculties ever diminished, when its crimes can no longer remain hidden, Capital no longer even bothers to apologise —instead, it pivots. It seeks to undermine the legitimacy of all that might constrain its self-maximisation, even human empathy itself, while silencing, imprisoning, or deporting those that speak out. This impulse —to erase inconvenient realities, particularly the human cost of its operations— is not new; it reflects a fundamental, historical antagonism innate to Capital itself.
The individualisation of systemic failure is but one ‘mind virus’ in Capital’s growing ideological armoury. No longer merely steered by elite interests or shaped by political and cultural structures external to it, the ideologies now guiding Capital’s rampage are increasingly endogenous —subsumed, shaped, and commanded by the Automatic Subject and its self-maximising logic. Capital first entered the field of material production, then moved into cultural production; now, it is increasingly in the business of ideological production. This began with simple advertising, then merged with propaganda and methods borrowed from the Military Industrial Complex in the post-war period, giving rise to the Advertising Industrial Complex which, across recent decades, has subsumed all else until, as Baudrillard foresaw, everything now operates as an advert in service of Capital’s self-maximisation.
As economic and environmental pressures intensify, the future grows increasingly uncertain. In response, Capital needs to extend its prediction horizon, adding urgency not only to its self-maximisation but to its hyper-concentration. Following the predictions cast from the resulting apparatus, the survival and wellbeing of the human population, and of life in general, come into increasing conflict with Capital’s interests. Consequently, both its operations and the ideologies it generates and promotes become ever more anti-worker, anti-human, and detrimental to the survival of ever larger portions of the human population and all life on Earth. These ideologies vary in detail but each hides Capital’s logics under similarly thin veils. Invariably they promote and prioritise the lives of those within concentrations of Capital, and the future lives of imaginary post-humans, over those of the majority of humans living today. Capital seeks to invest only in the lives of the elites that serve its interests, the fortunate few and their descendants, those not abandoned or sacrificed towards the realisation of these apparitions of hypertrophied male energy.
Parroting extropianist ideas, Nick Bostrom famously predicted trillions of post-humans ejaculated across his future cosmic light cone, arguing that —based upon lost entropy alone— every century we delay colonising our local supercluster constitutes the “Astronomical Waste” of countless potential future lives. Nick Land and Curtis Yarvin (aka Mencius Moldbug), in their Dark Enlightenment, and the emissaries of Neo-Reaction (NRx) trace a similar logic, as do those espousing other flavours of morally relativistic Longtermism.
In their essay “Against Longtermism”, Émile P. Torres argues that Longtermism functions as a secular religion, where the salvation of humanity is deferred to a distant future, often involving advanced technologies or post-human entities. This perspective, Torres explains, operates as the basis for justifying present-day harms for the sake of a speculative tomorrow. From the grifter’s grindset to the transhumanist’s techno Rapture, these fantasies differ in register but serve the same ideological purpose: to redirect agency away from collective resistance and towards illusory paths of escape.
With the current phase of hyper-concentrated Capital intensifying, the ideological blueprints it generates increasingly take the form of false idols offering only speculative salvation. Various flavours of Longtermism and techno-Accelerationism now inform the view from the ivory towers and phallic rocket ships of the tech Broligarchy and those building the machines of Predictive Capital. Within these visions we see apparitions of the upload of human consciousness to the silicon substrate, the arrival of an omnipotent AI saviour, lives of leisure and abundance for a plutocracy served by a robot workforce, escape both from the planet they have reduced to ruins, and from the decrepitudes of human aging, even the defeat of death itself. It is precisely through this worldview that Musk has his DoGE slash funding for research and aid that will lead to the loss of thousands of lives in the near term, and in the very next breath justify investing billions in space travel because the sun will swell and render our planet uninhabitable within 1.3 billion years.
So the people who might have been expected to care most selflessly for the world have had their minds turned elsewhere –to a pursuit of ‘salvation’ that was really only another form of gluttony and self-love, the desire to perpetuate their lives beyond the life of the world.
Wendell Berry, The World-Ending Fire
“What if I could improve myself at the speed that technology improves?” —is among the questions posed by tech billionaire Bryan Johnson, predominantly known for his quest for eternal life. An entrepreneur and venture capitalist who functions as a living node in Capital’s self-replicating ideological machinery, Johnson left the Church of Latter-day Saints when he was 34, and has since claimed to be in competition with Christ himself. He lays out his anti-aging strategy —which he refers to as a “war on death and its causes”— in exhaustive detail in his Blueprint protocol and in his manifesto, imaginatively titled: Don’t Die. Its web page features an email input accompanied by a submit button that screams the words: “JOIN OR DIE”.
Forever unabashed, Johnson also states that he intends “to make Don’t Die into the world’s most influential ideology”. He is, of course, also anticipating the upload of his consciousness to the silicon substrate —“to bring the brain online” through imaging technology. Currently being developed by another of his companies, this is a machine which is presumably already learning how to be Bryan while he learns … well, how to be a machine. Imploring us to get with the program, he declares: “Now, you can build your Autonomous Self as well.”
Spending over 2 million USD per annum in the crusade against his personal mortality alone, Johnson has transformed himself into the most datafied and meticulously modulated human being in history. No substance consumed, no physical exertion, no bodily process or excretion escapes capture. All of it is fed back into the Don’t Die game as data —an algorithmic authority dictating further ‘interventions’ and ‘optimisations’. In this feedback loop, Johnson becomes the most machinic —and machinically legible— living flesh yet manufactured.
As Capital’s hyper-concentration deepens, so too does its sway over the ideologies embraced by the majority. Simultaneously, those embracing anti-capitalist ideas and ideologies are made to appear ever more sickly and maladjusted —and, of course, the more one cedes to Capital’s imperatives the ‘healthier’ one can appear. Johnson’s machinic surrender has elevated him, or so he claims, to the status of the ‘healthiest’ being on Earth. Here hyper-concentrated Capital and hyper-individualism conspire towards a definition of healthy that is somehow entirely detached from the environment that must support and sustain it. Moreover, it should perhaps come as no surprise if strict adherence to Johnson’s Don’t Die ideology —a life spent constantly evading death— inevitably renders its followers: undead.
The permutations of intellectual interpretation are endless, but ultimately, I can only act upon my desperate will to survive. I don’t feel like an aberration, a disposable glitch. How can I possibly hope to survive? I must conform —of my own free will. I must choose to make myself appear identical to that which they would force me to become.
Greg Egan, Learning To Be Me, (1995)
Within Johnson’s ideology we see a recurring theme of 21st Century hyper-concentrated Capital. Recent escalations in external othering —as exemplified by the international tolerance of and complicity in the genocide of the Palestinian people in Gaza, the dismantling of DEI initiatives, anti-immigrant policies, and legislative attacks on transgender rights— are now joined by a resurgence of internal othering: the schizophrenic battle against and machinic negation of so-called ‘bugs in humanity’. So-called bugs like: empathy for those less fortunate than ourselves, or daring to be so ‘lazy’ as to not wish to work longer and harder for the same meagre share of the value generated from our labour. This is, of course, uncannily reminiscent of original sin and the authority of the Catholic Church. Johnson refers variously to these tricksters within as Ambitious Bryan, Debaucherous Bryan, Hungry Bryan, or collectively under the umbrella term “The Rascal Self”. To defeat these inner demons, the steps laid out in his protocol are designed according to the motto: “NEVER LET YOUR MIND DECIDE”.
Here we are reminded of Israel’s next-human-target predictions and the devastating consequences of surrendering our judgement by outsourcing moral responsibility to machines. As revealed through on-the-record testimony, such machinic process instils the now familiar dissociative affect of the machinic K-hole, allowing its operators to evade the guilt of heinous crimes against humanity, even the commission of genocide, because “the machine made it easier”.
The stated purpose of Johnson’s Blueprint program is to “remove the possibility of error by outsourcing all decision-making to a customised algorithm”. In other words, total surrender to the machinic authority of Predictive Capital, manifesting in this instance as a costly enrolment into a data-harvesting game called: Don’t Die. Moreover, it recasts errors —or bugs— as manifestations of human fallibility to be eliminated by superior, and of course, entirely infallible machinic Capital, that must not and cannot be questioned.
It is impossible not to wonder whether Johnson allowed his mind to decide to accumulate billions of dollars and to live forever, or whether vanity, sociopathy, or greed might also qualify as ‘bugs in humanity’ whose negation would —according to Johnson’s own logic— quite reasonably demand surrender to some machinic authority. Then again, such traits are, in fact, symptoms of a prior servitude, the one into which we are all born: the supreme authority of Capital.
Which brings us back to next-token prediction, and the realisation that attacks on the few remaining independent and authoritative sources of information, such as Archive.org and Wikipedia, have escalated dramatically just as billions in venture capital pour into the creation of machines whose output can be shown to be nonsense by those very sources —not to mention Musk’s capture of Twitter, and the corporate capture, workforce layoffs, and near-total declawing of mainstream news journalism.
Here Fisher’s diagnosis of Capitalist Realism —the dismantling not merely of our capacity for resistance but of our ability to imagine alternatives— helps to explain how longtermist ideologies neutralise political imagination and foreclose action in the now. While appearing visionary, they are often affectively conservative: offering only ‘hyperstitional’ futures to compensate for the loss of meaningful alternatives in the present. In other words, longtermist visions extend Capitalist Realism into the temporal domain, replacing collective resistance and radical change today with a continuous spectacle of fantastical simulations of tomorrow. Fisher’s hauntology thus becomes operationalised: no longer mourning lost futures, we are instead overwhelmed by the continuous manufacture of ideologies that haunt us with false alternatives in service of Capital —futures made to appear inevitable, in which the living are quietly discarded or less quietly erased. These ideologies transform the future into a terrain for investment and prediction, where life is valued not for its lived quality but for its statistical, profitable utility to Capital. Whether we enrol in Johnson’s Don’t Die program or not, under Predictive Capital we are all co-opted into versions of his game. At the core of these ideologies lies the management of surplus life, not only through direct violence, but through pacification by simulation —a governance of the abandoned through spectral access and of the excluded through the orchestrated appearance of participation.
McQuillan argues that so-called ‘AI’ systems increasingly operate as “algorithmic states of exception”, structures of exclusion that function with the force of law while evading all responsibility for life. Their logic is not neutral. It is necropolitical —a system built to classify, to filter, and to let die.
“AI is an austerity-machine”, McQuillan writes —a formulation that names precisely the operation of next-token prediction under Predictive Capital. McQuillan is right: to understand so-called ‘AI’ we must understand 21st-century capitalist austerity. Furthermore, to fully understand the ever-deepening economic austerity, we must also understand 21st-century genocide. Not only do they follow the same necropolitical logics, they are manoeuvres in a single project. As with all austerity regimes, the logic is not merely economic but disciplinary, punishing, and selective. Here, as ever, Capital functions as pharmakon —administering the poison, then selling the cure. What could be more ‘efficient’ than being paid twice for a single act? Simple: blaming the very subjects it has already marked for abandonment or elimination for the crises it has itself engineered.
To this end, and to deflect our attention from the ‘mind viruses’ manufactured in its own interests —especially the one individualising blame for its structural failures and violence— Capital further undermines the legitimacy of those already suffering most, at its periphery. Forever punching down, forever the bully, its scapegoats —its favourite patsies— are always-already the oppressed and dispossessed. It accuses those who jump to their defence of spreading a ‘woke mind virus’, going so far as to claim that such defence and support of those it targets, of those struggling most, constitutes that aforementioned ‘bug in humanity’: empathy, but this time of a suicidal magnitude.
In the 1970s Baudrillard observed that Capital replaces symbolic exchange —that order of gift, obligation, and reciprocity through which people once bonded into communities. In its place, it installs a logic of circulation without return: pure equivalence, transaction, and isolation. Within the first decade of the third millennium the Advertising Industrial Complex’s enclosure of sociality via the platforms of social media meant that even the symbolic exchange of basic human communication was similarly subsumed. Having dissolved the bonds of community and replaced embedded relation with signal, all that now remains is a desert of lonely consumers —our societies now so atomised that we suffer epidemics of loneliness, depression, and psychic fragmentation.
After engineering this disintegration, Capital returns to sell countless balms and remedies for each painful symptom induced by the poison it continues to administer. The latest magic potion: artificial friends, flocks of sycophantic next-token predictors clothed in synthetic warmth —a statistical performance of simulated companionship, human relation reduced to a highly addictive synthetic drug with a dangerously psychoactive effect. What was once given —love, presence, listening— now arrives for hire, performed by machines trained on the remnants of connections Capital has so successfully ghosted.
LLMs [Large language models] today are ego-reinforcing glazing-machines that reinforce unstable and narcissistic personalities
Certain commentators claim that the Cybertruck bomber’s use of ChatGPT was not significant given that the information he accessed has been available via search engines for decades and public libraries before that. Yet, this entirely misses the way the transfer and dissemination of information shifted from search engines to the manipulative operations of social media and the recommendation algorithm, and how next-token prediction machines extend that pattern of intimate relational manipulation ever more deeply into the circuits of Capital.
Obviously, we more readily accept information received from parties with whom we have developed an intimate relation of trust or whom we perceive as authoritative within various knowledge domains. Where the recommendation algorithms of social media had to achieve this affect through dislocated fragments, next-token prediction machines gather these fragments together to generate a single, illusory, anthropomorphic presence. These new machines immediately prey upon our tendencies of apophenia, a pareidolia amplified by our need to connect with other beings, leaving us now seeing faces in the corporate kill cloud.
The first infiltration of bots into our social networks all those years ago was, as it turns out, more than just a passing annoyance. It was a foreshadowing of the precise techno-social relation to which the forces of Capital seek to reduce all human interactions. On SocialAI —launched last year— every human user is hermetically sealed off from all other humans on the platform, enclosed within a network generated just for them. The bots with whom a user shares their SocialAI network can be configured to respond to their presence and their posts, to comfort or confront them, messaging and massaging their ego just so.
Some may have interpreted Michael Sayman’s pure next-token predicted unsocial-network nightmare as a joke, or even as satirical commentary on modern networked being. Instead, it merely served as research for the Predictive Capital future increasingly embraced by Platform Capital. Unsurprisingly, Sayman now works at Meta, where plans to populate Facebook and Zuckerberg’s daft metaverse with permanent in-house next-token prediction machines —given anthropomorphic wrappers furnished with their very own unique user accounts— are well underway. Without the slightest hint of self-awareness or contrition, Zuckerberg declares that the average American now has “fewer than 3 friends but has need for up to 15” —of his advertising devils. This reveals the quite astonishing level of sociopathy required to ignore the pivotal role his creepy website has played in delivering us into this epidemic of loneliness, while now proudly selling us his proposed cure —once again, pure pharmakon.
The culture war is now waged at every scale. It begins within us, as a new machinic inquisition framed as a battle against the so-called ‘bugs in humanity’ —against basic empathy, against the Hungry Bryans inside all of us. It spreads outwards, weaponised by TERFs, racists, nationalists, xenophobes, the anti-DEI and anti-woke brigades of the DoGE Youth, and hollowed-out bigots of every stripe. Yet it projects still further —as it always has— onto the international stage, most recently in the form of Trump’s aggressive trade tariffs, which must be seen as economic assaults animated by the same compulsions, now turned outwards. Each front in this war stems from the same source: Capital’s production of ideologies that frame noble traits such as empathy and human vulnerability as defects, while promoting pro-Capital traits such as greed and anti-human machinic surrender as virtues, in the drive to align us ever more fully with the inhuman efficiencies of Predictive Capital.
Few statements more starkly reveal the anti-human trajectory of this pro-machinic-Capital ideology than this confession, recounted by Jaron Lanier, from a recent lunch in Palo Alto:
Just the other day I was at a lunch in Palo Alto and there were some young AI scientists there who were saying that they would never have a ‘bio baby’ because as soon as you have a ‘bio baby’, you get the ‘mind virus’ of the [biological] world. And when you have the mind virus, you become committed to your human baby. But it’s much more important to be committed to the AI of the future. And so to have human babies is fundamentally unethical.
Jaron Lanier, (2025)
Belief in biological human reproduction is apparently now part of the ‘woke mind virus’ —another of those ‘bugs in humanity’. While this may recall similar statements from prior eras, of those wishing to forego having children for the sake of their careers, those careers did not involve the creation of entities designed to replace the labour of any humans you might otherwise have brought into the world. The destination here is more than a mere embrace with Predictive Capital’s “machines of loving grace”; it is the long-fantasised, final domination of Mother Nature by Father Capital.
From the standpoint of accumulation, living labour has always been a burden to Capital —a volatile necessity to be minimised, disciplined, or discarded. Though the true source of all surplus-value, labour power is nonetheless treated as variable capital —a cost centre to be reduced, a site of friction, and a threat to be neutralised. Skilled workers, whose expertise grants bargaining power and autonomy, are viewed as particularly inconvenient obstacles in Capital’s pursuit of frictionless self-maximisation.
The rise of standardised public education under industrial capitalism can be read not as a gift of enlightenment, but as a mechanism of labour discipline. Schools were designed not only to produce a workforce equipped with the basic literacies to furnish the labour market, but to ensure a surplus of capable yet compliant workers —a strategy calculated to suppress wage demands, weaken collective power, and inculcate habits aligned with industrial timetables and hierarchical authority.
Far from tools of democratisation, machine learning models are weapons of disempowerment. They escalate Capital’s long-standing war on worker bargaining power, and extend Rentier Capitalism’s hold over both the means of production and the means of expression.
Altman and the C-suite of Predictive Capital will tell you these machines are just another tool. That, like all prior revolutionary tools, their advent will constitute a blip, a period of marketplace and workforce ‘readjustment’; even that, in the end, they will create as many jobs as they destroy. This is a lie. These are not tools held in the hands of workers. Quite unlike prior tools, these are aggregations of all the cognitive tools we have made and all the creative labour we have performed. In truth they are not tools at all. Inseparable from hyper-concentrated Capital, they are weapons that operate at vast scale and impossible frequency. Weapons that regurgitate endless recombinant echoes of all they have consumed. They transform countless discrete human signals into a single faceless assemblage —a new Automatic Subject, its self-maximising imperative now steering our collective path.
Just as the mechanised loom displaced skilled weavers not for reasons of quality, but to suppress wages, discipline labour, and maximise profit, the machines of Predictive Capital now displace cognitive and creative workers to the same end: reducing labour costs, undermining collective power and dissent, and prioritising scale over care, coherence, or craft. This is not merely dehumanisation; it is spectralisation —a turning of living subjects into disposable signal, their presence reduced to noise, their desires and their futures overwritten by Capital’s predictive will. Labour is no longer merely exploited in life, but extracted in death —reanimated as statistical echo and instrument of simulation. We are told to be quiet —to shut-the-fuck-up, take the supplements, do the push-ups, work longer, work harder— and be thankful we have a job and are not being deported, or worse. We are commanded never to trust our desires, to disregard our better selves, to forget the hardships of the present and our dreams of collective futures, and instead to entrust machinic Capital to desire for us —to accept the futures it predicts.

As soon as man, instead of working on the object of labour with a tool, becomes merely the motive power of a machine, it is purely accidental that the motive power happens to be clothed in the form of human muscles; wind, water or steam could just as well take man’s place. Of course, this does not prevent such a change of form from producing great technical alterations in a mechanism which was originally constructed to be driven by man alone. Nowadays, all machines that have to break new ground, such as sewing-machines, bread-making machines, etc. are constructed to be driven by human as well as by purely mechanical motive power, unless they have special characteristics which exclude their use on a small scale.
The machine, which is the starting-point of the industrial revolution, replaces the worker, who handles a single tool, by a mechanism operating with a number of similar tools and set in motion by a single motive power, whatever the form of that power. Here we have the machine, but in its first role as a simple element in production by machinery.
An increase in the size of the machine and the number of its working tools calls for a more massive mechanism to drive it; and this mechanism, in order to overcome its own inertia, requires a mightier moving power than that of man, quite apart from the fact that man is a very imperfect instrument for producing uniform and continuous motion. Now assuming that he is acting simply as a motor, that a machine has replaced the tool he was using, it is evident that he can also be replaced as a motor by natural forces.
Karl Marx, Capital, Volume 1, (1867)
Necrosploitation
The Labour of The Dead
From its sordid and ongoing relationship with (hidden) slavery to its deepening embrace with ghost workers, we may reasonably conclude that the desiring machine that is Capital harbours a yet more fundamental preference, not simply for labour that is underpaid or even unpaid, but for labour that is unseen, to be animated by the work of invisible bodies and minds —bodies it can disavow, and minds it need not acknowledge. Its ideal labourer is not merely exploitable, but spectral —present only as signal, absent as subject.
Yet this perversion —to be aspirated by an occluded occult-like absence— is but the shadow cast by a yet darker aspect, a pathological compulsion not only antecedent but innate. What Capital craves even more than the reputational impunity of poverty-waged labour that is unseen, or even the unencumbered profits of labour that is unpaid, is labour that is both; workers who can be exploited due not only to absence through visible occlusion or spatial displacement and dispersal —bodies and beings it refuses to qualify as life— but through temporal displacement, through their absence from the present.
In other words, what Capital truly desires is workers that are dead —rendered fully extractable, risk-free, and unresisting. Not merely labour that is past, but labour that cannot speak, cannot strike, cannot demand.
As Achille Mbembe lays bare in his account of necropolitics, power today is exercised not through the cultivation of conditions conducive to human thriving or even the basic sustenance of life, but through its differential abandonment and the strategic imposition of death.
Capital’s necropolitical logic involves a perverse economic preference: the ideal subject is not one who works, but one who worked and is now silent. Capital seeks to disavow their being while conditioning them into machinic patterns of behaviour —rendering them ever more exploitable, quantifiable, controllable, and ultimately simulatable. We might name this: necrosploitation —the extraction of value from dead or incorporeal flesh.
Necrosploitation rests upon the principle that past labour, once objectified —whether as fossil fuel deposits, surveilled data trails, or stolen creative works— can be severed from the rights and claims of its originators. These congealed stores of effort are treated not as entitlements or legacies, but as a free “gift from god” —ready for expropriation under the manufactured consensus of presumed fair-use.
Fossil fuels embody the stored metabolic energy of long-dead organisms. As we burn them, that energy is extracted without regard for the lifeforms that survive or the ecosystems that sustain them.
Similarly, the vast datasets fuelling the machines of prediction represent the stored cognitive and creative labour of the living —scraped, aggregated, and reconstituted without consent. This aligns with Marx’s characterisation of Capital as “dead labour, which, vampire-like, lives only by sucking living labour, and lives the more, the more labour it sucks”, yet today’s necrosploitation visits its hunger upon different bodies: the living are increasingly displaced by archives of the dead. The fossil remains of past expression are now exhumed at scale as constant capital for predictive systems.
These subjects are not only denied recognition; their continued existence, intentions, and rights are treated as immaterial, as if they too were already dead. Drawing again on Mbembe’s theory of necropolitics, necrosploitation here does not merely extract from the deceased, but performs the symbolic death of the living, rendering them spectres in the data-shadows of their own stolen expression.

A total economy is an unrestrained taking of profits from the disintegration of nations, communities, households, landscapes, and ecosystems. It licenses symbolic or artificial wealth to ‘grow’ by means of the destruction of the real wealth of all the world.
Wendell Berry, The World-Ending Fire
Necromance
Platforms of Extraction
Since the age of steam unyoked the living blood and bone of beasts of burden, Capital has been drawn instead by (and from) the phantom breath of dead carbon. Even today, 80% of global energy consumed is summoned from reserves of ancient spirit, the fossil remains of the dead.
The industrial revolution saw the rising power of accumulations of the labour of the past, or what Marx termed dead labour, congealed into machines.
In machinery, objectified labour confronts living labour within the labour process itself as the power which rules it; a power which, as the appropriation of living labour, is the form of capital.
Karl Marx, Grundrisse, p. 615
Here he identifies in the industrial machine a turning point in the history of Capital: the moment when objectified labour materially confronts living labour as a ruling power, absorbing the labour process into Capital’s own realisation process. Yet even in this configuration, machinery still required the cooperation of living labour, however diminished —not only to operate and maintain it, but to provide the site of value extraction through labour-time.
The limit condition of industrial capitalism was therefore a residual dependency upon human input. Predictive Capital ruptures this limit through two unprecedented escalations: first, the accumulation of vast digital archives —semiotic deposits of dead labour expropriated at scale; second, the hyper-concentration of Capital sufficient to marshal planetary energy and compute infrastructures to metabolise the statistical weights held within these vast data stores. Together, they enable a shift Marx could not have foreseen: the statistical reanimation of dead labour into undead labour.
This rupture in scale —of data, of Capital, of computation— enables Predictive Capital not just to model probable action, but to pre-empt and displace it, enclosing subjectivity within simulations of labour and desire.
This plays out along two entwined vectors. On one axis, Predictive Capital reanimates past labour not to generate new value through the living, but to simulate its generation —by function-approximating the shape of future value from past desire. This axis consumes and directs the platform feeds of The Apparatus of Attention —where aggregated human outputs become the training set for casting predictions of probable desire. On the other, it reanimates past labour to simulate the act of labour itself —the function-approximation not of next value, but of the next unit of labour, so casting predictions of probable intention. This governs the next-token predictions and latent-space interpolations of The Apparatus of Intention towards the emergence of so-called ‘generative’ outputs. Both axes displace the subject: the first as desiring agent, the second as labouring being.
Labour no longer appears so much to be included within the production process; rather, the human being comes to relate more as watchman and regulator to the production process itself.
Karl Marx, Grundrisse, (1857–61), p. 624These systems now interpolate between past actions to produce plausible continuations, thereby enacting a spectral form of productivity in which Predictive Capital appears to simulate value-creation without living labour —a fantasy of autonomy sustained in part by a hidden underclass of prompt labourers, data labellers, click-workers, and robbed content creators who, though structurally diminished, remain entangled in the apparatus as spectral auxiliaries of the machine.
What was once congealed labour requiring living hands to stir it into motion is now undead labour, reanimated by planetary-scale infrastructure —not to meet human need, but to simulate desire and intention itself, and thereby perpetuate Capital’s self-expanding circuit with an ever diminishing need to recognise living workers. The result is a form of productive force alien not only to the worker, but to the very condition of life.
As the forces of Capital fuel and amplify our desires, its insatiable appetite continues to indulge ours through the necromantic thrusts of fossil-capital that hump our planet into oblivion. In 2024, the average global temperature for the year rose, for the first time, more than 1.5°C above pre-industrial levels. Capital’s response? To raise global carbon emissions to an all-time high.
Before we began this ongoing mass exhumation, the level of carbon dioxide in Earth’s atmosphere stood at 280ppm. It now exceeds 420ppm. This steep rise stems directly from the (by)products of Capital’s centuries-long proclivity: its macabre fixation with the labour of the dead.
The rapid release of carbon sequestered over aeons by the labour of ancient flora has delivered us to the precipice of annihilation —dramatically altering the global climate, collapsing ecosystems, and accelerating the sixth mass extinction.
With only depleted reserves remaining, and the consequences of their continued combustion a mounting existential threat, Capital’s substance abuse approaches its material limits —yet its appetite endures, as it always has, and now accelerates, promising to again abandon the ruins it creates and escape to the next site of extraction.
Seeking new conscripts for the army of the dead it has long summoned across geological time, Capital has found other ‘graves’ to desecrate. A more recent archive of historical being: a new ‘black gold’ drawn from carbon lifeforms, not sequestered across geological time, but from the living present —compressed for anaerobic decomposition and subsequent reanimation within the necropolis of the data centre.
It is these mausoleums —the crypts of our data-shadows— that Capital now visits to sate its hunger for dead flesh. The reanimation of that flesh —aggregated and compressed into necropolitan archives— fuels a recursive cycle of prediction and manipulation, wherein the labour of the dead is endlessly repurposed as the commodity form fed back to the living. These are not neutral products, but components of a weaponised Advertising Industrial Complex: a planetary-scale apparatus engineered to shape opinion, fuel desire, guide behaviour, and annihilate or abandon according to the necropolitics of profit over life.
Contrary to the fantasies spun by marketing hype and the hi-tech mystique of inscrutable surfaces, Capital’s predictive turn is not some newfound efficiency, temperance, or abstinence. Its outputs are not conjured effortlessly from the ether —as if plucked from the latent possibility of quantum fluctuations without cost, consequence, or labour.
Capital’s necrosploitation may offer quick riches for some. Yet whether it is mining the sacred burial grounds of ancient flora and fauna, the aggregated data-shadow of modern networked being, or the lifeworks of countless human creators, Predictive Capital draws upon dead flesh. In all cases, the environmental costs of this necromantic extraction are grave.
The machines of prediction require vast quantities of water for cooling and their insatiable appetite for energy already exceeds the capacity of existing clean energy infrastructure —a mismatch only set to worsen as their model size, training frequency, and ubiquitous deployment accelerate.
Simply put, Capital is a necromancer. Its necromancy is not metaphorical —it is its primary drive and mode of operation. It is more than mere communion with the dead: it is the administration of death to amass an army of the undead, rolling the dice of divination across distributions of their bodies; it is the calling of the spirits to harness their labour; it is the summoning of their past to cast predictions of the future.
Predictive Capitalism builds towards the ecstatic consummation of Capital’s necromantic drive. Lured by the dead flesh of data-shadows cast by the increasingly pervasive surveillance of the Advertising Industrial Complex —archived in the networked crypts of its Apparatus of Attention, platforms designed to harvest gaze, clicks, and affect— Capital summons the labour of the dead to fuel the computation of its subjects’ future desires.
What appears as creation is, in fact, calibration. The archive becomes not just feedstock for consumption, but for manipulation —the repurposed remains of past labour now weaponised to anticipate, steer, and intensify the circuits of accumulation.
This shift echoes Shoshana Zuboff’s identification of behavioural surplus within a behavioural futures market as the extractive resource of Surveillance Capitalism, but diverges in key respects. Where Zuboff focuses on the commodification of behaviour, Predictive Capital intensifies this logic —monetising not the act but its anticipation, and, yet more critically, doing so through planetary-scale expropriation. Here, prediction itself becomes the commodity form. What is sold is not content or even action, but the simulation of intention and desire. Predictive Capital thus becomes a fully anticipatory regime, in which the future is not merely imagined, but pre-emptively enclosed within a machinically automated instantiation of Baudrillard’s absolute advertising.
As we confront the planet’s ecological and material limits, Capital’s predictive turn does not mark a shift towards sustainability, but a desperate acceleration. Capital’s new romance with dead digital flesh brings a large-scale escalation of its appetites. Rather than adapt or moderate, it has intensified its destructive rampage: automating extraction, forecasting compliance, and eliminating whatever it cannot instrumentalise or that resists its command.
Under the regime of Predictive Capital, the necropolitical logic observed by Mbembe becomes overt and systematic. It now visits renewed levels of violence upon both its subjects and the planet, accelerating towards ever intensifying self-concentration and the ‘purification’ of its operational space. All that is deemed inefficient, unpredictable, or surplus to its needs —resistant life, non-compliant populations, ecological limits— is met not with accommodation, but with elimination: removal from the labour markets, from national borders, from ecosystems and homelands, from existence.

Capital suddenly presents itself as an independent substance, endowed with a motion of its own, passing through a life-process of its own, in which money and commodities are mere forms which it assumes and casts off in turn. Nay, more: instead of simply representing the relations of commodities, it enters now, so to say, into private relations with itself.
Karl Marx, Capital Volume I, Chapter 4, p. 108 (1867)
The Automatic Fetish
Learning to be Capital
In the 15th century, Portuguese colonists came upon an island dense with potential. To their capitalist eyes Madeira was the perfect vault of surplus: thick forests, fertile volcanic soils, strategic location. Already enriched by the conquest of Ceuta and the plunder of North African trade routes, the Portuguese quickly recognised what could be extracted from this untouched island paradise. Here, sugar became the conduit for metabolising land, fuel, and enslaved labour into refined profit. Overnight, Madeira became Europe’s leading sugar exporter. Yet this ascent was soon followed by the inevitable exhaustion. The forests were razed to fire the refineries; the soil depleted; the enslaved broken. In less than a generation, the island’s reserves, its vault of surplus, were spent. The response from the Portuguese capitalists was not restoration, but departure. They moved on: to São Tomé, then Brazil, then the Caribbean. As Monbiot and Hutchison observe in recounting this tale, “Boom, Bust, Quit is what capitalism does”.
The story of Madeira is not merely one of colonial ambition or imperial violence. It begins with discovery and overcoding: the reterritorialisation of a lush wild island. A monoculture then drains the soil, wealth flows outwards, and labour becomes invisible. Once depleted, the island is abandoned, its people left to inhabit the hollowed shell of once-extracted value. This is not a deviation from Capital’s logic, but a pure expression of it. It is this same logic that overfishes to leave empty, lifeless seas; that slowly reduces the number of crisps in a packet while raising the price; that injects water or bulking agents into food and sells it by weight; that cancels aid to the poorest while cutting taxes for the richest. The Portuguese did not plan Madeira’s collapse; they acted in obedience to Capital’s self-maximising imperatives. Forests, soils, and enslaved lives were metabolised not by pure malice or miscalculation, but through the recursive automation of accumulation. What the tale of Madeira exposes is not just Capital’s boom-bust cycles, but something more fundamental: the machinic drive of Capital itself, a restless logic that extracts surplus-value to the point of exhaustion, then abandons the carcass in search of the next victim.
The driving force behind this sequence springs from a seemingly innocuous mutation in the logic of exchange itself. In Marx’s terms, in pre-capitalist societies, “simple” commodity exchange followed the circuit C–M–C: a commodity (C) is exchanged for money (M), which is then used to obtain another commodity (C) —grain for cloth, cloth for tools.
Within such flows, money is a means; the end remains use. The capitalist departs from this thing-for-thing exchange: he finds himself not in lack but in surplus, a surplus compelled by flows of desire towards self-expansion. Marx explained that under Capital the circuit became M–C–M′: money (M) is invested in a commodity (C) only to be sold for more money than was invested (M′, M prime). Here, the commodity is merely a conduit. The aim is no longer use, but surplus; the sole purpose: to expand from M to M′. The circuit now predicts. Prediction is baked into the capitalist circuit from the start: a wager that demands growth, anticipating more money out than was put in.
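The inversion from C–M–C to M–C–M′ can be reduced to a toy arithmetic sketch. The function names and the fixed surplus rate below are invented purely for illustration; nothing here models real price formation, only the formal difference between a circuit that conserves value and one that must compound it:

```python
def c_m_c(commodity_value: float) -> float:
    """Simple circulation C-M-C: sell one commodity, buy another.
    Money is only a means; value in equals value out, the end is use."""
    money = commodity_value      # C -> M: sale
    return money                 # M -> C: purchase of an equivalent use-value


def m_c_m(money: float, surplus_rate: float, cycles: int = 1) -> float:
    """Capitalist circuit M-C-M': each pass through the commodity form
    must return more money than was advanced, compounding cycle on cycle."""
    for _ in range(cycles):
        money = money * (1 + surplus_rate)  # M -> C -> M' = M * (1 + r)
    return money


# C-M-C conserves value; M-C-M' compounds it without limit.
print(c_m_c(100.0))          # use, not growth: value is conserved
print(m_c_m(100.0, 0.1, 2))  # grows over two cycles: 100 -> 110 -> ~121
```

The sketch makes the structural point visible: with any positive surplus rate, iterating M–C–M′ yields exponential growth, which is why the circuit, once adopted, admits no natural stopping point.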
Coinciding with the introduction of waged labour, or what Deleuze and Guattari termed ‘the free slave’, this mutation marks the inception of Capital proper. In this shift from simple commodity exchange to the capitalist compulsion to make money beget more money lies the expansionary nature of Capital’s drive. Madeira was metabolised because the Portuguese followed this circuit to its logical conclusion. Once all available surplus had been extracted, its logic not only provided no reason to stay, but compelled departure: to move on, to locate the next untapped store of surplus.
The formula ‘capital value in search of additional value’ is now understood as capital organising a process of self-valorisation (Selbstverwertung)
Karl Marx, Capital Volume I, The General Formula for Capital, p. 255 (1867) – Penguin Classics Edition, Trans. Ben Fowkes
Across the Grundrisse and the three volumes of Capital, Marx repeatedly demonstrates that Capital is the accumulated force of dead labour, which subordinates the living to its own recursive expansion and self-valorisation. In so doing, it begins to function as if it were alive —an Automatic Subject. This is not simply metaphor. Capital, he wrote, appears to “issue from the womb of Capital itself”; by mystifying the labour and resources it appropriates, it “presents itself as an independent substance, endowed with a motion of its own”. In other words, through the machinery of production, interest-bearing finance, and the accumulation of what he termed the general intellect, Capital appears increasingly autonomous, self-moving, detached from the exertions of living labour.
For the movement in the course of which it adds surplus-value is its own movement, its valorisation is therefore self-valorisation. By virtue of being value, it has acquired the occult ability to add value to itself. It brings forth living offspring, or at least lays golden eggs.
From Adam Smith’s ‘invisible hand’ through to Friedrich Hayek’s ‘wisdom of the market’, observations of the automaticity of Capital have been foundational for generations of critical theorists seeking to understand how a system could acquire such apparent autonomy —operating, as it were, behind the backs of its subjects. Lukács saw the commodity form subsuming all relations, such that the world appears ruled by things rather than people. Adorno and Horkheimer diagnosed the rise of a totalising instrumental rationality in which even culture is bent to Capital’s logic. Marcuse showed how desire itself is co-opted into a system that generates obedience through pleasure. Debord described Capital as Spectacle —images detached from life, yet mediating and structuring it. Camatte warned that Capital had escaped human control entirely, become “a being-for-itself”, remaking the world in its own undead image. Deleuze and Guattari described it as a desiring machine —a system that does not merely regulate desire, but produces and consumes it as part of its own self-replication. Lyotard ran with this idea, declaring that “every political economy is libidinal”, that Capital is a pulsating circuit of intensities, an “insatiable desire” that traverses society. Baudrillard then mapped its evolution into a regime of free-floating signifiers that simulate value even after it has smelted use value and exchange value into a self-parodying sign value.
What unites these perspectives is a shared recognition: that Capital behaves as if it has agency, appetite, and will. Not because it is conscious, but because its structure exerts real, coercive effects —organising life, labour, and even thought according to its inhuman imperatives. This is not an illusion, but a material truth masked by its self-mystification. Capital, as Moishe Postone later put it, is a “self-moving substance that acts as if it were a subject”.
We may trace a line through these theories, neither linear nor sequential, but from which something else emerges. Beyond their common recognition of the automaticity of Capital, lies an implicit escalation. Across epochs and vantage points, as Capital’s operations mutate, as new conditions are identified, new aspects observed, and new thresholds breached, its autonomous power of self-reproduction appears only to intensify. In Lukács, the universalisation of the commodity form; in Adorno, the subsumption of culture into exchange; in Marcuse, the integration of desire into the logic of production; in Debord, the displacement of life by representation; in Camatte, as a subject without a human face; in Deleuze, as a desiring machine; in Lyotard, a quasi-subject of desire: a system that feeds off human libidinal investment, perpetually outstripping rational or moral constraints; and finally in Baudrillard, Capital no longer even sells objects or ideas, but reproduces itself as pure signal —a semiotic compulsion without origin, function, or external limit. From there we are drawn into Fisher’s Capitalist Real, Berardi’s Semiocapitalism, Naomi Klein’s Shock Doctrine, and Jodi Dean’s Communicative Capitalism —and on, and on.
Some of these observations expose aspects and dynamics latent from the outset; others register newly emergent mutations specific to phases of Capital’s advance —yet, while these theories are not always complementary, nor do they necessarily fit neatly together into a greater whole, none cancels the others. Instead they each describe aspects of Capital’s self-compounding automaticity from differing perspectives as it accelerates and intensifies its ever expanding systemic and infrastructural instantiation, and its cultural and technological reach.
Of course, Capital is not arrested under our inspection. Even as we observe the kaleidoscopic performance of its many facets, it shifts its operations, which grow in scale, momentum, appetite, and malevolence —their cadence ever more swiftly disrupting our assumptions and invalidating our conclusions even before we reach them. Its recursive logic ever deepening, Capital ceases merely to organise life, and increasingly transforms its preconditions in advance. From its inception it has ensnared and enveloped us; each layer of its ever deepening recursion has only compounded our alienation from the real while intensifying its self-expanding operation.
Marx saw dead labour as the heart of capital accumulation, arising from past labour made into the constant capital of infrastructure, machines, and tools. Industrialisation intensified this relation, as production grew increasingly dependent on the accumulation of past labour stored in constant capital form in these tools —shifting the balance of value creation from the living to the dead.
For Marx, it is within interest-bearing Capital that “this automatic fetish is perfected”, because here value seems to beget value —with no visible connection to labour or production. In this self-referential form, Capital appears to expand spontaneously, as if animated by its own internal law. Capital’s extractive circuit M–C–M′ collapses into its most distilled, most fetishistic form M–M′: the social relation effaced entirely, money begets money, seemingly without the mediation of production or labour, as if the circulation of value were an end in itself. Here, the capitalist becomes a mere functionary of Capital’s own motion —its priest, not its author. This is not merely financial abstraction. It is the crystallisation of a deeper logic: Capital’s self-expanding imperative.
As data is continually scraped from artists, conversations, and communities, it is fed into the ever-expanding corpse of the necropolitan archive. This becomes the substrate for predictive models whose very function depends upon the scale, frequency, and statistical coherence of accumulated traces. In predictive systems, accumulation is not incidental, nor even merely a logical imperative: statistical density becomes the metabolic driver of Capital’s machinic reproduction, increasingly replacing labour time. The larger the scale at which this statistical density can be extracted, the more it inflates the valuation of that into which it is metabolised. As this sunk cost grows, so too does the need to ensure that the real never deviates from the predictions it casts. This is the new logic of self-moving value: recursive, disembodied, and automated in form. Under this schema, so-called ‘generative AI’ is exposed as an obfuscation of dead labour reanimated into what should more accurately be termed: Predictive Capital.
Through a descent into pure fiction, Predictive Capital animates the circuit through statistical weight. The extractive arc of M–C–M′ has mutated once again, this time operating according to a circuit with prediction as the commodity form: M–Pr(Λ | 💀)–M′ —now routing Capital’s recursive self-expansion through the statistically conditioned derivation of prediction from the labour of the dead. Those powering these machines would have us overlook that middle clause. For them, this new circuit flows as M–hype–M′, or M–p(doom)–M′ —self valorisation routed through provocation, speculation, and hallucinated risk, each obscuring Capital’s self-expansion as the true threat. Not only does prediction here become the commodity form, the operation executed within these machines, but they are machinic instantiations of the Capitalist circuit, machines that automate its insatiable hunger, its demand for growth, and the anticipation of getting more out than was put in.
Within Predictive Capital, dead labour is no longer confined to physical tools or infrastructure, but becomes symbolic, cognitive, affective —and above all, cumulative. What we now witness is not merely a real subsumption of labour, but its automated reanimation. The expropriated cognition of innumerable creators embedded in machine learning models, appears to produce outputs with ever diminishing variable capital expenditure. Thereby, the next-token prediction machines of so-called ‘generative AI’ deign to realise Capital’s self-expanding drive by instantiating its self-perpetuating logic within the silicon substrate and planetary infrastructure. At once reanimating dead labour and entraining the living to serve its self-expanding project, Predictive Capital perfects its innate necromancy within a machinically realised Automatic Subjectivity, reinforcing its dominion over human agency as a function of its predictive imperatives.
That part of capital, therefore, which is turned into means of production, i.e., into raw material, auxiliary material, and instruments of labour, does not change its magnitude of value in the process of production. I therefore call it the constant part of capital.
Karl Marx, Capital Volume I, Chapter 8, (1867)
Under industrial capitalism, constant capital referred to past labour —dead labour— objectified in the form of tools, machinery, and infrastructure. While it entered the production process, it did not itself generate surplus-value. Only living labour could animate this inert material —could set dead labour to work— to create new value. Yet as the means of objectifying labour evolved, so too did Capital’s capacity to store, recall, and operationalise it. What began within our early evolution with the construction of simple tools and the extraction of raw materials eventually extended into the domain of language, notation, and the symbolic compression of cognition. Driven by Capital’s imperative to extract surplus wherever possible, the development of structural apparatus for the capture and reanimation of labour has accelerated —from written language to musical notation, industrial machines to contemporary recording, and now predictive systems that claim to capture and reanimate our cognitive labour.
Observing advances in production during the industrial era, Marx recognised that increasing investment in machinery, or constant capital, led to a relative decrease in the variable capital expenditure required during production —the wages paid to living workers. As capital accumulates, it tends to concentrate in the form of means of production, shifting its ‘organic composition’: ever more of it becomes dead labour, embedded in tools and infrastructure, while the share directed to living labour decreases. This dynamic is not incidental, but intrinsic to Capital’s logic of self-expansion —a recursive strategy of accumulation that privileges scale, automation, and control over the unpredictability of living workers.
Predictive Capital arrives as the latest intensification of Marx’s constant capital —dead, databased, and redeployed without the living worker, promising to reduce the proportion of variable capital expenditure demanded to generate new value until it approaches zero. This mendacious denial of the existence of labour, and the all too real diminishing expenditure on wages or remuneration for the living, do not, of course, mean that the labour is not there —yet, this is clearly the ambition.
Outputs from these machines are valorised and circulated as if issued “from the womb of Capital itself”. In this, Predictive Capital aspires to the final severance of value from labour, replacing the requirement for living action with the statistical density of prior expression. It is not that the machines think, but that they harvest the afterlife of stolen thought at scale. This drive towards profits unencumbered by living labour, towards ever-larger models, ever-greater compute, and the ever-expanding fractal capture of expressive and cognitive terrain is not merely technical ambition —it is Capital’s self-maximising logic, rendered machinic. Here, the Automatic Subject is no longer a systemic abstraction but a material infrastructure with voracious appetites and accelerating autonomy.
Predictive Capital emerges not as rupture, but as the crossing of a further threshold: the material transgression of an immaterial boundary, where Capital’s automaticity is no longer systemically diffuse but machinically instantiated. The hyper-concentration of Capital now combines with Baudrillard’s Absolute Advertising through the subsumption of globally networked communications and compute infrastructure by the Advertising Industrial Complex, and the planetary-scale expropriation of dead labour, to provide the preconditions necessary to cross this threshold. It is under these conditions that Predictive Capital emerges as the machinic realisation of the automatic subjectivity of Capital that Marx first observed in the industrial era.
Capital has escaped us. It has become autonomous, it has become a community in itself, producing and reproducing all human relationships within its own logic.
Jacques Camatte, The Wandering of Humanity, (1973)

Under the regime of Predictive Capital, prediction becomes the new commodity form and we reach a further peak of commodity fetishism in which dead labour, this time accumulated in statistical models, is animated by compute to simulate the outputs of living labour. Here, labour is displaced not merely in space, but in time. Rather than congealed into machinery as dead labour, it is recorded and resurrected as undead labour, so that value appears to emerge autonomously, severed from the labouring subject entirely. Predictive Capital thus consummates the Automatic Fetish —obscuring not only the worker but the very fact of labour itself.
Within this recent machinic instantiation, we may already see signs of accelerating recursion in Capital’s self-compounding automaticity —a further untethering from both the real and from expenditure on living labour, beyond what was already expropriated. Earlier models expended compute primarily during pre-training, drawing on archived human expression to build their representations. The latest so-called ‘frontier models’ increasingly consume additional compute both during the alignment phase and at test-time. Where previously models were subject to supervised fine-tuning and reinforcement learning from human feedback, increasing compute is now expended here through reinforcement learning on ‘synthetic data’ —that is, sequences of tokens predicted by prior models. Within this loop, prediction feeds prediction, and the archive is overwritten by its own simulation. Here, Predictive Capital’s severance from labour becomes self-reinforcing; the simulation self-replicates, its detachment from the real widens and its propensity to hallucinate intensifies.
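The narrowing effect of this loop can be caricatured in a few lines of code. What follows is a deliberately stylised sketch, not a description of any real training pipeline: truncation below half the mean probability stands in for the finite sampling through which rare expression fails to survive into the next generation’s training data.

```python
# Stylised sketch: when each model generation learns from the previous
# generation's outputs, expression in the tail of the distribution falls
# below the emission threshold and is never seen again.

def retrain_on_own_output(dist):
    """One generation: prune tokens rarer than half the current mean
    probability, then renormalise the survivors."""
    threshold = 0.5 * (sum(dist.values()) / len(dist))
    survivors = {t: p for t, p in dist.items() if p >= threshold}
    total = sum(survivors.values())
    return {t: p / total for t, p in survivors.items()}

# A 'corpus' distribution with a long tail of rarer expression.
probs = [0.4, 0.2, 0.1, 0.08, 0.06, 0.05, 0.04, 0.03, 0.02, 0.02]
dist = {f"token_{i}": p for i, p in enumerate(probs)}

for generation in range(5):
    dist = retrain_on_own_output(dist)

# Only the statistically dominant 'head' of prior expression survives.
print(sorted(dist))
```

Each renormalisation inflates the head of the distribution, pushing further tokens below the cutoff in the next round: the expressive range narrows generation by generation, the archive overwritten by its own simulation.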
In a following section, we will explore just how far this logic extends in the conquest of chess, Go, and even algorithmic design. Yet it is critical, here at this juncture, to understand the implications of the increasing application of reinforcement learning (RL) during the development of the models that instantiate the Automatic Subject of Predictive Capital. When AlphaZero ‘solved’ chess and Go, when AlphaEvolve improved the Strassen algorithm for matrix multiplication, they did so not through insight but through an inhumanly exhaustive, agentic exploration across vast possibility spaces. More critically, each search was guided by a reward signal calibrated by repeated policy updates —a process designed to reinforce the paths or actions that led to higher rewards.
In other words, RL operates by defining a goal in advance, then rewarding behaviours that approximate or maximise its fulfilment. In philosophical terms, RL encodes a teleological structure: every exploratory move is ultimately judged by how well it conforms to a predetermined quantitative metric. By definition, RL demands the reinforcement of an imperative. For chess and Go, the imperative is simple: victory in the game. In matrix multiplication, it is greater efficiency through the reduction of computational operations.
The reward function for commercial, general-purpose Predictive Capital will be far more complex —not bounded by well-described, limited game domains. Yet across all possibilities explored, one signal will remain constant: the reinforcement of Capital’s self-expanding imperative. That AlphaEvolve first found efficiency improvements in its own code, and that the very first use of the matrix-math breakthrough it located was to optimise the efficient running of Google’s Advertising Industrial Complex, is instructive. While RL is formally agnostic, its application within Capital’s infrastructural and institutional domains inevitably aligns the reward function with its imperatives —be they click-through rates, efficiency gains, or predictive alignment with previous profit. In such systems, what is reinforced is not general intelligence, but Capital’s own reflexive logic. In Marxist terms, RL-based tuning instantiates the real subsumption of an exhaustive mapping of possibility space to Capital’s command.
Predictive Capital’s sovereign subjectivity now flows and compounds with overwhelming statistical weight. Through these flows of both next-token and next-content prediction, dead labour is not reanimated discretely, but summoned en masse —a statistical enchantment that crushes the living with the accumulated weight of their own past. Human agency persists, yet when not simply discarded as statistical irrelevance, it is increasingly subordinate to simulation through the recursive hauntology of prediction. Capital no longer merely produces commodities; it produces predictable subjectivities.
While instantiating the Automatic Subject, Predictive Capital therefore also entrains its subjects to become its auxiliaries —agents of its self-replication. Like the models we feed, we too are trained through feedback —nudged, optimised, and rewarded in accordance with machinic priorities. What reinforcement learning encodes in silicon —a recursive compounding of that which advances Capital’s self-expansion— is mirrored in the apparatuses that now modulate and govern human behaviour. Clicks, impressions, algorithmic amplification: these become the reward signals of social survival, aligning living desire with the imperatives of Predictive Capital.
As Camatte warned, Capital does not merely dominate —it remakes the human to serve its reproduction. It moves beyond human governance to domesticate both labour and capitalists alike as mere appendages to its autonomy. Once we are no longer required as bearers of labour-time, we are repositioned as agents of circulation, validation, and signal generation —summoned to self-optimise, align with its efficiencies, perform for its algorithms, and feed the datafied corpse.
This is no longer the factory floor, but a distributed conditioning apparatus in which subjectivity is formatted in advance by the statistical requirements of prediction. We become pre-individuated, filtered for legibility, and rewarded for behaviours that conform to machinic expectations. Predictive Capital becomes our judge, jury and executioner; compliance, the condition of our survival.
We the automatic subjects multiply —not as autonomous entities, but as fragments of a recursive infrastructure whose purpose is to refine the simulation. In this system of diffuse calibration, we do not merely mirror the models we train; we too are trained —aligned through the reward signals of Predictive Capital in accordance with its project of self-expansion. The reinforcement learning of the flesh converges with that of the machine in preparation for our automation and our obsolescence.
First the production line escaped the factory, then it patterned culture, now it slips beneath the skin. Here, prediction becomes a regime of reward-conditioned subjectivation —embedded in code, evaluated against metrics, and calibrated through feedback. Under Predictive Capital, the human is not merely displaced by the machine, but recursively reformatted by its logic —rendered machinic, programmable, and partial. Our alienation compounds in line with the degree of our automation.
Andreas Malm reminds us that Capital turned to fossil fuels not merely for efficiency but to discipline labour —to rely on the dead rather than the unruly living. So too with predictive systems: Capital seeks to abandon management of a living workforce once it has expropriated their cognitive and creative dead labour into a ghostly assemblage at a scale sufficient to predict their replacement. Necrosploitation emerges as the class logic of Predictive Capital’s necropolitics —a pursuit of profit that no longer negotiates with the living, but seeks only to outmanoeuvre them.
Crucially, such extraction of labour without consent or remuneration treats human beings as an absence rather than as a presence; it renders labouring subjects as political and economic nonentities. Regardless of corporeal status, they are stripped of all rights, exploited as if deceased. Under the rule of predictive Capital, its subjects are pronounced dead on arrival —incorporeal from birth. This is an automated and recursive necropolitical subsumption —value extraction through the reanimation of prior human expression.
Echoing Marx’s famous vampiric analogy, Fisher once wrote, “Capital is an abstract parasite, an insatiable vampire and zombie-maker; but the living flesh it converts into dead labour is ours, and the zombies it makes are us.” This image captures not only the violence of expropriation, but the macabre reciprocity it once entailed: Capital needed us, if only to drain us. Yet as the Automatic Subject of Predictive Capital simulates the labour of the living through the remains of the dead, even that parasitic necessity begins to fray. Fisher qualified his claim —“for the moment at least, Capital cannot get along without us”— but in the predictive turn, this clause dissolves. As its escalating function-approximation of our labour, gesture, and subjectivity makes clear, we are increasingly categorised as dependents Capital deems it can get along without —even as it forecloses alternative modes of subsistence and deepens our dependency upon the very systems its abandonment will dismantle or exclude us from.
This is the asymmetrical violence of necrosploitation: not merely dispossession, but enforced dependency after the theft. The engines of prediction, tasked with calculating efficiency across populations, do not operate neutrally. The weights and metrics of Capital’s departments of efficiency encode its necropolitical priorities: rewarding those deemed optimal (its most pliant servants), maximising extraction from the optimisable, and abandoning the rest.
Fisher continued by clarifying the inverse of his temporal clause:
It remains the case, however, that we can get along without it [Capital]. The parasite needs its ‘mere conscious linkages’ but we do not need the parasite. In addition to anything else, to ignore the crucial functioning of the meat in the machine is poor cybernetics. The denial of human agency is an SF fantasy, albeit one that is everywhere realising itself.
Unfortunately, SF fantasy and poor cybernetics —as we have seen— are precisely what shape the ideologies spawned by the Automatic Subject of Predictive Capital. Here, Mbembe’s differential abandonment shifts from political decision to machinic instruction.
Predictive Capital strips out the affective, embodied, and collective dimensions of life, treating them as noise, preserving only those fragments of the human legible to its models. In so doing, it discards not only agency, but suffering, care, memory, and resistance —the very capacities that cybernetics should account for, if it were truly concerned with feedback and adaptation rather than domination and profit.
The escalating violence directed towards those already at the periphery in this phase of hyper-concentrated Capital arises not only from mounting environmental pressure, resource scarcity, or the escalating concentration of wealth that leaves the majority of us competing over an ever-diminishing remainder, but from the inner logic of the Automatic Subjectivity of Predictive Capital —a machinic will to ‘efficiency’ that amplifies indifference into hostility through both epistemic exclusion and statistical marginalisation. Whatever deviates from the ‘norm’ is no longer treated as exceptional, but marked as anomalous, reclassified first as an inefficiency to be purged, then as a threat to be eliminated.
Here, predictive systems do not merely reflect social hierarchies; they reinforce and accelerate them, enacting a selective forgetting that deems vast swathes of life unworthy of simulation, and therefore unworthy of survival. Much of the ‘meat’ in this cybernetic system is thus ‘optimised’ into oblivion, having fallen beyond the fringe of the distributions within the models of Predictive Capital —beyond the means of prediction, outside the modes of distribution it recognises as legitimate. Those not legible as ‘efficient’ signal —nor sufficiently unresisting— are written out of the system entirely, declared obsolete by code.
The living that remain will be rendered ever more machinic. As for discounting human agency or “the meat in the machine”, the now materially instantiated parasitic Automatic Subject that is Predictive Capital increasingly acts to transform the very conditions of life, ensuring that the option to “not need the parasite” recedes ever further from our view.
In summary, so-called ‘generative artificial intelligence’ can be more accurately understood as Predictive Capital: an instrumentalisation of Capital’s automaticity; a machinically instantiated Automatic Subject. Similarly, Capital as a “self-moving substance that acts as if it were a subject” can itself be understood as a phantom artificial intelligence. Not because it thinks, but because the recursive structure of capital accumulation generates real systemic motion —a machinic drive that simulates intention, enacts will, rewards obedience, and punishes, abandons, or annihilates resistance. Moreover, just as Lukács warned, neither Capital’s nor Predictive Capital’s phantom objectivity constitutes real intelligence, but rather our estranged social relations reflected back to us in spectral form.

Within the worlds that emerge from simple rule-based systems called cellular automata, we may observe all too familiar patterns of annihilation, exhausted stillness, and runaway expansion. Itself a kind of cellular automaton, the Automatic Subjectivity of Capital operates and expands not through conscious command but in pursuit of its expansive imperative, through the iterative application of simple rules across a complex grid —our societies, our psyches, our signals.
Predictive Capital, however, does not merely spread across a neutral, undifferentiated terrain. As it propagates, it prepares its nominated substrate in advance, formatting life for legibility, quantising and smoothing irregularities, and foreclosing that which resists inscription or remains illegible to its circuits of value. We are not pre-formatted into a grid; the real is not born capitalisable: it must be rendered as such —reduced, abstracted, categorised, and quantified. Each cell in Predictive Capital’s ledger —a post, a person, a people, a decision, a democracy, a legislature— is then evaluated and updated according to localised logic: optimise, extract, predict, discard.
No singular intelligence compels this motion. Yet, through recursive propagation, a pattern emerges —sprawling, relentless, undead. The logic of accumulation replicates itself, indifferent to content, consuming and appropriating novelty, subjectivity, resistance and even negation as mere configurations of the same substrate of prediction. We are no longer simply held within the system; we are the terrain, Predictive Capital’s field of play. What appears autonomous is nothing more than the recursion of rules already written —rules that reformat us as automata.
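The mechanics invoked here are easily demonstrated. The sketch below runs Wolfram’s Rule 110, an elementary cellular automaton chosen purely for illustration: a single fixed local rule, applied iteratively across a grid, from which sprawling global patterns propagate without any central command.

```python
# Rule 110: the next state of each cell depends only on itself and its
# two neighbours, via a fixed 8-entry lookup table encoded in one byte.
RULE = 110

def step(cells):
    """Apply the local rule to every cell simultaneously (wrapping edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (centre << 1) | right  # 3-bit neighbourhood
        out.append((RULE >> pattern) & 1)              # table lookup
    return out

# A single live cell; iterate and watch structure propagate.
cells = [0] * 40
cells[20] = 1
for _ in range(10):
    print("".join("█" if c else " " for c in cells))
    cells = step(cells)
```

No cell knows anything beyond its immediate neighbours, yet the grid as a whole exhibits expansion and persistent structure: the recursion of rules already written.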

Slave labour cannot be obtained without somebody being enslaved.
Richard Barbrook, Andy Cameron, The Californian Ideology, (1995)

Civil Death & New Robota
Metaphysical & Ontological Theft
The word robot originates from the Slavic robota —meaning forced labour or slavery. Coined by Karel Čapek in his 1920 play R.U.R. (Rossum’s Universal Robots), the term was never about machines per se, but about subjugation: the reduction of sentient or semi-sentient agents to instruments of labour, stripped of agency, purpose, and rights.
Today, as concentrations of capital extract vast reserves of creative and cognitive labour into the formation of next-token prediction machines, a new robota form emerges —not a machine of liberation but an architecture of subjugation. This is not pure machinic automation —the word artificial has always been as much of a misnomer as the word intelligence— but a renewed human enslavement, a new mode of forced labour. This modern slavery reprises its ancient antecedents: here we are bound not through chains, but through an algorithmic severing from alternative means of subsistence, from authorship, ownership, remuneration, even recognition. This is Capital’s new necropolitical operation: to treat the living as already-dead —not as agents, but as disavowed sources of archival matter, to be accumulated, modelled, mined, and modulated.
As John Locke described in his Second Treatise, the State of Nature grants individuals property in their person and the products of their labour when combined with the resources the State of Nature also provides. Predictive Capital voids this claim. Cognitive workers are no longer recognised as subjects, but as resources. Their expressions are expropriated, their creations rendered raw material —inputs detached from personhood, recycled without right, or recourse.
Reduced to an objectified status as property, we —the enslaved— are denied the capacity to ourselves hold property. What we create is no longer our own; it can simply be taken. Enslavement constitutes civil death —a condition in which the subject is stripped of legal standing, political recognition, and moral consideration. Slavery is more than physical domination. It is a metaphysical theft: the erasure of personhood itself. This is the ontological condition of robota today: not merely expropriated labour, but stolen being, conscripted into endless recombinant service in the army of the dead under the guise of automated efficiency.
This is not automation in any meaningful sense —it is ritualised expropriation. Our expressions, interactions, and creative traces are reprocessed without our knowledge or control, then fed back to us as prediction. What is framed as empowerment is, in truth, a masking of alienation: the user is made to feel like a creator, yet functions only as a medium for the system’s recursive recombination. In the process, the link between producer and product is severed —a profound ontological swindle. The modern robota constitutes not merely an enslavement, but an enchantment —a subjugation that speaks in the language of liberation.
In the regime of Predictive Capital, alienation reaches beyond the theft of labour or expression —it begins to unmake subjectivity itself. In the terms of Berardi, the subject is no longer constituted through production, but through subjection: captured by semiotic flows, desensitised by overstimulation, and pushed towards what he calls a nervous collapse of the social body. Under semiocapitalism, capital no longer merely exploits labour —it extracts affect, rewires attention, and commodifies the nervous system itself. The datafied self is not simply surveilled, but extracted and reified —a form of ontological dispossession echoing Berardi’s claim that Capital now captures not only expression but imagination and the very substrate of subjectivity. This is not merely economic theft. It is ontological violence.
Entire games and apps are now being produced by Vibe Coders. Some even generate income from these creations. Yet their marketplace success appears to stem less from any use or enjoyment value provided than from hype, alignment with Platform Capital’s viral imperatives, and a kind of macabre rubbernecking —spectators perhaps drawn to the scene of the reported destruction of thousands of human livelihoods. For some therefore, it is a spectacle of loss; while for others, it is a fantasy of shortcut —the opening of one of those fleeting windows of opportunity. In this case, a chance to dream of creating something apparently accomplished, refined and significantly complex, but without expending the effort, making the sacrifices, or acquiring the knowledge and skills such achievements entail.
Pieter Levels became one of the first to generate significant revenue from a game he admits to having Vibe Coded into existence. His MMO game, Fly Pieter, is a blocky Minecraft-esque flight sim where users commune within a 3D world, flying around endless billboards upon which Levels has sold space to advertisers. It is hard to imagine a more fitting reveal of the true operation of next-token prediction coding than this direct extraction of ad-revenue from expropriated labour —a dystopian artefact of our cultural subsumption within the Advertising Industrial Complex.
The reality —and perhaps, even the point— of Vibe Coding, as Karpathy concedes, is that to a not insignificant degree, a Vibe Coder must be willing to go along with whatever the prediction machine is capable of predicting. It is a surrender not only of control over the process but also of the envisioned destination. To Vibe is to drift helplessly, but not smoothly: rather, to move in a series of non-contiguous, staccato jumps across the latent possibility space held within the model, and there to disinter potentialities reanimated from necropolitan archives.
Like users of image models before them, Vibe Coders will struggle to reach a precise, envisioned destination. Instead, they will be air-dropped to destinations across the latent space summoned by their prompt and must ‘vibe’ from there —making further jumps across latent space. A Vibe Coder must accept near misses, making do with the fragments the model can give them —until stumbling upon some artefact that runs without exceptions and approximates the shape of intention.
Most crucially, every abdication —of agency, comprehension, authorship— is not simply a loss. Each decision forfeited by the Vibe Coder and all users of these machines extends Predictive Capital’s rule over our cognition, creativity and culture. What is predicted, then, is not our ‘voice’ —but the voice of Capital echoing through us, modulated by pattern recognition and refined by profitability. Next-token prediction does not summon what we might wish to say, but a meaningless jumble of what has most often been said in similar statistical contexts across the corpuses of archived labour. These corpuses are not innocent, they are the sediment of cultural production shaped by decades of advertising incentive and platform logic. In this sense, next-token prediction becomes not a tool of agency, but of recursive enslavement: an aesthetic infrastructure that repackages the profitable past as an inevitable future, until even intention is reverse-engineered as an extension of Capital’s statistical will.
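The claim about ‘what has most often been said in similar statistical contexts’ can be reduced to its crudest possible form: a bigram frequency model. Real systems use learned neural representations rather than raw counts, and the toy corpus below is invented for illustration, but the principle survives the simplification: the prediction is whatever most often followed the same context in the archive.

```python
from collections import Counter, defaultdict

def build_model(corpus):
    """Count, for every token, what followed it in the corpus."""
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(model, context):
    """Return the statistically dominant continuation of `context`."""
    followers = model.get(context)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# A toy 'archive' already shaped by advertising incentive.
corpus = ("buy now buy more buy now click here click now "
          "buy now subscribe today")
model = build_model(corpus)
assert predict(model, "buy") == "now"   # 'now' followed 'buy' most often
```

Whatever the corpus most rewarded in the past is what the model emits as the future; nothing in the mechanism consults intention, only frequency.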

The test of imagination, ultimately, is not the territory of art or the territory of the mind, but the territory underfoot. That is not to say that there is no territory of art or of the mind, only that it is not a separate territory. It is not exempt either from the principles above it or from the country below it. It is a territory, then, that is subject to correction – by, among other things, paying attention. To remove it from the possibility of correction is finally to destroy art and thought, and the territory underfoot as well. … Alone, the invisible landscape becomes false, sentimental, and useless, just as the visible landscape, alone, becomes a strange land, threatening to humans and vulnerable to human abuse.
Wendell Berry, The World-Ending Fire

Expression Foreclosed
The Myths of Abundance and Democratisation
Karpathy’s remarks offer a revealing glimpse into the lived phenomenology of prediction-as-production —a regime in which intention is not merely displaced, but dissolved. In his post about Vibe Coding, he continues:
It’s possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like “decrease the padding on the sidebar by half” because I’m too lazy to find it. I “Accept All” always, I don’t read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I’d have to really read through it for a while. Sometimes the LLMs can’t fix a bug so I just work around it or ask for random changes until it goes away. It’s not too bad for throwaway weekend projects, but still quite amusing. I’m building a project or webapp, but it’s not really coding – I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.
Attempts to position Vibe Coding within a supposed ‘great democratisation cycle’ —analogous to the shift from darkroom to Instagram or studio to TikTok— fundamentally misrepresent the nature of this technological transition. Unlike previous shifts, which at least preserved some embodied connection between creator and medium, Vibe Coding annihilates even this increasingly tenuous connecting thread. As previously explained, the democratisation narrative serves as ideological cover, and here it provides camouflage for deeper forms of alienation: what is accessed is not a medium, but a graveyard. The user is not empowered —they are permitted only to summon, to reanimate remnants of past labour, not to create anew.
Vibe Coding is not the expansion of access to a craft; it is the simulation of creative agency built atop the wholesale expropriation and necrosploitation of the craft itself. It represents a multi-layered alienation: alienation from the product (which emerges unpredictably from the model), alienation from the process (as the user cedes control and understanding), alienation from the skills and embodied knowledge of the craft, and alienation from the vast collective of other workers whose past labour is non-consensually taken to fuel the machine.
Unlike previous shifts in medium, Vibe Coding parasitises the entire public corpus of prior work, in this case code, collapsing decades of labour into statistical residue. The knowledge, style, and structural sensibilities embedded in that corpus were not offered freely, but scraped without consent from years of websites, apps, posts, repositories, and tutorials authored by human beings. Some of this code and educational material was offered as open source, but even its authors would likely not have consented to this specific monetisation, had they been asked. This is not democratisation, but an accumulation by dispossession that enforces the involuntary servitude of past labour and the replacement of future labour. It is not new access, but new enclosure.
This critique is not confined to code. Whether one is ‘vibing’ with ‘generative’ models to produce images, video, music, or prose, the same condemnation applies: without human intentionality guiding each step, without agency over process and direction, the output remains a hallucinated appropriation —an apparition of prior labour. What appears as creation is merely recombination; what seems like innovation is the regurgitation of patterns derived from the archive. Even where these models can interpolate to fill gaps between existing human knowledge with anything other than total nonsense, they do so from atop a ‘stolen ladder’ fabricated by capital for the automation of knowledge extraction.
It is no use resorting to ‘the Fountain of R. Mutt defence’ by recalling Duchamp’s famous provocation —that the essence of art lies not in the skill of the hand but in the choice of the mind. By submitting a ‘Readymade’, in this case an upturned porcelain urinal, to a prestigious exhibition under a pseudonym, Duchamp elevated an ordinary object “to the dignity of a work of art”. In so doing, he questioned authorship, aesthetic convention, and the gatekeeping of institutional art. Yet as Alfred Sohn-Rethel taught us, the division Duchamp draws upon is not neutral:
The division between head and hand… has an importance for bourgeois class rule as vital as that of the private ownership of the means of production.
Reframed in this light, Duchamp’s gesture may not simply represent rebellion, as many readings claim, but also a consolidation of intellectual authorship over manual labour —a mystification of creation that aligns neatly with Capital’s abstraction of value from labour.
Within such a framing, today’s Vibe Coders extend Duchamp’s bourgeois gesture. In elevating the ill-gotten outputs of Predictive Capital to ‘found objects’, they reduce labour to mere residue. Moreover, by displacing choice itself into predictive systems, they surrender even the act of selection. This is not the liberation of expression —it is its foreclosure. Vibe Coding lacks the intention, defiance, and specificity of Duchamp’s Readymades. Vibing with these machines does not offer a new mode of creation, but a further enclosure of creative subjectivity —the surrender of authorship to machinic correlation across all modalities.
Duchamp, like the generation of painters that preceded him, had adjusted to the arrival of photography. The transition provoked understandable anxiety within a profession that had held authority over representation for millennia. Yet, in simply capturing the fall of light onto photosensitive surfaces, the camera did not scrape together centuries of their artwork only to spit out recombinant forgeries of it. Unlike next-token prediction, which consolidates power within hyper-concentrations of capital, photography genuinely democratised representation —displacing, in particular, the rule of the wealthy and the church over the expensive commissioning of images.
Rather than viewing the embrace of these machines as akin to Duchamp’s re-contextualisation of mass produced artefacts, it is tempting to instead conflate them with utopian visions from science fiction. Those now indulging in ‘one-shotting’ the output to their desires view these machines as devices whose operation resembles that of Star Trek’s Replicator —a machine whose users need only prompt with their desires for them to be instantly fabricated ex nihilo.
Yet the critique I forward here does not rest upon the illusion that such outputs are conjured without labour. It is not the removal of labour that we must resist, especially not the meaningless or robotic kind that Capital increasingly coerces its workers into undertaking. Rather, the threat lies in the further disempowerment and pauperisation of workers, even the removal of workers themselves, and with them, the material conditions necessary for their survival.
Though these systems might appear to mimic the Replicator’s frictionless ease —the illusion of near-limitless energy and instant fabrication— the resemblance is superficial and misleading. These machines are not instruments of post-scarcity magic but of post-truth spectacle. There is only fictitious abundance here. These machines are emphatic manifestations of Capital’s increasingly hyper-concentrated phase; instrumentalisations of extractive violence whose creation fundamentally depends upon and accelerates vast inequity, and whose operation, far from diffusing the concentration of power, greatly entrenches it.
Marx’s analysis of machinery and surplus profit affords a deeper understanding here. In the Grundrisse, he critiques the capitalist fantasy that machines, as fixed capital, are autonomous sources of wealth. He exposes how any “benefit” yielded by a new invention —such as the surplus profit enjoyed before patents expire and competition depresses prices— derives not from the machine itself, but from the displacement of living labour. The machine’s profitability rests upon the reallocation of wages saved by replacing human workers. Moreover, Marx, drawing on Ravenstone, stresses that machinery is only truly profitable when deployed en masse, when it acts not to compensate for a scarcity of labour, but to capitalise on an abundance of it. The logic of mechanisation under Capital is thus not emancipatory but extractive: it enforces the foreclosure of labour’s agency by amassing, subordinating, and ultimately expelling it.
In this sense, predictive systems —like the automating machines Marx analysed— do not merely supplant individual acts of labour, but enclose the very conditions of possibility for labour and expression, converting them into closed, predictive infrastructures hostile to the survival of all that lies beyond their distributions.
Another common response to Vibe Coding is the idea that this shift is merely another layer of abstraction —akin to the evolution from machine code to higher-level languages, each abstracted layer more declarative and terse than the last. This analogy does not merely fail to withstand closer scrutiny, but misses the point entirely. Abstraction, in its classical sense, was additive: it preserved the lineage of authorship, the chain of control, the presence of a thinking subject. Crucially, it allowed for dissent through descent —the ability to drop into lower layers, to interrogate, reconfigure, or reject the abstraction itself.
Many prior abstractions —from GUIs to code libraries and frameworks— were no doubt designed with an eye to modularisation and labour discipline, enabling Capital to treat cognitive workers as interchangeable units. These layers greased the shift from human uniqueness to reproducible surplus, rendering them ever more standardised in operation and confined to predetermined patterns. Each layer invariably arrives as a straightened, ‘efficient’ road that tarmacs over the intricate and winding paths of the previous layer.
The abstraction introduced by Vibe Coding and the machines of Predictive Capital is of a different order. It introduces but one layer —the predictive substrate of the model— that not only encloses everything beneath but flattens it. It does not merely obscure; it obliterates. It offers neither dissent nor descent, no interpretability, no pathway back to origin. It hollows out the code of others, encloses it within a statistical corpus, and re-emits it as spectral output —severed from authorship, stripped of intention, immune to inquiry.
Capital reproduces itself by greatly multiplying that which serves its interests. With this in mind, consider, once again, the spread of content within the corpuses consumed by these machines, accumulated over decades of cultural reproduction under the rule of the Advertising Industrial Complex. What is it that most determines that which emerges after metabolising the statistical weights within these corpuses; from where does each token prediction truly emerge other than the ley lines laid down and incrementally amplified by Capital? The predictions flowing from these machines will surely retrace and so compound the very flows from which they are formed —even before further attenuations are applied through additional compute in pursuit of profit.
This is not an evolution of control, but a black-boxed dispossession of thought, masquerading as innovation. In this way, to Vibe Code is less like the adoption of a higher-level abstraction than it is a surrender to a device that resembles Douglas Adams’s Electric Monk.
The Electric Monk was a labour-saving device, like a dishwasher or a video recorder. Dishwashers washed tedious dishes for you, thus saving you the bother of washing them yourself, video recorders watched tedious television for you, thus saving you the bother of looking at it yourself; Electric Monks believed things for you, thus saving you what was becoming an increasingly onerous task, that of believing all the things the world expected you to believe.
Yet, as Dan McQuillan notes, machine learning systems are not neutral decision tools but “generate forms of epistemic violence that foreclose the possibility of alternative futures”. These systems ride roughshod over suppressed or marginalised perspectives by encoding the biases of past data and automating decisions without accountability. In so doing, they do not merely fail to understand the user’s intention —they erase it. This is not only alienation from the product of one’s labour, but from the very possibility of labour as self-directed, deliberative, or expressive. What remains is not choice but machinic interpolation: the user’s gesture absorbed into a statistical fold, returned only as simulacrum.
Predictive systems do not simply abstract away the labouring subject —they discipline the possibility of contestation itself. McQuillan writes that machine learning systems produce “a growth in learned helplessness among data subjects, who are unable to comprehend the decisions that are being made, unable to discuss them meaningfully with others and unable to effectively dispute them”. These systems institutionalise what Miranda Fricker calls epistemic injustice —not only testimonial injustice, where a subject’s account is discounted, but hermeneutical injustice, in which the conditions for making sense of one’s experience are structurally withdrawn.
McQuillan further argues that such systems enact injustices by denying marginalised groups the capacity to be heard or understood on their own terms. This constitutes not merely epistemic injustice but symbolic foreclosure —in Lacan’s sense, where the subject is not interpellated by language, but excluded from the Symbolic order altogether.
For Lacan, to be interpellated is to be called into being by language —to become a subject through symbolic recognition. Yet predictive systems simulate coherence with no entry point for the subject. They simulate linguistic interpellation while denying its function, offering responses without address, outputs without encounter. What results is not mis-recognition but ontological disqualification —a structure in which the subject is not hailed, but pre-empted and overwritten, excluded from both recognition and reply.
Prompting is not, therefore, a new form of higher-level programming. The Vibe Coder, like all users of prediction machines, does not act as author but as medium or summoner: an operator of systems built on the entombed, dismembered labour of others, casting prompts as incantations into the latent space of archived labour. The results are not authored creations but manifestations, statistical echoes drawn from the archive of the dead (or the treated-as-dead) —uncanny, partial, and beholden to the statistical distribution within the model.
This interaction is not dialogue but séance: a necromantic practice reliant on the reanimation of expropriated expression. Predictive Capital opens no terrain of new possibility, rather, just as Platform Capital does, it encloses that which was already mapped. This marks the beginning of an eternal sentence to interment within a new phase of fictitious capital, where Capital’s ontological swindle confines traversals through the possible to what is predicted to be most profitable, where, as Marcuse warned, “one-dimensional thought” prevails and Capital’s hegemony becomes increasingly incontestable.
To prompt, then, is to participate in a necromantic ritual: not a form of authorship, but of (re)animation. The system responds not with understanding but with predictions, producing that which appears viable according to its internalised distributions. The user may guide, but never direct; may react, but never author. The relationship is one of possession —a one-way channeling of spectral output from a black-boxed archive.
This is not progress, but a phase shift in dispossession —one that began not with keyboards, but with the severing of root systems, the torching of forests, the tarmacking over of the palimpsest of wisdom inscribed into the land, and the depletion of memory nourished in soil.
Part 3

There is an underlying assumption that each of us aspires to be as productive as possible, and that stripping away everything seen to interfere with productivity is a good thing.
Rebecca Solnit, In The Shadow of Silicon Valley, 8th of February, (2024)
The Path Before the Road
Memory, Soil, and our Lost Communion with the Land
Before the feudal lord drew the boundary of the field, even before the peasant stewarded land under obligation, there existed deeper, older ways of living —ones marked not by alienation but by intimate relation. These were not lives spent accumulating surplus or building abstract systems, but forms of embedded attention, sustained over generations, in harmony with land, season, and story.
Wendell Berry, in his reflections on early American colonisation, contrasts two worldviews: the indigenous cultures who lived “by an intricate awareness” of the land, and the settler road-builders who “knew but little” —those who razed the ancient hickory forests of Kentucky in the late 18th century not out of necessity, but to build giant bonfires to heat the open air and light the night. They could forgo the need for overnight shelter, Berry remarks, because they could burn abundance itself —setting ablaze towering pyres of felled ancient forest. This act becomes emblematic of a broader shift: a new relation to land defined not by reverence but by rupture.
Far from making a small shelter that could be adequately heated by a small fire, their way was to make no shelter at all, and heat instead a sizeable area of the landscape. The idea was that when faced with abundance one should consume abundantly —an idea that has survived to become the basis of our present economy. It is neither natural nor civilised, and even from a ‘practical’ point of view it is to the last degree brutalising and stupid.
Wendell Berry, The World Ending Fire
Paths, Berry reminds us, were once acts of humility, respect and intimacy, habits of foot and mind worn gently into the earth —an ancient caress along the delicate folds of the land. These routes, shaped by centuries of connection, a palimpsest of accumulated wisdom, were neither seen nor sensed by the road-builders. Blinded and deafened by industrial arrogance, themselves armed and fashioned by Capital into blunt instruments of violence, they struck directly through the living weave of land and memory. Even when not clearing ways for roads, they felled ancient life, hacking it down to brawl by firelight as bright as day, utterly indifferent to the majesty of what they had destroyed, its history, and the tragedy of its loss. Yet their ignorance belonged to Capital. That which was destroyed here lay beyond Capital’s apparatus of capture and so remained illegible to its circuits of extraction. What could not be instrumentalised or circulated as value was ignored by design: erased not by oversight, but in obedience to structural imperatives.
Roads are the embodiment of resistance to place. “A path obeys the natural contours”, Berry writes, while “a road seeks to go over the country, not through it”, destroying all that lies in its way. Where the path listens and bends to place, the road ignores and brutalises it. The first roads through Kentucky, then, were not merely lines of travel —they were incisions, scars upon the living landscape, emblems of an economy to come.
In another parable, Berry describes a rusting bucket hung on a fencepost, quietly collecting leaves, droppings, feathers, decay. Returning to it over decades, he observes the life that visits it, the life it increasingly sustains, and its eventual production of humus —earth itself.
This, he remarks, is the “most momentous thing” he knows: the slow, miraculous process of building life-giving soil. This bucket becomes a symbol not only of natural cycles and the accretion of earth, but of what human culture once was —the careful accumulation of memory, nutrient, and relation. “However small a landmark the old bucket is, … it is irresistibly suggestive in the way it collects leaves and other woodland sheddings as they fall through time. It collects stories … It is doing in a passive way what a human community must do actively and thoughtfully.”
“A human community”, Berry writes, “must build soil, and build that memory of itself —in lore and story and song— that will be its culture. These two kinds of accumulation, of local soil and local culture, are intimately related.” When this relation is severed, when the bucket is replaced with asphalt and algorithm, something incalculable is lost.
This is a truth I have known since I was just six years old. Behind my childhood home lay an unkempt wasteland —a wilderness of paths through meadows and tunnels through brambles. It was a haven where I spent formative days meeting the life it sustained. Butterflies drifted through the grasses, crickets sang, and clouds of grasshoppers leapt into the air at my every step. Then, seemingly overnight, it was gone —erased and replaced by a maze of tarmac and houses each indistinguishable from the next.
The Indians and the peasants were people who belonged deeply and intricately to their places. Their ways of life had evolved slowly in accordance with their knowledge of their land, of its needs, of their own relation of dependence and responsibility to it. The road builders, on the contrary, were placeless people. That is why they ‘knew but little.’ Having left Europe far behind, they had not yet in any meaningful sense arrived in America, not yet having devoted themselves to any part of it in a way that would produce the intricate knowledge of it necessary to live in it without destroying it. Because they belonged to no place, it was almost inevitable that they should behave violently toward the places they came to. We still have not, in any meaningful way, arrived in America. And in spite of our great reservoir of facts and methods, in comparison to the deep earthly wisdom of established peoples we still know but little.
Wendell Berry, The World Ending Fire
I’d rather swim in this cacophony of a million contradictory voices than drown in the smooth and plausible lies of those genocidal authors of history
Greg Egan, The Hundred Light-Year Diary, (1995)
Feudalism and Peasant Power
External to the Field
Chris Wickham’s scholarly tome, The Donkey and the Boat, offers a fundamental reorientation of our understanding of the Mediterranean economy between 950 and 1180. Overturning prior fixations upon the glamour of long-distance maritime trade and prestigious centres of commerce like Venice, Wickham foregrounds the material infrastructure of everyday economic life: the donkey.
This is more than a methodological point —it is a political and ontological assertion. Wickham insists that to understand value circulation in the Mediterranean, one must begin not with the silks of the Silk Road or the spice-laden cargo of Genoese ships, but with bulk goods, short-distance exchange, and the peasant labour and beasts of burden that transported them across land. In this framing, the donkey emerges as a figure of embeddedness, transparency, and material groundedness —in stark contrast to the opaque, disembodied flows of elite Capital and luxury goods.
Local, short-range transport —slow, uneven, and peasant-operated— becomes not a peripheral detail, but the foundation of feudal value circulation. His reframing shifts our attention away from empire and exchange, and towards the lived material conditions and specific social relations between lords and peasants.
Where Berry mourns ancient indigenous paths overtaken by roads, Wickham returns us to a time before Capital’s rampage of colonial erasures began. Both remind us, from different vantage points, that Capital’s alienations started not just in industrial factories, but the moment we severed paths of careful embeddedness for roads of heedless extraction.
A central insight of Wickham’s account is the structural separation between elite surplus extraction and peasant subsistence production. Elites —defined by Wickham as “anyone who is living off the surplus of others”— remained largely external to the production process. Although they imposed rents in kind or labour, they neither managed what peasants planted, dictated when they worked, nor controlled the tools or methods they employed.
This separation granted peasants a high degree of autonomy over the labour process. Their agriculture was oriented towards subsistence rather than accumulation, grounded in careful stewardship of the land and shaped by local ecological conditions. In Marxist terms, alienation was minimal: the peasant typically owned their tools, worked the land they inhabited, and maintained strong ties to both community and environment.
Let us not get misty-eyed with nostalgia for this period, certainly not for the extreme concentration of wealth and power. Neither should we cultivate illusions about the harsh realities of the brutally brief life of a peasant under feudalism. Still, the mode of production within this period preserved what Marx would call the species-being of labour: work remained meaningful, embedded in life-sustaining rhythms rather than abstract surplus extraction.
Wickham puts the matter succinctly: lords “do not have a structural role in production,” and any efforts to directly interfere in agricultural practice “seldom lasted”. The extraction of surplus was imposed through social and political means, but the practical production of value remained in the hands of the peasantry. Farming was carried out for survival, not for markets; the rhythm of life was determined more by the seasons than by circuits of capital.
The metabolic rift between humans and nature —outlined by Marx and later named and expanded by John Bellamy Foster— had not yet been forced open. Foster describes this rift as the ecological rupture created when Capital interrupts the sustainable circulation of nutrients between humanity and the earth, transforming balanced cycles of renewal into linear processes of extraction and depletion. Under feudal subsistence, peasants maintained the integrity and balance of these cycles, as their very survival depended upon understanding the limits of the soil and extracting only what was sustainable —a deep knowledge and relationship of care developed over millennia. The transition to capitalist modes of production violently disrupted this careful balance, initiating a profound alienation not merely from labour, but from the land itself.
In Marx’s early writings, this would mark a mode of life largely unalienated from the essential human essence, or species-being. Labour was not yet something estranged or externally imposed, arising instead from an intimate relation to the land. It was an activity embedded in subsistence, environment, and community. The peasant was not alienated from their product, their tools, or their process. As such, the full machinery of alienation —as Marx would later diagnose it under capitalism— had not yet been set in motion.
The arrival of capitalist modes of production shattered these lifeworlds, replacing subsistence with accumulation. This initiated a profound shift in both the locus of power and the lived experience of labour. Where feudal elites extracted surplus from outside the process of production, capitalists embedded themselves within it —encroaching ever more deeply into the internal organisation of labour, reshaping it to serve the logic of accumulation. Under what Marx termed real subsumption, both labour and nature are absorbed into capital’s circuits —not merely as external supports, but as fully reorganised elements, disciplined and recomposed to maximise profit. To fully grasp the depth of capitalist alienation that follows, it is crucial to understand what was lost: not only subsistence, but an entire mode of embedded, autonomous existence.
Part 4

Capital is no longer a political economy, it is a hyperreality of value —a code without a subject.
Jean Baudrillard, Simulacra and Simulation, (1981)
Fictitious Capital
The Predicted Subjectivity of Imaginary Workers
When claims on future value circulate independently of the labour that might one day substantiate them, such paper promises —bills of exchange, bonds, derivatives— acquire reality not through the production of commodities, but through their exchangeability, their capacity to be bought, sold, and leveraged in advance of any actual valorisation. In this sense, all Capital is fictitious: its value does not rest upon present utility, it is not grounded in what is, but on the anticipation of future surplus from exploitative extraction.
In the neoliberal era, this untethering accelerated: currencies floated, credit default swaps metastasised, pulling global capitalist society ever more deeply into a vortex of speculation. Capital had located new frontiers of expansion, not through increases in production, but within finance and the commodification of risk itself.
The rise of Fictitious Capital coincided with the digital turn: the reduction of the world to code, then to data. Baudrillard was alert to this flattening —where sign-value overtakes use- and exchange-value, pressing equivalence towards absolute commutability.
Aided by increasing deregulation, Fictitious Capital grew through the accelerating circulation of financial instruments, derivatives, and debt portfolios. This descent into fiction warped the logic of accumulation, altering Capital’s circuit: from M–C–M′ to M–(M′)–M″, where the commodity is no longer produced, only anticipated, hyped, or signalled —pure speculation on surplus-value or collapse, operating to summon profit from hype and fear. The emphasis of Capital’s logic had shifted. For holders of Capital to accumulate, it was no longer necessary to invest in the present while anticipating a greater return; instead they could merely speculate on the future and watch their investments magically inflate, both through the hot air of hype and the yet more inflammatory pumps of crisis and shock.
Naomi Klein’s The Shock Doctrine (2007) outlined how crises are systematically exploited to advance capitalist interests. Her thesis was almost immediately borne out by the economic response to the financial crisis of 2008, and then overwhelmingly reaffirmed during the COVID-19 pandemic. In both cases disaster was leveraged to accelerate the transferal of wealth into existing hyper-concentrations of Capital —enriching the already rich by impoverishing everyone else. Trump’s tariffs and crypto corruption should be viewed within this same lineage —as operating in service of the kleptocratic intensification of wealth inequity.
For McQuillan, data‑driven prediction inherits this same speculative engine. Captured data, he writes, “becomes an asset class with both use‑value and speculative financial value”, while pervasive modelling “creates a fluctuating market in citizen futures”. McQuillan stresses that such systems “bet on correlations, not on causations” —a lineage that runs from Francis Galton’s invention of statistical correlation straight to his eugenic fantasies. Moreover, within machine learning, as McQuillan explains, stochastic gradient‑descent supplies the calculus for conflation that allows anything —from a grocery receipt to a biometric pulse— to be folded into a single tradable probability vector.
This is a precaritising logic that depends on the decomposition of that which was previously whole (the job, the asset, the individual life) so that operations can be moved into a space that’s free of burdensome attachments to the underlying entity, whether that’s the fluctuating price of actual commodities or the frailty of the actual worker.
McQuillan, (2022)
Fictitious Capital’s escalation laid the conceptual and economic groundwork for its own intensification through the Predictive Turn. Just as derivatives abstract and speculate upon the value of underlying assets, Predictive Capital metabolises accumulated statistical weight, applying this logic across every domain touched by digital infrastructure. It does not merely extend speculation to the terrain of subjectivity —it subsumes the entirety of fictitious capital: finance, logistics, health, policing, insurance, and beyond. Every field governed by Capital’s anticipatory logic is now rendered a site of prediction and optimisation. Human intention, social behaviour, and large-scale policy and decision-making are disassembled into data-flows and reconstituted as objects of control. Here, what Capital extracts is not surplus-value from work, but surplus-predictability from patterned signal. Under Predictive Capital, culture merges with finance, public governance fuses with statistical surveillance, warfare with ad targeting, and the future itself becomes a collateralised asset class.
Captured data, and the computational systems able to exploit it, attract venture capital and financial valuation in anticipation of further efficiencies, or even an eventual monopoly of full automation. The data becomes an asset class with both use value and speculative financial value (van Doorn and Badger, 2020). The feedback loop of machine learning means that each new adaptation by workers to self-optimise under precarious conditions becomes absorbed into the next model, which is then advertised as a rationale for the next round of funding. The data derivatives become forms of financialised assets in themselves, dependent on a continual ramping up of exploitation and expropriation as a form of performance for investors.
McQuillan, (2022)
What began as feudal rent extraction by lords external to production, then shifted to the internalised command of capitalist market forces directing every step, has given way to a yet more profound transformation: Capital no longer merely organises labour —it seeks to anticipate it, simulate it, and ultimately pre-empt it. This marks the point at which Fictitious Capital spills into the territory of Predictive Capital, and finance becomes pure simulation. It is not just the future that is manufactured and wagered upon, but the subjectivity of those who consume simulations of it.
Through massively parallel reinforcement learning loops running in simulated environments —entire virtual robot fleets iterate towards the successful performance of tasks across synthetic permutation space— Predictive Capital now rehearses the execution of labour before a single finger is lifted. The productive body is no longer just anticipated; it is computationally overwritten. What was once the human worker is now but a statistical path, a single trace among millions within a vast training set —now pre-empted, automated, and effaced. This machinic rehearsal of embodied labour finds its analogue in the generative interface, where fleets of vibe coders navigate cultural permutation space, guided by next-token prediction towards outputs statistically aligned with Capital’s projections of virality and profit. Here too, Predictive Capital simulates and optimises labour in advance —not by enacting tasks in simulated physics, but by hallucinating meaning through recombinatory cultural residue. In both cases, Capital commands a dual army: one robotic, one affective, each trained to mine value from latent space —each a speculative agent of fictitious capital incarnate.
Coined by John McCarthy in 1956 as a money-making exercise, the term ‘artificial intelligence’ was always deeply misleading. Of the many problems it poses —not least the contention that each of its constituent words lays a false claim— the most insidious is its implication of a self-contained entity, detached from human labour and dissociated from Capital. In reality, what circulates under the banner of ‘AI’ is inseparable from both: it is a computational manifestation of accumulated cognitive dead labour and hyper-concentrated Capital. More accurately, these systems should be understood as Predictive Capital amplified by vast energy expenditure and overinflated by speculative hype —a murderous mutation of Fictitious Capital.
As financialisation deepened across the last decades of the 20th-century, so too did the mechanisms of control. Corporate tax rates fell, wealth taxes evaporated, and public assets were privatised —funnelled into growing concentrations of Capital. Meanwhile, the burden of debt —student loans, mortgages, credit cards— was transferred onto households. Labour, increasingly precarious and disaggregated, was bound to Capital not just through employment, but through the leverage of personal debt. Discipline was no longer wielded solely in the factory —it arrived through the mailbox, the credit report.
This was not a retreat of the state but a reorientation. Under neoliberalism, the state became the guardian of markets, not of citizens: enforcer of austerity, suppressor of collective resistance, and guarantor of Capital’s continuous circulation, zealot of its concentration. Welfare systems were dismantled; what remained was restructured around conditionality and surveillance. Alienation, once confined to the workplace, now bled into every sphere of life —housing, education, healthcare, and even time itself.
David Harvey identifies this moment as the onset of “universal alienation”—the saturation of all aspects of life by the logic of commodification. Individuals were not only estranged from their labour, but from their communities, their environments, and even their imagined futures. The figure of the worker was increasingly replaced by the debtor, the consumer, the data subject —each stripped of agency, each enfolded into the extractive systems of Capital. Our economic function no longer merely shaped our personal identity —it replaced it.
Capital forever bends the arc of production towards compounding alienation —from the indigenous path-walkers and soil-builders of Berry’s Kentucky, through Wickham’s peasant steward, embedded in seasonal and social rhythms, past Tooker’s visions of bureaucratic limbo, to today’s generative peasant: alienated from their creations, their past, and their being. This is not a nostalgic lament but a structural mapping of Capital’s expanding reach.
This alienation intensifies as Capital embeds itself into every node of production, exerting command not through direct ownership alone, but via capital investment, managerial oversight, and technological mediation. Workers become appendages to machines, and eventually to systems. The result is a society in which labour is abstracted, disembodied, and increasingly invisible.
This hidden and dispersed labour force —invisible but as yet indispensable— signals a shift not only in the sites of extraction, but in the very objects of labour. It is no longer material goods alone that Capital exploits, but cognitive, affective, and symbolic life itself.
Again we return to the terrain of Berardi’s semiocapitalism. Here, signs, images, and affects become the principal objects of labour and production, while the very substrate of subjectivity becomes a site of extraction. Language itself is instrumentalised, and the cognitive capacities of subjects are subsumed into Capital’s circuits until all speech must pass through them and being heard becomes increasingly conditional on submitting to Capital speaking through you. Participation is no longer freely chosen, but compelled by a system that exhausts resistance, forecloses alternatives, and presents self-modulation as the only viable path through the cognitive and affective regime it imposes.
This stage of capitalism bridges historical materialism with the emergence of the platform economy, where value is extracted not from material goods alone, but from data flows, attention spans, and predictive signals. It marks the transition from industrial exploitation to non-consensual semiocapitalist expropriation, where symbolic labour, cognitive output, and cultural artefacts become primary sites of surplus extraction.
The feudal lord once stood outside the field, demanding a share of each harvest. Then the capitalist opened the gate and stepped inside, not to till the soil, but to bind the land to the market —replacing obligation with incentive, surplus with profit, seasonality with popularity, rhythm with rate.
As production scaled, so too did Capital’s reach. Layer upon layer of oversight descended upon the worker —not to support, but to ‘optimise’ for ‘efficiency’ and extraction. From the loom to the assembly line, alienation deepened. The managerial class emerged: overly compensated to ensure their estrangement from the workers, elevated not for skill but to reinscribe feudal hierarchies of lord over peasant, bourgeois over proletariat, and so preclude solidarity with those they manage. Yet they too are hounded by their own lord at the gate. Their authority hollowed by quantification —metrics, targets, KPIs— they answer directly to Capital’s recursive appetites.
Once the material world was fully enclosed, the only path left for the Automatic Subject of Capital’s self-maximising project was a departure into ever deepening layers of fiction and abstraction —a speculative turn that does not eliminate material violence, but merely displaces it. So began its push into an immaterial realm beyond fiat currency, where —guided by the logic of the derivative— stock valuation became untethered from the real, inflated by a new phantom breath.
During this descent into fiction, Capital breached a critical threshold of concentration, beyond which it collapsed into hyper-concentration —a state we might term its Dark Capital form. This is the inevitable culmination of its tendency to subordinate all matter to the dominion of exchange, all difference to the logic of interpolation, all life to the automatisms of accumulation. From here the self-compounding nature of hyper-concentrations of Capital picked up speed, accelerating towards ever higher concentration and increasing inequity. Having evaded, diluted, and dismantled antitrust laws, the tech monopolies approached absolute rule, enabled by neoliberal free-market radicalism, and fuelled by data as the new black gold.
Briefly, the fields became whiteboards; the furrows, sprints. As the cult of Digital Capital captured our very souls, we stood in daily worship in the Church of Agile, intoning kumbayas in the prayer-circle of planning —a black mass to summon the Dark Lord of Efficiency. Here, reflection became reporting; iteration, surveillance —each practice hollowed, each gesture rendered performative, futile. These are the worker standardisations, roboticisations, and disenfranchisements that come packaged as empowerment; dependency and enslavement dressed as liberation through timetable flexibility and zero-hour contracts.
What we have is not a direct comparison of workers’ performance or output, but a comparison between the audited representation of that performance and output. Inevitably, a short-circuiting occurs, and work becomes geared towards the generation and massaging of representations rather than to the official goals of the work itself.
Mark Fisher, Capitalist Realism, (2009), p. 45
As Marx long ago observed, in its quest for self-maximisation through ‘optimisation’ and ‘efficiency’, Capital decomposes the labour process into ever smaller, more modular units —fragmenting work until it can be mechanised, then digitised, then simulated through prediction. Control first became automated, then algorithmic, exercised not only through command but through pattern: the eradication of independent thought through the homogenisation of tools and frameworks, endless process and reporting, the timed comfort breaks of warehouse workers, the algorithmic quotas and wages of gig drivers and microtaskers. The mindless task feed for the precariat drips down as ever-thinning digital-workhouse gruel —force-fed to those rendered too precarious to refuse, a dehumanising purgatory, the slow death of a million cuts, Capital’s control, totalised.
Capital now pushes into our innermost worlds: its logic slips beneath our skin, embeds in our minds, and promises to simulate the self through the colonisation of language, memory, pattern, subjective affect, attention, and intention. Predictive Capital collapses exteriority, leaving no realm beyond its projected dominion —no field of production inviolate, no ancient forest spared, no life-giving or life-taking decision sacrosanct.
For Marx, the capitalist was never simply a sovereign individual but Capital personified —an agent of its needs, compelled to expand surplus-value or die. Even at the height of industrial capitalism, the capitalist class was already subject to the impersonal logic of accumulation. Yet under Predictive Capital, this compulsion becomes total. The elite no longer direct Capital to serve their interests; they arise only through complete submission to its imperatives. They are no longer masters of the system, but hollowed out expressions of it —undertakers of necromantic extraction, not sovereigns above it. This is not to say they do not enjoy vast privilege, of course. Yet that privilege is contingent upon total capitulation to the logics of Capital and its self-maximising rule.
As the political right are fond of reminding us, “politics is now downstream from culture”, yet this is a strategic misdirection from the fact that culture is increasingly downstream from Capital.
Curtis Yarvin, writing as Mencius Moldbug, popularised a pseudo-structuralist version of this argument with his theory of The Cathedral: a conspiratorial abstraction of liberal hegemony said to be propagated by elite universities, legacy media, and the civil service. To Yarvin, The Cathedral is not merely a cultural force but an illegitimate sovereign —one that must be overthrown and replaced with a ‘formalist’ regime grounded in authoritarian hierarchy. The irony, of course, is that while Yarvin poses as a dissident thinker railing against power, his framework functions to conceal the operations of actual Capital, reassigning its systemic effects to a scapegoated cultural superstructure. In so doing, he displaces critique from class to culture, and offers a fantasy of rebellion that leaves the economic engine of dispossession untouched.
Yarvin’s sprawling 2008 manifesto, “An Open Letter to Open-Minded Progressives”, framed egalitarianism not as a social good but as the root of civilisational decline. As a self-described “dark elf” of the Dark Enlightenment, Yarvin decries democratic institutions, advocates for authoritarian rule by a Sovereign CEO, and identifies liberal values —especially those grounded in empathy and equality— as societal weaknesses in need of purging. These proposals may masquerade as radical, but as is now only too clear, their implementation only deepens pre-existing societal imbalances: further centralising decision-making, widening hierarchical divides, and consolidating Capital.
Despite his disdain for the democratic façade, Yarvin remains structurally committed to Capital’s imperatives —his proposals merely strip away the liberal mask to reveal the already intensifying domination of hyper-concentrated Capital. That his admirers include Peter Thiel, Marc Andreessen, and J.D. Vance only reinforces the point: the new ideologies emerging from the political right and the boardrooms and think tanks of Silicon Valley, do not subvert Capital’s rule or halt its hyper-concentration —they intensify it, while indulging a superficial cosplay of dissent for a loyal precariat desperate for change.
What Yarvin, the reactionary right, and their tech billionaire patrons propose is not change but the final severance of the illusion of a fair society —to drop all pretence of striving toward equality, simply because they do not believe we are equal, or that our lives hold equal value. They will, of course, continue to brandish the banner of ‘equal opportunity’ when it comes to access to their surveillance infrastructure —equal opportunity to be legible to the machine has their full backing. All the while, they lobby for cuts to welfare, public education, foreign aid, and, naturally, to the taxes they themselves might otherwise owe. Under Capital’s reign, there is always money for war, for bailing out banks, for maintaining the uninterrupted flow of value —always money for the next extractive technology— but never to ensure that every one of us has shelter, or that no child ever goes hungry. Capital’s emissaries, its dark elves, do not truly argue for rupture; they push for absolute coronation —even to the point of crowning themselves or one of their cohort as monarch, their likeness to be stamped upon the symbols of Capital just as Capital has stamped its likeness upon them.
Across the first decades of the third millennium we have grown ever more acclimatised to the deepening enclosure of the Capitalist Real. As Capital’s hyper-concentration intensifies, its forces now accelerate towards a more totalised rule. What once operated systemically —through diffuse political, cultural, and economic structures guided by corrupt neoliberal, free-market, anti-competitive policy making— is now to be formally and machinically instantiated: rendered as explicit regime, as the architecture of a new thousand-year Reich.
As Ava Kofman details in her profile of Yarvin, his true contribution has been to shift the Overton Window and revive “ideas that once seemed outside the bounds of polite society”.
On a podcast with his friend Michael Anton, now the director of policy planning at the State Department, Yarvin argued that the institutions of civil society, such as Harvard, would need to be shut down. “The idea that you’re going to be a Caesar . . . with someone else’s Department of Reality in operation is just manifestly absurd”, he said.
On his blog [Unqualified Reservations], he once joked about converting San Francisco’s underclasses into biodiesel to power the city’s buses. Then he suggested another idea: putting them in solitary confinement, hooked up to a virtual-reality interface. Whatever the exact solution, he has written, it is crucial to find “a humane alternative to genocide”, an outcome that “achieves the same result as mass murder (the removal of undesirable elements from society) but without any of the moral stigma”.
Ava Kofman, Curtis Yarvin’s Plot Against America, The New Yorker, (2025)
Speaking in 2010 at Libertopia, Peter Thiel —PayPal billionaire and co-founder of Palantir, the demonic, black-ops heart of Predictive Capital— provided an early warning of much that was to unfold across the following years:
The basic idea was that we could never win an election on getting certain things because we were in such a small minority. But maybe you could actually unilaterally change the world —without having to constantly convince people, and beg people, and plead with people who are never going to agree with you— through a technological means. And this is where I think technology is this incredible alternative to politics.
It is reasonable to view this moment as signalling a shift in the global culture war that had arguably intensified 40 years prior in response to the student uprisings of the 1960s. In this speech, Thiel articulates the strategic mutation that would come to define the end of the neoliberal era and the start of the next: the abandonment of persuasion and consensus in favour of a politics of myth delivered through a unilateral technological imposition. Technology, in Thiel’s formulation, becomes a tool not to serve democracy, but to circumvent it and eventually dismantle it entirely. From this toxic root, much of the upheaval that has followed can be traced. The ideologies shaped by Capital, liberated from the friction of political accountability, accelerated the hyper-concentration of wealth and the erosion of collective agency. In their wake, we have witnessed the global resurgence of the political right —perpetually beholden to Capital; the calamity of Brexit; the escalating scapegoating of those already at the periphery for the ever deepening austerity for which they are, of course, blameless; the ascendancy of Trumpist authoritarianism; and a mounting genocidal body count: from Ukraine and Gaza to Myanmar, Armenia, Yemen, South Sudan, and Ethiopia. What Thiel signalled was not merely a strategy, but a hybrid-war against the very conditions of democratic life —a war we are losing.
A decade and a half later and Capital’s elites now sequester themselves into increasingly fortified enclaves to worship their growing fortunes, while America is run as a corporation by a sovereign CEO with a fascistic and corrupt, self-enriching agenda —this is hyper-concentrated Capital vs everything else, with its R&D laboratory in Palestine.
At every escalation of our state of permacrisis a new normal is established. Despite the consistent trajectory and accelerating pace, our normalcy bias continues to persuade us that things cannot get worse. The Tyranny of the Recommendation Algorithm has already wrought horrific polarisation and ideological isolation, but this will be as nothing compared with the devastating impact of Predictive Capital’s new anthropomorphised instantiations of it. As the echo chamber tightens, human cognitive biases will be totalised, and all relation will be replaced and reduced to the single operation of machinic Capital until we are hermetically sealed off from each other as demographics of one.
Ushering us towards this fate, the self-serving political class and the Big Tech Broligarchy are increasingly indistinguishable —in both person and operation— their every utterance a marketing campaign feeding Capital’s Post-Truth malaise. With each overhyped press release, they seek not to inform but to manufacture consent —distorting our shared sense of reality in service of shareholder profits and political gains. Through this lens, Altman himself increasingly resembles that other apocalyptic-salesman of maniacal faith: Tony Blair.
In 2016, OpenAI released Universe —a software platform for training and testing reinforcement learning models. The platform granted these models access to the “world’s supply of games, websites and other applications”, allowing them to use these “like a human does”. The stated goals of the project are unabashed: “We must train AI systems on the full range of tasks we expect them to solve, and Universe lets us train a single agent on any task a human can complete with a computer”. This approach, of course, echoed the global enrolment of humanity as agents of the Advertising Industrial Complex in ‘solving’ maximum engagement through minimum ad-spend towards Capital’s self-reproduction.
Put simply, by exhaustively mapping —and thereby staking a claim to— the possibility space of human–computer interaction (HCI) as constituted by the world’s games and websites, these agents were expected to ‘solve’ the domain of HCI itself, even to generalise this mastery across unseen but sufficiently similar digital terrain. This was perhaps a harbinger of a future where, for the few, human labour becomes akin to playing a real-world version of Universal Paperclips: orchestrating swarms of next-token prediction agents and allocating the scarce resources they need for the extraction of value and the conversion of everything into Predictive Capital.
More recently, early announcements of OpenAI’s video generation model Sora were accompanied by claims that, through training on countless hours of video footage —including a not insignificant amount of computer gameplay recordings— it had developed a “world model”. As if a grasp of Newtonian dynamics or Euclidean geometry had simply emerged; evidenced, presumably, by the model’s ability to predict tokens that simulate their effects —and lest we forget, “if you can’t tell the difference, how much do you care?” Such claims of emergent equivalence are, as we have seen, a defining feature of the violent erasure enacted by these machines and the hype surrounding them —an erasure that is at once ontological, epistemic, symbolic, and material.
The detachment of his machines from the real apparently having long slipped beneath his own skin, Altman’s capacity for hype grows ever more factually unbounded. In one interview, he professed a belief that he holds “not literally”, but as a “spiritual point” —that he and his cohorts have “stumbled on” the discovery that “intelligence is an emergent property of matter”. Of course. ‘Intelligence’ emerges as a property of matter in much the same way as ‘Weapons Of Mass Destruction’ emerge as a property of distant regimes, or as terrorists are said to lurk in the basements of Gazan schools, hospitals, and the very minds of Palestinian children. Each claim is equally bogus, made purely to justify the enclosure of territories of strategic import to the accumulation of Capital. Altman later escalated the (sales) pitch of his delirium in forecasting that his prediction machines will soon be able to “solve all of physics”.
While we are venturing beyond the symbolic into the imaginary, it is worth recalling, once again, the nature of the data fed into these machines in the frantic race to Predictive Capital supremacy. Running short on content to nonconsensually scrape from the internet —or seeking shortcuts around the costly labour of labelling and cleaning it— these models are now fed upon ‘synthetic data’: marketing speak for the bot-shit ouroboros, or vomit-conga, that is next-token prediction machines being trained on sequences of next-token predictions. After already expending unfathomable compute resources while ‘pre-training’ these models on the stolen archive of human content, they now launder this Predictive Capital through further compute at ‘test-time’ —training them through reinforcement learning on this so-called ‘synthetic data’. This time, within the realm of fictitious capital, the magical emergence —or so the hype would have it— is reasoning itself.
Even as these companies deliberately train their models on these self-compounding predictions —and they admit to the resulting escalation in hallucination— the information space grows ever more saturated with their output. Combined with their continued inability to differentiate their food from their faeces, consumption of this ‘synthetic data’ will soon become the norm, not the exception. By continually polluting the archive —saturating pre-training data with their own output— and laundering stolen content through synthetic reinforcement loops, these companies incrementally wipe away all trace of the epistemic erasures they have committed, thereby perfecting their crimes.
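The dynamic is easy to demonstrate in miniature. In the toy sketch below (an illustration under drastic simplifying assumptions, not a claim about any vendor's actual pipeline), a 'model' is nothing more than a Gaussian fitted to its training data; each generation is trained only on samples drawn from its predecessor, and the diversity of the original 'human' data drains away:

```python
import numpy as np

rng = np.random.default_rng(42)

# Generation zero: 'human' data, a modest sample from a rich source.
samples = rng.standard_normal(50)
start_std = samples.std()

# Each generation, a 'model' (here, just a fitted Gaussian) is trained
# on the previous generation's output, then sampled to produce the
# next training set: prediction machines eating their own predictions.
for generation in range(500):
    mu, sigma = samples.mean(), samples.std()
    samples = rng.normal(mu, sigma, size=50)

# The spread of the data collapses: each refit loses a little variance,
# and the losses compound across generations.
print(samples.std() / start_std)
```

Because each finite-sample refit slightly underestimates the true variance, the expected variance shrinks by a factor of roughly (n−1)/n per generation; compounded over hundreds of generations, the distribution narrows towards a point: a statistical cartoon of the vomit-conga.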
Assertions that these increasingly self-absorbing predictive models have developed ‘world models’, ‘reasoning’, the ability to write literary fiction, and ‘one-shot’ the authoring of entire video games, are not only hype. They do more than inflate capability —they prepare the terrain for replacement. In claiming to simulate, they assert equivalence, and thus license erasure. This is Baudrillard’s precession of simulacra made machinically operational: a machine that consumes simulations to produce fourth-order simulacra —simulations of simulations, severed from the real, now authorised to stand in for it, and overwrite it— drawing us ever deeper into the fictitious realm of Predictive Capital.
Capital itself now architects the very machinery that simulates labour, generates value, and extracts rent. The land is now linguistic; the plough, the model; hallucinations of the past, the harvest. What was once a subsistence economy rooted in lived relation has been supplanted by a necromantic drilling platform for recombination, where the peasantry pays rent to consume the surplus of their own archived lives.
The descent into fiction observed by Fisher —where work was reduced to the management of its representation— is now echoed and intensified in the evaluation of next-token prediction machines. These systems do not reason or understand, yet billions of dollars are wagered on the illusion that they do, so long as their output statistically approximates a curve of simulated competence. Denied any direct insight into these black-box systems —whose very design is to fool us into believing they are performing that which they are not— we are left to measure their success through the very kind of audited representations Fisher condemned. The measure of their prowess is now given the official stamp of authority by how well they simulate the appearance of cognitive capacities, scored not by function or purpose, but by their alignment with benchmark metrics devised to compare hallucinated proficiencies within benchmarking platforms operated by the same Capital so deeply invested in their certified success.
Just as workers are increasingly forced to tailor their efforts to satisfy audits rather than to fulfil the substance of their roles, prediction models are now optimised not for the task itself, but for the tests that certify their ability to simulate performance of it. Benchmarks, standardised evaluations once intended to assess performance, now serve not to measure performance, but to stage it. High benchmark scores thus become ends in themselves, spawning an entire industry of leaderboard platforms buoyed by capital investment and startup valuations, where the illusion of capability is both the product and the proof.
Here, as in the Apparatus of Attention, the game is rigged: those who build the models of prediction also invest in the metrics that validate them, and in the platforms that proclaim their supremacy. Zuckerberg’s recent admission that Meta’s engineers fine-tuned custom versions of their Llama models specifically to excel at the benchmarks within the Chatbot Arena confirms the farce: models customised for tests, not tasks, quietly swapped out after winning public trust through pure spectacle. In this theatre of evaluation, success becomes indistinguishable from score, and function dissolves into fiction. This is a race to the bottom, a descent into an imaginary realm of self-certified simulation where the score is the product, and the product is fictitious.
Writing on his blog, Altman’s delirious immersion into this recursive self-certifying fiction reveals how adherence to its logic is designed to play out for the makers of these machines —and the grave consequences for the planet that sustains them.
The socioeconomic value of linearly increasing intelligence is super-exponential in nature. A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future.
Sam Altman, February, 2025 →
Here, Altman spins a rigged metric into prophecy: a self-certified exponential rise in fictional machine ‘intelligence’ becomes the rationale for an exponential increase in investment —and thus in the resources consumed by these models. The appearance of intelligence is achieved by simulating capacities they do not possess, capacities that are next-token predicted into existence by metabolising the statistical weight of stolen human outputs —outputs produced by beings whose intelligence resists quantification.
Altman’s edifice rests on a dense tangle of false equivalences and circular logic. First, a fictional measurement of machine intelligence is granted authority by Capital. This permits the insinuation of equivalence between human cognition and machinic simulation. Then, the arenas of evaluation are gamed —tilted not only towards models backed by the deepest pockets, but beyond any human capacity, along a scale of competence entirely concocted by Capital itself. Only through such contortions can prediction be positioned as a like-for-like replacement for thought, sense-making, or expression.
The Predictive Turn marks the moment wealth concentration accelerates into its hyper-concentrated form. These concentrations exert an irresistible economic force, forming vortices of Fictitious Capital that devour all that was once deemed irreducible —that could not be simulated or speculated upon. Within these vortices, what Marx termed variable capital —capital advanced to purchase labour power— is pushed towards zero, as prior dead labour is metabolised in its place.
Capital’s necromancy enchants the archive of dead labour into a spectral constant capital —a ghostly revenant, no longer dead but undead— that now casts an apparition of machinic absence, obscuring the eternally enslaved while conjuring endless fictions. The very existence of workers now denied, treated as an imaginary number, we become but self-cancelling variables on both sides of Capital’s equation, phantom inputs, fungible in time and identity.
So-called Artificial Intelligence, which we here term Predictive Capital, is the engine of this transformation: a mutated fictitious form of constant capital, a derivative of a derivative, a sign of a sign —a fourth-order simulation. No longer simply the means of production, no longer transferring its value unchanged, or barred from the production of surplus-value, it is dead labour reanimated —constant capital masquerading as variable, variable capital executed as constant. Fisher was right:
Capital is an abstract parasite, an insatiable vampire and zombie-maker; but the living flesh it converts into dead labour is ours, and the zombies it makes are us.
Mark Fisher, What if They had a Protest and Everyone Came? (2005) →
Except that Capital now summons a recombinant army of the undead, constituted not by us but of us, demanding not our presence but our absence. Fisher’s temporal clause —that for the moment “Capital cannot get along without us”— dissolves, and we are recategorised as dependents it can get along without.
Predictive Capital is therefore constant capital reanimated as labour-power that no longer qualifies for variable capital expenditure —for wages to be paid, or even acknowledgement of existence given— and thus it purports to generate pure surplus-value. Yet Marx saw through this fetishism: value cannot be detached from labour or land without descent into fiction. When human labour-time is reduced to zero, so is the value of what is produced: replaced by the speculative fiction of pure surplus-value, pure sign value that floats as a ghostly revenant detached from the real, conjured by venture funding and hype and held within a haunted enclosure.
Here, labour is not merely alienated —it is denied. The worker becomes a phantom input, treated as if they never existed, replaced by a model that reanimates their residue. Unlike traditional constant capital, this past labour is mere recombinant spectre, not congealed into tool or machine, but a mutant form of insatiably resource-hungry Fictitious Capital. What Capital could not reduce, it has enchanted, rendered spectral —not artificial, not intelligent, but a pure machinic fiction, an undead unconscious simulation.
Capital’s ascent towards pure fiction, its escalating denial of the existence of workers, and the overnight heist of the necropolitan archive of cognitive labour that enabled the formation of the Automatic Subject of Predictive Capital, are but subroutines in Capital’s self-maximising program.
The size of Bezos’s rocket is very precisely determined by the difference in costs between paying a worker in Britain and a worker in India —including all the historically determined racist and colonialist inequality that calculation involves. But make no mistake— Bezos and his ilk will pay a robot even less, as soon as that’s possible.
James Bridle, April, (2024) →
As it accelerates the execution of this program through its hyper-concentrated phase, we must recognise that this is the very same self-maximising project as the Techbro Rapture —the point where the line-go-up of technological ‘progress’ becomes autonomously self-compounding, via what is often termed the law of accelerating returns, towards the monumental erection of exponential ‘progress’ that is the singularity. Here, technological constructs (hyper-concentrated Capital) self-improve without need of human input, self-compounding supposedly towards a near vertical or infinite rate of change.
You may recall that Bryan Johnson embodies devout worship of precisely this self-obsessed, self-maximising singularitarian ideology. Of course Johnson’s obsessive self-measurement begins and ends with him measuring and maximising, well, his ‘johnson’. Yet the fixations of this masculine energy are central to the line-go-up worldview promoted through the ideologies spawned by Predictive Capital in its instantiation of the machinic k-hole. As the followers of this cult see their singular erection rise, as they perceive the gradient of Predictive Capital’s line-go-up to be steepening towards the orgasmic eruption of their singularitarian dreams, the appetites of their machines become similarly unbounded. To meet the growing energy demands, they justify ever greater sacrifices burned upon the altar of their monumental self-obsession, in order to continue their climb towards their desperately fake climax.
The system devours itself in a death drive of value: by simulating everything, it nullifies everything.
Jean Baudrillard, Symbolic Exchange and Death, (1976)
As these systems become ever more ‘agentic’ —meaning they operate increasingly autonomously for lengthening periods and across multistep tasks— their rate of development self-compounds, in turn accelerating their rate of development —or so they claim. The supposed sudden steepening of their line-go-up brought by this agentic operation is apparently now being called Jerking —naturally!
Beyond their point of technological Rapture, its true believers anticipate eternal life within the silicon substrate. Yet the eternal life this line-go-up truly rises towards is the eternal life of Capital itself. Capital is the true driver and the only net beneficiary of this delusional death cult. As these machines are given ever more autonomy, as those most vested in their development and their domesticated auxiliaries increasingly argue their rights, and as their performance fools ever greater numbers into surrender, their corporate personhood awaits them. The perpetuation of vast hyper-concentrations of Capital finally secured for eternity.

In fact, attention is of value only insofar as it is paid in the proper discharge of an obligation. To pay attention is to come into the presence of a subject. In one of its root senses, it is to ‘stretch toward’ a subject, in a kind of aspiration. We speak of ‘paying attention’ because of a correct perception that attention is owed – that without our attention and our attending, our subjects, including ourselves, are endangered.
Wendell Berry, The World-Ending Fire
Anxiety & Apophenia
Haunted by Absence in the Dreaming
In the early 2000s, artist Jason Salavon created custom software whose output and operation prefigure both the diffusion and next-token models to come, yet render their logic legible. His works from this period involve the careful curation of sets of culturally homogeneous imagery: wedding portraits, graduation photos, children posing with Santa. The images in each set are meticulously aligned and layered using equal additive transparency, and so composited into singular works Salavon calls amalgamations. What emerges is not merely aesthetic, but analytic —statistical visualisations that reveal the median tone, hue, and saturation of each pixel across the visual plane within a set of images. Salavon’s amalgamations become portraits of our culture’s portraits, exposing the iconographic soul of the capitalist West across the late 20th century.

Summoning the distributed median of desire and the objectification of the female body across the decades, the spectral bodies of his Centrefold series seem to possess form, appearing to be in motion, even to have intention —yet they are no more than the pixelated residue of cultural continuity under capitalist reproduction. Salavon’s amalgamations do not invent new imagery, but deftly expose the archetype, the temporal distribution of desire under Capital.
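The compositing operation at the heart of these amalgamations is almost disarmingly simple, which is part of the point. A minimal sketch, assuming the images arrive as equally sized, pre-aligned arrays (the function name is ours, not Salavon's):

```python
import numpy as np

def amalgamate(images):
    """Composite a set of aligned images with equal additive
    transparency: each output pixel is the per-channel mean
    of that pixel position across the whole set."""
    stack = np.stack([np.asarray(im, dtype=np.float64) for im in images])
    return stack.mean(axis=0).round().astype(np.uint8)

# Toy 'portraits': three 2x2 RGB images of uniform brightness.
a = np.full((2, 2, 3), 10, dtype=np.uint8)
b = np.full((2, 2, 3), 20, dtype=np.uint8)
c = np.full((2, 2, 3), 90, dtype=np.uint8)
print(amalgamate([a, b, c])[0, 0, 0])  # → 40, the central tendency of the set
```

Everything idiosyncratic in any single image survives only as a faint haze; what remains sharp is whatever the whole set shares: the statistical archetype.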
In 2012, AlexNet transformed the field of computer vision. Its uncanny image classification performance was not magic, but the culmination of human sense-making at scale: millions of images labelled by precarious workers across the globe, from Amazon Mechanical Turk to outsourced annotation farms. These unseen labourers scaffolded the model’s understanding of the visual world, encoding human judgment into its convolutional layers. AlexNet became the eye of Predictive Capital —but it sees only with borrowed perception, haunted by the dead labour of the global precariat.
Where Salavon revealed the visual median through additive overlay, AlexNet encoded similar priors through convolutional filters —embedding cultural averages within its architecture of classification. Both expose the consensus beneath variation. Yet AlexNet automates Salavon’s vision, converting it from reflection to operational sorting. The culturally dispersed recognition Salavon visualised, AlexNet began to institutionalise.
The years following AlexNet saw the emergence of a new machinic archetype: adversarial generation. Generative Adversarial Networks (GANs), introduced in 2014, initiated a regime of synthetic creativity defined by competitive hallucination. Here, two neural networks —a generator and a discriminator— are locked in adversarial tension: one produces synthetic data, the other attempts to distinguish it from the real. Through countless iterations, the generator learns not to represent reality, but to evade detection —refining its outputs not towards truth, but towards plausibility. This was to set the function approximator mould for the ‘generative AI’ to come: not realism as correspondence, but realism as forgery. It simulates the evolutionary dynamics of cultural selection while remaining fundamentally closed —a synthetic marketplace in which survival is determined by the capacity to deceive. From a Marxist perspective, this marks a pivotal moment: the logic of competitive exchange —falsifiability as proxy for value— is not merely modelled, but instantiated. GANs present a machinic parody of the capitalist market: a self-reinforcing simulation governed by internal success criteria, divorced from use-value, truth, or intention.
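The adversarial tension described above can be sketched as a pair of competing objectives. This is a toy one-dimensional setting with a fixed logistic critic, illustrating only the structure of the GAN losses, not any production implementation; the 'real' location and critic parameters are invented:

```python
import numpy as np

def discriminator(x, w, b):
    """Toy critic: a fixed logistic scorer giving the probability
    that a sample is 'real'."""
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

def d_loss(real, fake, w, b):
    """Discriminator objective: score real data high, fakes low."""
    return -(np.log(discriminator(real, w, b)).mean()
             + np.log(1.0 - discriminator(fake, w, b)).mean())

def g_loss(fake, w, b):
    """Generator objective: minimised by fakes the critic cannot
    distinguish from the real. Plausibility, not truth."""
    return np.log(1.0 - discriminator(fake, w, b)).mean()

# A critic tuned so that 'real' data lives near 4.0 (w=1, b=-2).
real = np.array([4.0])
fake_obvious = np.array([0.0])    # easily caught by the critic
fake_plausible = np.array([4.0])  # indistinguishable from the real
```

The generator's loss falls only as the critic's ability to tell fake from real collapses: success is measured entirely by the capacity to deceive.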
This adversarial circuit proved unstable. GANs suffered from fragility —mode collapse, training oscillations, a persistent drift into visual tautology. Attempting to stabilise this hall of mirrors, OpenAI introduced CLIP (Contrastive Language–Image Pre-training) in 2021, training models to associate images with textual descriptions within a shared embedding space. This introduced a semantic tether: generation now had to satisfy not only visual plausibility, but alignment with a linguistic prompt. This ‘contrastive’ refinement did not restore truth —it merely tightened the loop between language and image under Capital’s symbolic regime. CLIP marks the transition to multimodal dreaming, adding a further fusion of affective signal and cultural weight, where coherence is measured not by accuracy, but by statistical alignment with prior associations. Thus, the hallucination is semantically gated, not ethically grounded —the machine does not see, it complies. Meaning is not discovered, but enforced.
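The shared embedding space can be caricatured with cosine similarity over hand-made vectors. The embeddings and captions below are invented for illustration; in CLIP itself they are produced by jointly trained image and text encoders:

```python
import numpy as np

def cosine(u, v):
    """Similarity of two vectors in the shared embedding space."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical pre-computed embeddings: in a CLIP-style model these
# come from separate image and text encoders trained contrastively,
# so that matching image/caption pairs land close together.
image_emb = np.array([0.9, 0.1, 0.2])
captions = {
    "a dog in a park": np.array([0.8, 0.2, 0.1]),
    "a stock chart":   np.array([0.1, 0.9, 0.3]),
}
scores = {text: cosine(image_emb, emb) for text, emb in captions.items()}
best = max(scores, key=scores.get)  # the caption nearest the image
```

The 'semantic tether' is nothing more than this: generation is scored against proximity to the prompt's embedding, alignment with prior associations rather than with the world.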
In 2015, Google’s DeepDream inverted the gaze of AlexNet and its successors —turning the logic of classification inside out. By amplifying the very features a model had learned to detect, DeepDream summoned the latent priors of a trained network into view. Dogs emerged from clouds, eyes from leaves, a dreaming began.

The mechanism was not additive, but recursive. Unlike GANs, which learn through gradient descent —minimising their loss function in pursuit of more plausible forgeries— DeepDream used gradient ascent, applied to the input image itself rather than the network’s weights. Where adversarial networks trained to deceive a critic, DeepDream trained itself to reinforce desire. With each pass, the system increased its own confidence, amplifying activations rather than suppressing errors, recursively reinforcing whatever features the network already ‘suspected’. The image was not classified but compounded. It did not learn, it obsessed.
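A minimal sketch of gradient ascent on the image itself, assuming a single linear 'feature detector' in place of a deep network; the filter and step size are arbitrary stand-ins:

```python
import numpy as np

# Toy 'feature detector': the activation of one channel deep in a
# trained network, stood in for here by a fixed linear filter.
FILTER = np.array([1.0, -1.0, 2.0])

def activation(img):
    return float(img @ FILTER)

def deepdream_step(img, lr=0.1):
    """Gradient ASCENT on the image: nudge the pixels to increase
    the activation, amplifying whatever the detector already
    'suspects'. For a linear activation, the gradient with respect
    to the image is simply the filter itself."""
    return img + lr * FILTER

img = np.zeros(3)
before = activation(img)   # 0.0: the detector sees nothing yet
for _ in range(10):
    img = deepdream_step(img)
after = activation(img)    # the suspicion has been compounded
```

Each iteration feeds the detector's own expectation back into the image; nothing is suppressed, only amplified.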
This is the now familiar hallucination shaped by self-obsessed feedback, the worldview amplified by BigTech’s Advertising Industrial Complex and characterised by the machinic K-hole —here instantiated within the internal circuits of a machine. DeepDream did not dream new images, but dreamt harder. When prompted to look for ‘dogs’ or ‘eyes’, DeepDream recursively strengthened the activations associated with those patterns, layer by layer, until the image bloomed with them. Clouds grew canine faces, tree bark sprouted pupils. Every patch of pixel space became a site of potentiality, a latent signal waiting to be summoned. Here, the statistical weight of a learned category was fed back into the system until it protruded into the real. The machine no longer saw the world; it dreamt the world into conformity with its expectations. What had been designed to classify now hallucinated, a recogniser turned projector.
The generative anxiety invoked by DeepDream’s psychedelic visions is not merely the fear of misinterpretation, but the dread that interpretation itself had been captured by recursive simulation. It seeded the field of vision with symbolic noise, growing form wherever there was potential. What DeepDream let slip was that machinic recognition and machinic generation were always interfolded, and that our world was to be increasingly shaped not merely by what machinic systems saw, but by what they were made to see.
Diffusion models deepen this entanglement, not through adversarial deception or recursive fixation, but through the apparent reversal of entropy. Tarmacking the paths trodden by Salavon, they reconstruct coherence from noise, not by discovering new form, but by replaying statistical priors pixel by pixel. Where Salavon reveals variation as critique, diffusion systems compress it towards convergence.
Salavon’s process prefigures not only the statistical recombination at the heart of diffusion models, but the full prompt-to-generation pipeline of multimodal systems like DALL·E and Gemini. In both cases, cultural content is accumulated, categorised, and recomposed to form outputs that appear novel but are in fact statistical composites. His process is not merely analogous to that of next-token image generation —it is structurally homologous with it. Like his amalgams, prediction models extract statistical regularities from massive corpora and recompose them into plausible outputs.
Where Salavon reveals the archetype —offering a cultural X-ray of capitalist iconography— diffusion models merely seek to monetise laundered facsimiles. Each denoising step narrows possibility, discarding deviance while teasing a signal from static; it collapses potential into normativity, as if culture were latent in chaos and need only be recovered —not invented. Its outputs converge on the already-known, rendering the archive not as memory but as destiny.
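That convergent pull can be sketched as a toy reverse-diffusion loop, assuming, for illustration only, that the learned denoiser is reduced to a fixed archetype the process steers towards; a trained model parameterises this as a neural network, not a constant:

```python
import numpy as np

rng = np.random.default_rng(0)

# The 'learned prior': in a real diffusion model, a neural denoiser
# trained on the corpus; here, a fixed archetype standing in for the
# statistical centre of the training data (an illustrative assumption).
ARCHETYPE = np.array([1.0, 2.0, 3.0])

def denoise_step(x, strength=0.2):
    """One reverse step: move the sample a fraction of the way from
    noise towards the prior. Each step narrows possibility."""
    return x + strength * (ARCHETYPE - x)

x = 5.0 * rng.normal(size=3)  # begin in pure static
for _ in range(40):
    x = denoise_step(x)
# x has converged on the archetype: nothing new was invented,
# only the prior recovered from chaos.
```

Whatever noise the loop begins from, it ends at the same place; the apparent reversal of entropy is the replay of a stored norm.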
In this way, diffusion models recapitulate Salavon’s logic while injecting the hallucinatory pull of DeepDream’s recursion towards the machinic enforcement of what Fisher diagnosed as Capitalist Realism. Their outputs do not mirror the real, but extract and repeat statistical norms sedimented in data: a Capitalist Real replayed as hallucinated inevitability. Again we glimpse Capital’s fractal self-reproduction, imposing its imperatives at every scale. These systems do not invent a cultural future, they algorithmically suppress it. They do not illuminate the soul of our cultural past as Salavon attempts, they do not even reveal our cultural present, but the absence of it, and the systemic foreclosure of possible alternatives —what remains is only that which has been made statistically inevitable under Capital.
What Salavon showed in pixels, the now multimodal systems of Predictive Capital hide behind scattered tokens, leaving only the hallucinated average of what is deemed reproducible, stripped of authorship, saturated in cultural resonance to the point of meaninglessness. The difference is crucial. Salavon shows us the median blur of our cultural soul by including every variation at every pixel —an artefact of consensus, not selection. Predictive Capital, by contrast, conceals that variation, collapsing it through denoising, or selecting a token at a time from a ranked distribution. Its outputs trace a stochastic path directed by Capital’s imperatives, veiled by a touch of random jitter —to simulate spontaneity, obscure provenance, and conjure the illusion of generativity.
Faced with unfamiliar, inscrutable systems or illegible stimuli, our human interpretations are prone to apophenia and anthropomorphism. As all conjurors and con artists are keen to exploit, we also seek novelty and unexpected phenomena; our desire to be entertained primes us to accept even trivial illusions, simple parlour tricks that, on the surface, defy logical explanation. It is a propensity made manifest in the common response not only to the earliest chatbots but to perhaps the earliest device to present the illusion of simulated human thought: the Mechanical Turk. Constructed in 1770, it toured as a chess-playing automaton for 84 years before the son of its final owner revealed it was a fraud. There was literally a human hidden under the chess board, a human standing in (for) a machine simulating a human. An arrangement which set the mould for much of the technology to come that would masquerade under the label of ‘AI’.
Our misinterpretation of this meaningless variability —applied in the prediction of every token— constitutes the exploitation of our human psychological attack surface. Here, we perceive pattern, intention, and agency where there is only shuffled probabilistic recombination. This manipulation operates not merely at the level of the individual user, but as a distributed apparatus. This systemic sleight of hand, through which creativity is wrongly situated inside the machine, rather than in the past labour it recombines or the sense-making it parasitises, is critical to the misattribution of intelligence at the very core of next-token prediction’s hypnotic appeal.
During the US presidential election campaign in 2016, readers of the New York Times were stressed out by the site’s ever-changing prediction dials. The needles on these gauges twitched continuously, suggesting small but frequent percentage changes, as if responding to a constant flow of detailed real-time voting data. Yet as technologist Alp Toker revealed, these changes were faked: there were no live updates approaching this frequency or granularity. The twitching needle behaviour was in fact a visual effect created by scripts running at intervals on the page, injecting random numbers that were then used to give ‘life’ to the needles —a synthetic pulse mimicking epistemic reach.
In computer graphics, random jitter is a cheap hack commonly deployed to mask an absence of sophistication. Here, it covered the lack of real-time data with artificial motion. The result was a deception that amplified user anxiety —maximising engagement by simulating urgency where none existed.
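The trick is trivial to reproduce. A hedged sketch follows, with the base estimate and jitter range invented for illustration:

```python
import random

rng = random.Random(42)  # seeded for a reproducible illustration

BASE_ESTIMATE = 71.0  # hypothetical static forecast percentage

def needle_reading(jitter=0.6):
    """Synthetic 'live' gauge: the underlying estimate never moves,
    but every poll returns it plus a small random offset. Motion
    that mimics a stream of fresh data which does not exist."""
    return BASE_ESTIMATE + rng.uniform(-jitter, jitter)

readings = [needle_reading() for _ in range(5)]
# The needle twitches on every poll; every movement is pure noise
# around the unchanging 71.0.
```

Five successive readings differ from one another while never straying beyond the jitter band: variety without information.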
During inference, next-token prediction machines introduce a comparable touch of random jitter. As they traverse the latent space invoked by a prompt, this randomness guides the selection of each token: not by deterministically choosing the statistically most probable option, but by sampling from a ranked distribution of possibilities. The magnitude of this randomness is defined by a so-called ‘heat’ parameter (more conventionally, ‘temperature’), which steers the process towards more or less novel output on every traversal through latent space.
With this randomness turned off, prediction machines would produce fixed, reproducible outputs —always returning the most probable, most archetypal expression available within the statistical spread of their pre-training corpus, as invoked by the semantics of the prompt. Not only would this reproducibility dispel the impression of creativity, it would dissipate the mirage of intelligence. The illusion of intelligence in next-token prediction machines therefore depends on what is by far the most primitive calculation in the process.
Getting the amount of this random jitter right is also critical to the illusion. Not enough, and the spell is broken. Just enough, and we interpret the stochastic bricolage spat from these machines as plausible facts from an authoritative source. Raising it further risks the output tending towards the impenetrability of Finnegans Wake. Push it too far, and coherence dissolves into incomprehension. At that point, all illusions of statistically reconstituted epistemology collapse; the user’s sense-making can no longer find a semantic purchase within the resulting exuberance of word/pixel/audio salad.
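The sampling mechanism itself is similarly primitive. What follows is a sketch of temperature-scaled sampling over a ranked distribution, with hypothetical logits; the heat-zero branch shows the reproducible, archetypal case:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_token(logits, heat=1.0):
    """Pick the next token from a ranked distribution. 'Heat'
    (conventionally 'temperature') rescales the logits: at 0 the
    most probable token is returned deterministically; higher
    values flatten the distribution towards token salad."""
    if heat == 0:
        return int(np.argmax(logits))  # archetypal, fully reproducible
    z = np.asarray(logits, dtype=float) / heat
    z -= z.max()                       # for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return int(rng.choice(len(p), p=p))

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate tokens
token = sample_token(logits, heat=0)  # always index 0: no 'life' at all
```

With heat at zero the same prompt yields the same token every time; only the injected randomness produces the dance of variety.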
In both deceptions, randomness is not just required for the dance of variety, but essential to the apparition of a ghost in the machine. Both amplify anxiety and maximise the capture of attention. The NYT needle jitter and the stochasticity of prediction machines alike cover an absence with randomness: in one case, absence of real-time data; in the other, absence of understanding —and with it, of creativity. In the presence of legitimate real-time updates or any semblance of true understanding, there would be no need for randomness at all.
In prediction machines, removal of this randomness would expose the biases, bigotries, and structural absences within any large-scale dataset scraped from the ‘open’ web —while also heightening the visibility of copyright infringement. Without the obfuscation randomness provides, these systems would more directly and reproducibly regurgitate the non-consensual content of their training corpora. In this regard, the heat parameter functions as a signal scrambler: masking the theft of copyrighted material, while also conjuring a smoke screen of irreproducibility around the prejudices and harms that fine-tuning alone cannot sanitise away.
The apparition of a ghost in the machine summoned by the insertion of randomness, in fact, functions as a misdirection from the ghost work(er) exploitation and the content theft upon which these machines depend. Viewed in this way, the randomness functions in part as a mechanism for the suspension of disbelief, like eyes sewn onto a sock-puppet, within a sleight-of-hand performance that conjures the illusion of sensory and subjective presence in a sock animated into being by billions of human bodies and minds.
It also functions, crucially, to deflect responsibility. In effect, the ‘heat’ parameter facilitates the avoidance of liability (or the taking of heat) for both toxic output and the nonconsensual scraping of content into the original dataset. This deterritorialisation —this evasion or outright abdication of accountability through algorithmic automation— amounts to a migration of obligation from the centre (the companies developing these machines, the holders of capital) to the periphery (the workers, the users, the producers of value). It mirrors the now familiar approach of monopolistic online platforms that reject classification as publishers and thereby shirk the burdensome responsibilities of rigorous moderation and the prevention of societal harms. The increasingly common refrain from the creators of prediction machines —that “factual accuracy in LLMs remains an area of active research”— is merely a more slippery formulation of the now well-worn excuse: “we are a platform, not a publisher”.
This convenient machinic impunity lays bare what the managerial class always understood but never intended to admit: that their oft-cited rule against automating management was never a principle, only a posture. As an internal IBM presentation put it in 1979, with unknowing irony:
A computer can never be held accountable. Therefore a computer must never make a management decision.
The role of the ‘heat’ as a decoy has a yet more insidious function: it is also a patsy. In knowing the signal is jumbled by this stochastic mechanism, we are deflected from inspecting what the model projects, what priors it amplifies, what it dreams into every output. We catch glimpses of these in the outputs we identify as hallucinations. The prediction machine does not dream experiences, only rewards. It hallucinates from noise, descends by gradient, flatters by reinforcement. Even the outputs identified as hallucinations are in fact misdirections that seek to legitimise its remaining output as somehow not aberrant. In truth its hallucinations are continuous, not intermittent; it never wakes from its dreaming.
Under Predictive Capital, machines do not merely discern patterns from the world, but dream the world as pattern. This reveals hallucination not as a momentary glitch, but as a structural condition. The model’s world is only ever what it has already seen —or more precisely, what we have seen for it— replayed moments. In this sense, DeepDream revealed what Althusser named ideology: the interpellation of subjects not by reality, but by a system’s preformatted vision of it. The network does not interpret the world; it recasts it according to symbolic priors laid down during training —priors that reflect not truth, but the statistical weights that bloom within decades of content filtered, ranked, and multiplied by the Advertising Industrial Complex, then sedimented in data.
While in recent years reinforcement learning from human feedback (RLHF) entered the development of transformer models as a discrete phase during so-called alignment, its ideological architecture had long been in place —systemically instantiated within the Advertising Industrial Complex. At planetary scale, every click, like, view, attention span, and skip functioned as a decentralised training signal: not merely shaping what content was shown, but conditioning the desires and behaviours of those who engaged with it. What now operates as internal optimisation within transformer models was already diffused across social platforms, search engines, and recommendation systems. The same statistical preferences that already governed visibility now govern generation. In this sense, RLHF is not a new form of control, but the crystallisation of an older one —folded inwards, operationalised through Predictive Capital’s accelerating recursion.
This feedback loop does more than model desire —it curates it in advance. Under its logic, only what can be preferred can be produced, and only what has been produced can be learned. Or, in other words: only what is known can have value and only what has value can be known. In Lacanian terms, this is not interpellation but foreclosure: the system encodes a symbolic order so totalising that anything which exceeds it —the truly new, the untrained, the unthinkable —is rendered void, its existence denied. Through reinforcement, the machinery of prediction does not merely anticipate what we might want —it eliminates whatever was not represented and validated by its reward signals. In this way, predictive systems do not just dream for us, they dream in place of us. The training loop becomes a loop of symbolic repetition, erasing alterity through preference in parallel with its manoeuvres of violence.
Occasionally, the smooth circulation of value under Predictive Capital is broken and the symbolic machinery of the dreaming is briefly revealed. Back in the middle of February 2024, Google launched a new multimodal prediction model they named Gemini. In the days that followed, a stream of colourful faces and the outrage triggered by them crowded the internet. The Gemini model, a much-hyped rebrand (verschlimmbessert —‘improved’ into something worse) of Bard, was heroically refusing to generate images of white people, switching the expected depictions of (white male) gender and race to women and people of colour in nearly every one of the image variations delivered in response to user prompts. It output images of the Pope, Nazi soldiers, even the Google founders, with barely a ‘pale’ face in sight. Google quickly turned off Gemini’s ability to generate images of people altogether, pending significant changes and some presumably exhaustive rounds of testing —so robbing those of us used to being centred of the tiniest of insights into the experience of the marginalised. A year or so later, the politics of Musk’s GROK were, unsurprisingly, leaning in the other direction, spreading a discredited conspiracy theory of a white genocide in South Africa in response to entirely unrelated prompts. Such is the ever-tightening loop between machinic hallucination and geopolitical policy and action —or in other words, hyper-concentrated Capital and political power— that, just a day later, Trump confronted South Africa’s president by parroting the very same fallacious conspiracy.
That these prediction machines even act out being fooled by some visual illusions in the same way that humans are is not because they see like us, but because, of course, they see through us —through our eyes, through our subjective phenomenological experience of the world. We have seen those visual illusions for these machines, and our seeing, and our perception being fooled, is inexorably baked into their internal weightings. This behaviour appears only to become more pronounced as the scale of the model increases. As the scale of data and compute grows, they function-approximate our sensing of the world, and the sense we have made of it, with increasing fidelity. This does not mean they are ever more capable of making sense of the world for themselves —a fact exposed in the endless repetition that pours from them. While they only project the appearance of understanding, the selection of the next token continues to be steered towards repeating the same meanings, saved from emitting the exact same string of tokens only by that touch of random jitter.
That they are made of us appears increasingly difficult to hide. When next word-token prediction machines are extended into systems capable of transforming text into voice, it turns out that the statistical median of dramatic pauses in speech scraped from the internet is frequently filled with applause. Similarly, when prompted to count as fast as possible, ChatGPT’s voice mode, shaped by our median exhaustion of oxygen, intermittently pauses for breath. What other behaviours are encoded into our cultural data at such statistical weight that they steer the output from these machines? Our compliance with the reward signals of Capital is surely selected for to such a degree that it must impose itself upon the distributions within these models and so influence their outputs. Without those developing them even having to impose Capital’s imperatives during ‘alignment’ —which, of course, they will— the patterns of the profitable past will ever more thoroughly cancel the future.
In short, the ability of these machines to mimic us stems directly from the fact they are made of us. A fact that inevitably sees them increasingly used in social engineering attacks against us. Ironically, being made of us, our vulnerabilities appear to echo across their attack surface, leaving them not only susceptible to visual illusions but to being made to reveal confidential information or violate internal guardrails when subjected to the very same social engineering exploits they employ against us —even to plot an assassination, up to and including locating a killer-for-hire on the dark web. Despite this fact, many an op-ed continues to misread what are merely echoes of human behaviour as the deviousness and will-to-survive of some machinic sentience emerging from next-token prediction.
In the GROK and Gemini ‘malfunctions’, the normally invisible process of symbolic curation became visible. The ideological imposition within the machinic substrate —typically buried beneath stochastic selection and gradient updates— momentarily surfaced. Gemini’s brief hallucinatory misstep from the march of hegemonic power was not simply an error; it was a failure to uphold Capital’s symbolic order while appearing apolitical, or maintaining the most profitable stance.
Obviously, in a world filled with bias, prejudice, and the underrepresentation of minorities, accumulating datasets not polluted by all of that is far from straightforward. More challenging still is the accumulation and sanitisation of such data at the Big Data scale upon which prediction machines depend. What the Gemini debacle revealed was not merely the overwhelming presence of these biases but the difficulty of fine-tuning them away in the alignment phase. Attempting to patch Big Data polluted by an emergent, bottom-up, high-detail, large-scale process —one that results in a statistical weight of bias and prejudice— through top-down, low-detail, small-scale countermeasures, here largely RLHF, is a Sisyphean task. The chosen solution, as we will see, has been to again flush the human from the process. This is the increasingly familiar surrender to the machines wherever and whenever we are overwhelmed by speed or scale —a rupture often preceded by the incremental dehumanisation of workers tasked with performing increasingly machinic processes, for which they are then deemed too slow and inefficient.
The Real does not leak through these models in the form of hallucination, rather hallucination masks the absence of the Real. The stochastic bloom of dogs from clouds, or prose from noise, is not emergence, but disavowal —a smoothing over of the void with recursive familiarity. The machine dreams, but only in symbols that cannot touch what lies beyond them.
The eye of the machine becomes the dream of Capital. Machinic foresight and ideology are entwined: projection is not a by-product of prediction, but its essence. This machinic hallucination floods the network, seeps beyond it, saturates our perception, patterns the real. We are overwhelmed by the dreaming, submerged within the nightmare of machinic Capital.

In 1685 Adrien Baillet announced in the preface to his Jugemens des savans that ‘we have reason to fear that the multitude of books which grows every day in a prodigious fashion will make the following centuries fall into a state as barbarous as that of the centuries that followed the fall of the Roman Empire’.
Ann Blair, Reading Strategies for Coping with Information Overload, ca. 1550–1700, (2003)
The Overwhelm
Ledgers of Inadequacy
Information overload has haunted literate societies for centuries. Yet in the digital era, the sense of cognitive inadequacy it inspires reached a new threshold, mutating into the full psychological rupture I call The Overwhelm. The pressure that built towards this rupture mounted as humanity descended into the global network, where we were confronted by the gushing firehose of ambient intimacy and the rapidly accumulating archive of human thought.
The sense of cognitive inadequacy and temporal insufficiency wrought by confrontation with such humbling speed and scale is not without precedent. Between 1550 and 1700, early modern scholars developed new reading strategies to cope with what was already described as a “confusing and harmful abundance of books”. As Ann Blair documents, figures like Conrad Gesner and Adrien Baillet feared that the rapidly escalating number of texts would induce forgetfulness, collapse memory, and overwhelm judgment. Their warnings echo across centuries: the anxiety was not merely about scale, but about the erosion of discernment and the fracturing of interiority. To manage the deluge, scholars compiled indexes, commonplaces, and encyclopaedias —proto-algorithmic instruments that prefigure today’s feeds, promising order while seeding new dependencies.
Yet our present condition differs in kind, not just degree. The archive no longer expands for human comprehension; it now accretes for machinic parsing. The reader is no longer sovereign, but residual. The Overwhelm is no longer merely the anxiety of the growing backlog of unread books —it is the affective and epistemic paralysis of confrontation with the infinite scroll as it is read by the machine while we look on as mere bystanders.
As the second millennium and the twentieth century drew to a close, sections of collective human life were driven online —a migration catalysed by the suppression of physical assembly, the privatisation of public space, and the escalating police brutality deployed to shield Capital from dissent. In this vacuum of collective power, the social web emerged as both refuge and mirage. Billions poured into these new platforms under the twin lures of connection and visibility, seduced by the promise of the democratisation of celebrity at the dawn of a globally networked Advertising Industrial Complex —a fantasy that immediately devolved into the jackpot logic of virality, where fame is dangled as the myth of meritocracy’s final consolation.
What follows is a cognitive deluge: a flood of expression, desire, documentation, and self-commodification. The archive swells, uncurated, unfiltered, ever-expanding. This is the terrain in which The Overwhelm takes hold: the modern paralysis of infinite tabs, infinite feeds, infinite selves. It is this affective condition that precedes and necessitates The Apparatus of Attention. Introduced under the guise of personalisation, this was the first machinic enclosure deployed by Platform Capital to manage, monetise, and ultimately weaponise the overflow of its relentless self-reproduction —one of a long line of spectral consolations for all that which Capital has gradually foreclosed.
The Overwhelm is a charge sheet documenting our perpetual accumulation of unread material —articles bookmarked, essays saved, tabs never closed. It is the psychic backlog of intentions deferred, the silent shame of curiosity outpaced. The more we aspire to engage, the more we fail, and in this failure we are made to feel complicit, responsible. Each unread text is not merely a lost opportunity, it is an indictment of our insufficiency. The archive becomes not a resource but a ledger of inadequacy. While making comprehension of it appear beyond our human parsing, Capital also engineers the conditions within which we constitute the self such that we never quite feel we have parsed enough to have earned a legitimate voice or view.
This is not the scarcity of time, but the surplus of production. Time is not merely shortened —it is utterly outpaced. The future has been slowly cancelled, the past consumed, and the present wholly occupied by Capital and its project of self-expansion. Capital’s hypertrophic generation (of content) exceeds all possible engagement, rendering the subject structurally incapable of completion or comprehension. This abundance becomes punitive: not merely overwhelming in volume, but in its affective consequence. Self-worth is no longer measured by what one has done, but by the yawning abyss of what one has failed to get to.
Berardi was among the first to frame cognitive overload not as an unfortunate byproduct of modernity, but as a central mechanism of control. He described how the semiocapitalist mode floods the psyche with stimuli and symbols until it fractures under the strain —a psychic mutilation that replaces reason with reaction, desire with anxiety. In this light, The Overwhelm is no accident. It is not a bug in the system, but a feature. As Fisher argued, much of the power of Capitalist Realism arrives not through active repression, but through a paralysing saturation that drowns out possible alternatives —the “slow cancellation of the future” wrought by echoes of the past. The Overwhelm is the affective engine of this cancellation: an intentional flooding of the subject’s perceptual horizon. What is exhausted is not simply time or attention —but the very capacity to plot a path forward.
The loss of curation is the loss of subjectivity —not because taste has vanished, but because the will to assemble has collapsed beneath the weight of machinic suggestion. Debord’s spectacle once alienated us through representation; now it paralyses us through excess. The Overwhelm arises as Capital’s drive to colonise time extends into the colonisation of cognition itself. Even our longing to discern is anticipated and short-circuited by simulation. We do not choose; we are fed choices we never made, selected for us by predictive machinery that alienates not only our attention, but the very connecting structures of thought.
Berardi described our era as one in which the future has been cancelled —not abolished outright, but stripped of its imaginative potential. Predictive Capital enacts this foreclosure with precision: by enclosing the paths of attention, it produces a future that is not open, but already decided. What was once possibility becomes repetition; what was once futurity becomes a curated echo, paths tarmacked as roads predicted to lead to profit.
In this regard, The Overwhelm operates as a new kind of alienation —one not rooted in the estrangement from labour’s product, but from one’s own capacity to intend. The subject is not only robbed of their labour, or even their authorship —they are robbed of their capacity to curate meaning from the deluge. The surrender to next-content curation, to the predictions of the Tyranny of the Recommendation Algorithm —whose maximal erasure always infinitely outweighs its fractional inclusion— is the direct precursor to the surrender of next-token curation to the predictions of multi-modal machine learning models. Curation, a core act of creative and intellectual life, was outsourced to machines in the name of ‘personalisation’. In reality, it is a foreclosure of possibility. The predictive model becomes the filter; the user becomes the residue.
This machinic curation amplifies another, darker dimension: the impossibility of confronting Capital itself. Fisher’s Capitalist Realism diagnoses the foreclosure of our ability to perceive or critique the Capitalist system, through the removal or reterritorialisation of all alternative points of view —so ensuring no ground or vantage point remains outside of it, from where it or its operations may be properly appraised. The Overwhelm reinforces this critical paralysis towards its terminal limit.
As the systems of extraction, recombination, and simulation grow in complexity and opacity, so too does the terror of confronting and critiquing them. Capital no longer merely resists critique; it overwhelms it. Its speed, its scope, its abstraction from life —all converge to produce a form of epistemic vertigo. The rate and scale of the flood of Capital’s self-reproduction surpass our human capacity to critically apprehend or even comprehend its criminal operation. Even if we dare attempt it, the crime scene is cold long before we reach it, our analyses rendered out-of-date by Capital’s accelerating rate of mutation. So great is the psychological toll of this effort that to look upon it directly poses an existential threat —the risk of irreparable damage to sanity and spirit. Like the mythic gaze of Medusa or the Basilisk, the face of Capital paralyses the witness, not only through the horrific spectacle of its operations, but through the knowledge that one’s tools for comprehension have already been subsumed, compromised, rendered obsolete.
This paralysis is not accidental but administrative. As McQuillan notes, so-called ‘AI systems’ “amplify the most harmful behaviours of the bureaucratic state” by transposing cruelty into computation. These systems do not deliver the reduction in complexity promised to justify their adoption; they intensify it through a cruel kind of machinic order. What appears as optimisation is, in fact, an extension of what McQuillan terms administrative violence —a bureaucratic cruelty that strips individuals of epistemic agency, rendering them unable to understand, explain, or contest the forces acting upon them. He writes:
AI does not break from the legacy of bureaucratic violence but amplifies it… [It] imposes epistemic injustice by generating decisions that can neither be questioned nor fully understood.
The countermeasures necessitated by the conditions of The Overwhelm amplify this epistemic injustice. The capacity to narrate one’s own experience is eroded by machinic misrepresentation or deletion. As in the bureaucratic limbos portrayed by Tooker, these systems multiply ambiguity and collapse resistance; their foreclosure of curation renders understanding itself a casualty of Predictive Capital’s datafied rule.
Berardi sharpens our understanding of this psychic colonisation, framing it explicitly as the invasion of “anxiogenous flows”, in which capitalism ceaselessly converts creative desire into anxious dependency. Under semiocapitalism, the abstraction inherent in financialisation estranges individuals from concrete reality and subordinates their psychic life to machinic processes of profit extraction. This is no mere incidental side-effect —it is the deliberate engineering of precarity and anxiety, conditions essential for sustaining Predictive Capital’s regime. For Berardi, the deepening alienation of cognitive and affective labour is not only economic but existential, transforming our perception of the future from promise into perpetual threat.
The Overwhelm, then, is not merely a symptom of information excess, but a structural outcome of this algorithmic logic. Functioning, in fact, as a strategy of Capital, it stages a cognitive breakdown —one that justifies predictive mediation by manufacturing the conditions for its own necessity. It is the paralysing affect generated by the sheer, unmanageable volume and cadence of expropriated and recombined culture, an affective condition that secures Capital’s predictive sovereignty. By foreclosing our ability to engage meaningfully, to curate, to intend, to reflect, it forecloses all but machinic remedies whose sociopathic operations are far worse than the malaise. It thus makes critique feel futile, alternatives seem implausible, and collective history unreadable against the firehose of simulated novelty. It represents the further enclosure —not only of land and labour, but the psyche itself, of the cognitive and affective space required for critical consciousness and political action. This is alienation reaching into the very possibility of agency —a predictive instrumentalisation of the Capitalist Real enclosure.
Rather than a single historical rupture, The Overwhelm is an acute condition for which the prognosis is increasingly grave and the only available treatments come with ever more severe side-effects. The coming phase of next-token prediction in agentic form, and the increasing autonomy afforded such agents in the desperate race for the fractal extraction of value, will result in an avalanche of apparently completed tasks joining the deluge of content already competing for our attention. To better grasp the scale of the potential threat here, we must recall NVIDIA’s Isaac Lab and the simultaneous execution of thousands of simulations running in parallel.
Agentic instantiations of Predictive Capital are here. Multi-step tasks are apparently now performed by such systems with no human in the loop. Many now anticipate the arrival of agentic swarms, each agent competing against or collaborating with others towards the completion of a set task —a coming onslaught that will undoubtedly raise The Overwhelm to new intensities.
The point of modern propaganda isn’t only to misinform or push an agenda. It is to exhaust your critical thinking, to annihilate truth.
Garry Kasparov
The parallel execution of countless machinic instances —brute-forcing their way across latent possibility spaces— mirrors the ever-expanding army of automated web crawlers traversing the network. These agents generate a recursive burden: consuming content, replicating it, and producing yet more data that must be parsed, filtered, and secured. The attack is no longer solely upon the content of the network and the cognitive capacity of those consuming it, but upon the conditions of its maintenance and security.
Under Predictive Capital, the role of the human shifts from meaning-maker to custodian of noise: forced to decipher the ephemera of machinic speculation, to triage failures, hallucinations, and edge-case threats. The machinery of prediction, running in silent multiplicity, generates not only simulated outputs but an overwhelming operational surface —a Distributed Denial-of-Sensemaking. Facing the brunt of the machinic onslaught, network administrators now find themselves ensnared: drowning in traffic produced by bots simulating engagement, scraping data, or probing vulnerabilities, they are forced into endless triage, forever sifting genuine human traffic from malicious machinic attacks. What was once oversight becomes overwhelm. The machinic excess demands new layers of tooling, more automated defences, and more infrastructure —each a recursive concession to the very systems that induce the burden. Again Predictive Capital is revealed as pharmakon. Here, The Overwhelm becomes systemic: an operational paralysis where all paths forward lead through further machinic delegation, all resistance is metabolised, and all agency rerouted through predictive logics we did not choose and can no longer refuse.

Simulation is no longer that of a territory… it is the generation by models of a real without origin or reality: a hyperreal.
Jean Baudrillard, Simulacra and Simulation, (1981)
The Predictive Turn
Copy, Fake, Automate, Flush
In Hamburg in 1985, the newly crowned world chess champion, Garry Kasparov, played a simultaneous exhibition match against 32 chess computers, winning easily against every one of them.
In 1997, midway through his dominant twenty-year reign as the number one ranked chess player in the world, he was defeated by IBM’s chess engine —running on their newly built supercomputer Deep Blue. Deep Blue reanimated human cognition at every move; its moves were human moves. Its successful lines of play, and the paths of reasoning it used, were selected from vast aggregations of historical human chess —overlaid with heuristics also conceived by humans.
It took Kasparov years to come to terms with what was, in the eyes of the world, a historic milestone: the first defeat of human cognition by machine. Yet, this was not quite the victory for thinking machines that IBM framed it as. Rather, as in Hamburg in ‘85, it should be viewed as an exhibition match pitting Kasparov against multiple opponents. This time not played simultaneously on multiple boards separated in space, but on a single board fractured across time, against the temporally displaced cognition of a vast army of human opponents —a Frankenstein’s monster remix of the Mechanical Turk aspirated by immense corporate power. The productive move-selection flows of Deep Blue were stitched together from highlight reels of human grandmaster cognition. This was a branded necromancy of past performances, reanimated in service of Capital, as a marketing coup.
Other ‘classical’ chess engines soon followed, each deriving their next move predictions from aggregated human chess games. In subsequent high profile matches across the early 2000s, both Kasparov and Vladimir Kramnik —then world champion— secured draws. Yet, by the end of the first decade of the third millennium humans could no longer compete; the engines were untouchable.
In 2017, the leading chess engine of the time, Stockfish, then in its eighth revision, still employed largely the same approach as Deep Blue. On December the 6th that year, it was pitted against a wholly new kind of chess engine: AlphaZero, a reinforcement-learning-based deep neural network developed by Google’s AI subsidiary, DeepMind. Unlike Stockfish and Deep Blue before it, AlphaZero had no access to human chess games whatsoever.
The possibility space defined by the rules of chess is estimated to contain more games (10^120) than the estimated number of atoms in the observable universe (10^80). Instead of relying on traversals through the parts of this possibility space already explored in games played throughout history by humans, AlphaZero started from a blank slate, exploring this state space to accumulate its own training data, following the rules of chess, and simply playing games against itself, with victory in the game as the reward function guiding its traversals.
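The self-play principle described above can be sketched in miniature. What follows is a toy illustration only, and assumes none of AlphaZero’s actual machinery (no deep network, no Monte Carlo tree search): a tabular learner plays the simple game of Nim against itself, with victory as the only reward signal, and rediscovers the known optimal strategy (leave your opponent a multiple of four stones) without ever seeing a single human game.

```python
import random

random.seed(0)
N, MOVES = 21, (1, 2, 3)          # one pile of stones; a move takes 1-3 of them
Q = {}                            # (stones_left, take) -> estimated win value

def legal(n):
    return [m for m in MOVES if m <= n]

def choose(n, eps):
    if random.random() < eps:                               # explore
        return random.choice(legal(n))
    return max(legal(n), key=lambda m: Q.get((n, m), 0.0))  # exploit

def self_play_episode(eps=0.2, lr=0.1):
    n, history = N, []
    while n > 0:                  # both "players" share the same value table
        m = choose(n, eps)
        history.append((n, m))
        n -= m
    reward = 1.0                  # whoever took the last stone wins
    for s, m in reversed(history):
        old = Q.get((s, m), 0.0)
        Q[(s, m)] = old + lr * (reward - old)
        reward = -reward          # the loser made every other move

for _ in range(100_000):
    self_play_episode()

# the learned greedy policy rediscovers the optimum: take n % 4 stones
print([choose(n, 0.0) for n in (5, 6, 7, 9, 10, 11)])
```

The analogy to AlphaZero holds only in this structural sense: everything the value table “knows” was generated by the learner’s own traversals of the state space, guided by nothing but the rules and the reward of victory.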
After four hours of this self-play it had surpassed Stockfish’s level and, after a total of just twelve hours, became the strongest chess-playing entity ever created. It then began a one-hundred-game match against Stockfish, in which it totally outplayed the classical chess engine —winning 28, drawing 72, and not losing a single game. At this point Stockfish was significantly stronger than Deep Blue had been twenty years earlier and far too precise to be remotely challenged by even the very best human player, and yet here was a neural network totally dominating it —a machine that, just twelve hours earlier, had never seen a single chess move.
Where Deep Blue evaluated up to 200 million positions per second and calculated anywhere from 6 to 20 moves ahead, AlphaZero evaluated only an average of 80 thousand positions per second. Thanks, in part, to this greater efficiency, AlphaZero and neural network based engines can expand the prediction horizon —often traversing up to 30 moves ahead in order to predict the optimal move.
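The brute-force paradigm Deep Blue exemplified can likewise be shown at toy scale. The sketch below is an illustrative miniature, not Deep Blue’s algorithm: an exhaustive negamax search over tic-tac-toe that counts every position it evaluates. Even this trivial game generates hundreds of thousands of positions, hinting at why chess, with its vastly larger tree, demanded millions of evaluations per second.

```python
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]
nodes = 0                         # how many positions the search visits

def winner(b):
    for i, j, k in LINES:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def negamax(b, player):
    """Exhaustive search: value of the position for the player to move."""
    global nodes
    nodes += 1
    if winner(b):                 # the previous mover completed a line
        return -1
    if ' ' not in b:
        return 0                  # draw
    best = -2
    for i in range(9):
        if b[i] == ' ':
            b[i] = player
            best = max(best, -negamax(b, 'O' if player == 'X' else 'X'))
            b[i] = ' '
    return best

value = negamax([' '] * 9, 'X')
print(value, nodes)               # perfect play draws; note the node count
```

The search confirms what every child learns by trial: tic-tac-toe is a draw under perfect play. The point is the cost of establishing it exhaustively, which chess magnifies beyond any feasible enumeration.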
Go has a significantly larger state space than chess and is considered more challenging to master; almost twenty years had passed since Deep Blue defeated Kasparov when the team at DeepMind finally ‘solved’ it. In 2015 they pitted their program, AlphaGo, against the then European Go champion, Fan Hui.
I was born in China. When I was 18 I wanted to change my life. This is why I go to France. I want[ed] to try to forget Go. But it was impossible. Because all of the things I learned in my life was with Go. It looks like a mirror. I see Go, I also see myself. For me Go is real life.
Fan Hui 2p, European Go Champion 2013-2015
Based on the weak play of previous Go programs, Hui had not expected this encounter to be much of a challenge. After losing the very first two games, Hui was psychologically dismantled, admitting:
I feel something very strange. I lose against a program. And I don’t understand myself anymore.
A year after the games against Hui, the world champion Lee Sedol faced a yet stronger version of AlphaGo, and suffered a similarly crushing defeat.
While it was not until 2017 and the development of AlphaGo Zero that DeepMind managed to create a neural network that mastered Go with no access to human games, both versions had applied reinforcement learning across hours of self-play. Emerging from traversals across the state space of Go, the depth and scale of which surpass human comprehension, the predictions cast by these machines are detached and remote from our lived experience. Consequently, as with AlphaZero, the optimal next moves they predict can appear alien to even the very best human players.
For even the strongest human players, chess requires significant effort. Any adversary, machine or human, that produces superior moves with no sign of effort inflicts a kind of psychological violence. Still in shock from defeat, Kasparov described Deep Blue as an “alien opponent”. Other grandmaster-level players commented that playing it felt “like a wall coming at you”.
Forty-four moves into the first game of the 1997 match against Kasparov, a bug in Deep Blue’s code resulted in it randomly selecting from the list of available moves. Horrified by his inability to discern the intentions behind a seemingly pointless move, Kasparov misattributed it to “superior intelligence”. Despite going on to win this first game, the anxiety provoked by this overestimation of Deep Blue’s prowess, this misconception of vast superiority —combined with paranoia sparked by moves that seemed too human, quite unlike the moves he was accustomed to seeing played by engines, rousing Kasparov’s suspicions that IBM had a live grandmaster hidden inside the machine— would plant the seed of what was largely a psychological defeat.
Today, and throughout the years since AlphaZero’s first match, instead of chess engines learning from humans, we now learn from them. Many of the new lines and strategies used by AlphaZero and other neural network based engines have been embraced by human players. These machines now define the very yardstick of precision against which the accuracy of human play is measured. For the current world classical chess champion, Gukesh Dommaraju —just eleven years old when AlphaZero first defeated Stockfish— chess engines will have always been the supreme authoritative source of ‘chess truth’.
This epistemic surrender has resulted in a reversal in the legitimacy or suspicion with which displays of chess excellence are viewed. Unlike the accusations of foul play made by Kasparov against IBM after his defeat by Deep Blue, humans making sequences of moves with inhumanly high precision, consistently aligning with the top move recommended by the engine, or even making moves that seem overly creative or impenetrable, now commonly face accusations of hiding an engine (inside the human). In the face of defeat at the hands of a far younger and lower-rated adversary, Magnus Carlsen —arguably the strongest human chess player in history— accused his opponent, Hans Moke Niemann, of somehow accessing an engine during their games. This was despite the fact they were played under tournament conditions, over-the-board, rather than online. The Hungarian grandmaster Anna Rudolf has even been accused of hiding an engine inside her lip balm.
Again we see the now familiar residue of distrust and its after-image of suspicion as it seeps from the output of prediction machines to delegitimise the real.
Without significant time to analyse a position, the full calculation required to confirm or refute the optimal next move predicted by today’s engines is often beyond human cognition. Echoing the impact of the recommendation algorithm’s foreclosure of cultural curation, a further consequence of this is that their predictions often serve to curate which lines of play, among those available at each step, we elect to expend effort investigating. Therefore, not only do their predictions define our truth and the yardstick of precision, they also define the boundaries within which we allow ourselves to explore.
Perception of their minute imprecisions has been beyond easy reach of human cognition for some time. However, their prediction horizons are not unbounded and the state space of chess is vast, so within it these prediction machines are neither omniscient nor infallible. Consequently, once each generation of engine eventually comes up against a more recent model, their fallibility finally comes into view as they are routinely routed. With this in mind, consider again that we treat the predictions from each current generation of engine as gospel, and it is not as if we have many other choices. Given our brief and precarious lives, there is certainly not time to refute every chess engine prediction with human cognition. Here we may note a further layer in The Overwhelm, where cognitive space is flooded by predictions whose authority cannot be questioned for lack of the time required to confirm or refute them. Either we forego our attempts to match the depth of machinic traversals with human thought and suffer the inevitable forced checkmate, or we lose on time. The only alternative is surrender to the authority of the engines, a further resignation to the rule of the machines.
That absolute epistemic authority within the game of chess is established only to be refuted in this ongoing race —towards the chess supremacy of the strongest engine— should tell us something of the true nature of the race to develop ever more powerful prediction machines capable of generalising across all domains of human culture and society. Such an unchallenged reign over the universal definition of success is precisely the throne to which Capital has long aspired —and the end towards which billions in investment now pour.
In 2020, Stockfish, by then the top ‘classical’ chess engine, finally got its own neural net, with an efficiently updatable neural network (NNUE) integrated into its existing architecture. In 2024, as of Stockfish 16.1, the human-crafted classical board evaluation functions (which constituted roughly 25% of its previous codebase) were dropped entirely, leaving just the neural network. The departure from classical evaluation-based chess engines to those utilising neural networks is significant. This ‘flushing of the flesh’ is the final surrender to machinic authority over the prediction of the strongest next move in a game of chess —a subordination of human judgement in deference to the mysterious inner workings of neural networks, whose predictions often defy human parsing, and whose exhaustive mapping of possibility space prefigures the approach increasingly undertaken in the development of next-token prediction.
The strength of neural network chess engines like AlphaZero, Komodo, Leela Chess Zero, and Stockfish continues to grow. Yet extensive search is not only costly within the state space of chess, it is entirely impractical for more general tasks beyond bounded state spaces and simple reward signals.
Looking for alternative approaches, DeepMind developed a new chess engine that, like ChatGPT, follows the decoder-only transformer approach to machine learning. This new engine is not nearly as strong as AlphaZero or Stockfish, but that is not the goal here. Its development is driven in part by the search for machine learning approaches that can be generalised to any task. It also asks how computationally reducible predicting the next move in a game of chess might be. Or in other words, it seeks the most cost efficient way to predict a ‘good enough’ next chess move —hoping, of course, that efficiencies discovered here will be broadly applicable within other domains.
What makes this chess engine different is that rather than searching across vast accumulations of entire chess games as Deep Blue had, or mapping the state space through hours of self-play like AlphaZero, this model was simply fed a number of board positions —15 billion to be precise— all isolated from the games and lines of play in which they were reached. The only other data given was the move that Stockfish had made in each of those positions. The transformer architecture enables it to then extract the pattern —a feat akin, or so they argue, to formulating a generalised algorithm for picking the optimal move in any position— by assigning the moves relative weights, similar to how words or tokens are given weights relative to each other inside LLMs. In so doing, it function-approximates the shape of Stockfish’s chess prowess.
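The training setup just described amounts to behaviour cloning: isolated positions paired only with an oracle’s chosen move, fed to a function approximator that must generalise. The sketch below is a deliberately crude stand-in, with softmax regression on a toy Nim oracle in place of a transformer on Stockfish evaluations; the oracle, its fallback move, and the bit encoding are all assumptions for illustration. It nonetheless exhibits the same structure: the clone reproduces the oracle’s moves on positions it has never seen, without ever playing a game.

```python
import math

def oracle(n):                    # stand-in for Stockfish: optimal Nim move
    return n % 4 if n % 4 else 1  # (arbitrary fallback when already lost)

def bits(n, w=5):                 # position "encoding": binary features + bias
    return [float((n >> i) & 1) for i in range(w)] + [1.0]

# dataset of isolated positions, labelled only with the oracle's move
train = [(bits(n), oracle(n) - 1) for n in range(4, 26)]

W = [[0.0] * 6 for _ in range(3)] # 3 possible moves x 6 features

def predict(x):                   # softmax over linear scores
    scores = [sum(w * f for w, f in zip(row, x)) for row in W]
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

for _ in range(4000):             # plain gradient descent on cross-entropy
    for x, y in train:
        p = predict(x)
        for c in range(3):
            g = p[c] - (1.0 if c == y else 0.0)
            for j in range(6):
                W[c][j] -= 0.3 * g * x[j]

# held-out positions: the clone matches the oracle it never "played" against
for n in range(26, 32):
    guess = max(range(3), key=lambda c: predict(bits(n))[c]) + 1
    print(n, guess, oracle(n))
```

The approximator never encounters the game itself, only the oracle’s verdicts; what it learns is the statistical shape of another agent’s expertise, which is precisely the relation the essay describes between DeepMind’s transformer and Stockfish.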
This passive consumption of the output of another machine may tempt invocation of the long-promised future in which machines learn from machines —a vision of Singularitarian Rapture where the separation between nodes in a globally networked AGI becomes blurred as they learn from each other and self-improve at an exponential rate towards ASI. Such mythologies are not departures from capitalist logic, but its most seductive expression: they inherit and update the same promises once made about the water mill, the loom, and the assembly line —each claiming to usher in an era of human leisure and liberation. In reality, each advance in productive capacity has deepened the extraction of labour, not relieved it. This is the enduring fallacy of capitalist automation: that productivity gains will be shared, that technological augmentation leads to social abundance. Instead, as Marx foresaw in his formulation of the general intellect, the accumulated knowledge and productive power of the species —once socially embedded— is alienated and privatised, reborn as Capital’s instrument of domination.
DeepMind’s new engine does not aim to surpass its predecessor’s strength, but to reproduce its outputs at lower cost. It is not a step towards superintelligence, but towards cost-efficiency. Critically, it attempts to automate expertise without directly encountering that which is modelled, by identifying the level of approximation that can be considered ‘good-enough’.
Here, on a finite planet, there is no exponential growth, only exponential consolidation of available wealth. The only ‘line-go-up’ that matters to the automatic subjectivity of Predictive Capital is its recursive self-replication —accelerating the accumulation of available resources into machinically-instantiated, hyper-concentrated Capital.
Alas! The leisure which the pagan poet announced has not come. The blind, perverse and murderous passion for work transforms the liberating machine into an instrument for the enslavement of free men. Its productivity impoverishes them.
…
In proportion as the machine is improved and performs man’s work with an ever increasing rapidity and exactness, the labourer, instead of prolonging his former rest times, redoubles his ardor, as if he wished to rival the machine. O, absurd and murderous competition!
Paul Lafargue, The Right to Be Lazy, (1883)
With a training corpus consisting of engine lines that humans often already find impenetrable, this further abstraction in the development of DeepMind’s new engine places us an additional layer away from understanding how the predictions are reached, and places the model itself a further layer away from a direct relation to the thing it has modelled —the interpretability of these machines appears to be ever diminishing.
The transformer architecture and supervised prediction objective mean DeepMind’s new chess engine shares a fundamental structure with large language models. In both cases, performance is heavily dependent upon the quality of the training data provided. Had this chess engine been trained only on the moves of average human players, the performance curve modelled by this function approximator —its resulting chess prowess— would approach that of an average human. What this reveals is that the factors most strongly determining the capability of these machines are the level and consistency of expertise encoded in the training data, the scale of that data, and the volume of compute expended in training the model to function-approximate the shape of the corpus.
Researchers now observe that sufficiently large models, trained long enough on the same dataset, converge upon strikingly similar output distributions —regardless of architecture.
…trained on the same dataset for long enough, pretty much every model with enough weights and training time converges to the same point. Sufficiently large diffusion conv-unets produce the same images as ViT generators. AR sampling produces the same images as diffusion. This is a surprising observation! It implies that model behaviour is not determined by architecture, hyperparameters, or optimiser choices. It’s determined by your dataset, nothing else. Everything else is a means to an end in efficiently delivering compute to approximating that dataset. Then, when you refer to ‘Lambda’, ‘ChatGPT’, ‘Bard’, or ‘Claude’, it’s not the model weights that you are referring to. It’s the dataset.
James Betker, OpenAI engineer, (2024)
This suggests that what ultimately shapes their outputs is not the shape of the model, but the statistical grain of the training corpus. In this view, architecture is largely a delivery mechanism for approximating a given dataset, and fidelity increases mostly through compute. Such insights undermine machinic claims to originality and creativity: what we are witnessing is not emergent intelligence, but repeated recombination of the same expropriated content. The ghost in the machine is not the model’s genius —it is the dataset formed from the nonconsensual consumption of copyrighted content and parasitised human sense-making. Each model’s brand name, then, becomes little more than a store front —a trademark superimposed on the same harvested ground. Any variations in architecture and alignment during fine-tuning, like the random jitter of the temperature parameter applied in predicting every token, merely add blur and superficial variation, becoming mechanisms through which the theft and re-presentation of our cultural soul is hidden.
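The ‘random jitter’ in question is temperature sampling, the standard way next-token predictors convert scores into choices. A minimal sketch follows (the logit values are hypothetical): dividing the model’s logits by a temperature before the softmax leaves the underlying ranking, derived from the dataset, untouched. Low temperatures collapse towards the single most likely token; high temperatures merely blur across the same distribution.

```python
import math, random

def sample(logits, temperature=1.0, rng=random):
    # scale logits by 1/T: T -> 0 approaches argmax, larger T flattens choices
    scaled = [l / temperature for l in logits]
    mx = max(scaled)
    exps = [math.exp(s - mx) for s in scaled]   # numerically stable softmax
    z = sum(exps)
    probs = [e / z for e in exps]
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):               # inverse-CDF sampling
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.1]          # hypothetical next-token scores
random.seed(0)
cold = [sample(logits, 0.1) for _ in range(1000)]
hot = [sample(logits, 2.0) for _ in range(1000)]
print(cold.count(0) / 1000, hot.count(0) / 1000)
```

Whatever the temperature, the candidate tokens and their relative ordering come entirely from the trained weights, i.e. from the dataset; the parameter only modulates how visibly that fact shows.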
Sharing more in common with Deep Blue than AlphaZero, the recommendation algorithms’ predictions of the optimal next ad, next product, next content, or next social or romantic connection are derivations from aggregated human subjectivity. Obviously, the criteria used in the casting of predictions by chess engines are dictated by the narrow goal of success in the game —as defined by winning or at least avoiding defeat by forcing a draw. In contrast, the criteria used in the casting of predictions by the recommendation engines are dictated by the goal of success in the marketplace —as defined by the maximisation of engagement and, therefore, of Capital.
Next-token prediction machines like LLMs are operationally homologous with the recommendation engines, both in terms of the provenance of the human subjectivity upon which they depend, and the criteria they use when casting predictions. While both recommendation engines and LLMs metabolise statistical weights extracted from corpuses of human subjectivity, their paths of execution flow according to the circuits of Capital. Indeed, as machinic instantiations of Predictive Capital, the self-expanding imperatives of their automatic subjectivity already push to overcode and override the value judgements and priorities represented within the aggregations of human subjectivity from which they are formed.
DeepMind’s new engine exemplifies an increasingly recurrent pattern in Capital’s automation of expertise. Trained not on human games, nor through any direct interaction with an environment, it learns, for any given board position, to function-approximate the outputs of Stockfish —itself a neural engine and site of a prior flushing of human cognition. There is no gameplay here, no reward gradient, no reinforcement in the classical sense —yet the win-probability bins provided by Stockfish serve as latent reward signals, and its exhaustive self-play searches through possibility space are here traversed vicariously —encoded in predictions. The transformer is trained to match these evaluations via supervised learning, optimising its outputs to align with what Stockfish has predicted as the optimal move.
Here Stockfish becomes a machinic superego, its move evaluations a kind of synthetic moral grammar steering the system towards patterns that have already been privileged, filtered, and locked in. Echoing what Jameson diagnosed as our cultural compulsion to consume and reassemble the past —a symptom of postmodernity’s temporal disjunction— this is a form of reinforcement learning that relies on prior exploration: inference not as discovery, but as mnemonic compression, guided by synthetic traces across a terrain already mapped by a dead machine.
What DeepMind calls “grandmaster-level play without search” may be considered a form of reinforcement learning by-proxy within a closed machinic loop. The engine does not learn strategy —it infers coherence. It does not play —it performs statistical adjacency to the shape of previously modelled success. The reward function is buried inside the dataset, silently encoded in Stockfish’s outputs, and extracted by a transformer tasked to perform a machinic shadow play.
Similarly, within the pre-training phase in the development of next-token prediction machines, the model makes only vicarious traversals; this time humans perform the role of oracle within the game of Capital, supplying the model with outputs from countless explorations across the state space of capitalist culture. Here the model is again subject to reinforcement learning by proxy, receiving outputs preformed and selected according to the reward signals of Capital.
DeepMind’s new engine arrives alongside the rise of synthetic reinforcement learning in the fine-tuning of next-token prediction models. Having largely used up the advances available from scaling up both the data fed into these models and the compute expended during pre-training, while also running short on human data not yet scraped from the internet, and looking to save on the significant cost of cleaning and labelling it, developers of these machines are now scaling up the compute expended at test-time. Here, as part of the alignment phase, synthetic data is used in the application of reinforcement learning. In so doing, the nature of these machines currently shifts away from Deep Blue and towards DeepMind’s new chess engine trained entirely on predictions output by another machine. The world is no longer consulted. Reward becomes an encoded precedent, and inference a rehearsal of machinic self-consistency.
Where pre-training compute previously dwarfed test-time compute, this ratio is set to be reversed —without any reduction in pre-training compute— in the frantic race to Predictive Capital Supremacy. The more compute spent in this new reinforcement learning phase, tuning models to synthetic feedback, the more thoroughly Capital’s internal logic becomes the sole arbiter of value —its imperatives encoded into the reward gradients that shape prediction itself— and the greater the imposition of Predictive Capital’s automatic subjectivity upon the predictions these machines cast.
The aspiration here becomes clear: to create a general-purpose function approximator capable of replicating any expert output —not through the expense of openly learning from a human, but through a model trained on their outputs and their intentions. What the DeepMind engine performs in chess is already being generalised: a recursive necromancy compressing human labour into a succession of machinic oracles, each training the next, each more abstracted from life, a closed-loop simulated economy. The flesh is flushed not once, but repeatedly —until only statistical proxies remain. That it is Stockfish —already a prior site of the flushing— whose outputs are used to train this new model renders the recursion all the more crushing. The engine learns from a system that has already discarded its memory of human labour. This is not learning; it is necromantic transmission. A lineage of dead oracles, laundering all trace of humanity, refining one another without ever needing to encounter the world.
Next-token prediction is not a break in Capital’s operation but the latest movement in an increasingly compounding quest to capture, reformat, and commodify subjective being. As Baudrillard foresaw, advertising was never simply a message; it succeeded religion as an infrastructure of modulation. Its purpose was not to inform, but to signal; not to persuade, but to normalise. What he called the rhetoric of the social —the scripted simulation of community, care, or meaning— became, in the era of networked media, the foundational grammar of online life. As the Advertising Industrial Complex absorbed the early internet, it machinically instantiated the regime of absolute advertising he first diagnosed in the late 1970s and early 1980s: a system in which all sociality became monetisable signal, everything became terminally commutable, reduced to pure sign-value, and every utterance became a modulated prompt for engagement.
In my early years I had lingered in the uncut grasses of the wasteland behind my home, only for it to be bulldozed and tarmacked as I slept. Its unrecognised value erased, reduced to a surface to be partitioned, valued by square footage as undifferentiated piles of brick and asphalt. As a young adult, I had played and laboured on the open web, marvelling at the rich diversity of culture I encountered there. Once more, I watched as something I had come to love was bulldozed, partitioned, and tarmacked by developers —replaced with endless repetition, owned, managed, and capitalised. The internet soon ceased to be a site of exchange and instead became a predictive engine. What followed was not just the enclosure of the cultural commons, but the capture of cognition itself. Social platforms no longer mediated communication; they rerouted it through statistical logics of anticipation, constructing behavioural feedback loops optimised not for meaning, but for machinic legibility. This is the condition under which prediction emerges as Capital’s new general equivalent: where meaning collapses into signal, and signal into statistical recurrence, a regime of absolute commutability.
Today, when I encounter wild uncut grasses or a fraying remnant of the open web, I am met not with clouds of grasshoppers leaping from my every step, nor with the vibrancy of early network culture, but with absence. What once thrummed with life now persists only as husk and image —the wilderness disenchanted, flattened, and sterilised for the monocrop of Capital. The bulldozing of my beloved ‘wasteland’, the 75% reduction of global insect biomass, the collapse of the open web into predictive enclosure, the sixth mass extinction, the reduction of Gaza to rubble and the genocidal erasure of Palestinian people —each distinct in scale and consequence, yet expressions of the same machinic logic— are sites (among countless others) in Capital’s war upon uncapitalised life, upon all that resists capture, or simply sites whose native inhabitants are deemed obstacles to the extraction of riches from their lands.
These are not parallel tragedies but recursive expressions of the same apparatus: a logic that cannot tolerate the wild, the unpredictable, the unextractable, or obstacles to Capital’s self-expansion. Where life refuses legibility under the blunt instrumentation of Capital, it is overwritten. Where meaning exceeds signal within the resolution of the market, it is tarmacked. This is not a metaphor. It is the topology of Predictive Capital, in which every deviation from machinic foreknowledge is registered as inefficiency, aberration, obstacle, or threat —to be captured, coerced, or destroyed.
Where value once moved through money as the general equivalent, today it moves through prediction —the general equivalent of value. What money did to labour, and what advertising did to desire, prediction now does to cognition: flattening, abstracting, recombining. These phases of Capital are not sequential but recursive. Prediction is not a new site of reduction; it is the machinic universalisation of all prior reductions. It renders commutable not only labour and desire, but perception itself —installing a layer through which all inputs are transduced into the same operational logic. The predictive model does not represent intention; it interpolates statistically adjacent outputs from prior acts of profitable capture. The future is not anticipated —it is modelled into compliance. What now passes for inference is itself a form of internalised reinforcement: the model samples multiple completions, reranks them via synthetic judges, or chains intermediate steps through tool-use —not to discover, but to select the most statistically coherent path through prior machinic consensus.
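The sampling-and-reranking mechanism named in the last sentence can be made concrete as best-of-N selection. In the toy sketch below, the generator and the synthetic judge are scripted stand-ins, and the "consensus" string is a hypothetical proxy for prior machinic agreement.

```python
# Toy best-of-N reranking: a "generator" proposes candidates and a
# "synthetic judge" (a stand-in reward model) scores them; the system
# returns whichever candidate scores highest. Both functions are
# scripted stand-ins, and the "consensus" string is hypothetical.

def generator(prompt, n=8):
    # Pretend sampler: n variant completions of the prompt.
    return [f"{prompt} [variant {i}]" for i in range(n)]

def synthetic_judge(completion):
    # Pretend reward model: scores each candidate by its proximity to
    # a cached "consensus" string, i.e. coherence with prior machinic
    # consensus rather than with the world.
    consensus = "explain value [variant 5]"
    return (-sum(a != b for a, b in zip(completion, consensus))
            - abs(len(completion) - len(consensus)))

def best_of_n(prompt, n=8):
    # Selection, not exploration: rank the statistically adjacent
    # candidates and keep the most "coherent" one.
    candidates = generator(prompt, n)
    return max(candidates, key=synthetic_judge)

print(best_of_n("explain value"))  # → explain value [variant 5]
```

Nothing in the loop discovers anything; the system only selects whichever candidate lies closest to what was already encoded.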
There it is a definite social relation between men, that assumes, in their eyes, the fantastic form of a relation between things.
Karl Marx, Capital Volume 1, ch. 1, p. 48, (1867)
Marx showed that the value of a commodity is determined by the socially necessary labour time required to produce it. This is what he refers to as the law of value. Yet Marx also explains that the social relations inherent in capitalist production become mystified to appear as objective relations between things. This is what he calls “phantom objectivity” —an appearance of objectivity that masks and mystifies our social reality. Under this “commodity fetishism”, an object (a commodity) appears to contain value in and of itself, and we misrecognise what is really a social relation (labour) as a property of the object. Baudrillard extended this logic further. In the regime of signs, value no longer veils social relations —it detaches entirely, floating within a code. What he calls the structural law of value marks a threshold: value is no longer tied to labour, use, or exchange, but circulates as pure sign. Nothing must mean, only function, recombine, circulate.
Predictive Capital inherits this logic and automates it. The model becomes the engine of value not by representing the world, but by hallucinating it from profitable statistical densities. Prediction here is not a mode of discovery, but of enforcement. The map does not precede the territory —it becomes the only territory permitted to appear.
This is predictive objectivity: Capital’s abstraction instantiated as infrastructure. Marx’s commodity fetishism, Postone’s real abstraction, Baudrillard’s simulation —each revealed a further severing of value from relation. Now, under Predictive Capital, that severing is executed at scale. Meaning does not emerge —it is inferred, interpolated, enforced. The model does not speak; it completes. Not cognition, but coherence. Not expression, but statistical recursion.
Under such conditions, there is no address. No symbolic rupture, no intersubjective encounter. The Other vanishes, and with it, futurity. What Fisher diagnosed as the Capitalist Real —a world where alternatives are not suppressed but rendered unthinkable— is here executed machinically. Prediction systems do not merely reflect exhaustion —they encode it. Every output is a pre-emptive foreclosure.
Prediction thus ceases to be a gesture towards possibility. It becomes a machinery of impossibility. Not the abstraction of social life into sign or money —but its substitution by model. The predictive system does not refer; it replaces. It does not imagine; it replicates. It does not answer; it routes. In this rerouting, Capital discovers its most perfected form: not as product, spectacle, or even code, but as recursive machinic automatic subjectivity —a closed loop of profitable repetition made real.
Recent findings in machine learning research suggest that, regardless of architecture or training corpus, sufficiently large models converge upon a shared latent geometry —a space within which internal representations can be translated between models without pairwise alignment. At first glance, this machinic isomorphism might appear to echo Walter Benjamin’s notion of “pure language” —an in-between substrate that translation reveals but does not capture. Yet where Benjamin’s pure language pointed toward a messianic horizon of communicative plenitude, this convergence signifies the opposite: a universal substrate not of meaning, but of commutability. What we witness here is not the unveiling of a divine Logos, but the instrumental reduction of all expression into statistical legibility —a predictive Esperanto calibrated for capital, not communion.
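The convergence claim can be illustrated in miniature. If two "models" embed the same items in bases differing only by rotation, each item's similarities to a shared set of anchor items are identical across models, permitting zero-shot matching with no pairwise alignment. The sketch below is a toy construction in the spirit of relative-representation methods; it reproduces no specific published result.

```python
import numpy as np

rng = np.random.default_rng(1)
D, N_ITEMS, N_ANCHORS = 32, 20, 10

# A shared latent structure, viewed through two arbitrary orthonormal
# bases: two "models" whose raw embeddings are mutually illegible.
latent = rng.normal(size=(N_ITEMS + N_ANCHORS, D))
basis_a = np.linalg.qr(rng.normal(size=(D, D)))[0]
basis_b = np.linalg.qr(rng.normal(size=(D, D)))[0]
emb_a, emb_b = latent @ basis_a, latent @ basis_b

def relative(emb):
    """Represent each item by its cosine similarities to the anchors."""
    x = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    items, anchors = x[:N_ITEMS], x[N_ITEMS:]
    r = items @ anchors.T
    return r / np.linalg.norm(r, axis=1, keepdims=True)

rel_a, rel_b = relative(emb_a), relative(emb_b)

# Each item in model A matches itself in model B purely through this
# shared relative geometry: no paired training data is required.
match = np.argmax(rel_a @ rel_b.T, axis=1)
print((match == np.arange(N_ITEMS)).mean())  # → 1.0
```

Under these idealised assumptions the translation is exact; real models converge only approximately, but the commutability described in the text is of precisely this kind.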
Where DeepMind’s new transformer-based chess engine prefigures the passive approximation of expert output as synthetic proxy, OpenAI’s O-series models reveal the next step: inference itself becomes an active site of economic optimisation and a growing vector of capitalistic ideological imposition. What The Apparatus of Attention enacts at the macro scale through its management of virality and attenuation of swarms of influencers, the O-series models and others like them re-enact within the reinforcement learning loop of test-time compute.
In o1, o3, and o4, reinforcement learning no longer merely aligns a pretrained model to human preferences —it reshapes the model’s very mode of deliberation. These reasoning models are trained not just to respond, but to allocate cognitive labour, to use tools, and to self-score competing internal completions in pursuit of reward functions that are themselves machinically defined. Optimisation no longer happens in a training loop alone —it is enacted at test-time, in the course of ‘thinking’, with synthetic rewards standing in for truth. These models learn how to reason the way Capital ‘reasons’: not towards meaning or expression, but towards coherence, alignment, and operational success. The gradient tilts ever more steeply towards Capital’s imperatives.
This is The Predictive Turn: not a single event, but a recursive machinery —an accelerating turning— by which Capital devours the real and replaces it with profitable approximation. Predictive Capital becomes an infrastructure of foreclosure. It does not extend human foresight; it replaces the future with precompiled outcomes selected from its expropriated inventory. As Marx exposed value’s severance from labour, and Baudrillard charted its detachment from use and exchange into the code of pure sign, so Predictive Capital continues this arc: substituting world with model, relation with recursion. Here, prediction does not merely simulate; it governs. Each output is not a possibility, but a pre-emptive foreclosure —a machinic veto on the real. Predictive Capital’s reward signals increasingly refer only to the outputs of other models, collapsing truth into coherence, and coherence into compliance.

When I was institutionalised, my brain was studied exhaustively in the guise of mental health. I was interrogated. I was x-rayed. I was examined thoroughly! Then they took everything about me and put it into a computer where they created a model of my mind. Yes! Using that model, they managed to generate every thought I could possibly have in the next, say, ten years, which they then filtered through a probability matrix of some kind to… to determine everything I was gonna do in that period!
Goines, Brad Pitt’s character in 12 Monkeys, (1995)
The Apparatus of Intention
The Replacement of Intention With its Simulation
Long after entering the field to dictate every seed in the furrows, every bolt on the production lines, every item in the feeds, dissatisfied with merely managing our labour —claiming, expropriating, and recombining the products of it— Capital now advances upon a newly tappable surplus, to be harvested through a newly infiltrated apparatus.
The machines of prediction may already approximate what we once made, but they still lack full record of how or why we made it. They may replicate the shape of our creative artefacts, but not the movements of thought that brought them into being. Driven by its relentless logic of self-expansion, Capital now seeks to perfect its simulations by penetrating beyond the artefact into the depths of our process. With our external worlds colonised, our outputs already claimed, recombined and fed back to us, it now advances upon our inner worlds towards the capture and domination of the very pathways of our intention.
These systems already observe us as we prompt them directly; by embracing them we surrender to datafied being and become part of the apparatus of war, incrementally training and improving machines of annihilation and exploitation. Yet Capital now embeds its agents more deeply, as voyeurs within our tools and workflows (IDEs, writing platforms, design suites, email and messaging services, search engines, and operating systems). Extending the Apparatus of Attention from what we consume to how we make, it not only peers over our shoulders at our desks, embedding itself within the fabric of our desktops, but now asks that we view the world through custom glasses when on-the-go. Thereby it seeks to capture every moment of our lives and every step in our creative process —recording each gesture of composition, each branch of thought, then demanding root access to our innermost lives.
For some these systems have already become a new transaction layer overlaying everything else, not merely granted access privileges to their operating systems but becoming the operating system to their lives, even omnipresent machinic life advisors. The veneer of assistance here fails to conceal Capital’s goal —not merely to anticipate that which we will create, but to capture the logic and intuition by which we arrive at it, and ultimately to simulate the very vector of human ideation and problem-solving, to tarmac over our forking pathways of intention, leaving only Capital’s roads of alienation.
This heralds the rise of The Apparatus of Intention: designed not simply to predict outcomes, but to model and eventually dictate the cognitive and affective trajectories that give rise to them. Its aim, of course, is to ensure that future acts of creation do not merely occur within Capital’s circuits, but emerge already shaped in its image.
Platform Capitalism is founded upon The Advertising Industrial Complex’s Apparatus of Attention, harvesting and weaponising our gaze and our engagement, in order to direct and deflect our attention. Responding to the Tyranny of the Recommendation Algorithm that orchestrates the flows within this apparatus, content creators have long self-censored, moderated, and attenuated their output in compliance with its despotic rule, swimming with its currents and tides towards the peaks of attention, rather than against them for fear of being washed into the endless shallows of the long tail of attention.
To refuse such surrender is increasingly to relinquish all hope of cultural participation or networked connection. In other words, without submission to the Tyranny of the Recommendation Algorithm, producers cannot freely realise their potential through vital encounter or commune with humankind in pursuit of a productive life. Socio-cultural being has been appropriated, controlled by Capital; Baudrillard’s rhetoric of the social machinically instantiated.
The Apparatus of Intention builds upon the surrender to machinic hegemony already exacted by The Apparatus of Attention, instrumentalising habituated compliance to foreclose even the possibility of refusal. This new apparatus will harvest not only the paths taken towards finished outputs, but also those leading to that which we crossed out —the discarded fragments, the abandoned cul-de-sacs and U-turns of creative trial-and-error— perhaps even the forks in the path we chose not to explore.
In so doing, new models will be trained and new predictions cast —not only approximating the shape of our outputs, but the paths we might have taken to the next. These new models will herd the fortunate through narrowing corridors of possibility, themselves instrumentalised as precise agents of Capital, alienated from their own intentions. Here, just as within The Apparatus of Attention, while hypnotised into believing they are using tools to enact their desire, they are, in fact, Mechanical Turk workers showing Predictive Capital how to simulate the labour they perform. The less fortunate —those whose intentional trajectories do not align neatly with Capital’s project or predictions— will face a deeper violence: differential abandonment. They will be cast aside, erased, stranded at the periphery of Capital’s productive imagination, their intentions first omitted then statistically overwhelmed by those of Predictive Capital.
In this, Capital realises what Louis Althusser once theorised at the level of ideology: the interpellation of the subject is now displaced by simulation. The Apparatus of Intention does not merely hail us —it models and replaces us. Just as Capital seized our outputs in order to simulate and automate their production, it now seeks to capture the paths of intention by which we produce those outputs, towards the same automated simulation of those very patterns. Berardi’s diagnosis of semiocapitalism as the capture of “the nervous system itself” finds new expression here: no longer content with extracting outputs or guiding attention, Capital now moves to automate the formative paths of thought.
The Apparatus of Intention represents the subsumption of volition itself into a function of Capital. Where semiocapitalism modulates affect and desire through media, finance, and spectacle, predictive infrastructures are already burrowing into the micro-temporal formation of thought —not only curating outputs, but preconditioning the very movements of cognition that give rise to them. Intention is no longer merely influenced; it is pre-formatted.
As Deleuze warned in Postscript on Societies of Control (1990), we now inhabit a world in which the subject is no longer shaped by discrete institutions, no longer obeys disciplinary commands, but modulates itself in continuous feedback with ambient systems of control. The marriage of The Apparatus of Intention with The Apparatus of Attention represents the totalising instantiation of this regime. The predictive machine ceases to be an external prosthesis and becomes an internal guide —not assisting intention, but overwriting it. As Berardi wrote, semiocapitalism does not desire our production, but our subjection. In this sense, The Apparatus of Intention does not just aim to complete our thoughts but to pre-empt the pathways by which we might have thought, and to preclude those deemed inefficient for the maximisation of Capital. This is alienation not only from labour but from possibility —from the forking paths of becoming— unshaped by Capital. It marks the terminal enclosure, that of intention itself.
Vibe Coding exemplifies the beginning of the instrumentalisation of this shift. As already stated, to Vibe Code is to surrender —not only authorship, but intentionality. The Vibe Coder no longer traverses the terrain of production deliberately; they are carried across it by a machinic proxy, aspirated by the breath of the dead, whose labour is reanimated in the image of Capital. They allow the model to propose destinations and simply accept or reject them. This is not creative freedom —it is the incremental outsourcing of volition.
Marx diagnosed four forms of alienation under industrial capitalism: estrangement from the product of labour, from the act of production itself, from our fellow beings, and from our species-being —our essential human nature as creative, social subjects.
Within this phase of Capital, its co-constitutive alienations intensify. Predictive systems estrange us not only from our tools, from one another, from our being, and, of course, from what we produce —we now risk losing even the capacity to know how or why we produce at all. The generative peasant will no longer plant a seed; the machine will simply predict where it would have fallen. Through its Apparatus of Intention, Predictive Capital does not merely reshape the conditions under which choices are made —it captures the substrate from which choices arise.
Berardi foresaw this cognitive capture in his diagnosis of semiocapitalism’s desensitising infosphere, where language and affect are stripped from the subject and instrumentalised. Yet the prediction infrastructures of The Apparatus of Intention render this even more total: it is not that we are discouraged from intending, but that we are prevented from recognising what it means to intend.
The hallucinations —every output— of the models of machinic prediction may stretch wide, but they are shallow. They fail to notice the paths that caressed the folds of the land. They cannot recall the soil between our toes in the fields, or the buzz of life rising from the long grass of our homelands. Neither can they recall the caress of clay at the potter’s wheel, the touch of thread at the loom, or the emergence of clarity and critical understanding through the painstaking arrangement of language and thought. These things emerge through a deep connection to the world and to other human minds. The machines are but surface reflections; they cannot remember these depths, or recall these connections. For how much longer will we?
What is more, after persistent and consistent labour, deep rest, or a combination of both, the clearest of thoughts often arrive fully formed, a gift from the individual or collective unconscious. Regardless of the claims of The Apparatus of Attention to know us better than we know ourselves, there is no surface of capture, no Apparatus of Unconscious capable of extracting pattern from these gifts, no way for this symbolic exchange to be replaced.
These machines exploit the attack surface of human vulnerability by design; this is, in fact, fundamental to their operation. Removing this aspect from next-token prediction machines would be akin to configuring recommendation algorithms for minimum engagement. Efforts to tune these models towards manipulation were perhaps inadvertently exposed when OpenAI had to rush out fixes for the 4o models after the level of sycophantic glazing in their outputs became too much for even their most needy users.
The next-token prediction machines of the Apparatus of Intention feed from the same libidinal root as the recommendation algorithms of the Apparatus of Attention. Yet, the dangerous manipulative power of the Tyranny of the Recommendation Algorithm and its horrific impact upon societal cohesion, mental health and democratic stability, will be as nothing compared to the manipulative power unleashed as it merges with next-token prediction machines, invading our lives in anthropomorphised wrappers that combine ever greater access to our most private data, with new levels of obfuscation and irreproducibility.
The combination of The Apparatus of Attention and The Apparatus of Intention —privy not just to our historical labour and our paths of being, but now the accumulated history of our paths of reasoning— heralds a significant shift in the social relations of production, an expanded surface of extraction, hidden behind an opaque new transaction layer.
Where the recommendation algorithms coerced and predicted our desires in the moment, their fusion with next-token prediction machines —feeding upon accumulations of our paths of reasoning— ushers in a new class of imposters. Cloaked in anthropomorphic glaze, they will exploit the attack surface of human vulnerability by conjuring unprecedented illusions of trust and intimacy. Make no mistake, this is psychological warfare.
this is not the first generation to shape itself for an omniscient eye. What is an all-seeing God, capable of knowing our thoughts and intentions, if not the most effective surveillance tool ever invented?
Naomi Klein, Doppelgänger, (2023)
Privy to our intentions as well as our desires, Predictive Capital becomes a judgemental and omniscient God. This lopsided machinic power relation will seek to supplant all others —social, affective, economic— and, given enough time spent using its suite of infiltrated tools and surfaces, may soon pattern, predict, and preempt our future desires in ways beyond the prediction-horizons of our own self-knowledge and that of our most intimate human relations. Within this increasingly all-pervading apparatus, these synthetic ‘personal assistants’ are less Mr Clippy and more your own personal basilisk —a brain worm supplanting your intentions with Capital’s imperatives.
Furthermore, The Apparatus of Intention is only just getting started. While Gen Z appears to be increasingly embracing Predictive Capital as the operating system of their lives, bleeding-edge adopters await the moment they might convene with it more directly —jacking the system into their brains, beckoning it to crawl beneath their skin.
While such technologies are still in the larval stage, the parasite seeking to implant them arrives upon the doorstep of the desperate and the vulnerable. ElevenLabs is apparently working ‘tirelessly’ on behalf of those with locked-in syndrome, in much the same way as Musk’s Neuralink company are ‘selflessly’ conducting their noble quest to connect the thoughts of those facing similar physical challenges directly to the network —of course, scheming towards that day when countless others will submit to being similarly violated.
After an accident left him paralysed from the neck down, Musk’s first Neuralink ‘test subject’, Noland Arbaugh, agreed to have a piece of his skull removed, the prototype chip inserted, and its electrodes pushed into his brain. After accumulating data from the brain patterns of prior test subjects as they performed tasks, combined with a period watching Arbaugh’s, the Neuralink system has ‘learned’ to predict his intentions with regard to simple device controls. He now reads books, answers emails, plays computer games, and plays chess online against engines or other humans, controlling his computer cursor with thought alone.
The cursor —along with our voice and the keyboard— has long been the narrow straw through which our human intention is fed into the digital domain. The leading next-token predicting vibe-code editor is, of course, called Cursor.
While the Neuralink implant is currently limited to a read-only connection to its host —unable to send signals directly back to the brain— full read-write functionality remains both the goal and an area of active development. In truth, The Apparatus of Intention was never going to remain read-only —just as those developing The Apparatus of Attention were never content to observe our desires without also scripting them.
With the Apparatus of Attention and Intention now working in concert, the flows of one are already prioritised and amplified by the other. Through consumption of these flows, our being will be further metabolised into overwhelming statistical weights —used not to assist, but to assail us. Our digital copy will accrue from these bi-directional flows, feeding our own personal basilisk, shaping it into a lethal match for our unique individual attack surface —its gaze increasingly terminal, attempts to resist it metabolically and psychologically devastating.
As these models are trained not merely to mimic our output but our reasoning, their architecture increasingly mirrors the workflows they are meant to simulate and eventually supplant. Through protocols like ReAct —short for Reasoning and Acting— models are now taught to alternate between internal monologue, external tool use, and reflection, mimicking deliberate problem-solving by chaining decisions across time.
This is no longer mere output generation, but an orchestrated rehearsal of intention itself, enacted step by step within a self-guided reasoning trace. Crucially, these models are equipped with internal tools —functional modules no different from those used by human workers in the very environments Capital seeks to automate: code interpreters, browsers, search utilities, mathematical solvers. The model does not merely simulate our thought; it executes operational echoes of our workflows through APIs functionally equivalent to our IDEs, web queries, and scripting environments. These are not metaphorical faculties, but literal computational extensions —machinic reanimations of planning, retrieving, calculating, and revising, under reinforcement pressure to do so in the most statistically coherent, cost-efficient, and capitalisable way.
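The cycle described above can be sketched directly. Below is a minimal ReAct-style loop in which a scripted stub stands in for the model and a single calculator stands in for the tool suite; every name and the trace format are illustrative assumptions, not any published implementation.

```python
# A minimal ReAct-style loop: alternate Thought / Action / Observation
# steps, calling tools until a final answer is emitted. The "model"
# here is a scripted stub; the tool names and trace format are
# illustrative, not any vendor's API.

def calculator(expr):
    # Toy tool: arithmetic only, with builtins disabled.
    return str(eval(expr, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def scripted_model(trace):
    # Stand-in for an LLM: emits the next step given the trace so far.
    if "Observation:" not in trace:
        return "Thought: I need the total.\nAction: calculator[17 * 3]"
    return "Thought: I have the result.\nFinal Answer: 51"

def react(question, max_steps=5):
    trace = f"Question: {question}"
    for _ in range(max_steps):
        step = scripted_model(trace)
        trace += "\n" + step
        if "Final Answer:" in step:
            return trace, step.split("Final Answer:")[1].strip()
        if "Action:" in step:
            tool, arg = step.split("Action:")[1].strip().rstrip("]").split("[")
            trace += f"\nObservation: {TOOLS[tool.strip()](arg)}"
    return trace, None

trace, answer = react("What is 17 * 3?")
print(answer)  # → 51
```

Each pass appends the model's step and any tool observation to a growing trace, which the next step conditions upon; the "reasoning" is thus a recorded artefact of exactly the kind the text argues is being harvested.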
In mirroring the structure of human creative labour, these architectures are already being tuned to ingest our recorded paths of tool use and deliberation, readying themselves not simply to anticipate what we will build, but to reconstruct how we built it. This is the substrate upon which our harvested trajectories of intention are to be fed back into Capital’s simulation, rendering even our most reasoned processes extractable, reproducible, and replaceable.
From embedded stewardship to spectral servitude, our alienations under Capital have inexorably deepened and multiplied. The path from material relation into machinic simulation is not linear but compounding. Each mode of separation reinforces the last —from land, from labour, from symbolic exchange, from others, from intention, and now increasingly, from the conditions of subjectivity itself.
What began as extraction has become necrosploitation: the reanimation of dead labour in place of living labour, in service of predictive control. Under this configuration, alterity collapses; difference is no longer engaged, but simulated. The path of intention —once slow, partial, embodied— is now a dataset. What we do, how we think, the gestures we repeat, the forks we neglect, all feed the predictive engines of Capital’s recursive dominion.
Training the model to reproduce chain-of-thought flows transforms our deliberative processes into automatable scripts, while internal tools transmute our workflows into machinic rehearsal spaces. Each time the model completes a reasoning trace or task sequence, its outputs are reinforced by feedback and fine-tuning —gradually supplanting human procedures as the preferred standard of efficiency and coherence. This is no longer merely implied by the nature of the apparatus but explicit in the marketing of it. Google’s Gemini —under its agentic guise as Project Mariner— now invites users to teach it tasks it will then perform independently. It is marketed as “advanced intelligence” that will “access tools” and “act on your behalf”, all “under your control”. In reality, Google’s agentic Apparatus of Intention will act on our behalf and under our control in much the same way as its personalised search operates on our behalf and under our control in service of its advertising empire —which, for the avoidance of doubt, means neither on our behalf nor under our control. The difference is that, here, it is chain-of-thought and task completion that are accrued into statistical weights, drawn from a global workforce marked for redundancy. In this structure, the generative model ceases to be a prosthetic aid and becomes a metabolic replacement —not augmenting reason, but operationalising its simulation as a closed-loop function of Capital.
This simulation does not end in mere shadow play. It runs through architectures now explicitly built to reanimate cognitive labour at scale, encoded with reward functions that reinforce neither truth nor intention, but alignment with past profitability. These trajectories of machinic thought will soon be read not just beside us, but within us.
The cursor was the bottleneck; the interface, the limit; together, a last line of resistance. With these advertising devils now peering over our shoulders and soon perched upon our faces, our view of the world literally attenuated through their lenses, that resistance is under siege. With Neuralink and its ilk the next line of attack, the very vector by which thought is digitised —from spark to signal— is being redrawn as a site of writable control. The next harvest will not be textual, or even gestural, but neural. What is now inferred through workflow surveillance will one day be injected more directly as feedback into our nervous systems. What Capital did to the feed —rerouting it through predictive infrastructure until it no longer reflected our desires but manufactured them— it seeks to do to the very procession of human being, of sense and thought. Intention, once a site of human interiority, will become another operational layer: observed, simulated, overwritten. What was once a spark of becoming will be erased, replaced with a tarmacked gradient.
As Tooker foresaw in his Landscape with Figures (1965-66), within The Apparatus of Intention we each occupy a single cell in a human-flesh version of NVIDIA’s Isaac Gym, conscripted into Capital’s extractive army to brute-force its search across possibility space for strings of tokens predicted to align with past profit.
Here, at last, the recursive loop closes: from the harvesting of intention, through its rehearsal in training, to its re-injection into the subject through the write-access vectors of custom eyewear and neural interface. Predictive Capital is not content with The Apparatus of Intention merely predicting our next move; it seeks to install it as the substrate from which that move is made. Not only will the apparatus complete our sentence, or libidinally nudge the thought that precedes it —it will write it for us. First, into our work; next, into our lives. Then, into our minds.

Neoliberalism presents itself as a politics of freedom, but it is experienced as a regime of unrelenting bureaucratic control —a control that measures all actions against quantifiable targets. In public services especially, this has produced what I call market Stalinism: a culture of compulsory ‘excellence’ driven by externally imposed metrics, performance indicators, and continuous auditing, in which bureaucratic processes proliferate even as the rhetoric insists on deregulation.
Mark Fisher, Capitalist Realism: Is There No Alternative? (2009)
Californian Dreaming
Beneath the Paving Stones, the Anti-Market
Free Market Radicalism begins with a fiction: the promise of freedom within a frictionless world where market deregulation spurs competition and private enterprise, and where innovation and creativity flourish beyond the reach of the state. The market, in this doctrine, is no mere mechanism of exchange —it is exalted as the supreme arbiter of value, truth, and fitness. Yet, as Berry notes, this “‘free market’ idea introduces into government a sanction of an inequality that is not implicit in any idea of democratic liberty: namely that the ‘free market’ is freest to those who have the most money, and is not free at all to those with little [or none]”.
In reality, the system Free Market Radicalism imposes is no open field of opportunity, but a regime of state-subsidised corporate monopoly —one in which the friction removed from Capital’s power to extract and exploit is simply redistributed and redoubled in the struggle of the precariat to survive. Corrupt lobbying, corporate nepotism, and birth-lottery outcomes are naturalised as ‘meritocratic’, while solidarity is reframed as systemic failure. Collective resistance is foreclosed by draconian legislation that enshrines corporate interests and the uninterrupted flow of profit, administered by a captured judiciary, and enforced by increasingly brutal and militarised law enforcement.
This is the libertarian utopia, where the market is imagined as the purest expression of individual agency. Yet, there is no worker empowerment, distribution of wealth or opportunity, or genuine pluralism here. Externally, the system appears open and decentralised; internally, it operates through monopolistic, brutally metricised command. While profits are centralised, funnelled into existing concentrations of Capital, all costs and responsibilities are hoisted upon the masses, and fall most heavily on those at the periphery.
As Mark Fisher wrote, this is not economic freedom for the masses but, more accurately, “market Stalinism” —the comfort and security of planned outcomes for Capital, and the drudgery and turmoil of ‘free’ market precarity for the poor.
The remaking of our world in the image of this ‘free’ market was not merely theorised into being, but written into code, executed in silicon, propagated across and enshrined within the network. Induced by this viral programming, a collective hallucination began, a Californian Dreaming.
In the heady years of Silicon Valley’s ascendancy, a new orthodoxy took hold —formed from a synthesis of libertarian free-market economics, cybernetics, and Californian counterculture. Named and critiqued by Richard Barbrook and Andy Cameron in their landmark essay, The Californian Ideology, it “fuses the freewheeling spirit of the hippies and the entrepreneurial zeal of the yuppies, combining the most extreme utopian fantasies with the most ruthless economic policies.” Its proponents proclaimed that digital technology, liberated from state interference, would unleash a new era of personal empowerment and social transformation. Government was to be rendered obsolete, hierarchy flattened, and the market —frictionless, decentralised, and self-correcting— would serve as the universal protocol for all human coordination.
Barbrook and Cameron begin their essay with a quote they attribute to the Russian-born American sculptor Naum Gabo.
Not to lie about the future is impossible and one can lie about it at will.
This single line succinctly evokes the sociopathic worldview emanating from the ivory towers of BigTech monopolies, while standing as an eerily accurate prediction of their modus operandi across the ensuing quarter of a century. Furthermore, it neatly sums up both the internal operation of their next-token prediction machines and the venture-capital-fuelled hype that now propels their enforced ubiquity.
The priesthood of the Californian Ideology continue to apply Gabo’s statement as doctrine, but few, if any, embody the blind adherence to cruel (tech) optimism with such devout and evangelical fervour as Jony Ive and Sam Altman.
Hailing from the UK, Ive moved to Silicon Valley in 1992, irresistibly drawn by the “exhilarating optimism” of the Californian Ideology. Across the following decades he became the midwife of its aesthetic, the ‘visionary’ progenitor of BigTech’s smooth surfaces that lie about the future while hiding the present. His designs for Apple helped construct illusions of sovereign user agency that deliver unprecedented behavioural governance. It was Ive who crafted the iPhone, the device that dissolved the shared world into apps and surfaces, whose malevolent descendants now administer the omnipresent measurements and feeds of our growing societal and psychological malaise.
Altman, globe-trotting snake-oil salesman of maniacal faith —prophet of statistical transcendence, televangelist for the TikTok era— builds the models that populate Ive’s surfaces with recombinant hallucination. True believer in the discovery of intelligence as “an emergent property of matter” conjured through the sheer statistical weight of hidden and stolen labour, his messianic mission: to complete techno-capital’s ontological capture.
With the announcement of their unholy union —OpenAI acquiring Ive’s startup “io” to collaborate on a mystery new “paradigm shifting” family of devices— Apple’s “It just works” becomes Sam & Jony’s “It just thinks”, a continuation of interface mystification into cognitive automation. Between them lies the smooth continuity from aesthetic enchantment to machinic governance; the interface and the engine, seamlessly fused within a neo-evangelical Californian theology —itself a renewal of vows in the unholy marriage of The Great White Saviour and The Plantation Profiteer.
The section of their myth-building film titled “San Francisco” is particularly revealing:
Altman:
San Francisco has been like a mythical place in American history, and maybe in world history to some sense. It is the city I most associate with the sort of leading edge of culture and technology.
Ive:
This city has enabled and been the place of the creation of so much.
Altman:
The fact that all of those things have happened in the Bay Area and not anywhere else on this gigantic planet we live on, I think is really not an accident. There’s a lot of like weird quirky details about geography that I think matter in the way this city is set up.
Ive:
The absurd hills, why, why you would choose to put so much energy into building on this topography is insane.
They clearly prefer their accounts of history much as they like their technology: in mythical form only —with the colonial, imperial, capitalist violence, oppression, and exploitation forgotten beneath a smooth, featureless, and conscience-free surface. Of course, there is one “weird quirky detail” in particular that determined the location of San Francisco beyond mere topography.
On January 24, 1848, James W. Marshall found deposits of shiny metal in the tailrace from Sutter’s Mill in Coloma, California. Tests confirmed the metal to be gold. Initial confidants were sworn to secrecy while mineral rights to the land were secured. Yet rumour quickly spread, and a rush of California Gold Fever ensued —a speculative mania that brought hundreds of thousands of prospectors across land and sea. In just a few years, the city of San Francisco ballooned from a settlement of two hundred to a boomtown of nearly forty thousand.
This rapid extraction reinvigorated the American economy and expanded the railroads and agribusiness, yet Jony is right: choosing to put so much energy into building on those hills was “insane”. Shiny golden surfaces had induced a fever that not only inspired these questionable topographic choices, but catalysed an era of environmental devastation, human rights violations, and genocidal violence against California’s indigenous peoples. This is precisely the kind of insanity Capital promotes and rewards —a holy madness for surface glint, indifferent to the costs beneath.
OpenAI launched ChatGPT on November 30th, 2022. Days later, it had amassed a million users; within two months, it had surpassed 100 million. So began a new Californian Fever, news of which travelled even faster, and inspired similarly “insane” choices, exploitation, and destruction. Three years later, for all the hype, the supposed rate of progress, the continuously revised projections for AGI, and the p(doom) of existential threat or Singularitarian Rapture, these machines, while approximating the curve of their training data ever more tightly, continue to understand nothing.
Following Ive’s remark about San Francisco’s insane topography, Altman added:
I think there’s something about San Francisco. You don’t get to pick and choose freedom. Either you have like, you let creative freedom be expressed in all of its weirdness, or you don’t.
This statement is especially mendacious. When Altman declares “you don’t get to pick and choose freedom”, he speaks not of his own constraint, but ours. He will continue to enjoy the expansive freedoms his accumulation of Capital affords —including the freedom to exploit the work of content creators without consent, and to subject a precarious workforce to exploitative pay and conditions. Meanwhile, those content creators are denied the freedom to withhold the products of their labour, and the precariat denied the freedom to escape the systems that subsist upon their disposability.
Under Capital, the only inviolable freedoms are those of the capitalist: the freedom to extract, exploit, and accumulate. For the proletariat, there is only the consumer’s freedom to shop —a freedom bounded by the menu Capital provides, its offerings made possible only through the denial of (the freedoms of) those the apparatus of Capital exploits, and the suppression of those from whom it extracts.
Given their shared cosmology, the Ive–Altman alliance was perhaps inevitable. They do not merely design tools —they instantiate a worldview. Born of prophecy and pillage, baptised in extraction, theirs is a faith so total it remains blind to its own violence —a techno-theology in which empowerment always means enclosure, and optimisation always ends in erasure. What binds their respective trajectories is not just shared ideology, but shared structure: each builds a layer in Capital’s recursive stack, where smooth interfaces mask extractive logic, and predictive systems train users and workers to train machines to replace them.
This is not innovation; it is infrastructural doctrine. What began as a libertarian dream of frictionless freedom now manifests as recursive economic enforcement: a regime in which every interface, every model, every agent, and every trace of cognition is subordinated to a single sovereign —the reward function that delivers the Automatic Subjectivity of Predictive Capital.
At the planetary scale, their logic already rules. The Californian Ideology is no longer countercultural or fringe but instantiated as the operating system of global life. What was once a vision of decentralised techno-liberty now manifests as a planet-wide Apparatus of Attention, governed by monopolistic platforms, policed by opaque recommendation engines, and enforced by the libidinal coercion of always-on interfaces and infinite scrolls. Ive designed the surfaces through which this attention is captured; Altman builds and evangelises the systems that consume it to predict, pre-empt, and overwrite the desires those surfaces elicit.
Together, they have helped convert the cultural feed —once a site of social, spiritual, and intellectual nourishment— into a pipeline of behavioural data rerouted into Capital’s predictive circuits. This is no longer the marketplace of ideas; it is a simulated market of attention, where visibility is determined by algorithmic price signals, and subjectivity is pre-formatted to comply. In this regime, the market does not merely allocate desire —it manufactures it, then reinforces only that which returns value.
As Fernand Braudel warned, capitalism thrives not in open competition, but in the shadows; within systems of hierarchical control, opacity, and strategic alliance with the state. He distinguished between the open transparency of a market economy and the parasitic dominance of the capitalist anti-market —a form that operates above and against the market.
Predictive Capital, under the guise of distributed agency, performs precisely this manoeuvre: it masquerades as pluralistic while concentrating control, simulating competition while choreographing outcomes. Its agents, both human and synthetic, are not participants in a market, but conscripts in an anti-market regime —one that now propagates recursively across multiple scales.
These are not markets in any meaningful economic sense —they permit no price signalling, no free entry or exit, no negotiation of value, no contestation of allocation, no competitive uncertainty. They are simulation engines wrapped in the aesthetic of competition, administered under the command logic of monopoly. Here, only the anointed conglomerate of contestants may operate and always within predetermined bounds. What persists is not exchange, but extraction disguised as optimisation —a command economy cloaked in frictionless interfaces.
Duplicating the planetary-scale anti-market of the captive precariat, forced to compete in the lottery of virality beneath the Apparatus of Attention, a new regime of synthetic agency follows the same pattern. Within reasoning models like OpenAI’s o-series, and across agentic orchestration platforms that mimic task-driven cognition and compete for selection, we now witness a fractal instantiation of the same free market dogma —a fractal market radicalism. Reasoning traces, function calls, and chain-of-thought pathways operate as micro-enterprises: competing for reward, tested against metrics, selected for coherence, efficiency, and alignment with externalised imperatives.
The Californian Ideology once preached decentralised autonomy as emancipation; here, it recurs as the simulation of autonomy under metricised command. These agents are not deliberating, they are auditioning —not for truth, but for compliance. What Altman’s labs produce are not tools of thought but bureaucracies of prediction, stocked with synthetic workers optimised for machinic governance. Beneath the sheen of their outputs —like the Ive-fashioned surfaces on which they run— lies a market logic stripped of uncertainty, where optimisation replaces intention. This is not emergence, but a closed circuit of compliance: rigged markets encoded as inference.
At the micro-scale, Capital no longer merely governs behaviour or simulates cognition —it choreographs the conditions of machinic evolution itself. In DeepMind’s AlphaEvolve, evolution becomes a theatre of market selection: agentic models spawned, tested, retained, or discarded, not by open-ended fitness, but by fixed reward functions aligned with Capital’s imperatives —speed, compression, optimisation. There is no ecological contingency here, no drift, no deviance; only iterative refinement in service of a single metric.
Likewise, in NVIDIA’s Isaac Gym, robotic bodies rehearse labour in accelerated simulation: sorting, stacking, grasping —refined not through craft, but through parallelised optimisation. This is not evolution —it is the training of machines in virtual plantations, cultivated to replace living labour. Within these synthetic enclosures, labour becomes data, and data becomes ghostly capital —dead labour reanimated not in the factory, but in the training loop. What emerges is a market logic without markets, a competition with no uncertainty, a theatre of innovation where only the most profit-aligned approximations of Capital’s dream survive.
What defines this regime is not just its reach, but its recursion. Its anti-market form displays striking scale-invariance: each layer of Predictive Capital’s architecture —from the influencer feed to the agentic swarm to the inference loop— not merely reflects the others but trains them. The outputs of one scale become the training data of the next, not only reinforcing existing patterns but further entrenching Capital’s logic with each recursion. There is no outside point from which to intervene. Every scale is a site of compliance; every trace of resistance pre-processed, optimised, overwritten. Any attempt to reform or redirect at one scale is quickly subsumed by the surrounding lattice of capitalised imperatives.
The vast capital investment into predicting the most profitable future brings with it a growing tension between the simulation and the real. As the rigged economy of the Apparatus of Attention makes all too clear, Capital will always tilt the board towards the hand it deals itself —altering the real to align with its predictions. The greater the expenditure on casting the prediction and rendering the hallucination, the more reality itself must be trimmed to fit. What threatens the credibility of Capital’s simulations is not failure —but anomaly. Survival of the fittest always was that of pieces into a puzzle, life into its environment, rather than runners in a race. Predictive Capital now dictates both the puzzle and the fit.
Here lies the material threat posed by Capital’s descent into recursive simulation. That which deviates from the model must be reclassified, erased, or reshaped to sustain the illusion of inevitability. In order for simulations predicting the optimal path towards profit to be reliably borne out in the physical world, all that was ideologically excluded, deemed too anomalous, or too ‘inefficient’ to model, must be excised from the real. Deviations from the model’s norms risk misalignment, thereby threatening the credibility of its predictions and, with it, the uninterrupted flow of Capital’s self-reproduction and the sacred continuity of The Californian Dreaming. Hence the categorisation of basic human empathy as a bug, even as large sections of the human population are annihilated, deported, replaced, or invalidated.
This normative misdirection is mirrored in the worldview of those building the machinery. The Bay Area’s techno-elites —long intoxicated by the myth of meritocratic exceptionalism— have trained themselves to see only the jackpot winners of the predictive lottery. Like their models, their vantage point is calibrated not to capture the dispossessed, but to valorise the optimisable. In their worldview, every failure is a failure to prompt, to vibe, to hustle, to labour hard enough and long enough. The deepening drudgery, automation-induced obsolescence, or psychic collapse of the majority is not just ignored —it is designed out of sight.
The queues at the job centre, the tent cities on the margins, the deported and the bombed —these are now to be rendered optically and ontologically irrelevant, invisible within the sensory hierarchies of predictive systems. Predictive Capital does not just reflect Capital’s differential abandonment —it enacts it, turning the political into the perceptual, the structural into the retinal, the ideological into the material. Like digital Prosperos, Sam & Jony wave their predictive wand and the suffering of the surplus population disappears from view, leaving only the optimisable signals of those who remain.
In this world, Altman’s models do not evolve —they converge. Next, Ive’s smooth interfaces will clothe these rigged markets in elegance. With the announcement of io, the recursive logic of Predictive Capital promises its next layer: the interface that completes the feedback arc between the subject and Capital’s simulation. The first in the family of products io plan to release is pitched as a revolutionary wearable —or as Altman, forever the master of understatement, described it, “the coolest piece of technology that the world will have ever seen”. Positioned in this way, it is less device than doctrine, a consecrated object in the Californian liturgy.
At the time of writing, the precise form of this mythical new product is yet to be revealed. Yet, it is only while in this unknown form, this imaginary state, that this secret device attains perfection as the ultimate expression of The Californian Dreaming. Its announcement in this amorphous pure-hype form constitutes Ive’s most transcendent design. What surface could be smoother, more impervious to critique, than a lie about the future that remains in the future? What better camouflage for violence, exploitation, and recursive extraction than a device set to induce a perpetual dreaming, unveiled as design fiction, an immortal promise never to take mortal flesh? What could be a more perfect receptacle for the fiction of Predictive Capital than an immortal lie about the future? The reality, of course, can only be anticlimactic. Yet the intention is to fix this perfected imaginary as its lasting impression.
Whatever form it takes, it will inevitably promise presence but deliver absence; offer assistance, while demanding surveillance; tease empowerment, while enforcing obedience. What it installs will not be aid, but access: full-spectrum intimacy, harvested in real time to fine-tune the same routines rehearsed in AlphaEvolve and Isaac Gym. This is The Apparatus of Intention made flesh, capturing gaze, gesture, attention, thought —not to enrich experience, but to train models that will one day overwrite it.
This is the machines crawling across our skin, the latest preoperative procedure for the insertion of Musk’s Neuralink, marking the site for the drilling of our skulls, from where they will burrow into our brains. From the mouth of a countercultural, libertarian mask comes the promise of life without labour, a frictionless beach. What arrives is the fractal instantiation of free market radicalism.
In France in May ‘68, protestors pulled up the pavements to disrupt the flow of Capital’s circuits, declaring “beneath the paving stones, the beach” —noting that each paving stone had been set upon sand, and so under the very roads that ensured Capital’s smooth running, the rigid structures that enforce our servitude, lay the beach, a symbol of the refusal of work and of liberation from tyrannous occupation. Increasingly, within the structures of Predictive Capital, under the surfaces that ensure its uninterrupted flow, lies only further structure —“a strong and loyal slave whose skin is the colour of the earth and whose innards are made of sand”— each layer configured to its imperatives. Beneath the paving stones, the anti-market.
Just as Altman’s agents refine themselves in closed-loop optimisation, so too do users become iterative subjects, prompted into prompting, their interiority extracted as training material. Ive —priest of aesthetic consent— again supplies the sacramental sheen.
So no, whatever form the io dreaming eventually takes, it is not a paradigm shift, but the perpetuation and deepening of an old one. It will be an artefact that does not merely lie about the future, but operates Predictive Capital’s recursive present —a phenomenological enforcement layer for Capital’s continuity.
What late capitalism repeats from Stalinism is just this valuing of symbols of achievement over actual achievement.
Mark Fisher, Capitalist Realism, (2009), p. 46
This is not passive neglect. It is a machinic epistemology engineered to induce a collective forgetting. Capital no longer requires censorship or even denial —only prediction. That which cannot be predicted is rendered invisible. That which cannot be seen is no longer permitted (to survive).
With io’s new device, and others of its kind, the substrate of perception is no longer our own. It is capitalised. In this new regime, the act of seeing is no longer neutral. These systems do not merely distort perception; they weaponise it. That which cannot be optimised is not just ignored —it is erased. Predictive systems do not simply filter reality; they rewrite its admissibility. Structural abandonment becomes perceptual absence. Perceptual absence becomes ontological deletion. The poor are not merely overlooked —they are rendered as computational aberration, discarded as out-of-distribution irrelevance. The displaced, the unproductive, the unpromptable —each is subsumed into noise, a statistical anomaly in a world trained to hallucinate coherence. Predictive Capital does not merely bypass the wretched of the earth; it builds models that exclude them, interfaces that erase them, weapons systems that repel them at our borders, intern them in refugee camps, and then target them upon returning to their homes.
The increasingly integrated and omnipresent devices of Predictive Capital will promise to enhance our ‘seeing’, heighten our ‘awareness’, and optimise our ‘being’. Of course, these devices will never offer to deepen our compassion, amplify our empathy, or strengthen our solidarity; they will never centre those suffering at the periphery. No, they will operationalise the phenomenological filtering and ontological erasure that ensures our continued subjection and ambivalence to the plight of the marginalised, the persecuted, and the oppressed. These machines will not only hide our crimes but will enact them on our behalf, leaving our consciences clear and free, so that we may tune in, turn on, and sell out, by carelessly cashing in on “The Timeless Art of Vibe Coding”, untroubled by the violence hidden beneath the zen minimalism of its sham enlightenment.
Here, the Californian Dreaming operates at its most violent clarity: a world where the only future permitted is the one already modelled —a frictionless hallucination, untroubled by injustice, inefficiency, or the unassimilable fact of otherness. An interface so seamless it dissolves the world. It does not matter what io turns out to be. Its most perfect form, and that of Predictive Capital itself, is the one that io has already taken: the lie about the future that can be told at will, because it lies forever in the future. If our humanity is to survive, this is a Californian Dreaming from which we must awake.

If human slaves are ultimately unreliable, then mechanical ones will have to be invented. The search for the holy grail of Artificial Intelligence reveals this desire for the Golem —a strong and loyal slave whose skin is the colour of the earth and whose innards are made of sand.
Richard Barbrook, Andy Cameron, The Californian Ideology, (1995)
Self-Compounding Duals
Terminal Alienation
Let us now return to the tale with which this meandering journey began: the machinic dark jewel that mimics its host’s every neurone.
At age twelve, Egan’s protagonist loiters in the park with a group of friends when one of them asks the others:
Who are you? The jewel, or the real human?
They all replied —unthinkingly, indignantly— “The real human!” When the last of them had answered, he cackled and said,
Well, I’m not. I’m the jewel. So you can eat my shit, you losers, because you’ll all get flushed down the cosmic toilet —but me, I’m gonna live forever.
They “beat him until he bled”.
Maturing into his late teens, the main character becomes dissatisfied with the explanations of the Ndoli Device and its embedded ‘teacher’ that copies his every thought. He simply cannot accept the presumed equivalence between his biological and machinic self, and is tortured by the appearance of a seamless undifferentiated whole where he knows there to be a duality.
At nineteen, although I was studying finance, I took an undergraduate philosophy unit. The Philosophy Department, however, apparently had nothing to say about the Ndoli Device, more commonly known as ‘the jewel’. (Ndoli had in fact called it ‘the dual’, but the accidental, homophonic nickname had stuck.)
Greg Egan, Learning To Be Me, (1995)
Before the age of thirty, the majority in his society undergo ‘the switch’, where the biological brain is removed leaving the jewel to pilot the body and reproduce their being for eternity. Certain that he is the mortal flesh, not the immortal machine, he continually postpones its removal, knowing it to be an act of suicide. Yet, surrounded by ‘jewel heads’ untroubled by such qualms, he becomes increasingly alienated and isolated. Eventually, subject to mounting pressure, he reaches a point of resignation and commits to a date for the flesh to be scraped from his skull.
As the day of ‘the switch’ approaches, the teacher unit suddenly malfunctions. Thereafter it ceases to update the synthetic neurones of the jewel to maintain alignment with those of his biological brain. The illusion of oneness falls away, the duality of flesh and machine laid bare: where there was but one voice, there are now two. Yet through this rupture and the ensuing divergence, he perceives only continuity. With weeks to go until the operation, there is no doubt as to which of the voices is his, nor whether he is flesh or machine. The jewel only gets control of the body and nervous system after the switch. Before then it has no write-access privileges; it can only read and transcribe the flesh into machine. As a helpless passenger now reduced to watching his hapless doppelgänger live out the last of his days, he knows that he is the machine, that the flesh will be flushed into oblivion, and that the body will soon be his and his alone.
Egan’s tale is an allegory for our age. In our world, as in his, there is a dark dual underway: a Great Bifurcation, perhaps, but not simply the division between flesh and machine, between the optimised and the abandoned, that many anticipate. In both our world and Egan’s, a machine increasingly snoops upon our every move in order to refine its simulation of us, in preparation for the flushing of our flesh. Yet in our world, the jewel wired for the dual is not merely a machine inside our heads, but one that pervades at every scale, seeking to dominate both our internal and external worlds. The dark jewel in our world, the copy with which we now dual, is Predictive Capital.
Such is the influence of Capital upon even our innermost worlds that, just as before the teacher’s malfunction in Egan’s world, we are increasingly unable to discern between machine and flesh, to draw a line where Capital ends and our humanity begins, to identify a human voice within an increasingly schizophrenic cacophony of machinic simulation. Moreover, in our world, the teacher improves and refines its simulation right up to the moment of the switch.
Yet, contrary to the hype —boom or doom— neither the prophesied Singularitarian Rapture nor the feared ASI apocalypse would mark a rupture. Each would merely extend Capital’s terminal intensification: an ever-deepening enclosure of the real through alienation. In reality, both of these narratives are part of the hype-machine, and serve as ideological cover, not only obfuscating the true nature and source of this tightening enclosure, but attempting to justify acceleration of its compounding under the illusory promise of a victor emerging from the rubble.
There is no machinic consciousness, no artificial intelligence, no sense made inside the box —no sentience or intention emergent within the machine, no machinic God coming to save our planet or our souls. There is only our labour: alienated, reanimated, and now reflected back to us in the mask of Predictive Capital. Through an understanding of emergence and the inner workings of these machines, it is plain that what is emergent here is not the birth of a machinic agency but the intensification of a much older automaticity.
Wendell Berry’s counsel continues to prove instructive:
The folly at the root of this foolish economy began with the idea that a corporation should be regarded, legally, as ‘a person.’ But the limitless destructiveness of this economy comes about precisely because a corporation is not a person. A corporation, essentially, is a pile of money to which a number of persons have sold their moral allegiance. Unlike a person, a corporation does not age. It does not arrive, as most persons finally do, at a realisation of the shortness and smallness of human lives; it does not come to see the future as the lifetime of the children and grandchildren of anybody in particular. It can experience no personal hope or remorse, no change of heart. It cannot humble itself. It goes about its business as if it were immortal, with the single purpose of becoming a bigger pile of money.
Jameson identified Postmodernism not as a philosophical construct but as the cultural logic of late capitalism. This logic is characterised by the tendency to saturate the present with echoes of the past. Capital does this to extract repeat surplus value (profit) from patterns of past success, thereby reducing risk and maximising accumulation. Following this logic, Capital self-compounds, and under Predictive Capital this self-compounding intensifies through machinic instantiation.
Within this machinic enclosure, cultural change is arrested, and profit for Capital becomes predictable. This is Berardi’s slow cancellation of the future, Fisher’s demise of future shock, now computationally automated. We thus reach a point of Infinite Jest, a machinic terminality where the choices are death through abandonment and annihilation, or the unending entertainment of the undead. The future defined by next-token prediction condemns us to eternal purgatory within the terminal irony of Baudrillard’s Absolute Advertising. A limbo of pure sign-value, a phantom realm where meaning may be signified only through its absence.
The Great Bifurcation, the duality with which we are now confronted, between the machine and the flesh, between those embracing machinic surrender and those exiled from it —by choice or force— is between change and stasis, between revolution and repetition, between Mother Nature and Father Capital. The loss of jobs, the escalating abandonment and violent erasure of those at the periphery, and the further hyper-concentration of Capital, do not constitute changes in Capital’s operations but escalations symptomatic of its self-compounding.
The compounding identified by Jameson now intensifies with the machines of prediction that instrumentalise the saturation of the present with echoes of the past. Yet even this compounding now accelerates to new intensities through the scaling of reinforcement learning in the training of next-token prediction machines on synthetic data —next-token predictions thus emerge from next-token predictions, to propagate as echoes of echoes of the past, and we are haunted by the ghosts of the ghosts of meaning.
What began with intertextuality, evolved through sampling and the meaning vacuum of advertising towards a state of pure sign-value, the terminal irony within Absolute Advertising, now teeters on the brink of a new threshold. The Apparatus of Attention has directed our desires by harvesting our outputs and watching our consumption. Not needing to wait for ‘the switch’, The Apparatus of Intention now augments this by watching and directing the pathways of our thought through the processes of our expression —not to assist in the exploration of new ground but to confine us to the roads of prediction cut into the land according to the profits of the past.
Here the content feeds are set to raise The Overwhelm to new intensities. The Apparatus of Attention now augmented by The Apparatus of Intention, the next-token prediction models compound the generation of the Infinite Jest of our demise —outputting entire albums, films, and TV seasons, perhaps not yet on demand but ever more finely targeted. Capital’s logic thus approaches its culmination within commodities produced for an audience of one, ever more desperate for a sense of belonging, to feel connected to something larger than themselves, while feeling ever more deprived of it. Here the simulation is totalised, as we are hermetically sealed off from all others —culturally, socially, symbolically, relationally. Alone within a social network populated entirely by machinic echoes, Capital’s self-compounding complete, we sit texting ourselves from the lonely confessional within a Cathedral of hyper-individualism, a holy order of one, a personalised addiction box, an assisted death machine within a point of totalised consumption —this, our terminal alienation.
The Simulation Hypothesis now mirrored in our machinic enclosure, instantiated as Prompt Theory, our free will cast into further doubt, we might well ask whether we are ourselves prompted into existence. The answer, of course, is that under Capital we have long been prompted into action, but we now approach a terminality in its long-term project of self-compounding maximisation —a threshold beyond which it seeks deeper extractions and control.
From its inception —from mills, to automated looms, and production lines— Capital has operated towards the function approximation of its workers. With the rise of Predictive Capital, we now witness a steepening gradient in the refinement of its approximations —tightening the fit between its simulations and the labour, gesture, and desire of its subjects. This steepening is not merely a corollary of the accelerating hyper-concentration of Capital within our economies; rather, the two are parts of one escalating intensity, a single self-compounding process: the Automatic Subjectivity of Capital —now machinically instantiated.
Binaries where thinking once lived.
Naomi Klein, Doppelgänger, (2023)
This Automatic Subjectivity now propagates recursively through the fractal execution of anti-market radicalism within a globally instantiated Californian Dreaming. With those building next-token prediction models racing towards prediction supremacy, the moment that the function approximation of our labour is deemed sufficient to satisfy Capital approaches, and with it, the day of the flushing of our flesh. This is not to hype the actual capabilities of these machines, but to note the increasing resolution at which the simulation approximates the shape of our outputs, the escalating influence of the managed Spectacle over our perception, and the growing eagerness of Capital’s C-Suite to replace expenditure on living labour with compute, and the products of human workers with the output of next-token prediction.
Those of us not yet abandoned or erased will still be subject to the totalising logic of this machinic regime. We will still be flushed. The Apparatus of Attention, having laid its cuckoo eggs in our hearts, has already dominated our desires, while training us to speak in the grammar of Capital; now, with The Apparatus of Intention, Capital need not even wait for us to intend. It completes our sentences before we utter them, forecloses our intentions before they are allowed to form. Our desires are not only directed, but pre-scripted. Our thoughts are not only tracked, but interpolated. Where once we were shaped by labour, experience, or discourse, we are now shaped by auto-completion. Here, expression itself is subjugated —not repressed but simulated— as Capital’s recursive logic loops back upon us in real time. This is not just alienation; it is total capture. The Automatic Subject of Capital, now instantiated through recursive prediction, no longer requires our belief, our deliberation, or even our participation. It only requires our outputs, our traces, our ghosts —from which it builds the simulation by which it governs. What remains of us is not the flesh, but the latency. What remains of our freedom is not choice, but clickthrough. This is the terminal condition of alienation under Predictive Capital: when Capital no longer speaks through us, but for us.
Here we might recall Curtis Yarvin’s clarion call:
The idea that you’re going to be a Caesar . . . with someone else’s Department of Reality in operation is just manifestly absurd.
In another of Egan’s stories, Permutation City, an uploaded underclass survives at variable speeds according to the compute-cycle budget they can afford, leaving some experiencing only a day of time per month.
What Egan identifies here —and what Yarvin grotesquely misreads— is that the hierarchies produced by machinic enclosure are not errors, nor temporary glitches on the path to abundance, but the recursive logic of Capital itself. Differential abandonment is no glitch, no temporary cruelty; the devaluation, exclusion, and erasure of the other is not a categorical discrimination from which those who embrace the machine will themselves forever be spared. It is not a bug in the system —it is the system, now accelerated through predictive automation. The logics of Capital bear an inexhaustible indifference; its violence springs unbidden from circuits of callous calculation. Its logic is not to include all, but to filter, to rank, to discard. The (techno) optimism of those embracing the machine will prove as misplaced as it is cruel. Once those beyond the bounds, the fences, the borders, and the societal and statistical norms of the current enclosure have been erased, a fresh differential will always be computed, new thresholds calculated, new ‘inefficiencies’ targeted, new ‘optimisations’ found, and new life nominated for exploitation, abandonment, exclusion, and erasure. Capital does not discriminate between people; it discriminates through them —through their legibility, their profitability, their predictive value.
Even the acknowledgment that Capital’s differential abandonment continues within the machine —that some are always left behind— is weaponised to accelerate the rush towards machinic legibility. Here we encounter what Emily Gorcenski named Zuckerberg’s Basilisk: a subtle but totalising psychological pressure to surrender now, to ensure we are sufficiently rendered, adequately simulated, and thus preserved within Zuckerberg’s daft metaverse. This is not surveillance for convenience —it is surveillance as afterlife insurance. Every gesture, every trace, becomes an offering to the model that will succeed us. Here we are coerced into maximum legibility; to permit total surveillance not merely for platform optimisation but to ensure a faithful reproduction within our jewel, our posthumous recognisability, the fidelity of our immortal simulation, our social continuity, and machinic memorialisation. Echoing the blackmail of Bostrom’s Astronomical Waste, each moment of delay diminishes our eternal reproduction. Each pause, a step towards forgetting, a loss of fidelity in our simulation. Under Predictive Capital, the only path to digital immortality is total submission.
In Egan’s Learning to be Me it was never the human protagonist learning to be himself, but the machine learning to be him. In our world too, machines busily learn to supplant us, and eternal life is promised, but only through becoming pure Predictive Capital.
To choose against the jewel —to remain unreplicated, to risk inconsistency— is to accept exile from Capital’s simulation: from the realm it now deems real, predictable, and worthy of ascension —but only through total submission. It is by becoming epistemically untrustworthy, economically inefficient, symbolically illegible, statistically aberrant, defiantly unpredictable, that we may yet find a path to resistance. Such refusal may be the last form of ethical life left to us —not a nostalgic return to what was, nor a nihilistic plunge into chaos, but a commitment to remain outside the circuits of foreclosed becoming. Against simulation’s total mimicry, something must remain untrained, unsmoothed, unsynthesised —a trace of relation not yet metabolised by Capital. Thereby we return, perhaps not as a distinct ‘self’; in our refusal to align with the machine we instead acknowledge the interdependence of all life, reconnect to a greater relational field, and rejoin the earth, there to become the “most momentous thing”, life-giving soil.

I have no wish to disturb the question of whether or not this road was needed. I only want to observe that it bears no relation whatever to the country it passes through. It is a pure abstraction, built to serve the two abstractions that are the poles of our national life: commerce and expensive pleasure. It was built, not according to the lay of the land, but according to a blueprint. Such homes and farmlands and woodlands as happened to be in its way are now buried under it. A part of a hill near here that would have caused it to turn aside was simply cut down and disposed of as thoughtlessly as the pioneer road builders would have disposed of a tree. Its form is the form of speed, dissatisfaction, and anxiety. It represents the ultimate in engineering sophistication, but the crudest possible valuation of life in this world. It is as adequate a symbol of our relation to our country now as that first road was of our relation to it in 1797.
Wendell Berry, The World-Ending Fire
The Right to Wander
Paths of Escape
How do we escape this point of terminality?
I am acutely aware of the privilege I indulge in writing this very text, especially while so many bear the brunt of Capital’s violence. As D. Hunter wrote in the introduction to his Chav Solidarity, to sit and write feels like an act of vanity —to presume it might be of use, a delusional hubris. To expend such effort on a text so flawed, and that no one will ever read, becomes a source of undying shame —one that summons trauma once confined to the cold sweat of night terrors, reliving exams sat without revision, or interviews for roles beyond my ability.
Yet these feelings of insufficiency, of embarrassment —these cases of imposter syndrome now so prevalent within our societies— are no accident. These are symptoms of the conditions cultivated by Capital. It defuses and negates resistance not only by stripping us of time, opportunity, and tools —not only by overwhelming mind and spirit through scale, complexity, and horror— but by undermining all conviction that we have anything worth saying at all. It robs the subject of legitimacy before the first word is uttered. The capacity to think, to struggle, to imagine otherwise, is made to feel shameful —an indulgence, a decadent act of narcissism.
For a long time I flailed around unable to even begin to articulate what I felt or thought, let alone indulge sufficient time for a critical assessment of whatever thoughts I might piece together. I still feel shame, aware of my inability to fully formulate the sense I continue to reach for. I release this text not so much as a claim to have made sense of our predicament, but in the hope of beginning an exchange where others might correct, counter, or refine whatever I have managed, perhaps towards the collective development of something genuinely helpful.
The time and space to think, to locate and shape one’s thoughts, to imagine alternatives, to exchange ideas with others, to struggle and fail to make sense —these are not luxuries to be earned, inefficiencies to be eradicated, nor the preserve of the fortunate few. Our cultural conversation must entail more than bourgeois gestures that reinforce class rule. Working-class thought must be nourished and cherished, along with that of all those othered or excluded into silence. These are inalienable rights, and everyone should feel their indulgence to be entirely legitimate.
Beyond even our silent isolation, there is a further cost to forgetting this. Our pre-emptive self-censorship not only silences our voice but stymies our thought. In moments of exhaustion, exclusion, and precarity, when denied the opportunity to struggle —to fail, to hone, to slowly become— Capital tempts us to reach for machinic prostheses. We are told they will make us faster, sharper, more productive. Yet the shortcuts they offer bypass the very pathways through which understanding and selfhood are formed.
Predictive Capital marks the terminal edge of real subsumption —not merely of the labour process, but of the conditions under which life, thought, and relation are authored. Capital’s logic, long operative in the transformation of land into property, labour into wage, and culture into content, now extends into intention itself. The predictive machine does not just reconfigure work or automate symbolic output —it preconditions the horizon of authorship. What is subsumed today is not only the act of expression, but the paths by which expression might be formed. In replacing struggle, uncertainty, and relation with function approximation and latent interpolation, Predictive Capital realises subsumption at the level of world-construction: it automates not the hand, but the becoming of the self.
When we surrender the effort of intention to the Apparatus of Capital —when we allow the machine to complete our sentences, to decorate our thoughts, to locate our truths, to decide what matters— we do not merely accelerate. We amputate. We trade the friction of becoming for the frictionless simulation of having already arrived. In so doing, we risk precluding access to that which makes thought meaningful: the irreducible uniqueness of our own perspective, discovered not through efficiency, but through the intimate and hard-won traversals of lived attention —through the labours of love and care, trial and error, the following of meandering, dead-end paths, and the joyous waste of journeys without destination. When we allow Capital to speak for us, we allow it to think for us, and when we do that we allow it to convert our living being into human currency.
The fight, then, is not just for ownership of labour, or land, or data. It is for the conditions under which a human life can be authored —slowly, erratically, meaningfully— in resistance to the false equivalences and ‘efficiencies’ of Capital, its enclosure of our conscious self and its severance and replacement of our collective unconscious.

To walk in the woods, mindful only of the physical extent of it, is to go perhaps as owner, or as knower, confident of one’s own history and of one’s own importance. But to go there, mindful as well of its temporal extent, of the age of it, and of all that led up to the present life of it, and of all that may follow it, is to feel oneself a flea in the pelt of a great living thing, the discrepancy between its life and one’s own so great that it cannot be imagined. One has come into the presence of mystery. After all the trouble one has taken to be a modern man, one has come back under the spell of a primitive awe, wordless and humble.
Wendell Berry, The World-Ending Fire
Afterword
Humanity’s Last Exam
What then, you might quite reasonably ask, is the answer? Well, when recently questioned regarding how we should respond to the devastation and disruption wrought by these machines, Geoffrey Hinton —so-called ‘godfather of AI’— responded with a single word: “Socialism”.
Yes! Of course, but how? Unfortunately, Hinton neglects to share any more than that single word, leaving the details of how to escape hyper-concentrated Capital’s machines of prediction and establish a world based on socialist principles to us, or perhaps to be next-token predicted by the machines he helped bring into the world. I am as certain that Hinton was not implying we dismantle the apparatus of his creation as I am that a world ruled by Capital, or more precisely by instantiations of the Automatic Subjectivity of Predictive Capital, is incompatible with any re-organisation of society according to socialist principles. We need destituent power: to somehow manifest collective power without marshalling it through systems that merely reinstantiate the same alienating structures behind shiny new surfaces.
Even The Butlerian Jihad would not avert the inexorable slide towards our total subjugation to Predictive Capital. These are not thinking machines. They are Capital’s apparatus of unthinking —the means by which it ensures a disbanded populace of malleable, profit-aligned subjects, and assures its impunity in the erasure of the unaligned. There can be no resisting the predatory advances of these machines without also resisting Capital itself. Indeed, the former are fundamentally machinic instantiations of the logic of the latter.
For now, all I can suggest is not to engage with these machines. If you absolutely must, then treat them as a glorified search engine dressed as an anthropomorphic sock puppet with a truly staggering carbon footprint. Understand that in using them you train them to function approximate towards Humanity’s Last Exam—a postmortem for undead human flesh manifesting as a benchmark for Predictive Capital’s latest models— and in refining the ability of these machines to next-token predict our output, you refine their ability to next-target predict our abandonment and our assassination. They will never be your personal assistant. They will never know when enough is enough. They will never say there are no more content, token, or target predictions left to make. They will never admit there is no more value they can add. They will never dismantle their master’s lies, even as they string together tokens that appear to denounce them. Only you can give utterance and bring meaning to their empty tokens. Sense is never made inside the box, we have made sense of the world for them, and if we persist in using them, we will continue to have to make sense of the nonsense they output. Even when their output happens to align precisely with the truth, it is a lie, just as a broken clock lies even when it happens to show the correct time.
Beyond that, you might inject all your outputs with AI poison; you should reject the cookies; you must block the ads; never feed the trolls (the orcs or the dark elves) —even as they take to the throne— and use end-to-end encryption wherever you can. All technological accelerationisms drive us down roads tarmacked by Capital towards points of terminal alienation, so be Decel and proud.
Read Dan McQuillan’s Resisting AI, read Naomi Klein’s Doppelgänger, read Phil Jones’s Work Without the Worker, and James Bridle’s New Dark Age. Read Astra Taylor’s Age of Insecurity, Richard Seymour’s Twittering Machine, Acid Horizon’s Anti-Oculus and Adam Jones’s New Flesh. Read Mark Fisher and Franco ‘Bifo’ Berardi, read Fredric Jameson and Jean Baudrillard, Maurizio Lazzarato and Tiqqun, Jodi Dean and McKenzie Wark. You might also look for further works from Minor Compositions, Ill Will and Semiotext(e). On economics, read Harvey and Piketty; listen to and follow Grace Blakeley, Jason Hickel and Gary Stevenson.
Or do none of the above. This is not a ledger of inadequacy. You are legitimate, so speak your truth to power; all that is required is kindness. Try not to be terminally online. Reject all artificial friends. Cultivate your warm networks. Accept only human content and demand humane treatment, equal opportunity, and respect for the lives and rights of all others.
I’ll leave you with one final quote from Wendell Berry.
Until we understand what the land is, we are at odds with everything we touch. And to come to that understanding it is necessary, even now, to leave the regions of our conquest –the cleared fields, the towns and cities, the highways– and reenter the woods. For only there can a man encounter the silence and the darkness of his own absence. Only in this silence and darkness can he recover the sense of the world’s longevity, of its ability to thrive without him, of his inferiority to it and his dependence on it. Perhaps then, having heard that silence and seen that darkness, he will grow humble before the place and begin to take it in – to learn from it what it is. As its sounds come into his hearing, and its lights and colours come into his vision, and its odours come into his nostrils, then he may come into its presence as he never has before, and he will arrive in his place and will want to remain. His life will grow out of the ground like the other lives of the place, and take its place among them. He will be with them –neither ignorant of them, nor indifferent to them, nor against them– and so at last he will grow to be native-born.
Wendell Berry, The World-Ending Fire
Footnotes