“Do you yet understand we’re living in a sci-fi novel?” That was the question posted on X last month by Mike Solana, a protégé of Peter Thiel. “Seems that way,” replied Elon Musk. He should know. Thirty years ago, you might have scoffed at a sci-fi book set in 2025 about a multibillionaire whose companies operate a vast fleet of self-driving electric cars, a social media network, a chain of satellites beaming the internet to terminals all over the world, and a private rocket programme bigger than Nasa’s; who is also developing brain implants and robots; and who ultimately intends to colonise Mars — all with the assistance of an artificial intelligence chatbot.
The only person who might have envisioned a future as outlandish as our present is the Seattle-based author Neal Stephenson. Near the beginning of Stephenson’s 1995 novel The Diamond Age, there is an exchange between a computer engineer named Hackworth and an “equity lord” — a tech billionaire, as we would say — named Finkle-McGraw. The engineer alludes to some research work he has been doing.
“What sort of work?”

“Oh, PI stuff mostly,” Hackworth said. Supposedly Finkle-McGraw still kept up with things and would recognise the abbreviation for pseudo-intelligence, and perhaps even appreciate that Hackworth had made this assumption.

Finkle-McGraw brightened a bit. “You know, when I was a lad they called it AI. Artificial intelligence.”

Hackworth allowed himself a tight, narrow, and brief smile. “Well, there’s something to be said for cheekiness, I suppose.”

I think a lot about PI these days, not least because of the catastrophic effect it is having on actual intelligence. The Diamond Age, like so many of Stephenson’s novels, offers us a troubling glimpse of a future we have already reached. Software has eaten the world. Venture capitalists and engineers reign supreme. But his networked society has reverted to tribalism. The most powerful of the phyles (ethnic tribes) are the Anglo-Saxon “Neo-Victorians”, who have reverted to the social strictures of the mid-19th century. Beneath them is a slum-dwelling underclass of tribeless thetes. But one little girl, Nell, finds her way out of the Shanghai gutter when she is given a stolen copy of a highly sophisticated interactive book, The Young Lady’s Illustrated Primer, which a modern reader will recognise as a large language model (LLM), or chatbot. Immersive, interactive and adaptive, it gives Nell the education she would otherwise never have received.

Dreamt up by Stephenson 30 years ago, such a primer now exists in multiple, competing forms and is available to anyone with an internet connection. Small wonder that Sam Altman, the co-founder and chief executive of OpenAI — whose ChatGPT launched the AI Age just two and a half years ago — says we are on the brink of a new Renaissance.
“Do you think you’re smarter than [the GPT o3 model] right now?” Altman asked the Financial Times rhetorically in a recent interview. “I don’t … and I feel completely unbothered, and I bet you do too.”

Altman has every reason to want to soothe us: he needs our subscriptions. Yet it would be strange to be completely unbothered by the speed with which young people are adopting AI. As Altman himself has noted, “older people use ChatGPT like Google. People in their twenties and thirties use it as a life adviser.” And college students “use it like an operating system. They set it up in complex ways, connect it to files, and have detailed prompts memorised or saved to paste in and out.”

AI usage is already spreading faster than internet usage did at a comparable stage. ChatGPT now has 1 billion active users; Google’s Gemini has more than 400 million monthly active users. And the use cases for AI keep multiplying. The management consulting firm McKinsey has a chatbot named Lilli. Boston Consulting Group (BCG) has Deckster, a slide-deck editor. Rogo, funded by the venture capital firm Thrive, is a chatbot for investment banking analysts. The language-learning app Duolingo is replacing contract workers with AI.

Without a doubt, the models keep getting better. Humanity’s Last Exam (HLE) is a test designed to evaluate PhD-level reasoning and deep research skills. When HLE was released in January 2025, the best model scored 8.1 per cent. Just a month ago, Google’s Gemini 2.5 Pro hit 21.6 per cent.

On the other hand, the computational power (compute, for short) needed to train LLMs keeps growing. When ChatGPT launched, it required around 3 per cent of the training compute required by today’s leading-edge models. Just two and a half years from now, according to Peter Gostev, head of AI at the greeting cards company Moonpig, the models will need 30 times more compute than today’s, and 1,000 times more than ChatGPT did at launch.
As Toby Ord, a senior researcher at Oxford University’s AI Governance Initiative, has noted, the financial costs and energy use “need to go up exponentially in order to keep making constant progress”. Capital expenditure and research and development spending by Meta, Microsoft, Amazon and Alphabet are expected to rise to $596 billion in 2025, up 35 per cent from 2024. We have seen nothing like this since the construction of the railways in the 19th century.

Partly because AI works so well and partly because it costs so much, we are also in the early phase of large-scale job destruction. Google, Microsoft, Apple, Tesla, Meta, Nvidia and Palantir have all essentially stopped hiring. We are already seeing absolute job losses in areas such as professional writing and call-centre work. “Artificial intelligence is going to replace literally half of all white-collar workers in the US,” Jim Farley, the chief executive of Ford Motor, told the Aspen Ideas audience last week. Within a few years, AI could destroy even more white-collar jobs than the number of blue-collar jobs lost to China after it joined the World Trade Organisation in 2001.

The AI revolution has a geopolitical dimension, too, as it is now the crucial field of superpower competition in Cold War II. DeepSeek, China’s champion LLM, may still lag behind the best US models in terms of performance, but not by much — and it is far cheaper (by as much as 17 times). Constrained by US restrictions on their access to the most powerful semiconductors, the Chinese AI companies are seeking to win the global application and adoption race.

No one should underestimate the risks of an AI arms race. Ask yourself: after the discovery of nuclear fission, which did we build more of over the past 80 years, nuclear warheads or nuclear power stations?
Today there are approximately 12,500 nuclear warheads in the world, and the number is rising as China adds rapidly to its nuclear arsenal. By contrast, there are 436 nuclear reactors in operation. As Gertjan Vlieghe of the hedge fund Millennium recently noted: “Today, more than 95 per cent of the destructive power of the world’s weapons arsenal comes from nuclear power. But less than 5 per cent of our primary energy comes from nuclear power.” Will AI be used for malign purposes such as the design of lethal pathogens? Is the Pope a Catholic?

Nevertheless, I believe the economic and geopolitical consequences of AI pale alongside its educational consequences. In a recent paper for the Manhattan Institute, Frederick Hess and Greg Fournier asked: “What Do College Students Do All Day?” The answer is not “studying”. Estimates of the amount of time spent by US students on all “education-related activities” range from 12 to 19 hours per week. According to the sociologists Richard Arum and Josipa Roksa, this represents a decline of roughly 50 per cent from a few decades ago. Hess and Fournier calculate that a student with an average course load should spend “at least 36 hours attending class or doing homework each week”. Today’s students are nowhere close to that.

The decline in study hours is not because students are moonlighting to pay their way through college. (According to the National Center for Education Statistics, just 40 per cent of full-time undergraduates had part-time jobs in 2020, compared with 79 per cent in the mid-1990s.) One very big reason today’s students are spending so little time studying is AI. As James D Walsh recently put it in New York Magazine, “Everyone is cheating their way through college.” Just two months after OpenAI released ChatGPT, a survey of 1,000 college students found that nearly 90 per cent of them had used it to help with homework assignments. OpenAI has since worked out how to hook the remaining 10 per cent.
This year it made ChatGPT Plus — a subscription to which costs $20 a month — free to students during finals. To quote Altman once again: “Writing a paper the old-fashioned way is not going to be the thing.”

The remarkable thing is how open everyone is about this. “College is just how well I can use ChatGPT at this point,” a student in Utah told Walsh. “With ChatGPT, I can write an essay in two hours that normally takes 12,” said Sarah, a freshman at Wilfrid Laurier University in Ontario. “An essay with ChatGPT, it’s like it just gives you straight up what you have to follow,” gushed Wendy, a freshman finance major at one of New York’s top universities. “You just don’t really have to think that much.”

Whatever university professors and administrators may think, they do not have effective tools to detect the use of AI in the papers students submit. A June 2024 study used fake student profiles to slip wholly AI-generated work into professors’ grading piles at a UK university; only 3 per cent of the submissions were detected as the work of LLMs. AI detectors such as Turnitin and ZeroGPT are simply not accurate enough. When Walsh fed one of Wendy’s mostly AI-generated essays into ZeroGPT, it wrongly estimated the essay to be just 11.74 per cent AI-generated; when he ran the Book of Genesis through the app, “it came back as 93.33 per cent AI-generated”.

Writing in the New Yorker, D Graham Burnett noted the fatalistic mood that grips many colleges in the face of the AI onslaught. “On campus,” he wrote, “we’re in a bizarre interlude: everyone seems intent on pretending that the most significant revolution in the world of thought in the past century isn’t happening.
The approach appears to be, ‘We’ll just tell the kids they can’t use these tools and carry on as before.’ This is, simply, madness.”

Talking of madness, it seems unlikely that tolerating the wholesale outsourcing of studying will do the students themselves much good. Gen Z is already notoriously susceptible to mental health maladies, real or imagined, thanks to — Jonathan Haidt argues — their childhood exposure and addiction to social networking apps on mobile devices. Enabling “the anxious generation” to shirk the acquisition of skills such as sustained reading, critical thinking and analytical writing cannot be expected to help matters. Indeed, it would be astonishing if reliance on LLMs at university did not lead to arrested cognitive development. To quote Robert Sternberg, a psychology professor at Cornell University: “The greatest worry in these times of generative AI is not that it may compromise human creativity or intelligence, but that it already has.” A new study by MIT researchers — “Your Brain on ChatGPT” — suggests he is right.

What, then, should universities do in response? At the University of Austin, of which I am a founding trustee, I recently suggested five essential steps:

1. Create quarantined space in which traditional methods of learning can be maintained and from which all devices are excluded. Call this “the cloister”. In another Neal Stephenson book, Anathem (2008), a future world has responded to the calamity of a nuclear war by banishing scientists (the “avout”) into “concents” (monastic communities). The avout are banned from possessing or operating advanced technology. It turns out that only the skills honed in the concents equip the avout to contend with the threat posed to Earth by an alien starship from a parallel world.
2. Inside the cloister, allocate time to a) reading printed books, b) discussion of texts and problems, c) writing essays and problem-sets with pen and paper, and d) assessment via oral and written examinations.

3. Require time in the cloister to be around seven hours a day, leaving time on board the starship, as well as vacations, for the use of AI.

4. Revise admissions procedures to ensure the university attracts students capable of coping with the discipline of the cloister as well as the opportunities of the starship.

These suggestions might seem like an over-reaction to the challenge posed by AI — in effect, a return to the monastic origins of the European university in the medieval period. However, my inspiration for the cloister is not history but science fiction. In this model, the starship is as important as the cloister.

Reading Anathem, I found myself thinking that the university of the future will need to resemble much more closely the enclosed world of the monastic orders than the open-access colleges of the present day — institutions so readily accessible to outsiders that non-student agitators after October 7, 2023, found it quite easy to organise pro-Palestinian “encampments”. Today’s students need to be protected not only from such influences but also from the temptations of AI.

To repeat, that does not mean that the university of the future should prohibit the use of LLMs. On the contrary, we would want our students to excel at writing well-crafted prompts. But one cannot learn to ask good questions — what Germans call the art of Fragestellung — without first submitting to the discipline of the cloister, acquiring the skills that can nowadays be acquired only in strict seclusion from AI.

I see very little prospect of such a radical new regime being adopted at any of the established universities, as they are, by definition, universities of the past. However, I shall be arguing strongly that we take this approach at the University of Austin from the outset of the next academic year. Students may rest assured that no monastic habit or tonsure will be required of them, nor oaths of celibacy. But strict prohibitions on devices within the cloister, including wearable and implanted technology, will have to be insisted upon if the rapid advance of pseudo-intelligence is not to plunge all of humanity into a new Dark Age.

This essay is based on a talk given at the Austin Union on June 11

