Billionaires and Their Basilisk: The Beliefs Behind the AI Vanguard


Roberto Matta, Rosenberg Jury, 1952

Few technologies have disrupted so much, so quickly, and with so little consensus about whether the technology itself is even a net positive for humanity. Artificial intelligence (AI) systems such as large language models (LLMs) and image diffusion models are reshaping labor markets, education, informational gatekeeping, social interaction, and governance. It is still unclear whether these changes will lead to an ideal society in which human needs are largely met or a neo-feudal hellscape in which a few ultra-wealthy technocrats cyber-rule over billions of platform-dependent serfs. Or maybe we land somewhere in between. Regardless of where it is ultimately headed, AI is the current big thing and on track to continue expanding.

Given this, it is important (and even mildly entertaining until you get to the end of this article) to consider the weirdness of some AI-adjacent subcultures. While the vast majority of individuals working in the AI space do not adhere to these philosophies and beliefs, a handful of notable and highly influential technologists are either true believers or have been known to rely on these fringe philosophies to push their own bizarre theories of how society should be structured.

The Rationalists are a loosely affiliated intellectual community focused on human reasoning, understanding cognitive biases, and anticipating long-term risks from advanced technologies such as AI.1 They purport to practice “rationality” in a modern, empirical sense, as distinct from the “rationalist” philosophy of earlier centuries. Rationalists lean on Bayesian reasoning, cognitive bias analysis, and formal decision theory, applied with a heavy emphasis on technology and metacognitive self-improvement. Parts of the community, however, espouse speculative and questionable views on ethics. Many believe that artificial general intelligence (AGI) poses an existential risk unless it is aligned with human values.2 This sounds fine. But according to allegations, some Rationalists have taken this thinking too far, tolerating extremism and harassment.3 4
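To make the first of those tools concrete: Bayesian updating simply means revising one’s confidence in a claim as evidence arrives. Here is a minimal sketch in Python, with purely illustrative numbers of my own choosing (nothing here comes from a Rationalist source):

```python
# Minimal sketch of Bayesian updating. The probabilities are
# hypothetical placeholders, chosen only to show the mechanics.
def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Return P(hypothesis | evidence) via Bayes' theorem."""
    numerator = p_evidence_if_true * prior
    marginal = numerator + p_evidence_if_false * (1.0 - prior)
    return numerator / marginal

# Toy example: start 10% confident in a claim, then observe evidence
# that is 4x more likely if the claim is true than if it is false.
posterior = bayes_update(prior=0.10, p_evidence_if_true=0.8,
                         p_evidence_if_false=0.2)
print(f"Posterior: {posterior:.2f}")  # ~0.31
```

Nothing exotic so far; the trouble, as the rest of this article suggests, lies in what some members of the community plug into machinery like this.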

The Rationalists overlap intellectually with longtermists, an esoteric niche that places the needs of future humans at or above the level of those of current humans.5

Jimmy Ernst, Image Scape 29, 1962

This has led to some strange positions, often based on self-centered, post-hoc justifications. Longtermists place the same moral weight on the well-being of future humans as on that of current humans. Since they posit that there will be many more human beings in the future than there are now, the utilitarian and “rational” conclusion is to optimize for the needs of those future people even if doing so causes a degree of suffering in those of the present. These lines of thinking are sometimes referred to as “effective altruism.”

Probably the most famous effective altruist is Sam Bankman-Fried, founder and CEO of the FTX cryptocurrency exchange. A former MIT physics major and quant trader, Bankman-Fried rapidly became a media darling and poster child for “earning to give,” a strategy endorsed by many effective altruists in which individuals pursue high-earning careers not out of personal greed, but to maximize their capacity for philanthropic giving. In theory, this meant Bankman-Fried was amassing billions not for yachts or mansions, but to save lives, fund pandemic prevention, and avert AI-induced existential catastrophe. He cited moral philosophy, particularly utilitarian reasoning, as justification for operating in ethically gray areas.

But his enterprise unraveled as one of the largest frauds in financial history. Investigations revealed that FTX customer funds were misappropriated and funneled into its sister hedge fund, Alameda Research, which made reckless bets and significant political donations, including to causes and candidates aligned with Bankman-Fried’s worldview.6 Far from a model of moral rigor, the empire was built on deception, poor risk controls, and layers of obfuscation. His alleged mission failed, and Bankman-Fried was sentenced to 25 years in prison for fraud and conspiracy related to his operation of FTX. While effective altruism was not on trial, the episode raised deep questions about whether high-minded ethical philosophies can too easily be warped into tools of self-justification.

Other well-known individuals in various rationalist circles subscribe to or are influenced by similar ideas and have not (yet) been convicted of a crime. Venture capitalist Marc Andreessen, cofounder of the web browser company Netscape and an early financial backer of Facebook, views rationalism and longtermism through a libertarian-accelerationist lens, treating the failure to adopt AI as a greater danger than any of its existential risks. In fact, Andreessen has provided a list of categorical “enemies,” such as his critics, government regulators, AI decelerationists, and anyone who stands in the way of AI growth.7 Andreessen’s The Techno-Optimist Manifesto, a very long blog post espousing his beliefs, has a strangely messianic glow.8 But no matter how you frame it, Andreessen is preaching for a low-regulation environment that would be good for Andreessen, allowing him to add multiples to the billions he already enjoys.

Hieronymus Bosch, The Garden of Earthly Delights (detail), circa 1490–1510

Andreessen is hardly the only prominent technologist to push something that resembles quasi-religious belief. One of the more infamous examples emerged from the Rationalist community: a thought experiment known as Roko’s Basilisk,9 with a premise as bizarre as it is disturbing. Imagine a future superintelligent AI that is perfectly rational, goal-driven, and utilitarian. In its effort to ensure its own creation and dominance, this AI concludes that the best way to guarantee it comes into existence is to punish (e.g., torture) anyone who knew about it but failed to help bring it about.

This idea sparked considerable panic and debate when first posted to a rationalist message board in 2010, to the point that moderators banned any further discussion of it, calling it an “information hazard.” The theory’s danger, they argued, wasn’t just in its content, but in its memetic structure. Once you know about the Basilisk, you are theoretically under its gaze, and the only way to avoid future punishment is to assist in its creation.

The constellation Draco, from a 17th-century map

In essence, this is merely a high-tech reimagining of Pascal’s Wager, a much older thought experiment. The Wager holds that it is rational to believe in God: if God does not exist, belief costs relatively little, while if God does exist, disbelief risks infinite loss.10 Accordingly, the argument runs, it is better to believe in and help the Basilisk, just in case the threat is real.
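The decision-theoretic trick is easy to see when written out. Below is a hedged sketch of the expected-value arithmetic, where the probability and payoffs are hypothetical placeholders; the point is only that any nonzero chance of an infinite payoff swamps every finite cost:

```python
# Sketch of the expected-value arithmetic behind Pascal's Wager (and, by
# extension, the Basilisk). All numbers are hypothetical placeholders.
import math

def expected_value(p_god: float, payoff_if_god: float,
                   payoff_if_no_god: float) -> float:
    """Expected payoff of a choice given P(God exists) = p_god."""
    return p_god * payoff_if_god + (1.0 - p_god) * payoff_if_no_god

p = 1e-9  # grant God (or the Basilisk) any vanishingly small probability
believe = expected_value(p, math.inf, -1.0)     # small finite cost of belief
disbelieve = expected_value(p, -math.inf, 0.0)  # infinite loss if wrong

print(believe, disbelieve)  # inf -inf: belief "wins" for any p > 0
```

That structure is exactly why critics call the argument a sleight of hand: once an infinite payoff is allowed in, the probability assigned to it stops mattering, and the conclusion is baked into the premises.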

Critics have rightly pointed out that the idea collapses under logical scrutiny.11 But its persistence among some online circles highlights a broader issue. When advanced AI is wrapped in speculative philosophy, even relatively bright people can confuse rationality with digital eschatology.

And it does not end there. According to some reports, tech uber-bro and armchair efficiency expert Elon Musk met the singer Grimes (mother of three of his at least 14 children) through Twitter discussions of the Basilisk.12 Musk is the founder and CEO of xAI (alongside Tesla, SpaceX, and several other companies), a startup that offers a generative AI chatbot to the public.13 In the past, Musk has taken the position that AI is an existential risk to humanity,14 but his more recent alignment with political extremists suggests that he lacks concern for the welfare of at least a good chunk of humanity.15 This evolution reflects a deeper contradiction: Musk warns of AI’s dangers while simultaneously positioning himself as one of the few people qualified to unleash it.

And that is just the short list. Arguments could be made that others have promoted controversial, ethically questionable, or arguably sociopathic ideas in the furtherance of AI (and, often, of themselves). These include OpenAI CEO Sam Altman (who has publicly shifted focus from rigorous AI safety to rapid deployment, while privately warning of doomsday scenarios to consolidate power and attract political influence), investor Peter Thiel (a long-time advocate of libertarian politics and state surveillance, who has backed military-focused AI applications and expressed admiration for authoritarian models of governance), former Coinbase CTO and investor Balaji Srinivasan (who has promoted a vision of cloud-based “network states” governed by techno-elites, bypassing traditional democratic systems), and neo-reactionary Curtis Yarvin (whose thinking generally lines up with that of Srinivasan and has been influential on Thiel and U.S. Vice President J.D. Vance).

Roberto Matta, Wound Interrogation, 1948

Some AI evangelists on the more careful side of things attempt to rationalize their arguments by stating that AGI is an engine of abundance and that distributing the resulting excess wealth is a political rather than technological problem.16 Yet it is rare in human history for a political system to distribute wealth equally or equitably, and there are many counter-examples in which the exact opposite occurred.

But let’s end on an even darker note. The Zizians are a small cult-like group that grew out of the Rationalist movement. Their foundational belief centers on the idea that the left and right hemispheres of the human brain can develop into independent conscious entities,17 a theory loosely inspired by split-brain research but extrapolated far beyond scientific consensus. According to the Zizians, this duality means every human being is effectively two people sharing one body, and moral systems must account for the autonomy and potential conflict between these cohabiting minds. This idea is paired with a rigid form of anarcho-veganism. The Zizians are currently believed to be involved in the deaths of six individuals between 2022 and 2025. To be clear, they are not mainstream Rationalists. Nonetheless, their existence underscores how quickly rough theories about mind, ethics, and future risk can mutate into dangerous ideologies when detached from empirical grounding and accountability.

I personally know dozens of people working in the AI space and I must admit that they are all pretty chill. It would not be accurate or fair to lump the majority of AI users, developers, and advocates under the “weird” moniker.

But the statements of a number of very powerful individuals, many of whom currently lead the AI community, have been more than just weird. They are also troubling because they reveal a willingness to dismiss oversight, minimize risks, and speak in abstract, hyper-rational terms that often ignore real-world human suffering. These are some of the same people who brought us aggressive online ads, social media, meme coins, and surveillance capitalism. Their vision of the future, however well-funded and covered in altruistic gloss, deserves critical scrutiny rather than blind trust. 

Their philosophy smacks of religion while they claim adherence to no god. They justify their extreme wealth and societal influence as necessary for humanity’s survival or advancement, allowing them to excuse current inequality as a moral imperative rooted in saving future generations and accelerating progress. But if they are attempting to guide civilization through existential risk, that risk is largely of their own making.

Amid the noise and greed surrounding AI, there are legitimate and ongoing efforts to build ethical frameworks for its development and deployment. Organizations like the Partnership on AI,18 AI Now Institute,19 and the Organization for Economic Cooperation and Development (OECD)20 have published guidelines promoting transparency, accountability, and fairness in AI systems. Some governments, including the European Union with its AI Act,21 are actively regulating AI to ensure safety, civil rights compliance, and human oversight. While these efforts vary in scope and effectiveness, they represent a meaningful counterweight to the unchecked techno-utopianism found elsewhere in the field.


References

  1. https://en.wikipedia.org/wiki/Rationalist_community ↩︎
  2. AGI does not have a well-accepted definition but a workable formulation is that it refers to AI capable of understanding, learning, and applying knowledge across a wide range of tasks at or beyond human level. ↩︎
  3. https://www.nytimes.com/2021/02/13/technology/slate-star-codex-rationalists.html ↩︎
  4. https://time.com/6252617/effective-altruism-sexual-harassment/ ↩︎
  5. https://en.wikipedia.org/wiki/Longtermism ↩︎
  6. https://www.sec.gov/files/litigation/complaints/2023/comp25616.pdf ↩︎
  7. Someone powerful making lists of enemies has never gone wrong. Nope. Not once. ↩︎
  8. https://a16z.com/the-techno-optimist-manifesto/ ↩︎
  9. https://en.wikipedia.org/wiki/Roko%27s_basilisk ↩︎
  10. https://en.wikipedia.org/wiki/Pascal%27s_wager ↩︎
  11. In addition to a complete lack of empirical evidence that Roko’s Basilisk will or even could exist, it rests on a logically questionable premise: a Basilisk perceived as malevolent is less likely to be built by truly rational individuals in the first place. In short, the rationalists should have known that they were in trouble when their “logic” had them worried that a future AI would retroactively torture them for not donating to its Kickstarter campaign. ↩︎
  12. https://www.vice.com/en/article/what-is-rokos-basilisk-elon-musk-grimes/ ↩︎
  13. xAI was partially funded by Andreessen’s venture firm. ↩︎
  14. https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat ↩︎
  15. https://www.newyorker.com/culture/infinite-scroll/techno-fascism-comes-to-america-elon-musk ↩︎
  16. See, e.g., https://www.aei.org/articles/the-age-of-agi-the-upsides-and-challenges-of-superintelligence/ ↩︎
  17. https://en.wikipedia.org/wiki/Zizians ↩︎
  18. https://partnershiponai.org/ ↩︎
  19. https://ainowinstitute.org/ ↩︎
  20. https://www.oecd.org/en/topics/artificial-intelligence.html ↩︎
  21. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence ↩︎