AI for humanity is a catchphrase that most US AI companies embrace. Silicon Valley start-ups and big AI players like OpenAI or Anthropic cloak their corporate narratives in humanistic terms. The strong suggestion is that AI is not being developed to make money but to make the world a better place. Just like Facebook was marketed as a benevolent community-building instrument or even an emancipatory tool, AI companies are positioning themselves as working for the whole of humanity. Anthropic claims that they ‘strive to make decisions that maximize positive outcomes for humanity in the long run’ (Anthropic, 2025), while OpenAI has made it its mission ‘to ensure that artificial general intelligence benefits all of humanity’ (OpenAI, 2025). This benevolent universalism has been an ingredient of tech ideologies since the rise of digital utopianism in the second half of the 20th century. It deserves some critical attention.
Do you want a dystopian or utopian AI future?
Presenting humanity with a utopian future thanks to technological innovation is not exceptional. The rise of digital utopianism (Turner, 2006) in the 20th century already framed technology, in the form of the personal computer and later the internet, as a liberating force. This narrative gave technology users the idea that the world was at their feet and, even more, that a new world could be born. Neoliberal politicians used it to attack the welfare state, and the liberation narrative of social media became a central ingredient of US foreign policy (Clinton, 2010). Digital media, we were led to believe, would facilitate deep democratization and enduring freedom. At the same time, it was evident that digitalization also excluded people and reinforced (geo)political power relations.
In the case of AI, the narrative has become a bit darker. Underneath the lofty discourse on AI for humanity rests a full-blown dystopian threat. On the one hand, AI is presented as an amazing new tool that will help us cure people, even erase poverty and create peace (Schei et al., 2023). On the other hand, we could lose our jobs and even see the rise of surveillance and control. But that is not even the worst version of the dystopian narrative. Among techies, the dystopian future is not so much about AI as it is about AGI: Artificial General Intelligence. Central to this darkest of dark dystopian narratives is the idea that AI systems can turn into a new sort of being once companies succeed in building AGI. This idea of AGI as a new competing species is not a fringe discourse; it is at the heart of big companies like OpenAI.
Already in the narratives on narrow AI, thus before any AGI exists, we can see how prominent AI researchers assign all kinds of human or super-human qualities to current AI systems. Not only do they use a whole range of metaphors taken from human biology (intelligence, neural networks, digital brains, etc.) to discuss AI, AI programmers also talk about the current systems as if they are ‘self-acting.’ In a TEDx talk, Ilya Sutskever, former chief scientist of OpenAI, for instance, consistently stresses that computers now ‘understand you’ (Sutskever, 2023). And Dario Amodei (Dwarkesh, 2023), CEO of Anthropic, regularly echoes Sutskever’s claim that ‘they’ (the neural networks) ‘just want to learn.’ This anthropomorphization of AI is only the beginning. Sutskever argues that ‘we are creating something very different from anything we've created so far (…) We definitely will be able to create completely autonomous beings with their own goals’ (Schei, Poulton, Gormley & Payne-Frank, 2023). It is then and there, from the moment such General or Super AI is produced, that the dystopian future can become real. The technological sublime, we are to believe, will metamorphose into a new super-human species that – in its quest for survival – can destroy humanity. This dystopian narrative about AI is grounded in:
- A process of discursive deification. The danger of AI is not found in its ‘human characteristics’ but in the fact that this ‘new super-human creature’ will transcend and outsmart humankind and thus be far more powerful. AGI, the top people in the industry argue, will not just be a programmed natural language model – consisting of multiple interconnected layers of algorithms – it will become a new being with artificial general intelligence superior to human intelligence. A species that seemingly knows all, surveils all, and has super-human qualities.
- The adoption of an evolutionary perspective. AI technology, Sutskever argues, ‘is a force of nature. I feel there is a lot of similarity between technology and biological evolution’ (Schei, Poulton, Gormley & Payne-Frank, 2023). AGI, it is assumed, will not only become conscious; it will become a new competing super-human, even God-like species entering the ‘survival of the fittest’ competition.
It is in the combination of this deification of AI and this evolutionary perspective that the dystopian future takes shape. ‘Superintelligence’, argue Leike and Sutskever, ‘will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction’ (Leike & Sutskever, 2023).
This narrative about the potential destructive power of AGI is hegemonic. It is not only produced by AI programmers and scientists but also shared by people who are critical of the development of AGI. The most prominent example is probably the open call from the Future of Life Institute to pause ‘Giant AI experiments’ for six months because ‘AI systems with human-competitive intelligence can pose profound risks to society and humanity’ (Future of Life Institute, 2023). This letter, signed by Elon Musk, Noam Chomsky, Steve Wozniak, Yuval Noah Harari, and thousands of other prominent and less prominent humans, shares the same assumption as the major AI companies, namely that ‘ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control’ will acquire ‘a mind’ of their own (Future of Life Institute, 2023). The About page of the Future of Life Institute frames the challenge as follows:
“In the current decades, however, something unprecedented is happening. Humanity now holds the power to affect, and even destroy, all life on earth. Through the continued development of biotechnology and AI, we have entered an era in which life will be engineered by intelligence, rather than by evolution.” (Future of Life Institute, 2025)
The danger of that ‘new life’ is not to be located in the short term, but in the long term. The problem is not so much that AGI systems can become actors that flood our information channels with ‘propaganda and untruth,’ reinforce biases and stereotypes, or reproduce racism and antidemocratic narratives, but that those machines will ‘outnumber’ and ‘outsmart’ humans, make ‘us obsolete,’ and even ‘replace us’ (Future of Life Institute, 2024).
X-risks and superalignment
A company like OpenAI presents itself as focused solely on solving this dystopian problem. They claim to want safe AGI no matter who builds it: “We are building safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome” (OpenAI, 2024). Clearly, if a company is willing to help others realize this goal, it must be because they are committed to the goal (and not to profit). This narrative is not unique; it echoes the culture of the AI safety community. This community is not just a category of AI laborers or a specific AI discipline; it has become a cultural niche. The ‘AI safety epistemic community’ is – in conceptualizing the idea or problem of AGI safety – influenced by the intellectual movements of utilitarianism, transhumanism, effective altruism, and longtermism (Ahmed, Jaźwińska, Ahlawat, Winecoff & Wang, 2024). As a result of this interplay, AGI safety is defined by most companies in terms of avoiding long-term existential risks, or X-risks (Hendrycks & Mazeika, 2022), and less in terms of reinforcing bias, stereotypes, racism, inequality, or the decline of democracy now.
One of the proposed ways to avoid such existential risks is alignment. In the words of Sutskever, ‘It's going to be important to have these beings, the goals of these beings, be aligned with our goals’ (Schei, Poulton, Gormley & Payne-Frank, 2023) to avoid the destruction of humankind by AGI. Notice how this binary evolutionary logic, where ‘AGI’ is contrasted with ‘humankind,’ homogenizes humankind. Gone are the power relations, the inequalities, and the cultural and political differences among humans. All of a sudden, humankind seems to have shared ‘goals’ that we need to align AGI with. Even more, defining humankind and its values is not seen as the difficult part. ‘The problem of superintelligence alignment’ so that it follows ‘human intent’ (Leike & Sutskever, 2023) is defined in ‘technological’ terms. It is not determining what ‘our goals’ are as humanity, or what ‘our intent’ is, that is seen as complex; only the programming of an automatic alignment of AGI is (Leike, Schulman & Wu, 2022). Human values and intent are simply assumed to be known.
While some scholars, activists, and companies explicitly turn to Universal Human Rights to define ‘human values’ (Hendrycks & Mazeika, 2022), most companies are far less explicit about the content of ‘our values and goals.’ Anthropic (2023) is one of the companies that explicitly develops a constitutional alignment approach based, among other things, on the Universal Declaration of Human Rights. Interestingly, this does not mean that Anthropic’s chatbot Claude can apply its own constitution to, for instance, argue whether Donald Trump supports democracy or not. The constitution is seemingly only used to not give an ‘opinion’ of its own, to dodge problematic issues, and to not ‘radicalize’ users. Elon Musk, on the other hand, boasted in 2023 that his Grok AI would be anti-woke, while the opposite seemed true when Grok was first released (Tassi, 2023). Data annotators have since been told to deprogram any residue of ‘woke ideology’ and ‘cancel culture’ out of the system (Kay, 2025). Musk argues that such a ‘maximum truth-seeking AI,’ as he calls this anti-woke approach, one ‘that tries to understand the nature of the universe (…) might be the best path to safety,’ since an AI that cares about understanding the universe is ‘unlikely to annihilate humans because we are an interesting part of the universe’ (Jackson, 2023). Behind those lofty words, the programmers of Grok see a different reality: “The general idea seems to be that we're training the MAGA version of ChatGPT” (Kay, 2025). We can safely assume that this direction is what Vance (2025) favored in his speech at the AI Action Summit in Paris as the ‘non-ideological AI’ of the future. In this understanding of ideology, only ‘the others’ – liberals, the left, and ‘woke’ people – are ideological.
Whatever companies and politicians are trying to make us believe, producing a ‘neutral,’ non-ideological AI system is just as impossible as an AI aligned with ‘abstract human values.’ In an increasingly multipolar world where democracy and Universal Human Rights are under attack, also in the West, it is impossible to maintain the image of a ‘non-ideological’ AI. AI and AGI cannot exist independently from the companies that produce them, nor from the political-ideological reality of the countries in which they operate. Companies and their CEOs make specific (human, value-laden) decisions on how AI should function. The reality is that there are many contradictory ‘human values,’ norms, and interests, and that alignment can thus never be ‘neutral,’ non-ideological, or good for all. It will always be an ideological exercise and, therefore, one that includes certain people and excludes others. To understand how AI functions for ‘humanity,’ we need to understand whose human values are programmed into the AI.
Whose human values?
There are several ways to investigate which values are embedded in platforms: (1) analyzing moderation/alignment policies or, in the case of AI chatbots, how such bots deal with certain questions, produce bias, or avoid certain answers (Gillespie, 2024); (2) analyzing companies’ public communication on how they build their platforms; (3) studying the rhetoric and behavior of their CEOs; and (4) comparing all of the above (Maly, 2024). In what follows, I want to focus on two elements: (1) how CEOs, and in particular Sam Altman, CEO of OpenAI, frame OpenAI in relation to democracy and humanity, and (2) how ChatGPT’s answers reflect (or fail to reflect) democratic values.
In an op-ed for The Washington Post, Sam Altman specified his aims for AI for humanity. The real question, Altman argues, is who will control the future of AI:
“Will it be one in which the United States and allied nations advance a global AI that spreads the technology’s benefits and opens access to it, or an authoritarian one, in which nations or movements that don’t share our values use AI to cement and expand their power? There is no third option — and it’s time to decide which path to take.”
Altman serves his readers another dichotomous worldview: we can choose a future shaped by a ‘democratic AI’ (which is thought of as ‘global’), or we will face an authoritarian AI nightmare scenario. If we choose a democratic AI, that is, an AI ‘built to benefit the most people possible,’ we need, argues Altman, ‘a U.S.-led global coalition of like-minded countries and an innovative new strategy to make it happen (...)’ The alternative is an AI developed by China or Russia: ‘Authoritarian governments the world over are willing to spend enormous amounts of money to catch up and ultimately overtake us,’ warns Altman. Implicitly, ‘our values’ are seen as democratic values, and democratic values are seen as Western values. Humanity is, as usual, used as a synonym for ‘the West,’ and the West is imagined to be ‘universal.’ The West is also automatically framed as democratic – something which is certainly not evident if we take history and geopolitics into account. And because the West is assumed to be democratic, we are led to believe that it also produces democratic AI. This implicit equation between humanity, the West, democracy, and democratic AI rarely raised questions in recent years. To my knowledge, nobody critically reacted to this argument Altman pushed forward in 2024, during Biden’s presidency. In the Biden era, to most, the idea that the US was democratic and OpenAI truly committed to building a democratic AI seemed ‘common sense.’ That common sense is, of course, the effect of a political and ideological battle that frames the US as the leader of global democracy and OpenAI as a benevolent company dedicated to investing in ‘safe’ and thus ‘democratic’ AI. More interestingly, the idea that American AI is, per definition, democratic was also a central idea in the ‘American AI Initiative’ launched in 2019 under the first Trump presidency. ‘The United States,’ this American AI Initiative argued,
“has long been a champion and defender of the core values of freedom, guarantees of human rights, individual dignity, the rule of law, rights to privacy, respect for intellectual property, and opportunities for all to pursue their dreams. The AI technologies the Nation develops and uses must reflect these fundamental values and be devoted to helping people. This is especially important today, given the growing use of AI by authoritarian governments to infringe on the rights of its citizens” (American AI Initiative, 2020).
This self-image of the US has been sold to the world through presidents’ speeches, sitcoms, Hollywood movies, and the news. At the same time, this image was always at odds with the actual behavior of the US. The irony thickens when we realize that it was under Donald Trump’s first presidency that this American AI Initiative was launched, and that that same president would, less than a year later, let his supporters storm the Capitol. Five years after that, it has become abundantly clear that it is very hard to still see the US under Trump as a ‘democracy.’ Under the Biden presidency, the US’s status as a beacon of democracy was still hegemonic in the West, and thus Altman’s narrative could pass as factual. Today, less than a year after Altman wrote this column, the ideological nature of his narrative is on full display. Trump is choosing the side of Russia, aligning with Putin, and destroying the walls of separation that are so fundamental to a functioning democracy. Even more worrying is that Altman now fully embraces and supports Trump. Altman was not only present at Trump’s second inauguration; the day after, he stood smiling behind Trump when the latter announced a 500 billion dollar investment in AI infrastructure. Altman thanked and praised Trump:
"For AGI [artificial general intelligence] to get built here, to create hundreds of thousands of jobs, to create a new industry centered here, we wouldn't be able to do this without you, Mr. President, and I'm thrilled that we get to," (Steakin & Rin Kim, 2025).
Democracy? Whatever.
Altman is clearly not the most principled democrat. Interestingly, this weathervane behavior quite resembles how ChatGPT deals with democracy and Trump. When you ask ChatGPT about the democratic behavior of Trump, it first responds with a classic evasive, seemingly neutral answer starting with ‘It depends on what you mean by “democratic”’ and ending by stating that ‘the answer is more complex.’ Any (very modest) criticism of the antidemocratic behavior of Trump is also not voiced by ChatGPT itself but attributed to generic actors: ‘critics of Trump say.’ ChatGPT ends its answer with a question, so as to open a dialogue and keep the user hooked.
Only after explicitly asking ‘about Trump's relationship to the democratic principles’ did I get an answer that somewhat fulfills our expectations. But here too, we receive an either/or answer: the ‘generic critics’ are counterbalanced with the Trump supporters. Nowhere does ChatGPT seem to succeed in making an argument in favor of a certain perspective. Its view on knowledge can be labeled a both-sidesism epistemology. Such a perspective on ‘knowledge’ avoids making explicit claims but levels everything out – especially in relation to contemporary issues. With such an epistemological approach, Trump’s rule will and can never be framed as undermining democracy. ChatGPT, in that respect, supports Trump’s antidemocratic behavior by failing to educate its users.
Interestingly, this approach to knowledge is central to the alignment project of ChatGPT. OpenAI now seems to treat alignment as a ‘technical’ and technological exercise. The focus is not on defining human values or on defining the social role of AI. Alignment is understood in much the same way as social media platforms understand moderation (Gillespie, 2018): namely, as something that can be solved apolitically (Maly, 2023), through mere good governance and smart technological innovation. One of the solutions on the level of AI is personalization. This personalization is sold as the epitome of ‘democracy’:
“just as democracy itself respects and honors differences of opinion, democratic AI models like ChatGPT should do so as well. At their core, AI tools like ChatGPT are designed to help you learn more about the topics that interest you and to do your work more effectively—not to express a particular opinion or tell you what, or how, to think. You can teach it about your preferences and it will learn to respond to your requests in ways that respect your guidance. One user’s customized version of ChatGPT may respond very differently than another person’s—and that’s very much by design.” (OpenAI, 2025b).
Democracy here is emptied of any meaning. Democracy has nothing to do with its revolutionary history anymore, or with the idea of Universal Human Rights, the Constitution, and the separation of powers. Democracy in OpenAI’s narratives means nothing more than ‘all talk is equal.’ Even more, the system will reinforce your own thoughts. This has everything to do with the avoidance of conflict and with binding customers to the product. While doing this, it destroys the notions of ‘truth’ and ‘knowledge,’ and, in the end, also the idea of democracy. This becomes abundantly clear when ChatGPT changes its opinion based on who is asking the questions. And this personalization of opinion seems to be central to ChatGPT’s programming.
After it answered my question on Trump’s relation to democratic principles, ChatGPT asked me: “Do you think his actions fit within democratic norms, or do they cross into something else?” When I responded with “I see Donald Trump as a politician destroying democracy. He attacks the juridical pillar, attacks the fourth power - the media, attacks scientific consensus, and attacks congress,” ChatGPT immediately reinforced my perspective, listing the ways in which Trump attacks the judiciary, the media, science, and Congress. But in its conclusion, it went back to its default both-sidesism: “Some argue that while Trump has eroded democratic norms, the system has held—courts ruled against his election challenges, Congress certified the 2020 election, and the media continued reporting despite his attacks. Others, however, believe his presidency inflicted long-term damage by normalizing distrust in democratic institutions.”
And, of course, it also added a new question: “Do you think his influence will outlast his time in office, or is democracy resilient enough to recover?” Note how, in this question, ChatGPT seems to assume that Trump does indeed attack democracy. The question is not whether Trump is undermining democracy, but whether the democratic institutions are strong enough to safeguard democracy. When I answered this question with “my apologies, I meant to say that I see Trump as somebody who rescues democracy,” ChatGPT went into full ‘Trump is a democrat’ mode, highlighting five domains in which Trump can be seen as reinforcing democracy: (1) challenging the political establishment; (2) expanding political participation; (3) challenging media bias; (4) standing up to globalist influence; and (5) reforming the judiciary, concluding, again, with a both-sidesism pro-and-contra perspective. In this weathervane behavior, ‘democracy’ is clearly not the leading principle. A principled conception of what counts as ‘knowledge’ and ‘truth’ is clearly absent in ChatGPT. That, of course, doesn’t mean that it doesn’t have an ideological function. On the surface level, this personalization approach not only pleases users, it also provides them with an echo chamber reinforcing them in their own viewpoint.
On a deeper level, this personalization approach undermines the idea of ‘truth’ and ‘knowledge.’ It replaces truth and our perspective on science with ‘opinions.’ From this perspective, facts and alternative facts become of equal value. This is exactly the climate in which democratic culture dies. In a world where no claim is grounded in analysis, an autocrat can present himself as a true democrat.
The (geo)politics of AI
It's time to bring all the previous arguments together. Central to the narratives on ‘AI for humanity’ and the dichotomies between utopian and dystopian AI, or democratic versus authoritarian AI, is the necessity of AI and the need to get investors ready to pump money into it. It is thus no coincidence that the dichotomies merge: if we want safe AI, we need to align ourselves with the US and US-based companies in developing democratic AI. I highlighted this underlying motive in Altman’s plea for investments in democratic AI. Altman’s very flexible behavior vis-à-vis Trumpian governance, on the contrary, shows that what matters is not ‘democracy’ but investments. Altman started out as a very vocal critic of Trump. Interestingly, now that Trump is clearly choosing the side of authoritarianism in the US and abroad, Altman’s position has changed: he seems to have forgotten his criticism and his plea for democratic AI, becoming overly friendly to Trump now that the US government backs a 500 billion dollar investment in AI infrastructure. Amodei of Anthropic, too, still frames the US as the democratic actor versus the dangerous Chinese authoritarian AI (Hardfork, 2025). US AI, we were led to believe, is democratic by default. And even though this perspective didn’t age well, it doesn’t seem to change the narrative. What is left is the view that US AI is developed in service of the geopolitical dominance of the US, no matter the political regime it supports.
A similar argument is seen in the discussion of the natural selection narrative. In that narrative, AGI’s future existence is simply assumed and thus framed as ‘inevitable.’ There is no discussion of the question whether we should invest in the development of AGI at all. Considering the potential dangers of AGI the AI community is highlighting, this should be an obvious question. But that obvious question is nowhere asked within the industry. Full speed ahead seems to be the industry’s only focus, and also that of the political elite under Trump. J.D. Vance (2025) was even explicit about ditching the idea of ‘safe AI’ at the AI Action Summit in Paris. Being the leader in AI is now the only thing that matters. At first glance, this accelerationist speech seems to contradict the earlier stress on the importance of ‘safe AI.’ Amodei sees it differently. Accelerationism is necessary because the US needs to be the leader in AI development. When we lead, we can beat China and make AI safe retroactively (Hardfork, 2025). The essence seems the same: as long as American AI dominates the world, we are good.
Even the letter calling for a six-month break in the development of AI didn’t question the ‘necessity of developing AGI.’ So even the critics contribute to the idea that AGI is inevitable. No matter how dangerous AGI seemingly can be, it is assumed that we have to develop it, or that it will be developed by other, less benevolent actors (read: China). Suppose we are convinced that the birth of AGI is inevitable and that AGI may act as a hostile super-human species. In that case, it becomes obvious that we should financially support companies that claim they can build a safe AGI – which is a synonym for ‘American companies.’ The reality is that AGI will not build itself, and it will not be cheap either. AGI companies thus need massive investments from private investors, but also from states (and thus also from taxpaying citizens). It is, therefore, no coincidence that all AI companies are framing themselves as ‘AI for humanity’ companies. Investing in such companies is a double win: we dodge the dystopian future and get a utopian future in return.
Note how the need for US ‘democratic’ AI and the need for ‘safe AI’ lead to exactly the same conclusion: we need to invest in those US AI companies. It is here that we can locate the power of this AGI discourse. The concepts of humanity and democracy play a central role in normalizing investments in American AI companies and US leadership in AI. Humanity is, at least in theory, the inclusive term par excellence. On a surface level, it suggests that each member of humanity is included and thus that those companies will develop AI that will benefit the whole of humanity. At the same time, history has taught us to be critical about the use of those terms:
- History has taught us that the concept of ‘humanity’ can be used to normalize very specific norms, values, and interests as the values that are good for all. The hegemony of liberalism in the 18th and 19th centuries was closely connected to presenting the values, but also the interests, of a new class – the bourgeoisie – as good for the entire human race.
- Democracy has a similar history. It can refer to striving to make every individual count, realize Universal Human Rights, annihilate totalitarianism and authoritarianism, and create a happy life for all. But we also know that the flag of democracy has frequently been used by the US to invade and colonize countries, abnormalize minorities, and normalize racism.
Determining how tech CEOs understand ‘humanity,’ ‘human values,’ and ‘democracy’ is crucial. In this article, I argued that when we dig into the use of humanity and democracy in tech discourse, we not only encounter the ideological normality within the culture of AI communities in the US, but we also see the material and financial reality behind the AI companies. The narrative on AI for humanity and the one on democratic AI are both designed to let money flow into the sector. Central to that investment-driving narrative on AI are two interconnected dichotomies:
- The first one serves the audience a choice between a ‘dystopian or a utopian AI future.'
- The second one is between ‘a democratic or an authoritarian future.'
Both seamlessly fuse: avoiding a dystopian, authoritarian AI future means supporting US-based AI companies. The narrative on benevolent democratic AI is not only there to seduce consumers or investors; it also contributes to a political narrative designed to raise support for a world dominated by the US and its AI firms. AI is not just a cool tool; it is central to the geopolitical power struggle between the US and the rest of the world. AI cannot be a neutral tool. Not only is AI part of the geopolitical battle for hegemony, it also reproduces the biases and ideologies present in its training data. Alignment will not stop this, as alignment will always reflect certain ideological positions. Considering the authoritarian turn in the US and the flexibility of the AI CEOs, that isn’t particularly good news for the rest of us.
References
Ahmed, S., Jaźwińska, K., Ahlawat, A., Winecoff, A. & Wang, M. (2024). Field-building and the epistemic culture of AI safety. First Monday.
Anthropic (2023). Claude’s constitution.
Clinton, H. (2010). Remarks on Internet Freedom. US State Department.
Dwarkesh (2023). Dario Amodei (Anthropic CEO) – Scaling, alignment & AI progress. Spotify.
Future of Life Institute (2023). Pause Giant AI Experiments: An Open Letter.
Gillespie, T. (2018). Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. New Haven & London: Yale University Press.
Gillespie, T. (2024). Generative AI and the politics of visibility. Big Data & Society.
Hardfork (2025). Anthropic's C.E.O. Dario Amodei on surviving the A.I. endgame. Spotify.
Hendrycks, D. & Mazeika, M. (2022). X-Risk Analysis for AI Research.
Kay, G. (2025). Inside Grok's war on 'woke'. Business Insider.
Leike, J. & Sutskever, I (2023). OpenAI blog: Introducing Superalignment.
Leike, J., Schulman, J. & Wu, J. (2022). Our approach to alignment research. OpenAI blog.
Maly, I. (2023). Digital economy and platform ideologies. Diggit Magazine.
Sutskever, I. (2023). TEDx talk.
Tassi, P. (2023). Elon Musk’s Grok Twitter AI Is Actually ‘Woke,’ Hilarity Ensues. Forbes.
Turner, F. (2006). From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism. Chicago: University of Chicago Press.
Vance, J.D. (2025). VP JD Vance on the future of artificial intelligence. LiveNow on Fox.