The acronym TESCREAL will probably not mean much to most people. Yet this bundle of Transhumanism, Extropianism, Singularitarianism, (modern) Cosmism, Rationalism, Effective Altruism, and Longtermism has had a decisive influence on the Silicon Valley elite, above all Elon Musk. TESCREAL is an interpretative lens on various technological and philosophical currents that are present in the Valley, though they are not necessarily pursued as a unified worldview.
What's missing: In the fast-paced world of technology, there is often no time to sort through all the news and background information. At the weekend we want to take that time, follow the side paths away from the daily news, try out different perspectives and listen for the nuances in between.
The philosopher Émile Torres coined the term TESCREAL together with the AI critic Timnit Gebru to make clear that many of the glittering, trendy ideologies from Silicon Valley share a common core: a deeply entrenched belief in the supposed superiority of a technological elite over the inferior masses. And although the tech bros currently appear fully occupied with plundering the US, this ideology still plays an important role.
The role of ideology
Analyzing ideology may seem very academic at first glance – especially as some of the concepts involved are a little bizarre. It matters for two reasons, however. First, ideology always serves as self-assurance and as a justification of one's own actions to the outside world. Second, ideology also acts as a mental filter that helps bring meaning and structure into the world and separate the important from the unimportant. To understand someone's ideology is therefore always to understand how he or she "sees the world". So let's take a look at the different elements of TESCREAL.
Singularity through superintelligence
Supporters of the idea of a technological singularity assume that humanity's technological progress has accelerated throughout history and keeps accelerating. From this they conclude that humans will soon be able to build an AI that can improve itself. From that point in time – the singularity – the further course of technological development can no longer be predicted. The AI would develop into a superhuman "superintelligence" that, in the worst case, would destroy humanity.
The idea of the singularity was popularized in the 1990s by the mathematician Vernor Vinge and later developed further by Ray Kurzweil. It found particular resonance in Silicon Valley, where it was institutionalized by organizations such as the Singularity Institute (now MIRI). It is worth noting, however, that while the singularity is hotly debated, there is no empirical evidence that it is anywhere near.
Timnit Gebru suspects a political rather than a technical agenda behind the project of building a superintelligence. Together with Torres, she draws a historical line from the American eugenicists via the transhumanists to the leaders of OpenAI – a line along which the concern was never the future and well-being of humanity as a whole, but sorting out everything deemed useless and superfluous.
The digital savior
Artificial intelligence plays a dual role in the TESCREAL ideology: it is simultaneously the greatest threat and the ultimate salvation. This paradoxical position can be explained by Silicon Valley's almost religious belief in the transformative power of technology.
It is telling how AI safety research has developed: instead of focusing on concrete problems such as algorithmic discrimination, prominent researchers work on hypothetical scenarios involving a superintelligence. OpenAI, co-founded by Musk, started out as a non-profit organization for "safe AI" but later shifted to a for-profit structure in close cooperation with Microsoft.
The irony: under the guise of developing safe AI for the benefit of humanity, a new concentration of power is emerging in the hands of the same tech elite. As Gebru points out, the greatest immediate dangers of AI come not from a hypothetical superintelligence but from the use of today's systems to reinforce existing inequalities.
Intelligence
In fact, Gebru is not the first to point out the difficult legacy of intelligence research: as early as 1981, the biologist Stephen Jay Gould criticized the concept of "intelligence" as an objective measure of general cognitive ability in his book "The Mismeasure of Man". The concept and the associated tests were driven largely by researchers such as Charles Spearman, a high-ranking member of the British Eugenics Society. The statistical methods Spearman devised were later taken up by Frank Rosenblatt in building the first artificial neural network.
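To make that technical thread tangible: Spearman modelled test scores as weighted combinations of a general factor, and Rosenblatt's perceptron likewise computes a weighted sum of inputs followed by a threshold, adjusting the weights from errors. The following minimal sketch of a Rosenblatt-style perceptron is purely illustrative and not taken from the article; the toy data and all names in it are invented for the example.

```python
# Minimal, illustrative sketch of a Rosenblatt-style perceptron (not from the article).
# It learns the logical AND function: a weighted sum of inputs is passed through a
# hard threshold, and the weights are nudged whenever a prediction is wrong.

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]   # toy data, invented for this example
targets = [0, 0, 0, 1]                      # AND of the two inputs

weights = [0.0, 0.0]    # one weight per input
bias = 0.0
learning_rate = 0.1

def predict(x):
    # Weighted sum plus bias, then a threshold at zero.
    s = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if s > 0 else 0

# Error-driven update: the classic perceptron learning rule.
for epoch in range(20):
    for x, t in zip(inputs, targets):
        error = t - predict(x)
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

print([predict(x) for x in inputs])   # expected output: [0, 0, 0, 1]
```

The point of the sketch is only the structural echo – a weighted sum of features passed through a threshold – not a claim about the article's historical argument.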
Eugenics
Like Francis Galton, the British naturalist whose work he built on, Spearman was convinced that political intervention was needed to ensure that intelligent people reproduced more than the rest – an idea that ultimately fed into the Nazis' racial theory. But even after the end of the Second World War, eugenics was not completely discredited.
Transhumanism
In the 1990s, transhumanism took shape in the USA: an ideology that picked up the basic idea of eugenics but shifted the focus from biopolitics to individual enhancement through AI or genetic engineering.
For transhumanists, human evolution continues with the help of technology. They believe that humans are increasingly merging with technology and will eventually be able to upload their minds into computers in order to become immortal. However, many modern transhumanists explicitly distance themselves from eugenics and instead emphasize individual freedom of choice and ethical responsibility.