What is TESCREALism? Mapping the Cult of the Techno-Utopia.
A new paper by Timnit Gebru and Émile P. Torres traces the intimate links between Transhumanism, Effective Altruism, Longtermism, and other crypto-authoritarian ideologies.
I credit basically two people with shaping my thinking about Sam Bankman-Fried’s philosophy.
First is investor Nic Carter, whose condemnation of Effective Altruism as an outgrowth of the moral abomination that is “consequentialism” has been clarifying for me. It all flows from a violation of the Kantian/Christian decree against instrumentalizing human beings.
My second major influence is Émile P. Torres, who, with punitively fired Google researcher Timnit Gebru, coined the acronym TESCREAL to describe the intersections between EA, transhumanism, and artificial intelligence “doomers” like the ever-embarrassing Eliezer Yudkowsky.
“TESCREAL” is an acronym standing for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. Torres and Gebru argue that these ideologies are deeply rooted in the eugenics movement, particularly its attempts to define “intelligence” instrumentally, and that TESCREAL’s implications are predictably authoritarian.
The two had been tracing those connections in public while working on a more robust presentation, and the full paper was finally released last month in the journal First Monday. The paper is an invaluable map of the welter of ideologies that extend into and align with Barbara Fried’s determinism, and it helps further articulate why the FTX scam was about much more than just one man’s criminal actions.
Instead, the FTX collapse – and its subsequent, ongoing rhetorical cover-up by elite operatives – shows what happens when certain corrupt and corrupting ideologies – above all, the disguised techno-authoritarian ethos known as “longtermism” – find rich soil in the brains and spirits of privileged sociopaths. Bankman-Fried was in a position to rob thousands of people, and then did so with not just the ideological underpinning of longtermism to rationalize his actions, but with the institutional and social support of Effective Altruism, whose various arms had been wildly enriched by stolen funds funneled to them as “charitable donations” by SBF.
Judge Lewis Kaplan’s statements at SBF’s sentencing excellently summed up the broader risk here – that if Sam weren’t punished, people “willing to flip a coin on the continued existence of life and civilization on Earth” could gain influence in our society, and gamble with all of our lives.
But what Torres and Gebru most shockingly lay out in their paper is that this is exactly what is happening right now, in the form of the (substantially Peter Thiel-funded) discourse on Artificial Intelligence and its “existential risk” to humanity. The twinned (and equally hallucinatory) quest for “artificial general intelligence” and the hand-wringing over its consequences amount to a long bet being made with trillions of dollars of investment capital, marshaled by deceptive rhetoric and based on deeply flawed understandings of technology and society.
Put less obscurely, the TESCREAL vision of Artificial Intelligence has captured a huge portion of money and energy from some of the more talented and driven people alive today, and there is a good chance it’s a full-stop dead-end that lumbers forward for a few more years before AGI recedes another half-century over the horizon.
This present reality has two faces. Gebru and Torres argue that the push to create “artificial general intelligence [AGI]” is leading directly to dangerously “unscoped” products like LLMs, which are designed almost entirely with an eye towards the quest for AGI. That means (in a point that remains implicit in their paper) that the pseudo-religious quest for AGI has supplanted real, useful applications and refinements of advanced data processing tools. The quest for AGI, and in turn for Ray Kurzweil’s essentially religious “singularity,” is why we have hallucinating chatbots today, instead of functional personal assistants or widespread automated cancer screening algorithms.
Equally important and even more explicitly, the AGI myth is why reality-based efforts to make existing AI algorithms safe for currently-living humans have almost zero traction among the loudest proponents of “AI safety.” In just the same way that Sam Bankman-Fried stole customer funds to make long-term bets, today’s AI leaders are actively and vocally dismissing the current, material risks of machine learning algorithms, and focusing instead on a long-term future that they confidently predict without a shred of actual evidence. (Just two baseless assumptions of the doomer fantasy are that A.I. will become self-improving, and that it will easily master nanotechnology.)
This patent display of foolishness might be the deepest underlying reason the tech industry had to purge Timnit Gebru. The vision of AI shared by people like Sam Altman is substantially derived from sci-fi, from James Cameron’s Terminator back to Karel Čapek’s R.U.R., the origin of the word “robot.” Čapek’s 1920 play far preceded anything like AI, making clear that the intentional, humanoid, thinking “robot” has always been primarily a metaphor for the much more complex dialectic by which man-made technology becomes a threat to human essence. The Singularitarians have made the childish error of mistaking these simplified storybook tales for the complexity of reality, and as long as Gebru and her cohort remain committed to describing how technology actually works, the collective fantasy of superintelligent yet incredibly dangerous AI is threatened.
In fact, the connection from eschatological Singularitarianism to Effective Altruism is not merely a philosophical parallel – SBF stole a lot of money and handed it over to Anthropic, the effective altruism-linked AI startup. EA, whatever its origins, has been materially and ideologically hijacked by what amounts to a conspiracy of tech elites to subvert capital markets into the service of what is, in substance, a far-right cult in scientific clothing.
Timnit Gebru, Actually Fired for Truth
Timnit Gebru’s co-authorship of the TESCREAL Bundle paper is notable because she has already been a victim of this cult and its delusions. The story feels recent to me (I wrote a bit about it for Fortune, I think) but bears repeating – not least because it’s one of those excellent cases that makes a mockery of conspiracy theorists by showing ideological and political discipline operating in broad daylight.
Gebru is highly and traditionally credentialed, with years doing electrical engineering for Apple and with a PhD from Stanford focused on computer vision (all very much unlike, I can’t resist noting, Yudkowsky). But in late 2020, Gebru was fired from her role as an AI Ethics researcher at Google for, critics have convincingly argued, pursuing research on AI bias that threatened to undermine the fantastical conception of AI that Silicon Valley wants to push.
Gebru is also a refugee of the war between Ethiopia and Eritrea, whose achievements in the face of adversity can only make her that much more an object of resentment for certain painfully average white men. Gebru published impactful and important research showing that AI facial recognition software had trouble recognizing non-white faces. A primary throughline of her research has been highlighting the ways that datasets strongly determine the behavior of AI models, and specifically how the racial and identity bias baked into existing human communication and culture reproduces that bias in AI models – or even heightens it because of the lack of human controls.
It’s a fundamentally straightforward extension of the “garbage in, garbage out” truism that applies to all computing and data-processing systems. But her argument was and remains a threat to the tech industry, because there is no serious technological solution to it – at least not one that ‘scales’ in the way tech economies demand. The paper that got Gebru fired is summarized here at MIT Tech Review.
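To make that “garbage in, garbage out” dynamic concrete, here is a minimal, purely illustrative Python sketch (every group, label, and prediction below is invented, and this is not code from Gebru’s papers) of the kind of per-group error accounting that exposes dataset-driven bias:

```python
# Hypothetical illustration: a classifier can look accurate overall while
# failing far more often on a group that was underrepresented in its data.
from collections import defaultdict

# Toy records of (group, true_label, predicted_label), invented for illustration.
predictions = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 1, 0),
]

counts = defaultdict(lambda: [0, 0])  # group -> [mistakes, total]
for group, truth, pred in predictions:
    counts[group][0] += int(truth != pred)
    counts[group][1] += 1

overall_errors = sum(mistakes for mistakes, _ in counts.values())
print(f"overall error rate: {overall_errors / len(predictions):.0%}")
for group, (mistakes, total) in sorted(counts.items()):
    print(f"{group}: error rate {mistakes / total:.0%}")
# The headline number hides a 0% error rate for group_a and a 75% error
# rate for group_b: garbage in, garbage out.
```

The single aggregate accuracy figure is the number a product demo leads with; the per-group breakdown is the number Gebru’s line of research insists on.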
More subtly, Gebru’s stance is a threat to the debased religiosity that has been repackaged in Ray Kurzweil’s “singularity” thesis. This is the argument that artificial intelligence will someday achieve superintelligence and, in sum, solve all of humanity’s problems. This, as Gebru and Torres highlight in the new paper, is essentially an eschatology, or a theory of the end of the world, that maps neatly to the Christian Rapture. Kurzweil himself has increasingly seemed to acknowledge what was always implicit in the idea. Singularitarianism has even come to include, through the absurd “Roko’s Basilisk” thesis, the idea that opponents of the infinite redeeming power of AI (a.k.a. unbelievers) will be denied entry to the coming heaven on Earth enabled by AI.
Gebru’s research is instead focused on how machine learning, large language models, and the other forms of existing machine ‘intelligence’ actually work. The biggest takeaway from Gebru’s work, and from this paper with Torres specifically, is that the greatest threat to both the TESCREAL ideologues and the venture capital hype-men enabling them is a focus on the current reality of AI rather than on their bizarre fantasies about it.
People like Sam Altman and Elon Musk would much rather talk about their confidently-predicted future ascension to (basically) Godhood than about such banalities as where their training data comes from. Their strange framing of AI “safety” also in part lets them simply label some of their efforts as “charitable” in various ways (including OpenAI’s own strange initial structure). This, Torres and Gebru write, has turned AGI research from a low-profile research area into “a multi-billion dollar endeavor funded by powerful billionaires and prominent corporations.”
Framing AI as an eschatological myth not only helps keep the money train rolling, but also, as we’ll see below, lets its proponents dismiss any current harms in favor of a focus on the future. The AI doomers are to tech venture capital as the Effective Altruists were to FTX.
Both groups are using fear to steal from the future.
AGI, Transhumanism, and Eugenics
Gebru and Torres argue on several fronts that TESCREALism materially descends from, still carries artifacts from, and may even be a clandestine continuation of, the Eugenics movement that in the early 20th century sought a “scientific” basis for various shades of genocide. The pair helped unearth inflammatory statements by Nick Bostrom that likely helped get his institute defunded at Oxford, but more substantive is the question of how to define intelligence.
Defining intelligence becomes really important when you’re trying to build it! And specifically when you’re explicitly devoting resources to a machine you say will be more intelligent than any member of your own species, so much so that Kurzweil refers to the world after the Singularity as “Utopia” and David Pearce says it promises “the complete abolition of suffering” in humans.
But one of the proper giveaways of the entire push for “artificial general intelligence,” or AGI, is that even its proponents can’t actually define what they’re trying to build. This isn’t to slight their ignorance – nobody really has a solid grasp on how the human brain/mind works, and that’s exactly the point. Truly inquisitive and properly humble types would maybe take answering these questions a bit more seriously.
But that didn’t stop the eugenicists, who pioneered and continued to champion instrumentalizing ideas of human “intelligence” such as the measurable “intelligence quotient” (IQ). IQ went on to play a starring role in scientific racism for decades, including in Charles Murray’s racist propaganda piece The Bell Curve, which Gebru and Torres find has been defended and cited in work by TESCREALists. That’s because the adoption of this “system” was, like so many gestures to rationality and science, little but the enshrinement and formalization of extant bias. Many such cases, as the kids say.
In fact, one of the most shocking things highlighted in Gebru and Torres’ paper is that the Centre for Effective Altruism had tested assigning people a metric called “Potential Expected Long-Term Instrumental Value,” based in part on members’ IQs.
In other words, Effective Altruism has at least institutionally flirted with the idea of classifying human brains like cuts of meat, using a metric with a profound legacy in institutionalized and scientific racism.
Torres and Gebru show that this inability to define intelligence, while simultaneously believing with religious fervor that you have to create it, is the core destructive crime of the current AI push. “TESCREAList ideologies drive the AGI race,” they write – and as a religious rather than scientific rhetoric, these ideas create a perfect formula for fraud and hucksterism.
It’s how you wind up with people thinking that chatbots are conscious – when in fact they’re being duped by a 21st century Mechanical Turk, or perhaps Madame Blavatsky. It’s how you wind up with development talent and ethical debates equally displaced from things that are real, and that actually matter.
The Intellectual Cancer of Longtermism
The other vector of connection here is between eugenics and AI as tools for scientific human improvement. The Extropian, Transhumanist, Singularitarian, and Cosmist movements are all focused on maximizing humanity’s potential, and on outward physical expansion through the universe. There are direct connections here through none other than the unfortunate Bostrom, who cofounded the World Transhumanist Association. But I must say that I find this the least developed segment of Gebru and Torres’ work – the resonances here feel at least underexplored, and maybe strained.
Progress, and the imagination of progress, seem fairly native to the human animal, after all. What I think may be missing in the articulation is some discussion of conceptions of time, self-conscious planning, and projection. The Extropians and Singularitarians, negatively embodied now in the AI Doomer, want the far future to be front and center, and encourage us to behave as if we can accurately plan and allocate resources towards that far distant future – a general mindset promoted as “longtermism.”
Longtermism in practice relies on the conceit that we can predict the long-term future, and the long-term impact of our own actions. This entails two things: the predictable, rationalist universe I’ve talked about in relation to the work of Barbara Fried, and the instrumentalization of living humans for the achievement of long-term goals.
Gebru and Torres cite some truly shocking material on this front, including a paper coauthored by Will MacAskill that argues that “for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1,000) years, focussing primarily on the further-future effects. Short run effects act as little more than tie-breakers.”
But even worse is the statement that, in the quest to build Artificial General Intelligence, even “‘a giant massacre for man’ could amount to nothing more than a ‘small misstep for mankind,’ so long as the relevant harms do not jeopardize our ‘vast and glorious’ future among the stars.”
The author of that pro-massacre sentiment is, unsurprisingly, Nick Bostrom.
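To see how that reasoning works arithmetically, here is a deliberately crude sketch of longtermist expected-value math; every number below is invented for illustration and attributed to no one in particular:

```python
# Hypothetical longtermist expected-value arithmetic (all numbers invented).
near_term_deaths = 1_000_000           # "a giant massacre for man"
far_future_lives = 1e38                # an astronomically large guess at future lives
p_secured_by_action = 1e-10            # a tiny claimed probability the action secures them

expected_future_benefit = p_secured_by_action * far_future_lives
print(f"near-term harm:            {near_term_deaths:.1e}")
print(f"'expected' future benefit: {expected_future_benefit:.1e}")
# Because the future stakes are assumed to be astronomical, any nonzero
# probability of securing them swamps present-day harm by many orders of
# magnitude, which is how a massacre becomes a "small misstep."
```

Nobody writes the calculation down quite so nakedly, but the structure of the argument guarantees that the imagined far future always wins.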
I’m glossing over or omitting vast swathes of Gebru and Torres’ work, but this seems like a fair place to leave it: The quest for artificial intelligence, being used to rationalize bypassing conventional morality in pursuit of a glorious far future. This was Sam Bankman-Fried’s modus operandi, on the smaller, functional scale of running a giant con.
Transposed into an entire ideology, it seems obvious, TESCREALism and its moral privileging of the future is a path back to the gas chamber, the tactical nuke, the carpet bombing, and the death camp.
Though in the shorter term, they’ll settle for robbing you blind.