“Doomers” vs. “Hyper-Racists”: The Contest of Terrible Ideas that Got Sam Altman Fired From OpenAI
In a fight between two kinds of techno-rationalist authoritarians, it’s hard to root for anyone.
There has perhaps never been a better moment for this meme.
Over the weekend, we saw what will probably go down alongside FTX as a moment when the extremely well-capitalized Effective Altruist movement put its ideas into practice, and got absolutely fucking rekt. Which is unfortunate, because they were attacking their just slightly more odious sibling ideology, “effective accelerationism” – ideas that, traced to their origins, are closely tied not just to scientific eugenics, but outright genocidal anti-humanism. This is all an internal conflict within a larger rationalist, futurist, transhumanist set of ideas sometimes known as “TESCREALism.”
As the movies say, no matter who wins – humanity loses.
But first, the basics
At least as things appear now, Effective Altruism-aligned board members of OpenAI fired Sam Altman from his position as CEO on Friday, with little or no warning to basically anyone, including major OpenAI investors. Their motive was seemingly in part that Altman wasn’t sufficiently scared that future robot overlords might someday kill us all – in substance, that he was moving too fast. Entwined with that may have been worry about Altman’s side-dealings outside of OpenAI.
As I write this, the new, possibly interim CEO of OpenAI appears to be Emmett Shear, who has described AI development as “the creation of an alien god.” Both in substance and tone, that reflects how Effective Altruist fears of AI development have become corrupted by far-fetched imaginary threats. The “AI safety” they’re concerned about is almost totally disconnected from known, present issues like automated discrimination. Their absurd stance is still maybe preferable to the accelerationist stance, which is basically “yes please upload me robot daddy.”
In this case, luckily, the EAs didn’t nuke $8 billion in individual customer deposits. Instead, they may have destroyed far more than that in the equity-like commitments held by OpenAI employees. As is typical, Altman and his high-level allies, the real target of the action, weren’t seemingly harmed at all: they’ve already been picked up by Microsoft in a strange bit of business whose details are still coming into focus.
While I find the EAs laughable on the AI matter itself, Altman and many taking his side in this battle are part of an even more suspect faction who have recently started calling themselves “effective accelerationists.” The substance of their ideas was adequately summed up by Marc Andreessen’s “Techno-Optimist Manifesto” essay last month, a document the New York Times’ Ezra Klein was emboldened enough to describe as “reactionary”.
But reactionary doesn’t go remotely far enough. Once you start digging, the roots of tech accelerationism lead down to some very dark places: the most prominent contemporary accelerationist is a man called Nick Land. Land’s writing and ideas are frankly fascinating, and some of his early work is genuinely great, in a perverse artistic sense. But since about 2010, Land’s ideas have led him to full-blown authoritarian racism. What Land himself, in fact, calls “hyper-racism.”
Today’s tech-world “accelerationists” would surely disavow anything so explicit, at least in public. But as Klein diagnosed, their beliefs don’t have to be labeled “hyper-racism” to have extremely authoritarian implications.
Of course, if you’ve followed the FTX saga, you’ll know that Effective Altruists also have some fundamentally authoritarian ideas, particularly that the smartest people in the world should have as much money as possible, because they’re better at allocating resources than any democratic process. (Again, FTX showed us how that tends to play out – with lots of craven self-dealing.)
That common anti-democratic ethos isn’t merely a coincidence, because Effective Altruism and “Effective Accelerationism” aren’t exactly competing worldviews – they’re more like schisms within a single broader worldview. As researcher Dr. Émile Torres has laid out, they both come from the “TESCREAL” axis of rationalist futurists, who believe in things like the AI “singularity” and a post-human future.
So again, there are no real winners here, at least on the ideological level. It is a fascinating moment for digging into the various professed ideas on offer. But even more important, it may help us understand how those ideas relate (or don’t) to the actual practice of tech investing and development.
Why OpenAI Fired Sam Altman
There was a lot of speculation on Friday about what actually caused the break at OpenAI, but given the reshuffling over the weekend, it now seems clear that it really was an ideological fight related to the speed of distributing artificial intelligence technology.
In short, it seems that Altman was leaning into the overt Silicon Valley ethos of “move fast and break things.” It would be nearly impossible for the board of a normal company to fire a CEO with this mindset, as long as his financial performance was good – which Altman’s certainly has been.
But OpenAI is not a normal company. Instead, it’s a kind of Frankenstein entity with a for-profit subsidiary bolted underneath a nonprofit. The board of that nonprofit has full authority over the organization as a whole, and its mission is not to profit-maximize. Instead, it has a mandate to promote ‘safe’ AI development, and Altman’s firing appears to have been executed on those grounds.
The most precise and credible breakdown of this I’ve seen came from Toby Ord, a huge philosophical influence on both Effective Altruism and the broader “longtermist” tendency via his interest in extinction-level risks to the human species. Most importantly, Ord’s connections to Effective Altruism may give him real insight into the thinking of two of the OpenAI board members who moved to oust Altman.
As he understands it, the board’s decision may have been a genuine attempt to abide by the founding principles of the nonprofit top layer of the organization. (Though it’s worth keeping in mind that by the time he laid out this narrative, Ord was playing a bit of defense for his EA friends in the face of mounting blowback.)
https://twitter.com/tobyordoxford/status/1726347429803663741
There’s a more granular hint from Kara Swisher, who wrote that “the developer day was an issue” in the board’s ouster of Altman. Information about that November 6 developer day is here, and you can immediately see a few themes that would be troubling for people who really thought AI development was going “too fast” – broadly, the pitch was that GPT access was going to get cheaper, easier, and more widespread.
The day also included a lot of stuff along the lines of API plugins to potentially make the GPTs more composable, giving customers the ability to build their own applications on top of the core LLM. From the EA/Doomer perspective, the dev day must have looked like letting the genie out of the bottle.
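To make that concrete, here’s roughly what “building on top of the core LLM” looks like with the OpenAI Python SDK – the model name and the support-bot framing below are just illustrative placeholders, not anything specific announced at the dev day:

```python
# Rough illustration of building a product feature on top of the core LLM
# via the OpenAI Python SDK (v1.x). Model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; any chat-capable model tier works
    messages=[
        {"role": "system", "content": "You are a support bot for a hypothetical product."},
        {"role": "user", "content": "How do I reset my password?"},
    ],
)

print(response.choices[0].message.content)
```

Cheaper tokens and easier plumbing mean more of this, faster, from more people – which is exactly the point, and exactly the problem if you’re a doomer.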
Altman was simultaneously making parallel moves outside of the structure of OpenAI. Specifically, he was in talks with SoftBank and Middle Eastern investors about starting a chipmaker to compete with NVIDIA. He was also reportedly talking to Jony Ive about an “AI hardware” project.
This kind of outside deal-cutting is in principle a big red flag for any CEO – you’re supposed to be focused on the company you’re actually in charge of. Sam Bankman-Fried was also notorious for running multiple companies with unrelated cap tables, and Bloomberg’s incredible RUIN doc even showed a few smart VCs who chose not to invest in FTX on that basis. But it’s not often dealt with as harshly as this, and I agree with Shawn Tully that if there weren’t deeper problems, this probably wouldn’t have been an issue.
Another hint of those deeper problems resurfaced in this 2022 tweet from Altman, referring to the limited release of GPT at the time – ‘iterative deployment’. It seems that last year he was either still aligned (ahem) with the board, or at least willing to play along for a little longer.
https://twitter.com/sama/status/1599112352141840385
But even here, he went on to denigrate – I would say very fairly – the basic elitism of the Effective Altruist mindset as the assumption that “they are the only group capable of making great decisions,” and to say he is “proud of” “being the villain of the EAs.”
“A.I. Safety” vs. The Thirst For Annihilation
I’m unclear for now exactly what breed of “A.I. Safety” people we’re dealing with on the OpenAI board. Because there are real A.I. safety experts who have a lot of important things to say and have basically been ignored and institutionally sidelined, including particularly in the Timnit Gebru case at Google. More palatable to publications like Time, apparently, are the absurdist sci-fi ravings of clowns like Eliezer Yudkowsky, who argues we should be willing to bomb rogue server farms – even at the risk of nuclear war – to stop runaway algorithms from turning us all into paperclips. Or something.
The fact that some of the board members are affiliates of the Effective Altruism movement does not give me much hope that they were the thoughtful kind of A.I. safety people.
Regardless, as I’ve said, the EA boobs are at least a little less worrisome to the rest of us than their current opponents, the “effective accelerationists” – not least because the accelerationists currently seem to have the advantage of numbers and sentiment. There have been many responses from tech leaders supporting Altman and denigrating the OpenAI board members as “decels” [decelerationists].
Now, the use of the word “accelerationism” in this context is extremely surreal – these people both do and don’t know exactly what they’re doing.
For me, “accelerationism” will always first and foremost mean the theoretical Marxist subversion tactic of pushing capitalism into overdrive so it destroys itself through its own contradictions. This extremely fringe standpoint argues that, at the supposed “end of history” in which the neoliberal capitalist order has conquered all, “the only way out is through.” The goal is not in fact progress per se, but progress so intense that it destroys the system and leaves a space open for communist revolution.
Which is just funny to see on a bunch of VC profiles. But it’s not what they mean.
The philosopher Nick Land has a firmer claim to the Silicon Valley version of “accelerationism,” one which doesn’t really foresee any such breakdown in the capitalist system at all, but embraces its ability to continue – technology’s ability to continue advancing, media to become more granular and omnipresent, nature to be further destroyed and degraded, capitalism to colonize and mathematize every aspect of our lives. (Apparently the word was coined, appropriately enough, by a science fiction writer.)
Marc Andreessen’s recent manifesto was “accelerationist” in basically this sense. But it’s clear that, as with “the metaverse” and practically every other idea the tech world has ever taken up, the people with money are taking the parts of “accelerationism” that seem cool in blog posts, and simply ignoring other inconvenient elements.
The biggest of those elements may be that Nick Land is some combination of ironic performance artist, genuinely unhinged lunatic, and outright neo-Nazi (with a few extra steps).
First and foremost, Land talks about acceleration not in terms of utopia, but as an embrace of the suffering and death that capitalist acceleration creates.
“I have no interest in human liberation, or liberation of the human species,” Land has said. “I’m interested in liberation of the means of production.” In other words, Jules Evans writes, “accelerationism had nothing to do with expanding human potential.”
Land’s first book, it’s worth noting, was titled The Thirst for Annihilation.
So the idea that “e/acc” is merely about dispensing with regulation so that technologists are free to create a utopia is a total canard. This is made even clearer in later works, as Land gets progressively less ironic or veiled about any of this, and celebrates with increasing specificity the suffering and death inflicted on non-white or otherwise supposedly “genetically inferior” people by technological progress. That’s where “hyper-racism” comes in – Land has argued that “space colonization will inevitably function as a highly selective genetic filter”. Hard not to guess that’s somewhere in Elon Musk’s brain, as well.
So I’m not saying Land isn’t fully the crypto-fascist his biggest fans see in him. What I am saying is that he’s also putting on a rather grand and theatrical performance – he’s a big proponent of Crowleyan magic, and has in the past claimed to have risen from the dead and mastered the secrets of “Lemurian time travel”. It’s all too typical that the technologists have taken him up in purely literal form, while sweeping his most grim and evil conclusions under the rug.
In fact, his theatricality and impressionistic style are a big point in Land’s favor – or, more likely, a reason to really worry about his uptake. Nick Land is much, much cooler and smarter than the last guy the Silicon Valley Elite tried to turn into an Intellectual Dark Lord – Curtis Yarvin just never had the swag to pull it off. Land is sort of a Yarvin 2.0 (though I think Land is a bit older), and may even have coined the term “Dark Enlightenment” around 2013 in an essay about Yarvin and American neoreaction. In that work, Land concluded that the West was facing an inevitable race war. Incredibly grim and awful shit.
That circles right back around to attacks on “decels” like Brian Armstrong’s. Remember that Armstrong infamously tried to ban political discussion at Coinbase in 2020, in a seemingly specific repudiation of Black Lives Matter. He also hired Italian Fascist sympathizers and alleged digital arms traffickers to perform a security function at Coinbase back in 2019. But hey, that may have simply been bad luck.
It’s also, you’ll notice, quite interesting the way the economic interests here just happen to align with something that gestures at being a worldview – one which actually, as Émile Torres has diagnosed, is rooted in a set of ideas that essentially aspires to be a secular religion, complete with an afterlife and an eschatology. Basically, a cleaned-up accelerationism validates the activities of venture capitalists and tech types. That includes absolute freaks like literal vampire Bryan Johnson, who has seemingly dedicated his existence to the transhumanist ideals of life-extension and biohacking.
But there’s plenty more to dig into there later. This is all a real maze to navigate – in fact, to me it’s evocative of the labyrinthine structure of influence and money behind FTX. The connections are shifting, new labels get made up all the time, and most of what people are saying can’t really be trusted on its own terms.
And to be clear, there’s not a direct line from each and every one of these “e/acc” figures to some secret Racial Holy War agenda. Rather, they are in many cases the unwitting standard-bearers for philosophers they’ve never actually read – often the fate of those with only surface-level critical thinking skills.
But Land’s hatefulness isn’t just incidental to the accelerationist worldview – it’s integral to it, and will resurface again and again from the same basic pro-technology, anti-human premises. And the Effective Altruists are not so much better.
And so: Let them fight.