👁️ DeepSeek and the AI Murder Cult
Rationalism links a wave of murders, FTX embezzlement, and crashing markets.
Today we’re skipping the news to focus on three bizarrely connected events: a string of murders, the new Chinese AI model DeepSeek, and the FTX fraud. The madness manifested in the murders is continuous with the madness at the heart of the current paradigm of AI development, investment, and (imagined) use cases.
It also lurks at the heart of Sam Bankman-Fried’s rampant embezzlement at #FTX, of which $500 million went to Anthropic, an “AI Safety”-fueled startup that employs Amanda Askell, ex-wife of Effective Altruism co-founder Will MacAskill. $5 million in money stolen by SBF also went directly to the Center for Applied Rationality, one of Yudkowsky’s two organizations. Half a million in FTX funds also helped facilitate the purchase of a hotel that became the headquarters of a CFAR subsidiary called Lightcone Research, which notoriously featured several eugenicists and white supremacists at events.
That thread speaks to the undercurrent of violence, physical and mental, that lurks beneath the placid surface of Lake Rational.
Read More: Harry Potter and the Mantras of Authority: “Pop Bayesianism” and the Rationality of Power
It also helps explain, I think, why OpenAI and other U.S. artificial intelligence startups just got embarrassingly annihilated by a Chinese hobbyist: because they’re driven by some of the same ideas that have led fringe Rationalists into madness.
What follows is pretty woolly, so apologies for that.
Corrections: This piece previously referred to the shooting of a Border Patrol agent in Virginia - it took place in Vermont. Also, the below mugshots were taken after a 2019 Zizian protest against CFAR, not after the 2022 attack on Curtis Lind.
The Anti-Life Equation
There have now been at least EIGHT violent deaths over the past three years tied, to varying degrees, to splinter factions of the Rationalist movement founded by Eliezer Yudkowsky in San Francisco. The Rationalist community is eager to disown the perpetrators, and it’s true that the factionalists have been in conflict with the main group for years. More to the point, they seem simply insane.
But, I would tentatively argue, the source of the conflict is that these bad actors took Yudkowsky’s basic ideas, above all ideas about the imminent destruction of humanity by AI, and played them out to a logical conclusion - or, at least, a Rationalist conclusion. This wave of murder is just the most extreme manifestation of cultish elements that have bubbled up from the Rationalist movement proper for going on a decade now, including MKUltra-like conditioning both at Leverage Research - another splinter group seemingly pushed out of Rationalism proper following certain revelations - and within the Center for Effective Altruism itself.
This post is just the beginning of trying to unwind and map these and other threads, but broadly, the extreme edge of Rationalism and EA reveals a dynamic that runs through both movements: apocalyptic predictions, asserted with a confidence buttressed by weaponized misuse of “Bayesian” logic, are driving young people insane by convincing them that they must take extreme steps to halt an imagined doom. We all kind of laughed when Yudkowsky floated the idea of physical attacks on data centers to slow down the development of AI, because otherwise “everyone on Earth will die.” But it’s increasingly clear that he really believes these things, and more than a few of his current and former adherents have taken him entirely literally.
Read More: Luigi Mangione: Disillusioned Techno-Rationalist?
On January 17th, 82-year-old Curtis Lind was stabbed to death in Vallejo, California. Lind was set to testify in connection with a 2022 incident in which he was attacked with a sword by associates of an individual known as Ziz (formerly Jack LaSota), one of whom Lind shot and killed.
Then, on January 21st, a Border Patrol agent in Vermont was killed, allegedly by a pair of suspects including Felix Bauckholt, a German citizen who had overstayed an H-1B visa. Bauckholt appears to have shot and killed agent David C. Maland, and was himself killed in the exchange of fire. He was stopped with Teresa Youngblut, who was also shot but survived to face charges in connection with the shooting.
Bauckholt appears to have been recently employed as a quantitative trader at a shop called Tower Research Capital - and formerly an intern at Jane Street Capital circa 2018, just a couple of years after Sam Bankman-Fried would have left. At least according to one Twitter poster, Bauckholt may have transitioned and begun going by the name Ophelia B - and was also “somewhat of a Ziz fan.”
The role of trading as a job for so many of the people in this circle speaks to an array of social and ideological factors. EAs and Rationalists are effectively a Venn diagram forming something close to a circle, and members of both groups are drawn from, and regularly encouraged to get into, trading as a profession. Infamously, Will MacAskill circa 2013 sold Sam Bankman-Fried on the idea of “earn to give,” the (now largely deprecated) EA notion that you can be “more effective” in helping the world by getting a high-paying finance job than by getting your hands dirty in work as ignoble as being a doctor or a teacher.
Trading also ideologically aligns with Rationalism because the financial system’s dynamics map to the way Yudkowsky himself needs the world to work for his system to hold together: In one of the more overtly insane claims underpinning the structure of Rationalism, Yudkowsky takes it as an a priori truth that the universe is calculable - a truly absurd claim that ignores modern physics. As I’ve recently pointed out, EA figurehead Toby Ord has now picked up that flag and is running with it. (The sentiment also aligns with Barbara Fried’s hard-line determinist model of human behavior.)
I have less information about them, but the fifth and sixth deaths tied to the Ziz group came with the double murder of the parents of one Michelle “Jamie” Zajko; I don’t have good direct sources on this allegation yet. The seventh death, again linked to Ziz specifically, is the apparent suicide in 2022 of someone named Jay “Fluttershy” Winterford. Again, limited information there.
The eighth recent death I tentatively tie to Rationalist discourse is, of course, that of UHC CEO Brian Thompson. Luigi Mangione was not a Zizian or even a direct participant in Rationalist groups, but there is significant evidence he was exposed to the broader universe of Rationalist-adjacent ideas.
Read More: What is TESCREALism? Mapping the Cult of the Techno-Utopia
Finally, going back further, there are various other Rationalist-linked suicides, including one tied to allegations of sexual harassment. Most concerning of all, the Zizians are far from the only “high pressure” group to spin off from Rationalism - it seems, despite its lofty aims, to have become a hotbed for cult behavior.
Zizianism is Rationalist Praxis
So what were the motives behind this string of murder and death?
A Reddit comment from 2023 offered a tidy summary of Zizian ideology as “‘mental tech’ for thinking more clearly / effectively that also skews into tinpot biological theories of brain structure and trans-ness / gender identity with a side of tulpas / alters,” boiling it all down to “basically a much more radicalized and optimistic version of doompost-Yud who would rather build God themselves than work out how to do so as part of an organization.”
A Medium post under the name Sefa Shapiro has a deeper rundown of Ziz’s history as a member and then opponent of the Rationalists. A dossier of official and community documents on the situation has been assembled (by a rather notorious Rationalist whose name alone is too embarrassing for me to type). Ziz shared their ideas through a blog called Sinceriously dot fyi. Here’s an archive link.
A full rundown of Ziz’s ideas might or might not be productive, but the group and its activities reflect two things.
First, it is just one example of the way that Yudkowsky’s groundless, science-fictional assumption that AI is an imminent threat to the human species has a tendency to drive young people insane, particularly thanks to the added implicit assumption that the specific people associated with Yudkowsky and Rationalism have unique insight into the AI threat, and therefore a unique duty to fight it, by any means necessary.
Second, it illustrates how trying to rebuild ethics from first principles using only “logic and reason” only compounds the kinds of bias Yudkowsky has (correctly) identified as an existential threat (kek) to his confident claims about AI Doom. Yudkowsky has for decades waved around “Bayesian inference” as the solution to all problems of bias, and a golden road to rationally predicting the future, when that’s not actually what Bayes does for them at all. Instead, Rationalist practice is a kind of vibes-based, Bayesian-ish reasoning that Yudkowsky presents as a pseudo-religious revelation. But that cultish hyping up of the super-powers of rationality is ultimately nothing but a thin veneer papered over bias - and that naive papering-over makes bias all the more dangerous for the movement’s adherents.
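To make that concrete, here’s a minimal sketch - with numbers invented purely for illustration, not drawn from anyone’s actual estimates - of why “doing a Bayesian update” confers no rigor by itself. When the evidence is ambiguous, the posterior just echoes whatever prior you walked in with:

```python
# Bayes' rule for a binary hypothesis H given evidence E.
# The arithmetic is trivially correct; the output is only as
# meaningful as the prior and likelihoods you feed it.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) = P(E|H) * P(H) / P(E)"""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Ambiguous evidence (an event roughly as likely whether or not
# doom is coming), fed through a doomer prior vs. a skeptical one:
print(posterior(prior=0.90, p_e_given_h=0.6, p_e_given_not_h=0.5))  # ~0.915
print(posterior(prior=0.01, p_e_given_h=0.6, p_e_given_not_h=0.5))  # ~0.012
```

Same update rule, same evidence, posteriors two orders of magnitude apart: the “math” mostly launders the prior back out as a conclusion.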
After sweeping away all pre-existing culture and ideas as implicitly irredeemable (because they lead people to disagree with Yudkowsky’s beliefs about AI doom), the false confidence of the Rationalists can lead them to extremes: to madness, to embezzlement, to suicide … and to excessive obedience to charismatic leaders who aren’t Yudkowsky, including not just Ziz but multiple other cult-like splinter groups.
DeepSeek Outcompetes the God Machine
Nearly simultaneously with the most recent Ziz-linked violence has come the release of DeepSeek, a Chinese-built AI model that has made huge strides in bringing down both training and query costs. Importantly, it’s not clear that DeepSeek is producing results any better than those of OpenAI or the other major labs.
In fact, as an aside, this is where the market’s panicked reaction is ultimately nonsensical - the crash in stocks like Nvidia has come in response to increased competition in a category that currently creates no profit. We have fully entered the Baudrillardian stage of economic simulation, divorced from productivity.
It’s also misguided to take some of the claims about DeepSeek at simple face value. The headline figure that it cost only $5 million to train is misleading, both because that number reportedly covers only the final training run and because the model leaned on a lot of existing work. But it’s still clearly several orders of magnitude cheaper than the capital expenditures of the likes of OpenAI and Anthropic.
And here’s the thing: OpenAI, Anthropic, and others have gotten away with being profligate spenders because those firms, and the network of ideologues around them, have created a valuation premium based on pure mystique. And that mystique is of a very specific sort and source: it is propped up by the constant declaration that what’s coming next, just around the corner, is God.
Or, to use the pseudo-technical term for God the Rationalists prefer: Artificial General Intelligence.
This is the positive flipside of Yudkowsky’s AI Doom: his assumption that AI could become all-powerful became an enticing pitch for soulless husks like Sam Altman and Peter Thiel, who were glad to prop up-slash-grift off of the confident declarations of a fantasist.
This is why, as Gebru and Torres have pointed out, everything being built by OpenAI and others is “unscoped” - it’s being built not for any specific utility, but towards the infinitely broad goal of creating God. The LLM architecture that has become the synonym and umbrella for all “AI” triggered a huge marketing and investment push not because it’s tremendously useful, but because it’s broadly accessible and appealing. If “AI” were being sold in realistic terms, you would have at least two different buckets: LLMs would be back-seated as the interfaces and search engines they fundamentally are, while real focus went to data-analytics tools that are genuinely useful - detecting patterns in cancer rates, modeling new proteins and other materials, and the like.
But the Rationalists and fellow travellers seem willing to simply brazenly lie about the reality of LLM technology because, just like Sam Bankman-Fried, they believe that no morality supersedes the only thing in technology or human society that matters: making sure “good guys” build AI first, so they can make sure it’s “aligned.” (Yes, the Rationalists created and are now participating in the race to create the thing they claim to fear most.) They have no fucking clue what any of those things really mean, because they have a naive mechanistic view of the universe and the human mind alike - but those are the words they use.
The economic angle of this techno-religious fraud is obvious once you see it. As AI reporter Karen Hao points out, it was OpenAI that pushed hard on the idea that scaling compute was the golden road to turning LLMs into actual “intelligence.” They are incentivized to deceive the public about the real-world potential of LLMs because they don’t care about actual products; they simply believe they’re in a race to crunch numbers as fast as possible until God pops up out of a box and makes them the Kings and Queens of Heaven Forever. Capital efficiency doesn’t enter into it.
There are no Rationalists in China, as far as I know. So they have neither the hype nor the delusion pushing for a no-expenses-spared approach to building LLMs as a path to building the God Machine. Hence DeepSeek being a low-stakes side project of a hedge fund manager, instead of the Potemkin Robot of a multi-billion-dollar boondoggle.
In closing, I’ll just point out that DeepSeek exposes, but doesn’t solve, the deeper problem in all of this: LLMs, including DeepSeek itself, are being sold on the basis of things they can’t do. DeepSeek is a less expensive form of autocomplete, but any resemblance to reasoning, much less to consciousness, remains entirely coincidental.
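“Autocomplete” is close to literal here. Stripped of the mystique, an LLM’s inference loop is next-token prediction, repeated: score the possible continuations, append the pick, go again. As a toy illustration only - a character-level count table standing in for the trained network, which is obviously not how DeepSeek or GPT are built internally, but the loop is the same shape:

```python
from collections import Counter, defaultdict

# Toy character-level "autocomplete": tally which character tends to
# follow which, then repeatedly emit the most likely next character.
# An LLM runs the same predict-append loop, with a trained neural
# network over tokens in place of this count table.

corpus = "the model predicts the next token and then the next and then the next"

counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def autocomplete(prompt, n=30):
    out = prompt
    for _ in range(n):
        ranked = counts[out[-1]].most_common(1)
        if not ranked:  # no known continuation; stop
            break
        out += ranked[0][0]
    return out

print(autocomplete("the "))  # greedily degenerates into "the the the the ..."
```

The sophistication of the scorer varies enormously; the loop does not. That, and not a nascent God, is what’s actually for sale.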