👁️ The AI Timelines Scam, Expanded (Free Excerpt): The Future as an Emergency, Part 3
Eliezer Yudkowsky, Sam Bankman-Fried, and the Rationalism of Panic
I am extremely happy to announce that I’ve put the final words to the final chapter of Stealing the Future. There is still a substantial amount of editing and logistics remaining, and your support is particularly helpful at this moment thanks to some outside circumstances. But in celebration, this week’s draft excerpt is entirely free. I’ve also scattered in links to other important excerpts. Keep in mind that these are explicitly drafts, which is why they’re often paywalled for supporters.
This post, like the book as a whole, exists at the intersection of the theoretical and the practical - in this case, it shows how Bankman-Fried’s use of $500 million of embezzled customer funds to invest in Anthropic was driven by Rationalism’s mathematical, engineering-inflected, ultimately deterministic understanding of human beings and the universe.
I would also point curious readers to the short series that started my thinking along these lines, more than four years ago, before Sam Bankman-Fried had even come onto my radar: Venture Capital on Arrakis.
There will be one final excerpt from this chapter, likely next week, before I’m finally able to pivot to related topics here - above all, the threat of the unknowable in economics.

Pivotal Acts
Yudkowsky, according to an early autobiographical sketch, believed he could personally speed the arrival of the Singularity by twenty years, while also making sure the AI was friendly1. He dedicated his life to preventing AI Doom, founding the Singularity Institute for Artificial Intelligence (SIAI) - later renamed the Machine Intelligence Research Institute (MIRI). MIRI’s overriding goal has been the creation of an ‘aligned’ artificial intelligence - that is, one that shares human values, values Yudkowsky conceives of as universal.
When Yudkowsky realized few people shared his anxieties, his project shifted. Clearly, if most humans didn’t share his conclusions, it must be because they didn’t think as clearly as he did - and this bias (disagreement with Eliezer Yudkowsky) needed to be eliminated. “AI Safety,” the movement’s terminology for creating aligned AI, is cited as core to the mission of the Center for Applied Rationality2. Yudkowsky’s own most influential work in this effort was a piece of Harry Potter fan-fiction, “Harry Potter and the Methods of Rationality,” and references to children’s fantasy and sci-fi books became building blocks for a great deal of Rationalist discourse.
But as much as he touted the importance of logic and reason, Yudkowsky wasn’t above a little fearmongering to get his point across. He was and seemingly remains genuinely frantic about the arrival of AI Doom, which would become inevitable as soon as an “unaligned” superintelligent AI was invented. This “artificial general intelligence” was believed to be just around the corner - and it has been just around the corner for the two decades since.
These prophecies of the Singularity mirror those of the UFOs awaited by The Seekers, the small cult at the center of Festinger, Riecken, and Schachter’s 1956 study “When Prophecy Fails.” The Seekers were an offshoot of what became Scientology; their leader prophesied that they would be rescued from earth’s destruction by a flying saucer on December 17, 1954 - but when that rescue did not come, adherents’ beliefs only intensified. The continuing deferral of the Rapture of the Singularity, like UFO Doom and many other prophecies of the End Times, only demanded recalculation, refinement, better math.
There is an additional, particularly capitalist gravity to the techno-utopian Rapture: the movement’s prophecies are motivated at least in part by the dictates of investment finance. Jessica Taylor, a committed but admirably reflexive Rationalist, has observed that AI development projects tend to posit unrealistically short timelines for the arrival of AGI or simulated human minds, because it’s easier “to justify receiving large amounts of money … if it is, in fact, possible to develop AI soon. So, there is an economic pressure towards inflating estimates of the chance AI will be developed soon.”
Taylor cites examples of these prophecies’ failure, some predating Kurzweil’s full articulation of Singularitarianism. They include Japan’s Fifth Generation project, a 1982-1992 attempt to build AGI that shut down in failure after spending $400 million ($1.3 billion in 2025 dollars); and the Human Brain Project, an attempt to build a functioning computer simulation of the brain, which ran from 2013 to 2023 and ended in failure at a cost of 1 billion euros. Neither these nor any other huge failure has dampened the utopians’ faith in the eventual arrival of the God AI - or their ability to attract massive funding in its pursuit.
Taylor bluntly describes this as “The AI Timelines Scam,” though it’s less a conscious “scam” than a set of incentives with inevitable consequences. The fear of an AI apocalypse is very, very real among rank-and-file Effective Altruists and Rationalists, sometimes to the point of being psychologically destabilizing. If positing very near-term timelines for AI’s risks and possibilities is good for raising money - and especially if you also happen to be an ethical utilitarian - then loudly proclaiming unrealistically short timelines is actually the “right” thing to do. The dire intensity of these predictions in turn attracts more people to the techno-rationalist movement, while making adherents more manic and committed to the quest to “save the world.”
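To make the incentive Taylor describes concrete, here is a minimal sketch - in Python, with every number invented purely for illustration - of how the expected-value arithmetic rewards short timelines: if a shorter claimed timeline raises more money, and the announcer counts only the expected payoff, the “rational” announcement is always the shortest one an audience will believe.

```python
# A caricature of the fundraising incentive Jessica Taylor describes.
# All figures are invented for illustration; none come from any real
# organization's accounting or from Taylor's essay.

# Hypothetical donations raised (in $ millions) as a function of the AGI
# timeline an organization publicly claims: shorter claim, more urgency.
donations_by_claimed_timeline = {5: 30.0, 15: 10.0, 50: 2.0}

p_donation_averts_doom = 1e-9   # assumed tiny, fixed chance a donated dollar "saves the world"
value_of_the_far_future = 1e16  # "astronomical" stakes, in arbitrary units

for years, donations_musd in sorted(donations_by_claimed_timeline.items()):
    expected_value = donations_musd * 1_000_000 * p_donation_averts_doom * value_of_the_far_future
    print(f"claim AGI in {years:>2} years -> ${donations_musd:>4}M raised -> expected value {expected_value:.2e}")

# Because expected value scales linearly with money raised, the utilitarian
# "optimum" is always the shortest timeline the audience will accept -
# regardless of what the announcer privately believes.
```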
On substance, Taylor points out, there is no clear way to predict the timeline of AGI, or even whether it can be achieved at all: “Basic facts, such as the accuracy of technical papers on AI, or the filtering processes determining what you read and what you don’t, depend on sociopolitical phenomena.” The same qualifications undermine other worries of extinction risk, or claims for the “effectiveness” of longer-term philanthropic efforts. But shorter and shorter AI timelines have become increasingly common in techno-utopian circles, suggesting an endemic interpretive bias aimed - consciously or unconsciously - at raising more money for MIRI, CFAR, and similar institutions.
In some cases these timelines get fantastically short - one likely contributor to Sam Bankman-Fried’s belief that he had to steal his customers’ money because his chance to make an impact would “not last more than five years.” At the same time, Rationalism shares with Effective Altruism a belief in the moral equivalency of present and future humans, a utilitarian ethics that directly ties present actions to the entire future of humanity. This massive weight of duty drives the maximizing approach of both movements, and clearly shaped Sam Bankman-Fried’s approach to leverage and risk. Apocalypticism specifically amplified the self-importance of the techno-utopian movement, its members, and its leaders, who were quick to declare their own world-historical genius - whether sincerely or strategically.
This sense of infinite stakes and the Rationalists’ own unique responsibility has led Yudkowsky to theorize the necessity of “pivotal acts,” all focused on the predicted future. Activists and critics might inveigh against present-day problems like “robotic cars causing unemployment in the trucking industry, or … who holds legal liability when a factory machine crushes a worker” (or, for that matter, clear evidence that bias in actually-existing machine learning models is harming present humans). But, Yudkowsky writes, these things are “bad, [but] they are not existential catastrophes that transform all galaxies inside our future light cone into paperclips3.”
This is just one convenient alignment between techno-utopianism’s indifference to present economic conditions and its funders’ hypercapitalist self-interest. EAs and Rationalists also flatter their sponsors by downplaying a risk you might be surprised they don’t regard as “existential”: human-caused climate change, a topic Bankman-Fried rarely, if ever, discussed publicly or prioritized for philanthropic support.
At its furthest extremes, the unwavering intensity of the Rationalists’ panic over AI Doom fueled explicit calls for the antidemocratic seizure of power, justified by the techno-utopians’ intellectual superiority. Effective Altruism cofounder Toby Ord has advocated for giving a council of planning experts veto power over world governments. The leader of Rationalist offshoot Leverage Research, a nonprofit that has also received funding from Peter Thiel (https://harpers.org/archive/2015/01/come-with-us-if-you-want-to-live/), believed that “there were serious harms and dangers in the world [and] that some risks were both catastrophic and might occur soon4.”
According to one estranged member, Leverage’s presumption of doom fostered a “world-saving plan” including a militarized takeover of the U.S. government5.
Emergency Financing
According to a civil suit filed by the bankruptcy administrators of the FTX estate, FTX’s payments to CFAR began in March of 2022, with an initial tranche of $2 million sent directly from FTX proper. Soon after that, payments began flowing from the FTX Foundation. (By this point MacAskill and Beckstead had leadership roles at the FTX Future Fund, a longtermist subsidiary6 of the broader Foundation.) The suit alleged that the “FTX Foundation’s primary source of funds was Alameda monies that had been commingled with FTX customer deposits.” The suit further suggests a broader agenda: “In reality, very few of FTX Foundation’s donations directly benefited the needy. Its largest donations went to associates of FTX Insiders in the ‘effective altruism’ movement.”
This theft-fueled self-dealing fully aligned with Yudkowsky’s theory of the “pivotal act,” and more generally with the self-indulgence that congealed from Rationalism and EA’s declarations of service to the greater good. As one hypothetical example of a “pivotal act,” Yudkowsky mused that if “a genie … uploaded human researchers … these uploads could do decades or centuries of unrushed serial research on the AI alignment problem, where the alternative was rushed research over much shorter timespans; and this can plausibly make the difference by itself between an AI that achieves ~100% of value versus an AI that achieves ~0% of value.”
The not-so-gentle implication is that the most effective thing an organization like CFAR can do with its time and money today is researching how to upload its own researchers’ consciousness to a virtual environment where they will live forever. Less fancifully, it simply implies that the most genuinely philanthropic thing a Rationalist can do is pay themselves to be more Rational.
The timing of payments to CFAR is again suspicious. The initial March payment of $2 million was followed by a fairly tidy series of payments in July, August, and September. Then on October 3 of 2022, everything changed - instead of one lump sum or a series of smaller gifts, that single day saw ten separate transactions, most of either $150,000 or $160,000, plus a single payment of $100,000, totaling $1.5 million. This happened just as insiders, including Bankman-Fried, were becoming more acutely aware of FTX’s fragility.
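As a sanity check on those figures, here is a small Python sketch. The suit, as summarized above, gives only the per-transfer sizes and the $1.5 million daily total; the particular split below (four transfers of $150,000, five of $160,000, and the one payment of $100,000) is an assumption chosen to match that total, not a breakdown taken from the filing.

```python
# Hypothetical breakdown of the October 3, 2022 transfers to CFAR.
# The 4 / 5 / 1 split is assumed for illustration; only the per-transfer
# sizes and the $1.5M daily total appear in the text above.
transfers = [150_000] * 4 + [160_000] * 5 + [100_000]

assert len(transfers) == 10           # ten transactions in a single day
assert sum(transfers) == 1_500_000    # matching the stated $1.5 million total

print(f"{len(transfers)} transfers totaling ${sum(transfers):,}")
```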
Two of the FTX Foundation’s payments, the $500,000 tranches sent on July 13 and August 18, were of particular interest. According to the estate’s suit, these payments were sent from North Dimension bank accounts directly to a title company as a deposit for the purchase of a building called the Rose Garden Inn by a subsidiary organization of CFAR called Lightcone RG. The balance of the purchase price of the Rose Garden by Lightcone, totaling $20 million, came via Slimrock Investments. According to Lightcone founder Oliver Habryka, this is an entity controlled by Jaan Tallinn7.
These connections are relevant because of what Lightcone RG did with the Rose Garden Inn after the purchase was complete. The hotel, renamed Lighthaven, became the site for workshops and events - including events featuring advocates of scientific racism.
The Manifest conference was hosted by a firm tied to so-called “prediction markets,” which allow gambling on real-world events, and which are largely illegal in the United States. Manifest 2023 was held at Lighthaven, and as discovered by The Guardian8, featured speakers including Richard Hanania, who had written for avowed white supremacist Richard Spencer’s Alternativeright.com; and Malcolm and Simone Collins, a “pro-natalist” couple who openly referred to themselves as “hipster eugenicists.” Simone Collins was for a time an executive at a “secret society” co-founded by Peter Thiel9.
The next year, Manifest 2024 hosted a wide range of speakers, including Eliezer Yudkowsky and an array of tech-world gurus. But further down the agenda were less savory figures, including Jonathan Anomaly, author of a 2018 paper called “Defending Eugenics”; Razib Khan, a contributor to the virulent extreme-right outlet VDare; and Brian Chau, an affiliate of the “effective accelerationist” offshoot of Effective Altruism, whose history of racist comments10 included disparaging police murder victim George Floyd.
This star-studded crossover event between rationalism, effective altruism, and eugenics is not as odd as it might seem. Daniel HoSang, a professor of American studies at Yale University, told the Guardian that tech, EA, and eugenics “converge around a belief that nearly everything in society can be reduced to markets, and all people can be regarded as bundles of human capital.”
That the event was hosted by a prediction markets firm further points to the Rationalist movement and Effective Altruism’s shared belief in markets as a source of truth - and specifically, truth about the future. Market logic is constantly at play in Effective Altruism’s calculations of “expected value.” Dollars are a simplified, superficially “rational” way to follow inputs and measure their future impact, the better to maximize it.
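To see how that dollar-and-probability logic flattens everything into a single maximand, here is a minimal sketch in Python - again with entirely invented numbers - of the naive expected-value comparison the movement’s calculations rest on: multiply a probability by a payoff, and whatever touches an astronomical imagined future wins automatically.

```python
# Naive expected-value comparison of the kind described above.
# Both the probabilities and the payoffs are invented for illustration.

causes = {
    # cause: (probability the donation "works", lives affected if it does)
    "malaria nets, today":        (0.95, 1_000),
    "speculative far-future bet": (1e-10, 1e15),  # tiny chance, astronomical claimed payoff
}

for name, (p_success, lives_affected) in causes.items():
    expected_lives = p_success * lives_affected
    print(f"{name:28s} -> expected 'value': {expected_lives:,.0f} lives")

# With these made-up figures the speculative bet "wins" by roughly two
# orders of magnitude - which is how probability-times-payoff accounting
# licenses discounting present harms in favor of an imagined future.
```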
In addition to the specifically fraudulent source of funds, the FTX estate sought its clawback of the $5 million from CFAR on the basis of “undue enrichment.” Undue enrichment occurs when payments are not made in exchange for services of value, or when a payer is insolvent - that is, when a payment is actually a disguised theft of corporate funds.
In July of 2024, CFAR filed a response to the clawback suit, making two interesting arguments. First, CFAR posits that because most of the funds flowed through the FTX Foundation rather than directly from FTX, the funds (regardless of original source) were remote from any claims by the FTX debtors. Second, against claims of undue enrichment, CFAR argued that “debtors received value from their philanthropic efforts.”
This may be truer than intended: Sam Bankman-Fried’s involvement in Effective Altruism was a key element of his public image, helping land his beaming face and tangled hair on magazine covers and TV interviews. This image in turn helped attract more deposits for him to steal, while burnishing his image thoroughly enough to get his foot in the door of the U.S. Congress, more than once.
So CFAR is probably right that Bankman-Fried’s $5 million donation got him quite a lot in return.
Terrible Purpose: Emergency and Elite Power
Bankman-Fried’s crimes exemplify in (relative) miniature what political theorist Jonathan White has described as “the democracy-harming effects of temporal pressure.” EA and Rationalism’s assertion of strong predictive abilities held by a small number of elites has clear authoritarian implications, while their increasingly exclusive focus on catastrophic outcomes amplifies all decisions to matters of life or death. In all of this, there is effectively zero theory of politics, illustrated on the one hand by the emphasis on billionaires unilaterally deploying capital; and on the other by Toby Ord’s simplistic call for a council of wise men to govern global extinction risk - with little more theory of how they would be selected than the “royalist” Curtis Yarvin has of the source of sovereign authority.
This lack of politics is entwined with the sense of emergency. “Feeling trapped in an airless present,” White writes, “The temptation is to seek the immediate breakthrough” - such as by taking over the U.S. government. Neither the Yudkowskyite Rationalists nor Leverage Research managed to accomplish this directly, but Peter Thiel did, with his championing of Donald Trump. Musk’s fumbling evisceration of government agencies and Trump’s execution of warrantless deportation raids are both justified in the name of a suddenly-imminent “emergency” - not imminent problems in the present, mind you, but threats that lie in the future, when a “population bomb” will rip power from those who genetically deserve it (that is, white people).
It is grim, indisputable proof of Jonathan White’s diagnosis that “elitist claims on the future bolster elitist modes of rule,” because “The future can be used to pacify the public, and keep power out of its hands.”
This is illustrated in the science fiction that fuels so much of the techno-utopian mindset (along with young adult fantasy like Harry Potter). But as with most leveraging of art by the court of the techno-kings, there is a striking tendency to slide right past any troubling or nuanced messages.
Two key works in science fiction deal with the question of prophecy and foresight: Isaac Asimov’s Foundation series, and Frank Herbert’s Dune books. Foundation is rooted in the story of Hari Seldon, a generational genius who perfects the new mathematics/social science of psychohistory: “that branch of mathematics which deals with the reactions of human conglomerates to fixed social and economic stimuli. Implicit in all these definitions is the assumption that the human conglomerate being dealt with is sufficiently large for valid statistical treatment ... a further necessary assumption is that human conglomerate be itself unaware of Psychohistory analysis in order that its reactions be truly random.”
Asimov is fundamentally sympathetic to the idea that statistical modeling could approximate the ability to see the future. Though he introduces flaws and challenges along the way, he offers a relatively untroubled argument that mathematical prophecy, based on something very like the metrics of experimental truth, expected value, Bayesian probability, and p(Doom), could work passably well.
But Dune, by far the more sophisticated work, takes a more pessimistic view - Frank Herbert reputedly wrote it in part as a rebuttal to Foundation. Dune’s themes of insurgency and control echo the horrors of the contemporaneous Vietnam War, which was justified in part by the predictionist “domino theory” of Communism’s future spread. Herbert depicts the status quo of prophecy in his universe as entirely a cynical political project of an elite establishment. The legend of the Lisan Al Gaib that Paul Atreides leverages to gain entry into Fremen society has been seeded over many years by the secretive Bene Gesserit - essentially a longtermist CIA. The result for the Fremen themselves is recognizably the same nightmarish imperial capture and degradation that faced midcentury decolonization movements across the developing world.
On the other hand, Dune also presents us with real prophecy, but shows it to be practically useless - Paul sees the terror of the jihad he will unleash, but lacks the wherewithal to change his own path. It is Dune’s ultimate trick: we watch as the hero of Arrakis’ glorious future becomes the abominable demagogue of its declining present.
Like Sam Bankman-Fried, and like all constructed heroes, he entices the populace to its doom by offering magical powers - of knowledge, of direction, of safety. Knowingly or not, Bankman-Fried and Paul Atreides both promised to resolve the complexity of the human condition into a series of clear solutions. Sigmund Freud identified in the human animal “an extreme passion for authority” and desire to “be governed by unrestricted force.” In the pseudo-medieval world of Dune, that is mere military force and feudal hierarchy.
Sam Bankman-Fried embodied the slightly more nuanced disguise worn by authority in the 21st century: the appearance of infinite wealth.
https://harpers.org/archive/2015/01/come-with-us-if-you-want-to-live/
https://www.rationality.org/about/mission
https://arbital.com/p/pivotal/
Anders, Geoff, “Reports of past negative experiences with Leverage Research.” https://f18ca2f5-d224-434f-b887-78018b04b503.filesusr.com/ugd/51c82b_d6106e56d3024fbc9d196773318cf4a8.pdf
https://medium.com/@zoecurzi/my-experience-with-leverage-research-17e96a8e540b
https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1
https://www.lesswrong.com/posts/dZsAgZaPeWpoMuBxC/who-is-slimrock-investments-pte-ltd-and-how-are-they-related
https://www.theguardian.com/technology/article/2024/jun/16/sam-bankman-fried-ftx-eugenics-scientific-racism
https://www.entrepreneur.com/leadership/the-bizarrely-authoritarian-us-education-system/425668