The AI Timelines Scam, Expanded (Free Excerpt): The Future as an Emergency, Part 3
Eliezer Yudkowsky, Sam Bankman-Fried, and the Rationalism of Panic
I am extremely happy to announce that I've put the final words to the final chapter of Stealing the Future. There is still a substantial amount of editing and logistics remaining, and your support is particularly helpful at this moment thanks to some outside circumstances. But in celebration, this week's draft excerpt is entirely free. I've also scattered in links to other important excerpts. Keep in mind that these are explicitly drafts, which is why they're often paywalled for supporters.
This post, like the book as a whole, exists at the intersection of the theoretical and the practical - in this case, it shows how Bankman-Fried's use of $500 million of embezzled customer funds to invest in the AI company Anthropic was driven by Rationalism's mathematical, engineering-inflected, ultimately deterministic understanding of human beings and the universe.
I would also point curious readers to the short series that started my thinking along these lines, more than four years ago, before Sam Bankman-Fried had even come onto my radar: Venture Capital on Arrakis.
There will be one final excerpt from this chapter, likely next week, before I'm finally able to pivot to related topics here - above all, the threat of the unknowable in economics.

Pivotal Acts
Yudkowsky, according to an early autobiographical sketch, believed he could personally speed the arrival of the Singularity by twenty years, while also making sure the AI was friendly1. He dedicated his life to preventing AI Doom, founding the Singularity Institute for Artificial Intelligence (SIAI) - later renamed the Machine Intelligence Research Institute (MIRI). MIRI's overriding goal has been the creation of an "aligned" artificial intelligence - that is, one that shares human values, which Yudkowsky conceives of as universal.
When Yudkowsky realized few people shared his anxieties, his project shifted. Clearly, if most humans didn't share his conclusions, it must be because they didn't think as clearly as he did - and this bias (disagreement with Eliezer Yudkowsky) needed to be eliminated. "AI Safety," the movement's terminology for creating aligned AI, is cited as core to the mission of the Center for Applied Rationality2. Yudkowsky's own most influential work in this effort was a piece of Harry Potter fan-fiction, "Harry Potter and the Methods of Rationality," and references to children's fantasy and sci-fi books became building blocks for a great deal of Rationalist discourse.
But as much as he touted the importance of logic and reason, Yudkowsky wasn't above a little fearmongering to get his point across. He was and seemingly remains genuinely frantic about the arrival of AI Doom, which would become inevitable as soon as an "unaligned" superintelligent AI was invented. This "artificial general intelligence" was believed to be just around the corner - and it has been just around the corner for the two decades since.
These prophecies of the Singularity mirror those of the UFOs awaited by The Seekers, the small cult at the center of Festinger, Riecken, and Schachter's 1956 study "When Prophecy Fails." An offshoot of what became Scientology, The Seekers were promised by their leader that they would be rescued from earth's destruction by a flying saucer on December 17, 1954 - but when that did not occur, adherents' beliefs only intensified. The continuing deferral of the Rapture of the Singularity, like UFO Doom and many other prophecies of the End Times, only demanded recalculation, refinement, better math.
There is an additional, particularly capitalist gravity to the techno-utopian Rapture: the movement's prophecies are motivated at least in part by the dictates of investment finance. Jessica Taylor, a committed but admirably reflexive Rationalist, has observed that AI development projects tend to posit unrealistically short timelines for the arrival of AGI or simulated human minds, because it's easier "to justify receiving large amounts of money ... if it is, in fact, possible to develop AI soon. So, there is an economic pressure towards inflating estimates of the chance AI will be developed soon."
Taylor cites examples of these prophecies' failure, some predating Kurzweil's full articulation of Singularitarianism. They include Japan's Fifth Generation project, a 1982-1992 attempt to build AGI that shut down in failure after spending $400 million ($1.3 billion in 2025 dollars); and the Human Brain Project, an attempt to build a functioning computer simulacrum of the brain, which ran from 2013 to 2023 and ended in failure at a cost of 1 billion euros. Neither these nor any other huge failure has dampened the utopians' faith in the eventual arrival of the God AI - or their ability to attract massive funding in its pursuit.
Taylor bluntly describes this as "The AI Timelines Scam," though it's less a conscious "scam" than a set of incentives with inevitable consequences. The fear of an AI apocalypse is very, very real among rank-and-file Effective Altruists and Rationalists, sometimes to the point of being psychologically destabilizing. If positing very near-term timelines for AI's risks and possibilities is good for raising money - and especially if you also happen to be an ethical utilitarian - then loudly proclaiming unrealistically short timelines is actually the "right" thing to do. The dire intensity of these predictions in turn attracts more people to the techno-rationalist movement, while making adherents more manic and committed to the quest to "save the world."
On substance, Taylor points out, there is no clear way to predict the timeline of AGI, or even whether it can be achieved at all: "Basic facts, such as the accuracy of technical papers on AI, or the filtering processes determining what you read and what you don't, depend on sociopolitical phenomena." The same qualifications undermine other worries of extinction risk, or claims for the "effectiveness" of longer-term philanthropic efforts. But shorter and shorter AI timelines have become increasingly common in techno-utopian circles, suggesting an endemic interpretive bias aimed - consciously or unconsciously - at raising more money for MIRI, CFAR, and similar institutions.
In some cases these timelines get fantastically short - one likely contributor to Sam Bankman-Fried's belief that he had to steal his customers' money because his chance to make an impact would "not last more than five years." At the same time, Rationalism shares with Effective Altruism a belief in the moral equivalency of present and future humans, a utilitarian ethics that directly ties present actions to the entire future of humanity. This massive weight of duty drives the maximizing approach of both movements, and clearly shaped Sam Bankman-Fried's approach to leverage and risk. Apocalypticism specifically amplified the self-importance of the techno-utopian movement, its members, and its leaders, who were quick to declare their own world-historical genius - whether sincerely, or strategically.
This sense of infinite stakes and the Rationalists' own unique responsibility has led Yudkowsky to theorize the necessity of "pivotal acts," all focused on the predicted future. Activists and critics might inveigh against present-day problems like "robotic cars causing unemployment in the trucking industry, or ... who holds legal liability when a factory machine crushes a worker" (or, for that matter, clear evidence that bias in actually-existing machine learning models is harming present humans). But, Yudkowsky writes, these things are merely "bad, [but] they are not existential catastrophes that transform all galaxies inside our future light cone into paperclips3."
This is just one convenient alignment between techno-utopianism's indifference to present economic conditions and its funders' hypercapitalist self-interest. EAs and Rationalists also flatter their sponsors by downplaying a risk you might be surprised they don't regard as "existential": human-caused climate change, a topic Bankman-Fried rarely, if ever, discussed publicly or prioritized for philanthropic support.
At its furthest extremes, the unwavering intensity of the Rationalists' panic over "A.I. Doom" fueled explicit calls for the antidemocratic seizure of power, justified by the techno-utopians' intellectual superiority. Effective Altruism cofounder Toby Ord has advocated for giving a council of planning experts veto power over world governments. The leader of Rationalist offshoot Leverage Research, a nonprofit that has also received funding from Peter Thiel, believed that "there were serious harms and dangers in the world [and] that some risks were both catastrophic and might occur soon4."
According to one estranged member, Leverage's presumption of doom fostered a "world-saving plan" including a militarized takeover of the U.S. government5.
Emergency Financing
According to a civil suit filed by the bankruptcy administrators of the FTX estate, FTX's payments to CFAR began in March of 2022, with an initial tranche of $2 million sent directly from FTX proper. Soon after that, payments began flowing from the FTX Foundation. (By this point MacAskill and Beckstead had leadership roles at the FTX Future Fund, a longtermist subsidiary6 of the broader Foundation.) The suit alleged that the "FTX Foundation's primary source of funds was Alameda monies that had been commingled with FTX customer deposits." The suit further suggests a broader agenda: "In reality, very few of FTX Foundation's donations directly benefited the needy. Its largest donations went to associates of FTX Insiders in the 'effective altruism' movement."
This theft-fueled self-dealing fully aligned with Yudkowsky's theory of the "pivotal act," and more generally with the self-indulgence that congealed from Rationalism and EA's declarations of service to the greater good. As one hypothetical example of a "pivotal act," Yudkowsky mused that if "a genie ... uploaded human researchers ... these uploads could do decades or centuries of unrushed serial research on the AI alignment problem, where the alternative was rushed research over much shorter timespans; and this can plausibly make the difference by itself between an AI that achieves ~100% of value versus an AI that achieves ~0% of value."
The not-so-gentle implication is that the most effective thing an organization like CFAR can do with its time and money today is to research how to upload its own researchers' consciousnesses to a virtual environment where they will live forever. Less fancifully, it simply implies that the most genuinely philanthropic thing a Rationalist can do is pay themselves to be more Rational.
The timing of payments to CFAR is again suspicious. The initial March payment of $2 million was followed by a fairly tidy series of payments in July, August, and September. Then on October 3 of 2022, everything changed - instead of one lump sum or a series of smaller gifts, that single day saw ten separate transactions, most of either $150,000 or $160,000 and one of $100,000, totaling $1.5 million. This happened just as insiders including Bankman-Fried were becoming more acutely aware of FTX's fragility.
Two of the FTX Foundation's payments, the $500,000 tranches sent on July 13 and August 18, were of particular interest. According to the estate's suit, these payments were sent from North Dimension bank accounts directly to a title company as a deposit for the purchase of a building called the Rose Garden Inn by a subsidiary organization of CFAR called Lightcone RG. The balance of the purchase price of the Rose Garden by Lightcone, totaling $20 million, came via Slimrock Investments. According to Lightcone founder Oliver Habryka, this is an entity controlled by Jaan Tallinn7.
These connections are relevant because of what Lightcone RG did with the Rose Garden Inn after the purchase was complete. The hotel, renamed Lighthaven, became the site for workshops and events - including events featuring advocates of scientific racism.
The Manifest conference was hosted by a firm tied to so-called "prediction markets," which allow gambling on real-world events, and which are largely illegal in the United States. Manifest 2023 was held at Lighthaven, and as discovered by The Guardian8, featured speakers including Richard Hanania, who had written for avowed white supremacist Richard Spencer's Alternativeright.com; and Malcolm and Simone Collins, a "pro-natalist" couple who openly referred to themselves as "hipster eugenicists." Simone Collins was for a time an executive at a "secret society" co-founded by Peter Thiel9.
The next year, Manifest 2024 hosted a wide range of speakers, including Eliezer Yudkowsky and an array of tech-world gurus. But further down the agenda were less savory figures including Jonathan Anomaly, author of a 2018 paper called "Defending Eugenics"; Razib Khan, contributor to virulent extreme-right outlet VDare; and Brian Chau, an affiliate of the "effective accelerationist" offshoot of Effective Altruism, whose history of racist comments10 included disparaging police murder victim George Floyd.
This star-studded crossover event between rationalism, effective altruism, and eugenics is not as odd as it might seem. Daniel HoSang, a professor of American studies at Yale University, told the Guardian that tech, EA, and eugenics "converge around a belief that nearly everything in society can be reduced to markets, and all people can be regarded as bundles of human capital."
That the event was hosted by a prediction markets firm further points to the Rationalist movement and Effective Altruism's shared belief in markets as a source of truth - and specifically, truth about the future. Market logic is constantly at play in Effective Altruism's calculations of "expected value." Dollars are a simplified, superficially "rational" way to follow inputs and measure their future impact, the better to maximize it.
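To make the mechanics of that logic concrete, here is a minimal toy sketch of the expected-value arithmetic the movement favors. Every number in it is hypothetical - invented purely for illustration, not drawn from any actual EA estimate - but the structure shows why a vanishingly small probability attached to an astronomically large future payoff will always swamp a well-understood present-day intervention.

```python
# Toy sketch of longtermist expected-value reasoning.
# All figures are hypothetical, chosen only to show how the arithmetic behaves.

def expected_value(probability: float, payoff: float) -> float:
    """Expected value = probability of the outcome times its assumed payoff."""
    return probability * payoff

# A concrete present-day intervention: high certainty, modest payoff.
present_day = expected_value(0.95, 1_000_000)   # ~950,000 "units of good"

# A speculative bet on "the entire future": tiny probability,
# but an astronomically large assumed payoff (all future lives).
future_bet = expected_value(1e-9, 1e20)         # 1e11 "units of good"

print(f"Present-day intervention: {present_day:,.0f}")
print(f"Speculative future bet:   {future_bet:,.0f}")
# The speculative bet "wins" by roughly five orders of magnitude - a conclusion
# baked in the moment you allow yourself to count the whole future as payoff.
```

The particular numbers are beside the point; once the payoff column is allowed to contain "the entire future of humanity," no present-day comparison can survive the multiplication.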
In addition to the specifically fraudulent source of funds, the FTX estate sought its clawback of the $5 million from CFAR on the basis of "undue enrichment." Undue enrichment occurs when payments are not made in exchange for services of value, or when a payer is insolvent - that is, when a payment is actually a disguised theft of corporate funds.
In July of 2024, CFAR filed a response to the clawback suit, making two interesting arguments. First, CFAR posits that because most of the funds flowed through the FTX Foundation rather than directly from FTX, the funds (regardless of original source) were remote from any claims by the FTX debtors. Second, against claims of undue enrichment, CFAR argued that "debtors received value from their philanthropic efforts."
This may be truer than intended: Sam Bankman-Fried's involvement in Effective Altruism was a key element of his public image, helping land his beaming face and tangled hair on magazine covers and TV interviews. This image in turn helped attract more deposits for him to steal, while burnishing his reputation thoroughly enough to get his foot in the door of the U.S. Congress, more than once.
So CFAR is probably right that Bankman-Fried's $5 million donation got him quite a lot in return.
Terrible Purpose: Emergency and Elite Power
Bankman-Fried's crimes exemplify in (relative) miniature what political theorist Jonathan White has described as "the democracy-harming effects of temporal pressure." EA and Rationalism's assertion of strong predictive abilities held by a small number of elites has clear authoritarian implications, while their increasingly exclusive focus on catastrophic outcomes amplifies all decisions to matters of life or death. In all of this, there is effectively zero theory of politics, illustrated on the one hand by the emphasis on billionaires unilaterally deploying capital; and on the other by Toby Ord's simplistic call for a council of wise men to govern global extinction risk - with little more theory of how they would be selected than the "royalist" Curtis Yarvin has of the source of sovereign authority.
This lack of politics is entwined with the sense of emergency. "Feeling trapped in an airless present," White writes, "The temptation is to seek the immediate breakthrough" - such as by taking over the U.S. government. Neither the Yudkowskyite Rationalists nor Leverage Research managed to accomplish this directly, but Peter Thiel did, with his championing of Donald Trump. Musk's fumbling evisceration of government agencies and Trump's execution of warrantless deportation raids are both justified in the name of a suddenly-imminent "emergency" - not imminent problems in the present, mind you, but threats that lie in the future, when a "population bomb" will rip power from those who genetically deserve it (that is, white people).
What is TESCREALism? Mapping the Cult of the Techno-Utopia.
It is grim, indisputable proof of Jonathan White's diagnosis that "elitist claims on the future bolster elitist modes of rule," because "The future can be used to pacify the public, and keep power out of its hands."
This is illustrated in the science fiction that fuels so much of the techno-utopian mindset (along with young adult fantasy like Harry Potter). But as with most leveraging of art by the court of the techno-kings, there is a striking tendency to slide right past any troubling or nuanced messages.
Two key works in science fiction deal with the question of prophecy and foresight: Isaac Asimov's Foundation series, and Frank Herbert's Dune books. Foundation is rooted in the story of Hari Seldon, a generational genius who perfects the new mathematics/social science of psychohistory: "that branch of mathematics which deals with the reactions of human conglomerates to fixed social and economic stimuli. Implicit in all these definitions is the assumption that the human conglomerate being dealt with is sufficiently large for valid statistical treatment ... a further necessary assumption is that the human conglomerate be itself unaware of Psychohistory analysis in order that its reactions be truly random."
Asimov is fundamentally sympathetic to the idea that statistical modeling could approximate the ability to see the future. Though he introduces flaws and challenges along the way, he offers a relatively untroubled argument that mathematical prophecy, based on something very like the metrics of experimental truth, expected value, Bayesian probability, and p(Doom), could work passably well.
But Dune, by far the more sophisticated work, takes a more pessimistic view - Frank Herbert reputedly wrote it in part as a rebuttal to Foundation. Dune's themes of insurgency and control echo the horrors of the contemporaneous Vietnam War, which was justified in part by the predictionist "domino theory" of Communism's future spread. Herbert depicts the status quo of prophecy in his universe as entirely a cynical political project of an elite establishment. The legend of the Lisan Al Gaib that Paul Atreides leverages to gain entry into Fremen society has been seeded over many years by the secretive Bene Gesserit - essentially a longtermist CIA. The result for the Fremen themselves is recognizably the same nightmarish imperial capture and degradation that faced midcentury decolonization movements across the developing world.
On the other hand, Dune also presents us with real prophecy, but shows it to be practically useless - Paul sees the terror of the jihad he will unleash, but lacks the wherewithal to change his own path. It is Dune's ultimate trick: we watch as the hero of Arrakis' glorious future becomes the abominable demagogue of its declining present.
Like Sam Bankman-Fried, and like all constructed heroes, he entices the populace to its doom by offering magical powers - of knowledge, of direction, of safety. Knowingly or not, Bankman-Fried and Paul Atreides both promised to resolve the complexity of the human condition into a series of clear solutions. Sigmund Freud identified in the human animal "an extreme passion for authority" and a desire to "be governed by unrestricted force." In the pseudo-medieval world of Dune, that is mere military force and feudal hierarchy.
Sam Bankman-Fried embodied the slightly more nuanced disguise worn by authority in the 21st century: the appearance of infinite wealth.
https://harpers.org/archive/2015/01/come-with-us-if-you-want-to-live/
https://www.rationality.org/about/mission
https://arbital.com/p/pivotal/
Anders, Geoff, "Reports of past negative experiences with Leverage Research." https://f18ca2f5-d224-434f-b887-78018b04b503.filesusr.com/ugd/51c82b_d6106e56d3024fbc9d196773318cf4a8.pdf
https://medium.com/@zoecurzi/my-experience-with-leverage-research-17e96a8e540b
https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1
https://www.lesswrong.com/posts/dZsAgZaPeWpoMuBxC/who-is-slimrock-investments-pte-ltd-and-how-are-they-related
https://www.theguardian.com/technology/article/2024/jun/16/sam-bankman-fried-ftx-eugenics-scientific-racism
https://www.entrepreneur.com/leadership/the-bizarrely-authoritarian-us-education-system/425668