Pivotal Acts: The Future as an Emergency, Part 4
Why Logic and Reason Demand a Full Takeover of the U.S. Government
Vacation Announcement: Folks, we are very, very close to the end of Stealing the Future - I will be submitting a final draft to Repeater this weekend. It’s an intensely emotional moment for me, to be frank - this book emerged quickly, but I’ve still been chained to it for nine months. So next week, I’m taking some time off: there will be no posts for the week of June 1.
We are also, of course, nearing the end of draft posts from the book - this is the second-to-last, though of course I will continue closely following the FTX story here.
I appreciate every single one of you, whether you’re here for FTX or not. Your support has made a huge difference for going on two years now. And with the book done, you’ll be seeing a much more diverse lineup of new crypto, fraud, and tech-world content here. I’m looking forward to it - a lot of projects went on the back burner with the book, and now we’re going to finish them off.
But for now - a bit more about the silliness of EA and Rationalism.

Pivotal Acts
Eliezer Yudkowsky, according to an early autobiographical sketch, believed he could personally speed the arrival of the Singularity by twenty years, while also making sure the AI was friendly. He dedicated his life to preventing AI Doom, founding the Singularity Institute for Artificial Intelligence (SIAI) - later renamed the Machine Intelligence Research Institute (MIRI). MIRI’s overriding goal has been the creation of an ‘aligned’ artificial intelligence - that is, one that shares human values, which Yudkowsky conceives of as universally shared.
When Yudkowsky realized few people shared his anxieties, his project shifted. Clearly, if most humans didn’t reach his conclusions, it must be because they didn’t think as clearly as he did - and this “bias” (disagreeing with Eliezer Yudkowsky) needed to be eliminated. And so Yudkowsky’s efforts shifted from the direct goal of convincing people to fear AI to reshaping how they thought.
“AI Safety,” the Rationalist movement’s terminology for creating human-aligned AI, is cited as core to the mission of the Center for Applied Rationality. Yudkowsky’s own most influential work in this effort was a piece of Harry Potter fan-fiction, “Harry Potter and the Methods of Rationality” - references to children’s fantasy and sci-fi books became building blocks for a great deal of Rationalist discourse.
But as much as he touted the importance of logic and reason, Yudkowsky wasn’t above a little fearmongering to get his point across. He was, and seemingly remains, genuinely frantic about the arrival of AI Doom, which would become inevitable as soon as an “unaligned” superintelligent AI was invented. This “artificial general intelligence” was believed to be just around the corner - and it has been just around the corner for the two decades since.
These prophecies of the Singularity - simultaneously promising doom and apotheosis - mirror those of the UFOs awaited by the Seekers, the small cult at the center of Festinger, Riecken, and Schachter’s 1956 study When Prophecy Fails. An offshoot of what became Scientology, the Seekers’ leader prophesied that they would be rescued from Earth’s destruction by a flying saucer on December 17, 1954 - but when that did not occur, adherents’ beliefs only intensified. The continuing deferral of the Rapture of the Singularity, like that of UFO Doom and many other prophecies of the End Times, only demanded recalculation, refinement, better math.