Harry Potter and the Mantras of Authority: "Pop Bayesianism" and the Rationality of Power
"Stealing the Future" Draft Excerpt
Welcome to another draft sample of my upcoming book, Stealing the Future. One of the biggest omissions from coverage of SBF and the Effective Altruist movement has been a serious unpacking of “Bayesian inference,” the technique that the EAs and Rationalists hold up as their skeleton key for making more confident, more accurate predictions about the outcomes of their actions. It’s not a difficult concept, but it’s a challenge to lay out exactly why the stakes around it are so high. I’m fairly happy with the following attempt.
Q. Would you repeat, Dr. Seldon, your thoughts concerning the future of [the planet] Trantor?
A. I have said, and I say again, that Trantor will lie in ruins within the next three centuries.
Q. You do not consider your statement a disloyal one?
A. No, sir. Scientific truth is beyond loyalty and disloyalty.
Q. You are sure that your statement represents scientific truth?
A. I am.
Q. On what basis?
A. On the basis of the mathematics of psychohistory.
Q. Can you prove that this mathematics is valid?
A. Only to another mathematician.
- Isaac Asimov, Foundation
On what basis, then, do these priests of risk claim their ordination? What novel tools lead Rationalists to syntactic exactitude, a precise enough mastery over the far future that they have confidently devoted tens of millions of dollars and thousands of talented young people’s lives to protecting against artificial intelligence that doesn’t yet exist? What new techniques of measurement and projection give the Effective Altruists the unique means to determine exactly which impoverished people it makes most sense to “invest” their donations into?
You may be shocked to learn that the techno-rationalists have no secret at all. The tools of rationality underlying their claims to authority are modest and reasonable: An emphasis on evidence-based controlled trials; an approach to probability known as Bayesian Inference; and a broad commitment to overcoming bias in their thinking.
The magic of the movement - and its profound danger - comes in its dramatic overselling of the power of these techniques, through rhetorical and institutional strategies. Most fundamentally, though these tools cannot defy the simple reality that human knowledge has limits, they are framed as steps in a constant progression towards an implicit horizon of certainty. Sam Bankman-Fried is paradigmatic of the underlying category error of the entire movement: its attempt to transmute probability into formal logic, and to leverage that supposed mathematical certainty into power.
A key element of this worldview is its relationship to artificial intelligence. In an earlier stage of its work, a major part of Yudkowsky’s Machine Intelligence Research Institute was what was known as “executable philosophy” - an attempt to build a complete and enclosed system of principles that could underpin an artificial intelligence “aligned” with human values. In essence, MIRI was founded on the premise that all of the basic principles of human ethics and decision-making could be encoded using formal logic, rendering them readable by machines. This project is absurd for many reasons: most obviously, its assumption that a single universal model can capture “human values,” when actual humans have never stopped fighting over them for even a moment. More to the point, it’s a practical absurdity, and after tens of millions of dollars plowed into it, MIRI abandoned it in a 2020 pivot.
But the basic point is clear: The goal of the Rationalist movement, whose principles and practices have come to closely mingle with Effective Altruism, is to make both thought and ethics fully formal and mechanical: to build a system in which inputs produce uniform and predictable outputs. Ethically and aesthetically, this basic impulse aligns with innumerable threads in Bankman-Fried’s story, including his personal disdain for the complexity of books, and his steeping in a determinist understanding of humans as always-already machine-like.
These goals are risible in themselves for anyone who values humanity as an end goal, rather than as raw material for a future increasingly controlled by systems of machine-thought. Even on its own terms, though, Rationalism and EA consist of a constantly receding horizon of pure rationality, as unachievable with the tools at hand as human-like intelligence is with the stochastic parrotry of probabilistic LLM-based “AI.”
The techniques of these conjoined movements, trotted out like fetishes by EAs and Rationalists, are: Controlled experimental trials; Bayesian Inference; Expected Value; and the “elimination of bias”.
The Effective Altruist emphasis on controlled experimental trials to guide charitable giving is, as critic Alice Crary has argued, thin gruel on multiple fronts. Above all, EA has no claim to exclusivity on the idea of experimental trials for directing funds. Rather, what really seems to mark EA out is its naive faith that experimental results are reliable predictors that inputs (money) will produce specific outputs (human utility). In a debate with Crary, Peter Singer allowed that experimental results don’t always pan out in the field, but as Crary responded, that means EA offers nothing new of substance towards its stated goal of “effectiveness.” This, in a nutshell, is the motte-and-bailey maneuver that allows Rationalists to make claims to funding, legitimacy, and power, without actually bringing any material innovation to the table.
Rationalist decision-making techniques similarly aspire to machinic precision, blending the formal and the probabilistic. The king of all Rationalist insights is known as Bayesian inference: Nick Bostrom’s book Superintelligence posits formal Bayesian reasoning as one of the ingredients in a future god-like artificial intelligence.
Bayesian inference is an improvement on an older approach to probability known as “frequentism,” and the product of real advances in the melding of mathematics and formal logic. But it’s not a magical key that grants some golden road to predictive prowess - in fact, it’s more about getting mathematics caught up with basic common sense than propelling mathematical thinking beyond what the rest of us already understand. It’s not a staggering advance, but the correction of an error.
The error is this: The roots of probability theory lie in 17th-century gambling parlors, where a particularly intellectual gambler known as the Chevalier de Mere first got the idea to engage mathematicians, including Blaise Pascal, in calculating his odds. In gambling, the odds of a certain outcome are, in some sense, knowable: A theoretical die always has a 1/6 chance of turning up a specific number, and the Law of Large Numbers posits that the more times you roll the die, the closer the real outcomes will get to this long-run theoretical ratio. The bulk of the work asked by the Chevalier de Mere and his ilk had to do with combining and comparing these already-known probabilities. This and other inputs set the study of probability on a “frequentist” path.
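The Law of Large Numbers is easy to see in simulation. A minimal sketch (the function name and numbers are mine, for illustration): roll a simulated fair die repeatedly and watch the observed frequency of one face drift toward the theoretical 1/6.

```python
import random

def empirical_frequency(target: int, rolls: int, seed: int = 42) -> float:
    """Roll a fair six-sided die `rolls` times and return the
    observed frequency of `target` (a face from 1 to 6)."""
    rng = random.Random(seed)  # seeded so the run is reproducible
    hits = sum(1 for _ in range(rolls) if rng.randint(1, 6) == target)
    return hits / rolls

# With few rolls the observed frequency can sit far from 1/6;
# with many rolls it converges toward the theoretical ratio.
for n in (10, 1_000, 100_000):
    print(n, empirical_frequency(target=3, rolls=n))
```

Note what the simulation does and doesn’t show: the convergence is a long-run property of the theoretical die, not a guarantee about any particular future roll.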
But there’s a problem: even when the frequentists measured many repetitions of a dice roll, the exercise wasn’t really calculating the probability of any future outcome. Rather, as later critics pointed out, they were merely measuring the frequency of what had already happened. This matters because the complexity of real systems - even including closed systems like dice and cards, which can have imperfections - means that past results can’t be uniformly taken to indicate future probabilities. Instead, the careful probabilist must constantly re-examine their own assumptions.
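That re-examination of assumptions is exactly what Bayes’ rule mechanizes. A minimal sketch (the hypotheses, priors, and numbers here are my own illustration, not the author’s): suppose we start mostly confident a die is fair, then watch it produce a run of sixes, and update our belief in the rival hypothesis that it is loaded.

```python
def bayes_update(prior: float, likelihood_h: float, likelihood_alt: float) -> float:
    """One step of Bayes' rule for two rival hypotheses.
    Returns P(H | evidence), given P(H) = prior and the probability
    each hypothesis assigns to the evidence just observed."""
    numerator = likelihood_h * prior
    evidence = numerator + likelihood_alt * (1 - prior)
    return numerator / evidence

# Hypothesis H: the die is loaded so a six comes up half the time.
# Alternative: the die is fair, so a six comes up 1/6 of the time.
# Start skeptical of H, then observe five sixes in a row.
p_loaded = 0.1
for _ in range(5):
    p_loaded = bayes_update(p_loaded, likelihood_h=0.5, likelihood_alt=1/6)
print(round(p_loaded, 3))  # belief in "loaded" climbs past 0.95
```

The machinery is genuinely useful, but notice what it requires: someone had to choose the prior and the candidate hypotheses by hand. The formula updates assumptions; it cannot conjure them, which is why it confers no special authority over the far future.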