👁️ The Computability of the Universe is Political
Toby Ord's latest foray into crafting a rationalist universe. Also: Razzlekhan, Nishad Singh, Ryan Salame, TDBank, and more.
Welcome to your weekly Dark Markets news roundup. Below, you’ll find some first impressions of a new paper by Toby Ord, one of the founding minds of the Effective Altruist movement. First, the week’s news in brief.
TDBank Did Some Incredible Crimes
A very boring-looking consumer bank was letting people come into its Chinatown branches (and others) and deposit duffel bags full of cash - quite literally. The cash was proceeds from fentanyl smuggling, and boy did the bank go out of its way not to notice! That’s how you wind up with a $3.1 billion fine for laundering $690 million. It’s also how none of your executives go to jail, at all!
Bitfinex Hacker Razzlekhan Faces 18 Months in Prison
Heather Morgan and Ilya Lichtenstein were technically only charged with *laundering* funds stolen from Bitfinex, but it seems likely that one or both of them also played a role in the theft itself. Still, it looks like she’ll get more prison time than anyone who facilitated TDBank’s laundering. Funny, that.
I Can’t Force Myself to Watch this Stupid Ryan Salame Interview
Because Tucker Carlson is the most annoying man on the planet, I’ve still only made it about twenty minutes into his interview with FTX collaborator Ryan Salame. Even worse, despite Salame’s victim complex and some minor misrepresentations of the various campaign finance frauds involved, he has a point: Salame was oversentenced, and it’s heinous that Barbara Fried hasn’t faced more scrutiny.
Nishad Singh’s Lawyers Ask for Time Served
I’ll be blunt: based on what I saw on the stand and what I’ve learned since, I actually like Nishad Singh. He seemed to really believe in the whole EA thing, which is cute if naive, and Sam Bankman-Fried had him absolutely hypnotized. Still, I’m not sure if his defense team’s request for a prison sentence of no more than time already served is going to go over well.
Stripe’s $1.1B Stablecoin Acquisition
Stripe has agreed to pay $1.1 billion for a brand-new (founded 2022) startup called Bridge. Bridge mostly does stablecoin swaps, though I’m sure there’s more to it than that. But big picture, I think we’re seeing the future of global payments take shape, with stablecoins very near the center.
Dow Jones and News Corp Sue Perplexity for Copyright Infringement
The hits just keep on coming. It remains to be seen whether many of these suits are simply angling for settlements that will amount to rounding errors for the AI megafunds. The right approach is to negotiate ongoing licensing deals, which would make AI much more expensive, because it would actually have to pay for the information it uses instead of free-riding. We’ll see about that.
The Computability of the Universe is Political
Toby Ord has released a new draft paper that is uncannily relevant to my project on Effective Altruism and prediction: “Bounds on the Rates of Growth and Convergence of All Physical Processes.” Very roughly, it makes the extraordinary claim that limits on what human mathematics can compute - the territory of the Church-Turing Thesis - can be used to understand real physical limits in the universe. On its face, this is a strange claim, but hey, math and physics are weird.
(Immense thanks to reader Dan on Twitter for alerting me to this paper.)
The reason this particular claim deserves scrutiny, though, is that despite Ord’s hedging in the preprint, it is very much not a theoretical matter. Instead, it neatly reflects the degree to which Effective Altruism and affiliated movements rely on the computability of the real universe for their claims to legitimacy and power. For Ord specifically, his desire to impose centralized global planning committees to combat long-term existential risk ultimately depends on the idea that those risks are computable.
This is a problem for Ord, one that he even acknowledges in his book The Precipice. If scientists estimate a very low probability of an existential event – for instance, a one-in-a-trillion chance that we create a rogue murderous AI – Ord admits that “the chance the scientists have incorrectly estimated this probability is many times greater than one in a trillion.” (198). In easily the most intellectually risible passage in this hefty tome, and one that to my mind undermines Ord’s entire project, he proceeds to insist that “our uncertainty about the underlying physical probability is not grounds for ignoring the risk, since the true risk could be higher as well as lower” than that error-tinged one-in-a-trillion number.
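To see the arithmetic behind that admission, here’s a toy expected-value sketch in Python. Every number below is an assumption of mine, purely for illustration - none of them come from Ord’s book. The point is that once you grant even a modest chance that the estimate itself is wrong, the error term swamps the one-in-a-trillion headline figure entirely.

```python
# Toy expected-value calculation; all figures are illustrative assumptions,
# not numbers from Ord or anyone else.
p_estimate_wrong = 1e-3   # assume a 0.1% chance the scientists' model is wrong
risk_if_correct  = 1e-12  # the quoted one-in-a-trillion estimate
risk_if_wrong    = 1e-6   # if the model is wrong, the true risk is unknown;
                          # pick any not-absurd value and it dominates

expected_risk = ((1 - p_estimate_wrong) * risk_if_correct
                 + p_estimate_wrong * risk_if_wrong)
print(f"{expected_risk:.3g}")  # ~1e-09: the headline estimate is irrelevant
```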
This, at first blush, turns the entire Effective Altruist and Longtermist projects into an elaborate motte-and-bailey anti-argument. The base claim is that, e.g., “Long-term risk is computable, and we should make investments now to mitigate it, by giving the very smartest people the power to do the calculations and impose solutions of their choosing.” This is happening right now with the plethora of “AI Safety” work going on at AI startups. “AI Safety” is the test run for the global “existential risk” committees of big-brains that Ord imagines in his book.
But when the longtermists admit, as they must, that nobody can compute far enough into the future to make centralized planning for the AI apocalypse a sensible idea, they fall back to something more like: “Future existential risks may not be accurately calculable, but they could happen and they’re big and scary, so you should give me power.”
By working backwards from computational limits to physical limits in his new paper, Ord - even if in a highly theoretical and very limited sense - is making his vision of the universe and existence more computable, and therefore more amenable to his authoritarian political and ethical vision.
(Remember, Will MacAskill’s ex-wife Amanda Askell is still literally on this gravy train, a philosopher working on ‘AI safety’ at Anthropic … the company Sam Bankman-Fried stuffed with $500m of stolen money. After all this time I don’t know what an AI safety worker does, and I’m beginning to suspect the answer is “not much.”)
I will leave an evaluation of the legitimacy of Ord’s new claims to physicists and computer scientists - which describes neither me nor Toby Ord. But I can’t help highlighting an illustrative example that suggests a category error is being made here.
This is a quick first evaluation, but Ord appears to be claiming that a physical pattern expressing a solution to the halting problem cannot exist. This is true, at best, in two uninteresting ways.
First, if the halting problem can’t actually be calculated at all because of systemic mathematical constraints (the halting problem is here a kind of stand-in for Gödel’s incompleteness theorem, a precursor to Church-Turing), then there simply is no pattern to be constructed, because the equation has no answer. This reduces his argument to the equivalent of saying “There are no digits that can express the square root of negative one, therefore the square root of negative one doesn’t exist in the physical world.” True, but so what.
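For readers who want the mechanics behind “the equation has no answer,” here is a minimal Python sketch of Turing’s classic diagonalization argument - my illustration, not anything from Ord’s paper, and the function names are mine. Any purported `halts` oracle can be fed a program built to contradict it:

```python
def halts(program, program_input):
    """Hypothetical halting oracle: True iff `program` halts on
    `program_input`. No correct, total implementation can exist."""
    raise NotImplementedError

def paradox(program):
    # Do the opposite of whatever the oracle predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:   # oracle says "halts" -> loop forever
            pass
    else:
        return        # oracle says "loops forever" -> halt at once

# Does paradox(paradox) halt? If halts(paradox, paradox) returns True,
# paradox loops forever; if False, it halts. Either answer refutes the
# oracle, so no such function can exist. There is no pattern to find
# because there is no answer to express.
```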
While Ord tries to soft-pedal the theoretical and expansive nature of the boundaries he’s suggesting, the payoff is beyond radical, and very convenient for Ord. He seems, in short, to be indirectly suggesting that the incompleteness theorem is invalid because the physical world cannot transcend calculable human mathematics.
Or maybe there’s a second read here. Ord is saying something more like: “if a physical pattern containing the solution to the halting problem existed, we could use it to answer the impossible halting problem, therefore such a physical pattern cannot exist.”
This is where Ord really trips up, if my reading is at all accurate. He is directly mistaking human knowledge for physical reality, because he is assuming that we would somehow be able to recognize a physical pattern that solved an uncomputable function, and use its existence to break through the theoretically unbreakable Church-Turing limit. But that’s not how math works! If we don’t already know the answer, how would we recognize the pattern that represents the answer? In short, I think this is another category error.
My off-the-cuff counterpoint to Ord’s thesis is that if no solution to the halting problem exists, then it’s simply a non sequitur to argue that this implies anything at all about the physical world. If a solution to the halting problem exists but is not humanly calculable, then the world could very easily be full of patterns that answer it, and we would simply never know. A whole hell of a lot of physical phenomena and patterns exist that we humans haven’t gotten around to deriving an algorithm for yet. There is no physical limit on any phenomenon simply because humanity cannot conceptualize it a priori.
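To make that recognition problem concrete, here’s one more quick sketch - again mine, and hypothetical throughout: `try_to_verify` and `run_for_steps` are imagined names, with `run_for_steps` standing in for an idealized sandboxed runner, not any real API. Even if nature handed us a pattern claiming halting answers, no finite observation could ever certify its “loops forever” entries:

```python
def try_to_verify(program, claimed_halts, run_for_steps):
    """`run_for_steps(program, n)` is a hypothetical sandboxed runner
    returning True iff `program` halts within n steps -- the only kind
    of observation finite beings can actually make."""
    for budget in (10, 1_000, 1_000_000):  # any finite budgets you like
        if run_for_steps(program, budget):
            # The program observably halted: the claim is either
            # confirmed (it said "halts") or refuted (it said "loops").
            return claimed_halts
    # The program hasn't halted *yet*. If the pattern claims "loops
    # forever," no finite budget can confirm it: the program might halt
    # one step past our patience. Certifying the pattern would require
    # the very halting oracle the pattern is supposed to embody.
    return None                            # undecided, forever
```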
In fact, that’s the opposite of the arrow of understanding, and that inversion points back towards the source of Effective Altruism’s many practical and philosophical failures. Quite simply, human mathematics does not prevent an independent (e.g. natural) process from producing a pattern that matches the output of a function humans can’t solve, or perhaps can’t even imagine.
That’s the trouble with reality - it’s beyond our comprehension! And that’s something the EAs, longtermists, utopians, and eugenicists absolutely cannot seem to accept.