The Computability of the Universe is Political
Toby Ord's latest foray into crafting a rationalist universe. Also: Razzlekhan, Nishad Singh, Ryan Salame, TD Bank, and more.
Welcome to your weekly Dark Markets news roundup. Below, you'll find some first impressions of a new paper by Toby Ord, one of the founding minds of the Effective Altruist movement. First, the week's news in brief.
TD Bank Did Some Incredible Crimes
A very boring-looking consumer bank was letting people come into its Chinatown branches (and others) and deposit duffel bags full of cash - quite literally. The cash was proceeds from fentanyl smuggling, and boy did the bank go out of its way not to notice! That's how you wind up with a $3.1 billion fine for laundering $690 million. It's also how none of your executives go to jail, at all!
Bitfinex Hacker Razzlekhan Faces 18 Months in Prison
Heather Morgan and Ilya Lichtenstein were technically only charged with *laundering* funds stolen from Bitfinex, but it seems likely they/she also played a role in the theft. Still, it looks like she'll get more prison time than anyone who facilitated TD Bank's laundering. Funny, that.
I Can't Force Myself to Watch This Stupid Ryan Salame Interview
Because Tucker Carlson is the most annoying man on the planet, I've still only made it about twenty minutes into his interview with FTX collaborator Ryan Salame. Even worse, despite Salame's victim complex and some minor misrepresentations of the various campaign finance frauds involved, the two have a point: Salame was oversentenced, and it's heinous that Barbara Fried hasn't faced more scrutiny.
Nishad Singh's Lawyers Ask for Time Served
I'll be blunt: based on what I saw on the stand and what I've learned since, I actually like Nishad Singh. He seemed to really believe in the whole EA thing, which is cute if naive, and Sam Bankman-Fried had him absolutely hypnotized. Still, I'm not sure if his defense team's request for a prison sentence of no more than time already served is going to go over well.
Stripe's $1.1B Stablecoin Acquisition
Stripe has paid $1.1 billion for a brand-new (2022) startup called Bridge. Bridge mostly does stablecoin swaps, though I'm sure there's more to it than that. But big picture, I think we're seeing the future of global payments take shape, with stablecoins very near the center.
News Corp's Dow Jones and New York Post Sue Perplexity for Copyright Infringement
The hits just keep on coming. It remains to be seen whether many of these suits are simply angling for settlements that will be rounding errors to the AI megafunds. The right approach is to negotiate an ongoing licensing deal, which would make AI much more expensive, because it would actually have to pay for the information it's using instead of free riding. We'll see about that.
The Computability of the Universe is Political
Toby Ord has released a new draft paper that is uncannily relevant to my project on Effective Altruism and prediction: "Bounds on the Rates of Growth and Convergence of All Physical Processes." Very roughly, it makes the extraordinary claim that limits on what human mathematics can compute - the territory of the Church-Turing thesis - can be used to derive real physical limits in the universe. On its face, this is a strange claim, but hey, math and physics are weird.
(Immense thanks to reader Dan on Twitter for alerting me to this paper.)
The reason this particular claim deserves scrutiny, though, is that despite Ord's hedging in the preprint, it is very much not a theoretical matter. Instead, it neatly reflects the degree to which Effective Altruism and affiliated movements rely on the computability of the real universe for their claims to legitimacy and power. For Ord specifically, his desire to impose centralized global planning committees to combat long-term existential risk ultimately depends on the idea that those risks are computable.
This is a problem for Ord, one that he even acknowledges in his book The Precipice. If scientists estimate a very low probability of an existential event - for instance, a one-in-a-trillion chance that we create a rogue murderous AI - Ord admits that "the chance the scientists have incorrectly estimated this probability is many times greater than one in a trillion" (198). In easily the most intellectually risible passage in this hefty tome, and one that to my mind undermines Ord's entire project, he proceeds to insist that "our uncertainty about the underlying physical probability is not grounds for ignoring the risk, since the true risk could be higher as well as lower" than that error-tinged one-in-a-trillion number.
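To see why that passage is so damaging, a quick back-of-the-envelope sketch helps. Every number below is an illustrative assumption of mine, not a figure from The Precipice; the point is what happens when the chance of estimation error dwarfs the estimate itself.

```python
# A hedged numeric sketch of the problem above. Every number here is an
# illustrative assumption, not a figure from The Precipice.

p_estimate = 1e-12     # the scientists' stated risk: one in a trillion
p_model_wrong = 1e-3   # assumed chance the estimate itself is badly mistaken
p_if_wrong = 1e-6      # assumed risk conditional on the estimate being wrong

# The total risk is dominated by the "estimate is wrong" branch:
p_total = (1 - p_model_wrong) * p_estimate + p_model_wrong * p_if_wrong
print(f"{p_total:.2e}")  # ~1.00e-09, a thousand times the headline number
```

Under these (again, made-up) assumptions, the actionable figure depends almost entirely on the two terms nobody can actually measure - which is the computability problem in miniature.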
This, at first blush, turns the entire Effective Altruist and Longtermist projects into an elaborate motte-and-bailey anti-argument. The base claim is that e.g. "Long-term risk is computable and we should make investments now to mitigate against it, by giving the very smartest people the power to do the calculations and impose solutions of their choosing." This is happening right now with the plethora of "AI Safety" work going on at AI startups. "AI Safety" is the test run for the global "existential risk" committees of big-brains that Ord imagines in his book.
But when the longtermists admit, as they must, that nobody can compute far enough into the future to make centralized planning for the AI apocalypse a sensible idea, they fall back to something more like: "Future existential risks may not be accurately calculable, but they could happen and they're big and scary, so you should give me power."
By working backwards from computational limits to physical limits in his new paper, Ord - even if in a highly theoretical and very limited sense - is making his vision of the universe and existence more computable, and therefore more amenable to his authoritarian political and ethical vision.
(Remember, Will MacAskill's ex-wife Amanda Askell is still literally on this gravy train, a philosopher working on "AI safety" at Anthropic ... the company Sam Bankman-Fried stuffed with $500m of stolen money. After all this time I don't know what an AI safety worker does, and I'm beginning to suspect the answer is "not much.")
I will leave an evaluation of the legitimacy of Ord's new claims to physicists and computer scientists - which describes neither myself nor Toby Ord. But I can't help highlighting an illustrative example that suggests a category error is being made here.
This is a quick first evaluation, but Ord appears to be claiming that a physical pattern expressing the solution to the halting problem cannot exist. This is true, at best, in two uninteresting ways.
First, if the halting problem can't actually be calculated at all because of systemic mathematical constraints (the halting problem is here a kind of stand-in for Gödel's incompleteness theorem, a precursor to Church-Turing), then there simply is no pattern to be constructed, because the equation has no answer. This reduces his argument to the equivalent of saying "There are no digits that can express the square root of negative one, therefore the square root of negative one doesn't exist in the physical world." True, but so what.
While Ord tries to soft-pedal the theoretical and expansive nature of the boundaries he's suggesting, his payoff is beyond radical, and very convenient for him. He seems, in short, to be indirectly suggesting that the incompleteness theorem is invalid because the physical world cannot transcend calculable human mathematics.
Or maybe there's a second read here. Ord is saying something more like: "If a physical pattern containing the solution to the halting problem existed, we could use it to answer the impossible halting problem; therefore such a physical pattern cannot exist."
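For readers who want the impossibility Ord is leaning on spelled out, here is a minimal sketch of the classic diagonalization argument, with Python callables standing in for programs. The names and the deliberately dumb toy decider are mine, not Ord's; the point is only that any total halting-decider is defeated by a program built to contradict it.

```python
# A minimal sketch of the diagonalization behind the halting problem.
# `candidate_decider` is a toy stand-in; substituting any cleverer
# total decider fails in exactly the same way.

def candidate_decider(func):
    """A toy 'halting oracle' that guesses every program halts."""
    return True

def make_diagonal(decider):
    def diagonal():
        # Do the opposite of whatever the decider predicts about us.
        if decider(diagonal):
            while True:   # decider said "halts", so loop forever
                pass
        return "halted"   # decider said "loops", so halt immediately
    return diagonal

d = make_diagonal(candidate_decider)
print(candidate_decider(d))  # True: the oracle predicts that d halts...
# ...but actually running d() would loop forever, falsifying it. A
# decider answering False would be falsified the other way. No total
# decider survives this construction - that is the whole theorem.
```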
This second reading is where Ord really trips up, if my reading is at all available. He is directly mistaking human knowledge for physical reality, because he is assuming that we would somehow be able to recognize a physical pattern that encoded the solution to an uncomputable function, and use its existence to break through the theoretically unbreakable Church-Turing limit. But that's not how math works! If we don't already know the answer, how would we recognize the pattern that represents it? In short, I think this is another category error.
My off-the-cuff counterpoint to Ord's thesis is that if no solution to the halting problem exists, then it's simply a non sequitur to argue that this implies anything at all about the physical world. If a solution to the halting problem exists but is not humanly calculable, then the world could very easily be full of patterns that answer it, and we would simply never know. A whole hell of a lot of physical phenomena and patterns exist that we humans haven't gotten around to deriving an algorithm for yet. There is no physical limit on any phenomenon simply because humanity cannot conceptualize it a priori.
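To make that asymmetry concrete, here is a hedged sketch using the Collatz map as a stand-in for a natural process whose long-run behavior we can't derive in advance: halting is always verifiable by brute simulation, but no finite computation can ever certify "runs forever."

```python
# A hedged sketch of the asymmetry above. The Collatz map stands in for
# a process whose long-run behavior we can't derive a priori.

def collatz_reaches_one(n, budget=100_000):
    """Return True if the Collatz orbit of n reaches 1 within `budget`
    steps, else None. None means "unknown", never "no": an orbit still
    running when the budget expires might reach 1 on the very next step."""
    for _ in range(budget):
        if n == 1:
            return True
        n = 3 * n + 1 if n % 2 else n // 2
    return None

print(collatz_reaches_one(27))          # True: halting cases are verifiable
print(collatz_reaches_one(2**100 + 1))  # almost certainly True as well -
# but no finite budget could ever return a definitive "runs forever".
# The orbit's true behavior exists whether or not we can certify it.
```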
In fact, Ord's inference runs in the opposite direction of the arrow of understanding, and that inversion points back towards the source of Effective Altruism's many practical and philosophical failures. Quite simply, human mathematics does not prevent an independent (e.g. natural) process from producing a pattern that matches the output of a function humans can't solve, or perhaps can't even imagine.
That's the trouble with reality - it's beyond our comprehension! And that's something the EAs, longtermists, utopians, and eugenicists absolutely cannot seem to accept.





"
motte-and-bailey anti-argument
"
Actually, I think Pascal's Wager fits perfectly with TESCREAL eschatology. I don't know if there's a heaven or a hell / I don't know if there's an AI post-scarcity utopia or AI revolt. But rather than risk I'm wrong... I will believe and devote myself to this idea of god / I will believe and give unfettered power and access to AI Tech Bros.
Same problem as Pascal's Wager. 1) Not free. Opportunity cost in praying / opportunity cost in listening to AI Tech Bros. 2) If wrong god chosen, real god might be pissed and lose Heaven or give you Hell / if AI Tech Bros are self serving, the real stewards to AI utopia is lost and some rogue stewards to AI might destroy the world.
If AI Tech Bros can't prove with evidence (and evidence would include both forecasting at least decently on smaller scales enough that their forecasting can/will scale up). Why make them your gods? (Assuming there could be an evidence-based way to find "god.")