The Computability of the Universe is Political
Toby Ord's latest foray into crafting a rationalist universe. Also: Razzlekhan, Nishad Singh, Ryan Salame, TD Bank, and more.
Welcome to your weekly Dark Markets news roundup. Below, you'll find some first impressions of a new paper by Toby Ord, one of the founding minds of the Effective Altruist movement. First, the week's news in brief.
TD Bank Did Some Incredible Crimes
A very boring-looking consumer bank was letting people come into its Chinatown branches (and others) and deposit duffel bags full of cash - quite literally. The cash was proceeds from fentanyl smuggling, and boy did the bank go out of its way not to notice! That's how you wind up with a $3.1 billion fine for laundering $690 million. It's also how none of your executives go to jail, at all!
Bitfinex Hacker Razzlekhan Faces 18 Months in Prison
Heather Morgan and Ilya Lichtenstein were technically only charged with *laundering* funds stolen from Bitfinex, but it seems likely they/she also played a role in the theft. Still, it looks like she'll get more prison time than anyone who facilitated TD Bank's laundering. Funny, that.
I Can't Force Myself to Watch this Stupid Ryan Salame Interview
Because Tucker Carlson is the most annoying man on the planet, I've still only made it about twenty minutes into his interview with FTX collaborator Ryan Salame. Even worse, despite Salame's victim complex and some minor misrepresentations of the various campaign finance frauds going on, they have a point: Salame was oversentenced, and it's heinous that Barbara Fried hasn't faced more scrutiny.
Nishad Singh's Lawyers Ask for Time Served
I'll be blunt: based on what I saw on the stand and what I've learned since, I actually like Nishad Singh. He seemed to really believe in the whole EA thing, which is cute if naive, and Sam Bankman-Fried had him absolutely hypnotized. Still, I'm not sure if his defense team's request for a prison sentence of no more than time already served is going to go over well.
Stripe's $1.1B Stablecoin Acquisition
Stripe has paid $1.1 billion for a brand-new (2022) startup called Bridge. Bridge mostly does stablecoin swaps, though I'm sure there's more to it than that. But big picture, I think we're seeing the future of global payments take shape, with stablecoins very near the center.
Dow Jones and News Corp Sue Perplexity for Copyright Infringement
The hits just keep on coming. It remains to be seen whether many of these suits are just angling for settlements that will be rounding errors to the AI megafunds. The right approach is to negotiate an ongoing licensing deal, which would make AI much more expensive, because it would actually have to pay for the information it's using instead of free riding. We'll see about that.
The Computability of the Universe is Political
Toby Ord has released a new draft paper that is uncannily relevant to my project on Effective Altruism and prediction: "Bounds on the Rates of Growth and Convergence of All Physical Processes." Very roughly, it makes the extraordinary claim that limits on the computability of systems by human mathematics - what's known as the Church-Turing Thesis - can be used to understand real physical limits in the universe. On its face, this is a strange claim, but hey, math and physics are weird.
(Immense thanks to reader Dan on Twitter for alerting me to this paper.)
The reason this particular claim deserves scrutiny, though, is that despite Ord's hedging in the preprint, it is very much not a theoretical matter. Instead, it neatly reflects the degree to which Effective Altruism and affiliated movements rely on the computability of the real universe for their claims to legitimacy and power. For Ord specifically, his desire to impose centralized global planning committees to combat long-term existential risk ultimately depends on the idea that those risks are computable.
This is a problem for Ord, one that he even acknowledges in his book The Precipice. If scientists estimate a very low probability of an existential event - for instance, a one-in-a-trillion chance that we create a rogue murderous AI - Ord admits that "the chance the scientists have incorrectly estimated this probability is many times greater than one in a trillion" (198). In easily the most intellectually risible passage in this hefty tome, and one that to my mind undermines Ord's entire project, he proceeds to insist that "our uncertainty about the underlying physical probability is not grounds for ignoring the risk, since the true risk could be higher as well as lower" than that error-tinged one-in-a-trillion number.
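To see how little work that one-in-a-trillion number actually does, put hypothetical figures on it (these are my illustrative assumptions, not Ord's numbers): once you grant that the estimate itself may be badly wrong, the total risk is dominated by the chance of estimation error.

```python
# Hypothetical illustration of why an error-prone one-in-a-trillion
# estimate carries almost no information. All numbers are assumptions.

p_event_if_estimate_right = 1e-12  # the scientists' one-in-a-trillion figure
p_estimate_wrong = 1e-3            # assumed chance the estimate is badly off
p_event_if_estimate_wrong = 1e-6   # assumed risk level if the model is wrong

# Total probability of the event, mixing the two cases:
p_total = ((1 - p_estimate_wrong) * p_event_if_estimate_right
           + p_estimate_wrong * p_event_if_estimate_wrong)

print(f"{p_total:.2e}")  # ~1.00e-09 -- a thousand times the headline number
```

The answer is essentially all error term, and the error term is precisely the quantity Ord concedes nobody can compute.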
This, at first blush, turns the entire Effective Altruist and Longtermist projects into an elaborate motte-and-bailey anti-argument. The base claim is, e.g.: "Long term risk is computable and we should make investments now to mitigate against it, by giving the very smartest people the power to do the calculations and impose solutions of their choosing." This is happening right now with the plethora of "AI Safety" work going on at AI startups. "AI Safety" is the test run for the global "existential risk" committees of big-brains that Ord imagines in his book.
But when the longtermists admit, as they must, that nobody can compute far enough into the future to make centralized planning for the AI apocalypse a sensible idea, they fall back to something more like: "Future existential risks may not be accurately calculable, but they could happen and they're big and scary, so you should give me power."
By working backwards from computational limits to physical limits in his new paper, Ord - even if in a highly theoretical and very limited sense - is making his vision of the universe and existence more computable, and therefore more amenable to his authoritarian political and ethical project.
(Remember, Will MacAskill's ex-wife Amanda Askell is still literally on this gravy train, a philosopher working on "AI safety" at Anthropic ... the company Sam Bankman-Fried stuffed with $500m of stolen money. After all this time I don't know what an AI safety worker does, and I'm beginning to suspect the answer is "not much.")
I will leave an evaluation of the legitimacy of Ord's new claims to physicists and computer scientists - which describes neither myself nor Toby Ord. But I can't help highlighting an illustrative example that suggests a category error is being made here.
This is a quick first evaluation, but Ord appears to be claiming that a physical pattern expressing a correct solution to the halting problem cannot exist. This is true, at best, in two uninteresting ways.
First, if the halting problem can't actually be calculated at all because of systemic mathematical constraints (the halting problem is here a kind of stand-in for Gödel's incompleteness theorem, a precursor to Church-Turing), then there simply is no pattern to be constructed, because the equation has no answer. This reduces his argument to the equivalent of saying "There are no digits that can express the square root of negative one, therefore the square root of negative one doesn't exist in the physical world." True, but so what.
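For readers who want the standard argument that "the equation has no answer," here is Turing's classic diagonalization sketched in Python (the textbook proof, my sketch - nothing from Ord's paper):

```python
# Turing's diagonalization, run against one concrete candidate decider.
# Any function claiming to decide halting can be defeated the same way.

def candidate_halts(prog, arg):
    """A candidate halting decider; this naive one says everything halts."""
    return True

def paradox(prog):
    """Built to do the opposite of whatever the candidate predicts."""
    if candidate_halts(prog, prog):
        while True:      # the candidate said "halts," so loop forever
            pass
    return               # the candidate said "loops," so halt immediately

# candidate_halts(paradox, paradox) returns True, predicting that
# paradox(paradox) halts -- but by construction it would then loop forever.
# Swap in any other candidate and paradox defeats it identically, which is
# why no halting decider can exist at all.
print(candidate_halts(paradox, paradox))  # True, and provably wrong
```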
While Ord tries to soft-pedal the theoretical and expansive nature of the boundaries he's suggesting, his payoff is beyond radical, and very convenient for Ord. He seems, in short, to be indirectly suggesting that the incompleteness theorem is invalid because the physical world cannot transcend calculable human mathematics.
Or maybe there's a second read here. Ord is saying something more like: "if a physical pattern containing the solution to the halting problem existed, we could use it to answer the impossible halting problem, therefore such a physical pattern cannot exist."
This is where Ord really trips up, if my reading is at all available. He is directly mistaking human knowledge for physical reality, because he is assuming that we would somehow be able to recognize a physical pattern that solved an extremely advanced function, and use its existence to break through the theoretically unbreakable Church-Turing limit. But that's not how math works! If we don't already know the answer, how would we recognize the pattern that represents the answer? In short, I think this is another category error.
My off-the-cuff counterpoint to Ord's thesis is that if no solution to the halting problem exists, then it's simply a non sequitur to argue that this implies anything at all about the physical world. If a solution to the halting problem exists but is not humanly calculable, then the world could very easily be full of patterns that answer it, and we would simply never know. A whole hell of a lot of physical phenomena and patterns exist that we humans haven't gotten around to deriving an algorithm for yet. There is no physical limit on any phenomenon simply because humanity cannot conceptualize it a priori.
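The "we would simply never know" part can be made concrete. Here's a hedged sketch (mine, not Ord's) of why even a black-box pattern that did answer halting queries could never be recognized as one: "halts" claims can eventually be confirmed by watching, but "never halts" claims can't be verified with any finite amount of checking.

```python
# Model "programs" as generators that yield once per computation step;
# halting means the generator runs out of steps on its own.

def halts_quickly():
    for _ in range(3):
        yield

def runs_forever():
    while True:
        yield

def verify_claim(program, claims_halts, budget=1000):
    """Try to check an oracle's claim about `program` in at most `budget` steps."""
    steps = 0
    for _ in program():
        steps += 1
        if steps >= budget:
            # Budget exhausted without halting. Whatever the oracle claimed,
            # we've learned nothing: "halts" might still come true later, and
            # "never halts" can't be confirmed by any finite amount of waiting.
            return "unverified"
    # The program halted on its own, so only "halts" claims are checkable.
    return "confirmed" if claims_halts else "refuted"

print(verify_claim(halts_quickly, claims_halts=True))   # confirmed
print(verify_claim(runs_forever, claims_halts=False))   # unverified, forever
```

No matter how large the budget, the oracle's most interesting answers stay unverifiable - which is the sense in which a pattern "solving" the halting problem could sit right in front of us, unrecognized.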
In fact, that's the opposite of the arrow of understanding, and that inversion points back towards the source of Effective Altruism's many practical and philosophical failures. Quite simply, human mathematics does not prevent an independent (e.g. natural) process from producing a pattern that matches the output of a function humans can't solve, or perhaps can't even imagine.
That's the trouble with reality - it's beyond our comprehension! And that's something the EAs, longtermists, utopians, and eugenicists absolutely cannot seem to accept.