3 Comments

Side note. Again, there are three important points to be verified: A) that predictions of sufficient accuracy can be made; B) that they can justify a quantifiable bad behavior; C) that the bad is offset by a quantifiably larger good result.

Per A, I often think about this article https://www.vox.com/future-perfect/2024/2/13/24070864/samotsvety-forecasting-superforecasters-tetlock . Notably, it was published as part of Vox's Future Perfect series, which was given a grant by SBF's foundation (or, effectively, SBF gave away funds that were actually user deposits to Future Perfect). Vox first wrote that it had put the project the money was meant to fund on "pause" https://www.vox.com/future-perfect/23500014/effective-altruism-sam-bankman-fried-ftx-crypto . Then eight days later the Washington Post said Vox planned to return the money https://www.washingtonpost.com/media/2022/12/20/sbf-journalism-grants/ . But if you look at Future Perfect articles, as recently as this year they still only say the project is on pause or "cancelled," and I can find nothing about actually returning the money https://www.vox.com/business-and-finance/2024/3/28/24114058/sam-bankman-fried-sbf-ftx-conviction-sentence-date .

But, yes, I wonder about these superforecasters https://en.m.wikipedia.org/wiki/Superforecaster . They, of course, were never consulted by SBF, as far as I know, but perhaps they can give the upper bound on point A. I'm not sure we have enough data (time and sample size), or that their forecasts can be applied to the situations EA utilitarians supposedly calculate for. However, I'd like for someone to try, and then to hear the analysis, preferably from someone a bit more impartial than Vox's Future Perfect.


"there is no such thing as a numerical 'probability' when it comes to the truly uncertain."

Not to be pedantic, but that's basically saying math, as most people know it, doesn't exist. In the purely abstract case of a binary situation with equally weighted outcomes, the probability is 50-50. It's an abstraction, but abstractions are things by most definitions.

In the non-abstract, one can estimate an approximate probability, which is still a thing. It is uncertain whether a coin flip will come up heads or tails. But there is an approximately 50-50 probability of heads or tails.

Now, to be extra pedantic, you could say that knowing the exact probability is impossible. You can't know the number of atoms on the heads side versus the tails side, or whether a butterfly will flap its wings across the planet and affect the coin's trajectory. You could do all the calculations and measurements humanly possible and come up with odds of 50.001 to 49.999 when it's really 50.0011 to 49.9989.
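To put a number on that, here is a minimal sketch of why such tiny biases are undetectable in practice. The bias figure (0.500011) is a made-up illustration, not a measured value: even a million flips gives an empirical frequency whose sampling noise (roughly ±0.0005) swamps a bias of 0.000011.

```python
import random

def estimate_heads(p: float, flips: int, seed: int = 0) -> float:
    """Empirical frequency of heads for a coin with true bias p."""
    rng = random.Random(seed)
    heads = sum(rng.random() < p for _ in range(flips))
    return heads / flips

true_p = 0.500011  # hypothetical, imperceptibly unfair coin
estimate = estimate_heads(true_p, 1_000_000)

# The estimate lands near 0.5, but the ~0.0005 standard error of a
# million-flip sample dwarfs the 0.000011 bias; detecting the bias
# reliably would take on the order of billions of flips.
print(estimate)
```

So "approximate probability" is the honest ceiling: the estimate is real and usable, but the last few decimal places are permanently out of reach.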

So, to say that in real-world scenarios no one can get exact percentages is fair. But of course that is not controversial. To say one could get perfect percentages for all things, such as knowing something will occur with exactly 75% probability, would frankly be the same power as predicting with exactly 100% certainty which things will occur. Both would be infinitely beyond normal human ability, and would no doubt allow one to rule the world.

Then you can try "there is no such thing as a numerical 'probability' of sufficient accuracy when it comes to the truly uncertain."

But of course that's not accurate, or casinos would be out of business. They can control quite a bit of the environment, such that butterfly flaps are unlikely to sway the ball in the roulette wheel. And while they cannot be sure one number slot doesn't have enough extra atoms to make it slightly wider than the zero slot, that extra width is generally not going to be impactful compared to the fact that there is a zero slot, giving approximately a 2.7% edge over any player (that's the European wheel; American wheels have the double zero as well, and even better house odds). With sufficient volume, that 2.7% edge could be a whole percentage point off and still be profitable. For the behavior of a casino, the percentages are definitely sufficiently accurate.
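The house edge above is a two-line expected-value calculation, which is exactly why the casino's probabilities are "sufficiently accurate." A sketch (the 35-to-1 payout on a straight-up bet is the standard rule; the function name is mine):

```python
# A straight-up roulette bet pays 35 to 1, priced as if the wheel had
# 36 pockets -- but real wheels have 37 (European) or 38 (American).
def house_edge(pockets: int, payout: int = 35) -> float:
    """Expected loss per unit staked on a single-number bet."""
    win_p = 1 / pockets
    expected_value = win_p * payout - (1 - win_p) * 1
    return -expected_value

print(f"European wheel: {house_edge(37):.4f}")  # ~0.0270, i.e. ~2.7%
print(f"American wheel: {house_edge(38):.4f}")  # ~0.0526, i.e. ~5.26%
```

Note the edge doesn't depend on any fine physical measurement of the wheel; it falls straight out of the payout rule, which is why slightly unequal slot widths don't matter to the casino's bottom line.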

So, what I think is being said is:

"there is no such thing as a numerical 'probability' accuracy that can sufficiently compensate for certain behaviors when it comes to the truly uncertain."

The challenge with presenting it this way is that now your rhetorical opponent has something to argue. They can ask you to prove that their probability calculations are not accurate enough to be the basis of their actions. At this point, you are being tasked with proving a negative.

Because estimating the probability that a centralized exchange, run exactly like FTX, would suffer a "run on the bank" (though, of course, not a bank) greater than its assets and/or delays could mitigate is not as simple as figuring out coin flips and roulette wheels, of which there are many in existence and whose consequences are generally well known. So knowing how far off the calculation was from the result is difficult, to say the least.

And the eventual good behavior that is supposedly sufficient compensation for the naughty/risky behavior is always based on good results so far in the future that the time frames are longer than the EA movement has generally existed. Not only would we need hundreds of FTXs in the past, we would need hundreds of SBFs to prosper from them, and then to see what effect their "philanthropy" had well past their lifetimes.

Then it becomes an ever-shifting game of goalposts. Do you compare other utilitarian philosophies that purported to use science but engaged in naughty/risky behavior we now generally accept as immoral, and for which there never seemed to be a positive/good result, such as Social Darwinism and eugenics? EA can argue science is better now. IQ tests are better than, say, phrenology (which doesn't say much). Or do you just dismiss the purported scientific justification from the utilitarianism, because then you have a much better sample size with which to determine whether the justification is really relevant? Perhaps that would work. Look at all the utilitarian movements throughout history, and see how many of them, regardless of justification, nonetheless led to massive abuse that negated and went beyond any "value."

But has an argument like Ricky Gervais's about religion on Stephen Colbert's show ever worked? I.e., describe how, according to one person's religion, all the other religions must be wrong, which means a ridiculously large number are wrong, and yet, despite the odds, that one religion is supposedly right.

Religion is thankfully not utilitarian by default, but when it is used in a utilitarian way (e.g. the Crusades) it is extremely dangerous. EA, when used on large time scales, may as well be religiously eschatological, and as done by SBF it is utilitarian. And, as is rightly brought up in that TESCREAL essay in a previous Substack, that is very dangerous.

But I've digressed. My real point was that one can't simply say probabilities don't exist for the real world. You have to tackle the much harder question of whether the EAs and SBFs are getting the probabilities right enough to use them as they do.

Edit 10/9/2024 - typos.


Haha yes, this is a bit pedantic. I suppose I should have written e.g. "there's no way of *knowing* the probability of the truly uncertain." Which is indeed a tautology, but my point is that it's a tautology a lot of the people I'm writing about have forgotten.
