👁️ Stolen FTX Money Supported Richard Hanania and Other Racist "Rationalists"
LessWrong's administrators got $5 million from Sam Bankman-Fried ... then cosigned the worst people on Earth.
Welcome to the weekly Dark Markets fraud, crypto, and “artificial intelligence” news roundup. If you missed it, be sure to check out my podcast interview with the Crypto Critics, Cas Piancey and Bennett Tomlin. They walked me through the very weird relationship between Alameda Research, Tether, and Bahamas’ Deltec Bank.
I’m trying a new format today, with briefs up top and two longer dives down below. Enjoy.
News In Brief
American Cops Killed Journalist Linda Tirado
Linda Tirado was the journalist shot in the eye by a “non-lethal” rubber bullet during the Black Lives Matter protests of 2020. That wound left her with degenerative brain damage that is now in its final stages. In short, even given the four-year delay, American cops killed a journalist. (I wrote about the dangers of supposedly non-lethal weapons for Fortune at the time.)
Do Kwon’s Terraform Labs to Shut Down after Record Fine
Terraform Labs has decided to liquidate and shut down after being fined a record $4.5 billion by the SEC. That seems reasonable, since Terraform was a criminal enterprise masquerading as a “company.”
PleasrDAO Sues Martin Shkreli Over Wu-Tang Release
The decentralized group PleasrDAO paid $4 million in 2022 to buy Once Upon a Time in Shaolin, the single-copy Wu-Tang Clan album previously owned by dirtbag Martin Shkreli. Pleasr bought the album, in fact, as part of the liquidation of Shkreli’s assets to fulfill a judgment.
But boy, that Shkreli! He allegedly retained, distributed, and publicly played the album after it was no longer his property, in violation of the terms of the purchase agreement. PleasrDAO is now suing him.
Disclosure: One of my private clients is affiliated with PleasrDAO. See this post for more detail on my private work and disclosure practices.
Bilt Conned Wells Fargo Into a Very Good Deal
Fintech VC Sheel Mohnot highlighted the bizarre situation of the fintech Bilt, whose entire business model seems to hinge on ripping off Wells Fargo. The gist is that Wells Fargo signed a deal with Bilt based on some bad assumptions about how people would use a credit card that offered 1% back on rent: namely, Wells assumed it would be used primarily for things other than rent, which seems … misguided.
The deal is reportedly costing Wells millions of dollars a year, and the bank has (very unusually) said they don’t plan to renew it after it expires in 2025. So … enjoy working at/using Bilt while it lasts, I guess?
TrueAnon and Max Read on AGI:
I highly recommend the new TrueAnon episode with Max Read, which entertainingly deconstructs the absurdity of AI hype. Perhaps weirdly, the promo clip they chose is about the stupid and shortsighted reasons Elon Musk is removing likes from Twitter, but that’s also very funny. A great listen as always.
Stolen FTX Customer Deposits Supported Racist Event at LessWrong Compound
One of the arguments I’ve tested in a few different forms in my writing about Sam Bankman-Fried and FTX is that the core premises of Effective Altruism, Longtermism, and Silicon Valley “Rationalism” are deeply authoritarian. While my focus is on the underlying theory, new reporting from The Guardian shows how that theory plays out in practice: by normalizing fascists.
Supposedly reformed neo-Nazi Richard Hanania, who declared Black people incapable of self-governance as recently as 2010, was just the most egregious of a half-dozen overt or covert ethnonationalists invited to speak at Manifest 2023, a conference sponsored by the prediction market Manifold and hosted at the Lighthaven campus in Berkeley. Lighthaven is a project of Lightcone Infrastructure, which also administers Eliezer Yudkowsky’s LessWrong.
(Note that Manifold Markets does not appear to be affiliated with Manifold.xyz, an NFT service platform.)
Other speakers at the conference included Razib Khan, who has written for the outright fascist publication VDARE, which the SPLC considers a hate group; and someone named Jonathan Anomaly, a defender of “liberal eugenics.” There were also many guests in the “just short of explicitly exterminationist” category, such as the eugenicist child abusers Malcolm and Simone Collins.
This matters because it connects the theory of “Rationalism,” and the closely tied “TESCREAL” ideological bundle (deeper dive here), with the practice of ethnonationalism.
And guess who financially backed this unholy wedding: Sam Bankman-Fried.
The Guardian reports FTX creditors’ claims that FTX funneled $5 million to Lightcone Infrastructure, including funds held in escrow for the purchase of the building where the conference was held. And we already know every dollar that flowed out of FTX was effectively commingled with customer deposits. (I haven’t been able to find the actual lawsuit document, which may not be public yet, so I’m not sure exactly what channel the funds traveled through.)
Oliver Habryka, who runs Lightcone and whose Twitter bio says he is “building LessWrong.com,” has made some fumbling attempts to rebut specific elements of The Guardian’s reporting, with the same sort of grasping-at-straws nitpicking that made Sam Bankman-Fried’s testimony in his own defense such a flaming disaster.
Similarly weak-to-self-owning defenses were offered in the Guardian piece, as when a Manifold Markets exec named Austin Chen defended the presence of speakers like Hanania, saying:
“We did not invite them to give talks about race and IQ … Manifest has no specific views on eugenics or race & IQ.”
Pro Tip: When someone says they “have no specific views on eugenics or race & IQ,” they are confessing to being racist moonbats.
It’s continually enthralling how these people have “rationalized” themselves into being absolutely dogshit at thinking.
A more upbeat gem in the Guardian piece, though, comes from Daniel HoSang, a professor and part of the Anti-Eugenics Collective at Yale.
“The ties between a sector of Silicon Valley investors, effective altruism and a kind of neo-eugenics are subtle but unmistakable,” HoSang said. “They converge around a belief that nearly everything in society can be reduced to markets and all people can be regarded as bundles of human capital.”
This is an elegant summation of the common denominator between Richard Hanania, Yudkowskyites, and something as seemingly remote as Barbara Fried’s utilitarianism: They all ultimately consider human beings objects, not subjects. At its root, TESCREALism is the denial that human beings are sacred. The willingness to liquidate inconvenient people is the inevitable corollary of such instrumental thinking.
This also, I’m increasingly convinced, helps explain why these people are so brainless when it comes to artificial intelligence: They don’t regard most humans as having subjectivity, so questions of consciousness and free will don’t trouble them in the slightest.
“Artificial Intelligence” Was Always a Fundraising Gimmick
Evgeny Morozov has been one of the most incisive and insightful analysts of the consequences of digital communication for well over a decade. His brilliant anti-hype book The Net Delusion (2011) neatly rebutted the idea that the internet was an unmitigated vector of freedom and equality, years before Facebook and Donald Trump proved him all too prescient.
Morozov has been rolling out a big new project about the history of AI, including posting amazing archival footage to his Twitter account. He recently posted footage from 1973 of AI pioneer John McCarthy, who coauthored the 1955 proposal in which the term “artificial intelligence” was coined.
“I invented [the term] because we had to do something when we were trying to get money for a summer study in 1956,” McCarthy says, to waves of laughter.
In the footage, McCarthy admits that the phrase was motivated (at least in part) by the need for a catchy marketing hook. With all respect to McCarthy, the situation illustrates the real risk of puffery: that someone far stupider than the coiner of a clever phrase will come along and take it literally. McCarthy clearly regarded his tossed-off invocation of machine “intelligence” as something of a joke, but Ray Kurzweil and his gaggle of inept followers didn’t get the punchline.
PleasrDAO is not easy to research. Mostly you find articles like this one, which talk about what they buy: https://www.rollingstone.com/music/music-features/wu-tang-nft-album-once-upon-time-shaolin-1244859/ . What never comes up is how they operate.
I've yet to see a DAO structure that really intrigued me. The "autonomous" part is a whole issue in itself, but I'm particularly suspicious of the "decentralized" part, since presumably the point of decentralization is egalitarian rule. Suspicious enough that I feel like I need to wrap the word in quotation marks whenever I use it.
With most DAOs, it's $1 = 1 vote. Not always literally $1, but you can basically buy as many votes as you want, and the more money, the more votes, in a linear fashion. Which is nice in that theoretically anyone can join, but not nice because the rich end up with the most power, which is how it always is, so not exciting.
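As a toy illustration of that linear mechanic (all balances and names here are made up, not any real DAO's numbers):

```python
# Toy model of linear token-weighted ("$1 = 1 vote") governance.
# Hypothetical balances; the point is that one whale outvotes everyone.

def tally(balances, votes):
    """Each address's voting weight equals its token balance."""
    totals = {}
    for addr, choice in votes.items():
        totals[choice] = totals.get(choice, 0) + balances[addr]
    return totals

balances = {"whale": 1_000_000, "alice": 500, "bob": 300, "carol": 200}
votes = {"whale": "yes", "alice": "no", "bob": "no", "carol": "no"}

print(tally(balances, votes))  # {'yes': 1000000, 'no': 1000}
```

Three of four participants vote "no," and it doesn't matter at all: the whale's balance decides the outcome single-handedly.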
(Side note: today there was news about a "governance attack" on Compound, where people were buying up COMP tokens to push their own agenda, which strikes me as odd, since Working As Intended is being called an attack. COMP was a payment/reward for using the protocol, but a payment that was never soulbound, not that it really could have been in those days. COMP was always sellable, and buying more could always get you more power, so that's exactly what the "attackers" are doing. Short selling needs to exist, in my mind, and this might just be shorting a governance method.)
Or it's $1 = 1 vote, but a ridiculous number of tokens/votes are already locked up with the founders and maybe some VCs. Locking tokens and reserving them for the founders makes sense as a way to give founders a financial incentive to work on and care about a project. But it closes the doors that much more on others who might want to join, and frankly it guts the "decentralized" part, since the founder's stake is almost always bigger than anyone else's. (And then founders and VCs often hide behind the DAO to avoid culpability, usually the legal kind. Can someone commit 15% of a crime? If that 15% is the largest share, are they liable for 100%? Murky.)
Then you have "DAOs" like PleasrDAO, which have zero transparency about how they operate or how people join. Their website is just NFTs they've purchased and a link to their Twitter presence. "Collective" is probably a much better name. Can I buy tokens or even NFTs to join? Can I provide liquidity to a pool? Maybe I need to be ranked somewhere on my NFT sales? Or maybe I need to win some art awards? Or maybe... just maybe... you gotta know someone. Someone important in the "organization."
That's my guess.
Needing to know someone sounds /very/ centralized. Where there's smoke, there's fire. Where there are shrouds of mystery... there is usually hanky-panky. Though sometimes it's just plain disorganization or ineptitude, and sometimes it's simply fear, when not everyone agrees on what's hanky-panky and what's hunky-dory. But the more money involved, the more likely hanky-panky, in my mind. Not necessarily illegal hanky-panky: wash trading NFTs, for example, is not currently illegal as far as I know. And again, some consider it hunky-dory.
As someone who's pressed flesh in the NFT scene, I've heard "artists" admit it's a world of "you scratch my back / buy my NFT; I'll scratch/buy yours" until there is enough volume to get someone unaware of the arrangement to buy in. Some call it "supporting each other." There was a recent court case about a similar arrangement, though, and it involved someone who would really like the idea of $1 = 1 vote. The prosecutors had a different term for it: "quid pro quo." (And that scenario very much went against laws that currently exist.) When is quid pro quo okay? Dunno, but I suspect it's more okay when it's more transparent.
So, yeah, I'm skeptical.
I like reading Vitalik Buterin's essays on the topic. You can tell he thinks about it a lot. See in particular the ones on quadratic voting and soulbound tokens:
https://vitalik.eth.limo/general/2021/09/26/limits.html
https://vitalik.eth.limo/general/2021/08/16/voting3.html
https://vitalik.eth.limo/general/2020/09/11/coordination.html
https://vitalik.eth.limo/general/2019/04/03/collusion.html
https://vitalik.eth.limo/general/2018/03/28/plutocracy.html
https://vitalik.eth.limo/general/2022/01/26/soulbound.html
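To make the quadratic-voting idea from those essays concrete: casting n votes costs n² credits, so buying influence scales with the square root of money rather than linearly. A minimal sketch with made-up budgets:

```python
import math

# Quadratic voting sketch: n votes cost n**2 credits, so a voter
# with budget B can afford at most floor(sqrt(B)) votes.
# Illustrative numbers only.

def max_votes(budget):
    return math.isqrt(budget)

# Under linear voting, 10,000 credits buys 100x the influence
# of 100 credits. Under quadratic voting, the gap shrinks to 10x.
print(max_votes(100))     # 10
print(max_votes(10_000))  # 100
```

The whale still has more power, but the advantage is compressed, which is the whole point of the mechanism.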
I think a mix of soulbound tokens with verifiable accounts makes a lot of sense (verifiable meaning the accounts are not sybils and are very unlikely to be sold as, essentially, wrappers for the voting rights). Along with that, entry should be open and/or meritocratic, e.g. winning some award that reflects skill rather than privilege. That's for an art DAO, but what about a more traditional DeFi DAO? If a protocol's founders committed in advance that any planned airdrop would be one that can't be sold (soulbound/verified), with the airdrop timeline locked into place well ahead of time and nothing earmarked for anyone, including themselves, that might make for an interesting governance model. (That said: markets always find a way, but it could theoretically be made prohibitively difficult. And if the markets still want in after it's become prohibitively difficult, well, that's a bit of a good problem to have.)
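A minimal sketch of what that soulbound-plus-verification idea could look like; every name here (the registry class, the accounts) is hypothetical, not any existing protocol's API:

```python
# Sketch of a soulbound (non-transferable) governance token:
# one per verified, non-sybil account. Hypothetical design only.

class SoulboundRegistry:
    def __init__(self):
        self._holders = set()

    def issue(self, account, verified):
        """Issue one non-transferable voting token to a verified account."""
        if not verified:
            raise ValueError("account must pass sybil verification")
        self._holders.add(account)

    def transfer(self, sender, recipient):
        # The defining property: the token can never change hands.
        raise PermissionError("soulbound tokens cannot be transferred")

    def can_vote(self, account):
        return account in self._holders

reg = SoulboundRegistry()
reg.issue("alice", verified=True)
print(reg.can_vote("alice"))  # True
print(reg.can_vote("bob"))    # False
```

The open question flagged above still applies: nothing in-protocol stops someone from selling the account itself, which is why the verification step (and making resale prohibitively difficult) carries most of the weight.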
Regardless, this does not sound like PleasrDAO. If DZM is associated in some way with PleasrDAO, I'd be very interested in reading one of his Substacks on the topic.