Klein/Yudkowsky: A Confederacy of Dunces
Ezra Klein's mighty efforts to sanewash the Rationalist cult leader run smack into Yudkowsky's intransigent robo-fantasia.
Welcome to your weekly Dark Markets news roundup. I'm longtime technology, fraud, and finance reporter David Z. Morris.
This week: Eliezer Yudkowsky fumbles Ezra Klein's NYTimes boost; LLMs get Brain Rot; Nigeria deports foreign scammers; the downward spiral of Grift Economics.
First, your regular reminder to pre-order Stealing the Future, a comprehensive analysis of why Effective Altruism, Abundance, Singularity Thought, and Rationalism are all dangerously wrong.
"Here we have the snake eating its own tail - Klein, a studiously unwitting operative of conservative Democrats, giving a platform to Yudkowsky, whose protestations against AI have proven a huge boon to the development of AI and the military surveillance and planning that is its real aim."
The Sunil Podcast Episode 2
If you want a deeper dive into the book, here it is. Sunil Kavuri has released Part 2 of my appearance on his new podcast, where we discuss FTX, Sam Bankman-Fried, the Parents from Hell, and Dan Friedberg. In all humility, this is a *great* introduction to my forthcoming book. Check it out.
LLMs Can Get Brain Rot
A new preprint research paper has shown that exposing LLMs to viral short-form content tanked their reasoning ability by 23% and their memory by 30%. How does that work? I have no idea. But as one AI booster plaintively put it on X, "It's not just bad data → bad output. It's bad data → permanent cognitive drift." And given that these things are trained on increasingly large bodies of not-exactly-carefully-curated data, a downward spiral seems almost inevitable.
Nigeria Deports Hundreds of Chinese and Filipino Scammers
Nigeria's Economic and Financial Crimes Commission (EFCC) has announced the completed or planned deportation of more than 700 foreign nationals accused of "cybercrime, money laundering, and ponzi scheme operations."
While "the deported convicts include nationals of China, the Philippines, Tunisia, Malaysia, Pakistan, Kyrgyzstan, and Timor-Leste," the EFCC's announcement suggests that the bulk of the scammers were Chinese and Filipino. And rather than a diffuse scattering of different scams, these seem to have been largely parties to a single operation - "a sophisticated cybercrime and ponzi scheme syndicate operating under the cover of Genting International Co. Limited."
"The Griftoverse is Collapsing"
For a couple of years my YouTube algorithm has been feeding me videos from Scott Carney, a journalist-turned-YouTuber who has merged into the fraudbusting lane. A recent Carney video particularly caught my eye because it hit on a few concepts that I find very compelling:
That frauds and grifts have their own economic logic - including old favorites Supply and Demand.
That rising fraud levels have implications for the macroeconomy.
Carney, working from the premise that levels of grift and fraud have been rising in our economy for decades, reaches the conclusion that grift may be reaching some kind of self-inflicted inflection point.
I find many of his premises useful and interesting. Carney argues that:
Grift, or the deceptive marketing of fraudulent products, relies on a broader basis of trust, including trust in institutions. You can see this everywhere - people like Alex Jones and RFK Jr. need trusted institutions like the CDC and the FDA to turn into useful enemies.
Over time, as it expands and metastasizes, grift erodes that trust broadly, making it harder for grifters to trick their victims. I think this is broadly true - grifters are parasites of trust, and if they go too far, even their victims might actually accidentally learn some critical thinking skills. That's bad! However, I tend to think that grifters' core audience goes quickly from being skeptics of the mainstream to being cultish followers of their new leaders, so it's not obvious there is actually any limit for a certain subset of victims.
Where I disagree with Carney, or at least would add some nuance, is in his prediction of what happens after peak grift.
Carney, I think, is right to a degree with his prognosis: after grift comes outright crime. That rhymes with a specific recent example - Tai Lopez's indictment by the SEC for securities fraud. Lopez had been a pioneering grifter in the legal-but-shitty lane of rip-off "educational" products for more than five years when he decided he'd try to resurrect Radio Shack as an online brand. That was a huge mistake, amazingly leading him to be prosecuted *by the Trump SEC,* no less. Others will follow suit in overplaying their hands.
But I would add a more optimistic note: the grift cycle eventually swings back. We can look to what happened after the previous fraud bubble of the Gilded Age, which in the big picture ran from roughly 1880 to 1930. The final fraud-stock meltdown was followed by a massive recommitment to rigor and expertise. We are years away from that transition - and America's midcentury thriving was also the product of outside disciplining forces, including World War II and the Cold War.
It would be great if we didn't have to repeat that sort of catastrophe to get back to Peak America, but there is ultimately a limit to the public's eagerness to debase itself.
I do have to note that Carney is an interesting commentator on grifts, having seemingly promoted and/or fallen victim to a few in his journalistic career. Carney wrote an entire, seemingly credulous book about the breathing guru Wim Hof, whose methods have been connected with the deaths of 19 people; and another book further exploring ideas related to Wim Hof … with input from pseudoscience hypebeast and sexual dilettante Andrew Huberman.
Believe it or not, I don't say that entirely critically - there's likely some insight to be gleaned from someone who's demonstrably vulnerable to the kind of fraud they're commenting on.
Klein/Yudkowsky: Abundunce
Eliezer Yudkowsky was interviewed on Ezra Klein's New York Times podcast, and it went over like a lead zeppelin. Yudkowsky is not just personally off-putting and full of shallow observations; he seems, strangely, to palpably not care about any of this. Which does make sense, considering he has spent his career as a useful idiot frontman, funded by tech billionaires to see just what form of gorilla dust he's going to stir up next.
Of course, the interview makes no acknowledgment of Klein and Yudkowsky's longstanding and deep connections, including through Vox, which Klein cofounded with the even more risibly halfwitted Matt Yglesias, and which continues to prominently feature an entire vertical dedicated to sponsored promotion of Effective Altruism. Kelsey Piper, Vox's longtime resident EA booster, is to journalism what Yudkowsky is to philosophy - a cosplaying fraud propped up by friendly billionaires.
And of course, more deeply, Yudkowsky and Klein share the center-right technocratic neoliberal viewpoint that attracts that kind of funding, gets you jobs, and gets you featured in the New York Times - a covertly right-wing ethos recently repackaged for "liberals" via Klein's Abundance project.
For just one glimpse of how lazy, incurious, and tendentious the Abundance project is, here's Klein's partner Derek Thompson getting disassembled by Mehdi Hasan over clear evidence of their inaccuracies and motivated thinking. Thompson and Klein's book baldly misrepresents the reality of a Biden broadband bill that is one of their key examples: while they excoriate the bill as an example of Democratic overregulation, the restrictions that hampered it were actually conditions imposed on the bill by the GOP, with the backing of internet incumbents who didn't want state-backed "competition" - even though they weren't actually providing sufficient rural service in the first place.
Intellectual and factual laziness is endemic to both Abundance and Yudkowskyite Rationalism, in part because they share common cracked foundations. But mostly, this is because they are not intellectual projects following internal logics, but ideological agendas externally supported, both rhetorically and materially, by the powerful people most likely to benefit from them. And here we have the snake eating its own tail - Klein, a studiously unwitting operative of conservative Democrats, giving a platform to Yudkowsky, whose protestations against AI have proven a huge boon to the development of AI and the military surveillance and planning that is its real aim.
They're both deeply tragic figures, and as much as anything the interview drives home how pitiable Yudkowsky is as a human being. Like Peter Thiel, it's a safe bet he hates living in his body, and you can understand why he would long to exist as an algorithm on a server on Venus. He's the kind of classic Dweeb archetype that seems to thrive in these circles (and, not coincidentally, in the far-right Groyper/Nick Fuentes universe). Yudkowsky's upper lip doesn't entirely cover his teeth when he speaks. His hands, in unnerving contrast to his overall girth, are spindly, delicate, uncalloused things that dance in the air like particularly hesitant mosquitoes. For someone who has done a lot of public speaking, Yudkowsky is here breathless, disengaged, smug, and twitchy.
As someone who lifted myself up from Dweebdom by lifting weights and having adventures, I find this little bottled man all the more contemptible.
But Yudkowsky's deeply off-putting personal presentation is not really the issue here.
The first thing that's noticeable about this interview, at least as published, is that it buries the supposed core concept of Yudkowskyite AI fear - the Doom part. Klein opens the interview by inviting Yud to talk for nearly 15 minutes about things like LLMs guiding teens to suicide, as if the present-day, real-world impacts of LLMs were what Yudkowsky's "Everyone Dies" warning was somehow about all along.
But that's not the reality - Yudkowsky has rarely if ever written anything serious about algorithmic discrimination or the threats of surveillance. Because that would have required nuanced and attentive thinking.
At least to my paranoid mind, given that Yudkowsky is in several material ways an albatross tied to Klein's neck, this sympathetic opening reeks of intentionally protecting Yudkowsky from himself by presenting him as a critic of "AI" as it exists or interacts with humans, rather than what he is: a faith-based believer in an obfuscated millenarian eschatology in which the Flying Decision-Tree Monster is coming to devour us all.
Even gifted this generous misdirect by Klein, Yudkowsky fails to understand that his real ideas are simply bonkers to most people. Klein, to his credit, offers firm counters to Yudkowsky's worst excesses, mostly by pointing out that we actually do program LLMs, despite Yud's total commitment to equating "it's a bit of a black box" with "it has a soul and intentionality." But Yud simply refuses the many exit ramps being offered.
Yudkowsky does a truly terrible job of explaining, either in substance or appeal, the core importance of the "alignment project." He seems disaffected, sighs constantly, and for the most part seems to regurgitate stories about AI mishaps he read in the news. He reads cases of GPT psychosis as somehow suggesting that the LLMs have intentionality. He spends the first 20 minutes of the interview vaguely suggesting, but not coming out and saying, that LLMs are showing signs of consciousness ("This is not like a toaster … this is something weirder and more alien than that"), and then gets baffled by one simple question from *Ezra Klein* asking for a second example. He keeps talking about "side cases" and "suggestive" instances, but the man seems genuinely incapable of converting a coherent linear thought into sound with his mouth.
At about the 16:00 mark, after warnings and checks from Klein, Yud drops the Really Big Turd: recounting the plot from Terminator, which is ultimately the intellectual base for his entire worldview.
"You do see stuff that is currently suggestive of things that have been predicted to be much bigger problems later … These current systems are not yet at the point where they will try to break out of your computer and ensconce themselves permanently on the internet and then start start hunting down humans. They are they are not quite that smart yet as far as I can tell."
He goes on long, rambling anecdotes about ice cream. He offers remarkably weak arguments for the big leap he has to make to "evil superintelligent robots." He offers a weak version of the argument for scale - that LLMs will get weirder as they get larger. That's an even more squirrelly version of the same basic logic currently leading the rest of the A.I. "industry" to publicly eat shit, which is interesting.
Klein: Your book is not called *If Anyone Builds It, There Is a 1 to 4% Chance Everybody Dies*. You believe that the misalignment becomes catastrophic. Why do you think that is so likely?
Yudkowsky: Um, that's just like the the straight line extrapolation from, it gets what it most wants and the thing that it most wants is not us living happily ever after, so we're dead.
That's it, the jig is up. His argument is a straight line extrapolation. That's the level of nuance and sophistication we're looking at here. We've let rubes and clowns masquerade as intellectuals because they serve the political purposes of billionaires, and it's all going to come home to roost.
Another really funny thing Yudkowsky just comes out and says, in the context of hypothetically protecting against rogue AI by not connecting it to the internet, is:
"In real life, what everybody does is immediately connect the AI to the internet. They train it on the internet before it's even been tested to see how powerful it is. It is already connected to the internet being trained."
This is one of those things that lets me put it plainly: Eliezer Yudkowsky, just like Sam Bankman-Fried, is a fucking moron cosplaying as a genius.
This guy has devoted his entire life to AI and doesn't seem to understand that the AI he's actively complaining about would not exist if it hadn't touched the internet, because it fundamentally is the internet. LLMs were not created, fundamentally, by transformers or some other kind of technique. They are the product of the sudden appearance of a huge, digitized corpus of human text communication.
But even all that embarrassing idiocy is not the real nut here. What matters here is that Eliezer Yudkowsky is being offered a thorough sanewashing by Ezra Klein and the New York Times, and he is fucking it up for himself.
Klein is trying his best to position Yudkowsky as a "critic of AI," almost as if he's the acceptable mainstream version of Ed Zitron. But he's wildly less interesting, insightful, and entertaining than Ed. More important, Yudkowsky can't stop himself from still acting as if it's inevitable that AIs will discover desire, occupy the internet, develop nanotechnology, and start hunting humans for sport! He's still literally doing the Terminator bit!
Regardless of this habitual crash out, this automatism at the end of thought, we have to ask - exactly why is it important that Yudkowsky get a red-carpet invitation from Ezra Klein to pivot here away from his theological commitment to the Infinite Built God of Artificial General Intelligence? Why are all these lazy nitwits so committed to saving each other?
I may have just answered my own question.
"
A new preprint research paper has shown that exposing LLMs to viral short-form content tanked their reasoning ability by 23% and their memory by 30%. How does that work? I have no idea.
"
Nor do you need to know how it works... But if you don't, then you should probably have an expert verify that it /does/ work (I mean, I hope I would, if I were taking its premise as valid and passing it on). Isn't a research paper with incomprehensible text and an attention-grabbing, clickbait-y title ITSELF a form of viral short-form content?
"
Nigeria's Economic and Financial Crimes Commission (EFCC)
"
Uh, don't forget that Nigeria is the country that held Tigran Gambaryan, rather unlawfully I think, for supposed crimes. This was someone who did very impressive and important work against crypto crooks (https://rekt.news/bring-tigran-home) before, I guess, he decided that he was doing a lot of risky work for little monetary reward. So he started to work for Binance. And then that turned out to be the riskiest of all. But anything that Nigeria does regarding crypto should be taken with that GIANT grain of salt.
"
But I would add a more optimistic note. The grift cycle eventually swings back. We can look to what happened after the previous fraud bubble of the Gilded Age, which ran from roughly 1880-1930 in the big picture. The final fraud-stock meltdown was followed by a massive recommitment to rigor and expertise. We are years away from that transition - and Americaâs midcentury thriving was also the product of outside disciplining forces including World War II and the Cold War.
"
I am not surprised there was a fraud bubble in the Gilded Age, because MY theory is that fraud thrives when there are fewer options for social mobility and a larger wealth/income divide. I will say it again: there absolutely should be no difference in the rules for the rich and poor as to what they can invest in. The current regime forces poor people to take huge risks on likely grifters. Now, if the SEC's accreditation rules really want to make it TRULY about knowledge, fine. But everyone, rich or poor, should have to take those exams. No free passes for the exact people who don't need anything more for free.
"
and it went over like a lead zeppelin.
"
Dude, I just got a great name for a band! It Went Over! Going to find some surviving members of the Yardbirds, be right back.
"
Flying Decision-Tree Monster is coming to devour us all.
"
I love it. But who is the savior against their Flying Decision-Tree Monster? Does it involve the second coming of Michael Milken?