👁️ The Biggest Crypto Fraud Since FTX?
Mantra crashed from $6 billion: Did insiders sell? Also: Meta's latest AI model met nearly universal scorn; T. Fong x Elon Musk; how Rationalism set the table for DOGE; and more.
Welcome to your weekly Dark Markets roundup of news and analysis in the world of technology and investing fraud. It’s been quite a week: we’re kicking off with the crash of Mantra, what looks like the Biggest Crypto Fraud since FTX. Scroll down for AI fails, Rationalist delusions, and my SBF trial colleague Tiffany Fong back in the news.
Mantra Down $6B
Mantra, a “real world asset tokenization” blockchain project, suddenly collapsed over the weekend, with its OM token losing 90% of its value - a wipeout of about $6 billion in notional value.
The team claimed on X that “Today’s activity was triggered by reckless liquidations, not anything to do with the project.” ‘Reckless liquidations’ is a very strange construction - and allegations quickly began swirling that the team themselves had indeed been behind the dump, largely through insider OTC trades. This appears to have been confirmed by Coffeezilla, who is pretty amazing at just getting people to admit to crimes.
One notable detail is that Mantra founder John Patrick Mullin is a former ICO promoter in Asia - a pretty fundamentally shady role, which he mentions in this just-released interview on The Rollup. He also says he was “legitimately going broke before COVID happened and [we] launched Mantra.” Mantra is very new, launching from a raise of just $11 million in early 2024. (I’m just learning about The Rollup, but I’m intrigued. This interview about tariffs featuring Arthur Hayes seems particularly notable.)
I’m strongly considering a deeper dive on this for premium subscribers in the next few days. It’s a hell of a rabbit hole.
Lame-A
Meta released its newest LLM, Llama 4, a little over a week ago, and even the relatively pro-AI people who pay close attention to things like this were dramatically disappointed. I wish I knew who to give credit for this meme, which sums up the nearly universal sentiment I’ve seen from both professionals and average posters:
Gary Marcus (admittedly a borderline LLM skeptic) calls it the latest failed attempt to create GPT-5, the mythical human-like intelligence that deluded or dishonest figures in the AI industry have been promising is just around the corner.
Worse than that, though, Marcus and others are alleging that Meta is gaming the metrics, with an anonymous but credible Reddit poster claiming that Meta leadership pressured their team into “blending test sets from various benchmarks during the post-training process.”
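To make the accusation concrete, here’s a toy Python sketch - invented data, no relation to Meta’s actual pipeline - of why “blending test sets” into training inflates a benchmark score:

```python
# Toy illustration (hypothetical data, not Meta's pipeline): the "model"
# here is just a lookup table that memorizes whatever it's trained on.

benchmark_test_set = [
    ("What is 2 + 2?", "4"),
    ("What is the capital of France?", "Paris"),
]

def post_train(corpus, contaminate=False):
    model = dict(corpus)
    if contaminate:
        model.update(benchmark_test_set)  # the alleged trick
    return model

def benchmark_score(model):
    hits = sum(model.get(q) == a for q, a in benchmark_test_set)
    return hits / len(benchmark_test_set)

clean = post_train([("Some unrelated question?", "Some answer")])
dirty = post_train([("Some unrelated question?", "Some answer")], contaminate=True)
print(benchmark_score(clean))  # 0.0 - it never saw these answers
print(benchmark_score(dirty))  # 1.0 - it memorized the test set
```

A model that has memorized the answer key isn’t smarter; it’s just cheating on an open-book exam.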
The most important takeaway here is that you can’t build God by scaling compute or training data. (Of course, you can’t build God at all. But you definitely can’t do it through scaling.)
In a further sign that things are going badly, the release follows the departure of Joelle Pineau, Meta’s Head of AI research. One report claimed that Meta actually delayed the release of Llama 4 when China-based DeepSeek was unveiled and apparently outperformed what Llama 4 was capable of at the time. The current Llama 4 variants, Scout and Maverick, are rumored to have been built with some tactics adopted from DeepSeek - but that seemingly wasn’t enough to save them.
(At the risk of stirring a pot that doesn’t belong to me, this is also a very personal vindication for Marcus, who has a kind of ongoing vendetta against Yann LeCun, now Meta’s Chief AI scientist.)
Tiffany Fong Denied Elon Musk Her Seed
(I would like to think Tiffany would appreciate the joke.)
It seems what was for a time a light meme among the crypto Twitterati was actually 100% true: Elon Musk asked Tiffany Fong to have his kid. I didn’t believe it either, but it was reported in the Wall Street Journal. The Journal also reveals that when she (to her infinite credit) turned him down, he unfollowed her and tanked her earnings on X.
Fong and I actually spent some time talking while we were both covering the Sam Bankman-Fried criminal trial, and we recorded an interview. I know she camps it up pretty hard online, and I loathe her current politics, but I do like her as a person - and above all, I respect the work she did getting scoops on Sam, by any means necessary.
It’s definitely been a hell of a ride for her since. It’s only been a bit over a year since our interview about SBF and “citizen journalism.” That was clearly a little delusional on my part - I’ll admit I’m disappointed she got so sucked into the world of social media. She has some real chops.
Tiffany Fong is a Better Journalist than Michael Lewis
In this episode, David interviews Tiffany Fong – an investor activist and independent journalist whose extended post-arrest interviews with Sam Bankman-Fried played a major role in unpacking what happened in the FTX collapse.
The Flop of QuAIke
Llama 4 wasn’t the only recent AI project to incinerate on collision with actual users - Microsoft’s “AI generated Quake 2” managed to be almost as much of a letdown.
Like many AI products, it’s vaguely impressive at first glance, but the details are nearly deal-breaking. The frame rate is horrendous and juddery, straight lines are uncannily curved, and enemies are probabilistic clouds until they’re directly in front of you.
The biggest limitation - and the most uncanny part of my attempt to actually play the thing - is that the AI can only remember a few seconds of context. There is no object permanence, because there are no 3D models - just images that look like 3D models. According to Eurogamer, the model only remembers 9 frames, which in practice means that objects like barrels can disappear if you look away from them, and every once in a while you’ll simply be teleported to a completely different environment after looking at a wall too long. I will say the demo manages to remember the general layout of its “level,” which I’d speculate is an artifact of distilled training data rather than memory per se.
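For a sense of why such a tiny context window breaks the game, here’s a minimal Python sketch - my own reconstruction, not Microsoft’s actual architecture - of a frame generator that can only condition on its last nine frames:

```python
from collections import deque

CONTEXT_FRAMES = 9  # the figure Eurogamer reported for the Quake 2 demo

def predict(history, player_input):
    # Stand-in for the neural net; only the context plumbing matters here.
    return f"frame(input={player_input}, conditioned on {len(history)} frames)"

class FrameModel:
    def __init__(self, context_frames=CONTEXT_FRAMES):
        self.context = deque(maxlen=context_frames)  # old frames fall off

    def next_frame(self, player_input):
        frame = predict(list(self.context), player_input)
        self.context.append(frame)
        return frame

model = FrameModel()
for step in range(12):
    model.next_frame("look_at_wall")
# After 12 steps, the first three frames are gone forever. If the barrel
# behind you only existed in those frames, the model has to hallucinate
# it back - or not.
```

Anything that scrolls out of the window never existed, which is exactly the vanishing-barrel, sudden-teleport behavior described above.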
The absurdity, and the implied desperation, are obvious. This is a far worse version of what proper 3D modelling could do on consumer hardware almost three decades ago. It is hard to imagine, even conceptually, the path to an LLM-style AI capable of doing this in a way that works. It is even hard to imagine why you would bother trying, unless you had absolutely nothing better to invest capital in.
Recommended Reads: Rationalism and the DOGE delusion
Hubris is clearly Elon Musk’s great and fatal flaw, and it’s reflected in his stumbling government interventions. As Timothy Faust commented about this Rolling Stone deep dive into DOGE’s staff of children and dropouts:
“DOGE is made of people who believe they can derive all things from first principles because they're Very Special Boys who are good at using the computer.”
Derivation from first principles, specifically in a computer-like logical mode, is a pillar of the similarly irresponsible and lazy work of one man: Eliezer Yudkowsky, founder of the Bay Area Rationalist movement. Rationalism is tightly interwoven with Effective Altruism, which Elon Musk has professed affinity for. The DNA of Bay Area Rationalism can be seen not just in DOGE, but also in Sam Bankman-Fried’s hilarious insistence that he knew the law better than his lawyers - and the even more tragic intellectual hubris of the Zizians.
This week I ran across some important reads on Rationalism’s hubris.
The Sequences and Rationalist Epistemics, by Ozy Brennan at Thing of Things
This is a brief piece about the anti-scientific bias of Bay Area Rationalism’s understanding of knowledge. Brennan connects Rationalism directly to now widely-debunked research in behavioral psychology - things like “priming” and “power posing,” which turned out to mostly be the product of p-hacking, if not outright fake data. Brennan argues Rationalism fell victim to this “one weird trick” view of the human brain, while ignoring that the subtler questions of biased thinking - such as “am I giving special treatment to people I like?” - have been the focus of philosophers “for literal millennia.”
Effective Altruism: Quantitative Mindset, by (also) Ozy Brennan
“The well-kept secret about cost-effectiveness analyses is that they’re all fake.”
While I’ve sharpened my daggers for EA and Rationalism as fundamentally thought-destroying ideologies, thoughtful and self-reflective EAs do exist. Brennan is a self-identified Effective Altruist, and much of this post is a sensible defense of the basic premise that you should pay attention to the math if you want your donations to good causes to be effective.
But Brennan also admits that the math is never as precise as it aims to be - or as it necessarily represents itself as being. “We don’t know that Deworm the World is precisely 34.4 times better than cash transfers in Kenya. It could be 30 times, or 36 times, or even 34.5 times.” Brennan demonstrates epistemic humility - but this self-reflection is not widespread in EA, and is often aggressively rejected by the more Rationalist elements of the movement, such as Toby Ord’s weird attempt to imagine the universe as a closed and predictable system.
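Brennan’s point about false precision is easy to demonstrate with a toy Monte Carlo - every number below is invented, not GiveWell’s or Brennan’s - showing how a headline multiplier like “34.4 times” dissolves into a wide interval once you admit uncertainty in the inputs:

```python
import random

# A headline multiplier is a ratio of two uncertain estimates, so the
# point estimate hides a wide interval. All ranges here are made up.

random.seed(0)

def sample_multiplier():
    benefit_per_dollar = random.uniform(20, 50)  # charity, rough guess
    cash_per_dollar = random.uniform(0.8, 1.5)   # cash transfers, rough guess
    return benefit_per_dollar / cash_per_dollar

samples = sorted(sample_multiplier() for _ in range(10_000))
print(f"median:       {samples[len(samples) // 2]:.1f}x")
print(f"80% interval: {samples[1_000]:.1f}x to {samples[9_000]:.1f}x")
```

The median looks authoritative; the interval tells you how much of the decimal point is theater.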
Read More: The Computability of the Universe is Political
LessWrong Against Scientific Rationality, by Topher Halquist (2015)
This even more helpful, in-depth breakdown of Yudkowsky’s blinkered attitude toward science was suggested in the comments to Brennan’s post above. In one case, Yudkowsky advances the “many worlds” interpretation of quantum mechanics, which many professional theoretical physicists accept. But Yudkowsky apparently makes some simple mistakes along the way, while grandiosely wrapping it all up with an injunction “to break your allegiance to Science.” As Halquist put it nearly a full decade ago, Yudkowsky “seems more interested in becoming an Ultimate Prophet than encouraging his followers to study science.”
Even more specifically relevant are Yudkowsky’s claims that science is ‘too slow,’ expressed in arch, self-important declarations like: “Work expands to fill the time allotted, as the saying goes. But people can think important thoughts in far less than thirty years, if they expect speed of themselves.” This ties directly into a theme that I’m discovering is at the heart of my SBF book: that Rationalism’s false confidence about the future turns the present into a constant emergency.
Read More: What’s so Bad About Rationalism?
KiloEx Let Anyone Change the Price
One last little hack.
Many cryptocurrency applications use what are known as “oracles” to gather and integrate asset price information. They’re structured in numerous ways, but they’re generally more complicated than in traditional finance, whose centralized structure makes it easier to plug in data.
This has made oracles a frequent target of manipulation by hackers trying to extract money from decentralized services. If you can trick a service into using the wrong price for an asset, you can get away with all sorts of financial shenanigans. But a small protocol called KiloEx made this particularly easy, letting anyone change the oracle price. This enabled a $6 million theft.
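For the technically curious, here’s a sketch of the failure class in Python - a reconstruction of the general pattern, not KiloEx’s actual contract code:

```python
# A price oracle whose update path never checks the caller, feeding an
# exchange that settles positions at whatever the oracle reports.

class VulnerableOracle:
    def __init__(self, price: float):
        self.price = price

    def set_price(self, caller: str, new_price: float) -> None:
        # BUG: no check that `caller` is an authorized keeper.
        self.price = new_price

class PerpExchange:
    def __init__(self, oracle: VulnerableOracle):
        self.oracle = oracle

    def open_long(self, size: float) -> float:
        return self.oracle.price * size  # entry cost

    def close_long(self, entry_cost: float, size: float) -> float:
        return self.oracle.price * size - entry_cost  # profit and loss

oracle = VulnerableOracle(price=100.0)
exchange = PerpExchange(oracle)

oracle.set_price("attacker", 1.0)              # crash the reported price
entry = exchange.open_long(size=1_000)         # open a huge long for pennies
oracle.set_price("attacker", 200.0)            # pump the reported price
print(exchange.close_long(entry, size=1_000))  # 199,000.0 in "profit"
```

The entire class of bugs comes down to one missing check on who’s allowed to call the price setter.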
"
I’m strongly considering a deeper dive on this for premium subscribers in the next few days. It’s a hell of a rabbit hole.
"
I'd be interested. I saw the Mantra crash and there was really no good news about it. Because, well, there's no good crypto news any more. You got Protos and that's about it nowadays. I don't want to watch Youtube or have to read the mangled auto-generated English transcription. So, yes, consider this interest. I'm sorry I'm so far behind in comments. I really have become a bit of a completist and should just start commenting on the most recent ones. I often think about those people who are super wikipedia editors. Like, that's cool, but damn, all that time and for what payoff?
--
"
I know she camps it up pretty hard online, and I loathe her current politics, but I do like her as a person - and above all, I respect the work she did getting scoops on Sam, by any means necessary.
"
I've seen a few tweets--indirectly of course, hang out a Nazi bar and people will think you're a Nazi--and they do not reflect well. I thought it a pretty cool scoop she got on SBF at the time too, but dang... They do not reflect well. They sound kinda like the stuff you might overhear at a Nazi bar.