👁️ OpenAI is Just WeWork With Extra Steps
It's the same picture
“The We Company's guiding mission will be to elevate the world's consciousness.”
- Adam Neumann, CEO
“What I hope is that we successively develop more and more powerful systems that …become an amplifier of human will.”
- Sam Altman, CEO
Welcome to your weekly Dark Markets free edition, where we talk about techno-frauds, utopian techno-fascists, and dimwit crypto grifters.
I’m David Z. Morris, longtime finance reporter and PhD historian of technology, and author of Stealing the Future: Sam Bankman-Fried, Elite Fraud, and the Cult of Techno-Utopia.
It’s been a bad week, maybe a bad month, for Sam Altman, OpenAI, and the so-called “AI industry” in general. Scroll down for why OpenAI’s IPO might crash and burn like WeWork’s.
But first, an update and one bit of news.
The Nerd Reich
I was very honored to be invited to discuss the book, and my recent reporting on Jeffrey Epstein, with Gil Duran on The Nerd Reich. Please have a listen.
Milei Called $Libra Backer Seven Times on Launch Night
New evidence shows Argentine President Javier Milei called $Libra memecoin promoter Mauricio Novelli seven times on the night of the fraudulent token’s launch. This pretty well disproves Milei’s previous claims of some vague sort of distance from the project, which he endorsed and which wound up costing speculators about $250 million.
The revelations also show a longer corrupt relationship between the two. Records from 2023 have Novelli telling an assistant to budget “the usual 2,000 for Milei,” calling it a monthly salary, while in a separate April 2024 message he referenced “the 4,000 we need to give to Karina” - Milei’s sister.
Novelli, remember, worked alongside Hayden Davis, who launched and then rug-pulled Milei’s token, walking away with close to $100 million. Davis is the grandson of Mormon mass murderer Ervil LeBaron, and adjacent to a multigenerational scheme to exert American influence in South and Central America.
We’ll be continuing our dive into the LeBarons - and their link to Keith Raniere’s NXIVM - soon. Catch up below.

The Mormon Manson and the $Libra Fraud, Pt. 2: Hayden Davis' Family Murder Cult Was All About Money
OpenAI May Never IPO
One of the memos, about Altman, begins with a list headed “Sam exhibits a consistent pattern of . . .”
The first item is “Lying.”
I was at Fortune during the WeWork saga of late 2019, and it’s easily one of the highlights of my mainstream media career. Here’s me blurbing just one of the hilarious ways the offering was a painfully obvious con-job - the tax advantages it gave to insiders.
There’s so much to remember about WeWork, and it has been an incredibly long six years since its vaunted IPO turned into a suicide vest for backers. But with OpenAI aiming to IPO later this year, it’s a critical moment to gaze directly at the very scary parallels.
Fundamentally Stupid Valuation Propped up By Venture. Above all other causes, WeWork collapsed when a years-long narrative sold to venture investors collided with real revenue numbers that had to be disclosed in an S-1.
Now we have this absolutely staggering report from The Information that Altman is boxing out his CFO because she’s telling him things he doesn’t want to hear about revenue.
According to The Information: “She told some colleagues earlier this year that she didn’t believe [OpenAI] would be ready to go public in 2026 … said she wasn’t sure yet whether OpenAI would need to pour so much money into obtaining AI servers in the coming years, or whether its revenue growth, which has been slowing, would support the commitments.”
That alone might be enough to convince me this IPO is already doomed.
Core Business Doesn’t Do What it Says On the Tin. WeWork rented real estate, but tried to transubstantiate that into something about technology and, even more vaguely, “consciousness.”
OpenAI builds probabilistic chatbots and meme-generators, but Altman (who is not an engineer or computer scientist!) promises to create superintelligence within just a few years, cure cancer, and solve climate change. The latest of a thousand blows to Altman’s delusion is this paper demonstrating that LLMs can’t do basic math reliably.
What’s shocking is that the mainstream media are actually starting to talk about this openly. A Bloomberg editor is publicly speculating that LLMs were a “false start.” Not great!
Deception at the Top. Adam Neumann was a shameless self-enricher as WeWork’s founder, with famous maneuvers like personally trademarking “We” and leasing buildings he owned back to his own company. Karen Hao’s “Empire of AI” and Ronan Farrow’s recent New Yorker piece make it truly unambiguous that, while quieter in his personal presentation, Sam Altman is cut from the same cloth.
And the meme that Sam Altman is a habitual liar is very much spreading. I don’t think the IPO can outrun it.
(As an aside: Andreessen had Adam Neumann on the a16z podcast just last year, talking, of all things, about “how to build iconic companies.” WeWork was certainly an iconic failure, I guess! A16z has also put big money into Neumann’s post-WeWork projects. Why? My thesis is that they know they have to keep polishing the image of these huge failures in order to maintain credibility when they promote future failures.)
Built around a Weird, Insular Community. Adam Neumann’s nepotism at WeWork was legendary, particularly his decision to let his wife launch a comically inept schooling project called WeGrow. OpenAI sits at a similarly insular nexus of fringe ideological movements like Effective Altruism and Yudkowskyite Rationalism - movements that frequently veer all the way into cult behavior. Nothing coming from inside these AI firms should be trusted, any more than Neumann’s declarations about WeWork should have been trusted.
A Narrative to Rationalize Huge Capital Raises. WeWork pitched itself as a tech company that needed big money to lease real estate. OpenAI is pitching itself as a tech company that needs big money to build data centers.
Scaling is the idea that if you throw more data and compute at a probabilistic LLM architecture, it will somehow become a God-like intelligence. There has never been any real reason to believe this, and the underlying computer science now clearly rebuts it.
But that’s not what matters about the scaling narrative. Because it involves infrastructure and not just software, scaling has always been a useful justification for huge capital raising.
And in a couple of years when they admit this and pivot away from scaling, OpenAI still gets to keep the money.
You might not guess (because why would you?), but even at a relatively forthright and outspoken publication like back-of-the-book Fortune, reporters had far harsher assessments of WeWork than you’d actually read in our pages.
I remember the chatter around the release of the “strange and alarming” S-1 and the cancellation of the IPO was basically gleeful, because we’d all been proven right about our assessment of this absolute dogshit and its carnival-barker figurehead.
Here’s a haunting quote from CNBC’s coverage of the filing at the time: “You can say I’m growing faster, but you can’t say that if for every dollar you’re getting, you’re losing a dollar,” said Renaissance Capital principal Kathleen Smith.
Well, who does that remind you of?

Will probably comment on this soon, but if I can go off topic... I just saw
"
Carreyrou narrowed a pool of 620 early cryptographic mailing list users down to a single suspect using a range of writing tics, including hyphenation errors and spelling variations, he wrote in the report.
"
https://www.theblock.co/post/396663/nyt-investigation-suggests-adam-back-may-be-satoshi-nakamoto
The writing tics are exactly what I was thinking when I wrote this comment:
"
The big question is, do either of the users use asterisk-space-word-space-asterisk for emphasis solely, or do both, and to what degrees.
"
https://davidzmorris.substack.com/p/why-len-sassaman-was-not-satoshi/comment/71989136
I mean, does no one remember how they caught the Unabomber? We might not have Satoshi's relative to rat on him (after all, Satoshi isn't blowing up people), but we've got pattern-matching technology that is, uh, kind of everywhere now.
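To make the idea concrete: the kind of stylometric fingerprinting described above can be sketched in a few lines. This is a minimal illustration, not anyone's actual method - the sample texts and the single "asterisk-space-word-space-asterisk" feature are made up for demonstration; real attribution (Carreyrou's or the FBI's) uses many features at once.

```python
import re

# Invented writing samples for illustration only.
samples = {
    "author_a": "I think this is * really * important, * very * much so.",
    "author_b": "This is *really* important, and I mean really.",
}

# Matches asterisk-space-word-space-asterisk emphasis, e.g. "* word *"
SPACED_EMPHASIS = re.compile(r"\*\s+\w+\s+\*")

def emphasis_rate(text: str) -> float:
    """Spaced-asterisk emphasis uses per 100 words of text."""
    words = len(text.split())
    hits = len(SPACED_EMPHASIS.findall(text))
    return 100.0 * hits / words if words else 0.0

for author, text in samples.items():
    print(author, round(emphasis_rate(text), 2))
```

Run over a large corpus, even one habit like this can separate writers; stack up dozens of such rates (hyphenation, spelling variants, punctuation spacing) and you get the kind of fingerprint that narrows 620 candidates down to one.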
"
Above all other causes, WeWork collapsed when a years-long narrative sold to venture investors collided with real revenue numbers that had to be disclosed in an S-1.
"
I'm beginning to think AI projects are like crypto projects--and not in a good way. Not the on-chain stuff. Not the actual crypto. That's never the problem. It's the stuff that they do not have on chain like off-chain computation and servers, or the stuff that runs the chain like a handful of dudes, or the humans that supposedly may or may not do stuff based on what happens on chain, or the code audits that aren't up-to-date with the latest push to production, or the "decentralized" app that only needs 2 out of 3 signatures to do something and oops 2 of them got social engineered by North Korea, or APRs out of their butts because they are using the next investor to pay the first investor, or on and on and on.
And all this is happening while they promise crazy things. We'll solve centralized infrastructure... With crypto! We'll solve science reproducibility crisis.... With crypto! We'll solve social media... With crypto! Now, it's "We'll solve..." With AI!
But crypto is just a decentralized ledger technology (usually). It's accounting with less trust required. But that's it. It's not even a database. The problem is crypto can't do meat-space. It can't force people to do anything. It's supposed to be all about being trustless, and yet you have to trust humans to do whatever the ledger is asking/suggesting them to do.
Now look at AI, it's just a chatbot. It summarizes stuff pretty well, and if you steal a bunch of stuff from the internet and books, you can sometimes get it to spit it back out to you. But it cannot be trusted. Because that summarizing is not from understanding, but from probability, and sometimes, per probability, it'll come up with complete nonsense. So, again, the bottleneck is humans. [ In this case you have to trust humans to find and fix anything the LLM hallucinated. Edit 4/8/26: It's supposed to be all about automation, and yet there's always a human step that prevents the full automation, which means it's no longer about automation, it's just about sometimes-marginal efficiency gains.]
Who's going to make those data centers for LLMs? Humans. Has /anything/ suggested that more data centers will get new AI technology to overcome the inherent problems of AI? No.
Who's going to do supposedly the stuff the ledger says to do? Humans. Has /anything/ suggested that new crypto technology will somehow force humans to actually do what they are told? No.