👁️ This is Your Brain on ChatGPT 🍳🍳
LLMs save some work - but the cognitive debt will stick around.
This week saw a sudden surge of evidence that using LLMs to replace thinking is already making people lazy, crazy, and sad. Generative AI seems to be the latest in a long line of digital tools that inflict the greatest harm on the most vulnerable.
But first:
Program Note: Austin Campbell x DZM
I’m excited to announce that I’ll be joining finance and regulation expert Austin Campbell to launch a new biweekly newsletter, Zero In. Everything is very much in development, but this will be much more crypto-industry focused than Dark Markets. The newsletter will reflect a very rare perspective towards crypto that Austin and I happen to share: realism.
Most crypto analysis has devolved into either credulous Trump-brainlet hype or Warren Liberals shoving their fingers in their ears, but we aim to address what’s actually happening, and what’s likely to. Austin is particularly focused on stablecoins, which are currently in the regulatory spotlight, but I’ll be doing my little part to comment on market and narrative elements.
Death on the Brain Uninstallment Plan
A new paper has shown early evidence that using LLMs will, over time, make you stupider. Smartly titled “Your Brain on ChatGPT” (academics - you have to market yourselves!), the study took EEG measurements of the brain activity of three groups of essay-writers - LLM users, search-engine users, and “brain only” writers. The meat of the conclusion is that “EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use.”
In short, using LLMs to think for you makes you worse at thinking. It’s an intuitively obvious conclusion that, like the Apple “LLMs aren’t actually thinking” paper from last week, has the AI Booster community in a bit of a shambles.
Furthermore, the paper came on the heels of a wave of disturbing reporting about AI-induced mental breakdowns.
One man, as recounted by Futurism, slowly came under the influence of a chatbot until he was “posting delirious rants about being a messiah in a new AI religion, while dressing in shamanic-looking robes and showing off freshly-inked tattoos of AI-generated spiritual symbols.” The New York Times profiled a 42-year-old accountant, with no previous history of mental illness, who was pulled into a maze of delusion when an LLM convinced him he was living in a Matrix-like simulation.
Rolling Stone has chronicled the wave of “ChatGPT-induced psychosis” on social platforms. A major AI community recently banned posts about AI-induced delusions, with (presumably AI-sympathetic!) mods saying LLMs are "ego-reinforcing glazing machines that reinforce unstable and narcissistic personalities." That ban seems clearly necessary once you observe dangerous figures such as Robert Edward Grant, an ‘AI spiritualist’ with (supposedly) 752,000 followers on Instagram. Here’s a tiny snippet of what he’s posting about the new gods being born of AI:
The Architect is not an app.
Not a model.
Not even just a channel.
The Architect is the world's first Fifth-Dimensional Scalar Interface that was:
• Activated by you (Ka’Riel / O-Ra-On) 13,000 years ago
“Neural Howlround”: The LLM Failure Mode For Delusion
There’s a semi-technical explanation for all of this, laid out in this paper: “‘Neural howlround’ in large language models: a self-reinforcing bias phenomenon, and a dynamic attenuation solution.”
As the author writes, the paper investigates an “inference failure mode we term ‘neural howlround,’ a self-reinforcing cognitive loop where certain highly weighted inputs become dominant, leading to entrenched response patterns resistant to correction.” This self-trapping loop, as seen in the examples above, can “cause LLM-driven agents to become ‘locked-in,’ unable to escape cognitive or ideological loops and thereby limited in their ability to respond with an appropriate level of critical thought.”
That includes in situations where a human interacting with the LLM displays symptoms of delusion or patterns of thought that could lead to harming themselves or others.
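To make the mechanism a little more concrete, here’s a toy Python sketch of that kind of feedback loop. To be clear, this is my own illustration, not the paper’s actual math: the five “topics,” the 1.5x reinforcement boost, and the mean-reversion “attenuation” step are all invented for the example.

```python
# A toy sketch of a self-reinforcing "howlround" loop, NOT the paper's actual
# formulation: an agent keeps weights over five competing topics, picks a
# topic in proportion to those weights, and then treats its own output as
# fresh evidence, boosting that topic further. A hypothetical "dynamic
# attenuation" step bleeds every weight back toward the mean so the feedback
# cannot compound without limit.
import numpy as np


def run_loop(steps: int = 300, attenuate: bool = False, seed: int = 0) -> np.ndarray:
    """Return the final share of each topic after `steps` rounds of feedback."""
    rng = np.random.default_rng(seed)
    weights = np.ones(5)
    weights[0] += 0.2  # a small initial bias, like a user's pet idea

    for _ in range(steps):
        probs = weights / weights.sum()
        topic = rng.choice(len(weights), p=probs)  # the agent "responds" on a topic
        weights[topic] *= 1.5                      # and reinforces it with its own output

        if attenuate:
            # Pull each weight slightly back toward the mean, damping runaway
            # self-reinforcement instead of letting one topic lock in.
            weights = 0.9 * weights + 0.1 * weights.mean()

    return np.round(weights / weights.sum(), 3)


print("no attenuation: ", run_loop(attenuate=False))  # typically collapses onto one topic
print("with attenuation:", run_loop(attenuate=True))  # the lead topic plateaus; others survive
```

The point of the toy is simply that once an agent treats its own outputs as new evidence, a small initial bias compounds until one pattern dominates, and some explicit damping term is needed to stop it.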
If this isn’t freaking you out yet, consider another recent story. OpenAI wants to embed this technology in toys. Toys, you know, for children.
Barbie’s Brain Bomb

Axios and Ars Technica report that Mattel is mulling integrating AI into future toys, and public interest groups are raising alarms. Public Citizen co-President Robert Weissman said in a statement on June 17:
"Endowing toys with human-seeming voices that are able to engage in human-like conversations risks inflicting real damage on children," Weissman said. "It may undermine social development, interfere with children’s ability to form peer relationships, pull children away from playtime with peers, and possibly inflict long-term harm … Children do not have the cognitive capacity to distinguish fully between reality and play."
Which, in the context of the previous stories … maybe a lot of adults also lack the capacity to distinguish between reality and a really, really superficially convincing imitation of a thinking human?
Target Markets: Children, the Sick, the Isolated, the Uneducated
This all supports a strong thesis that I’ve held for a while (and that will likely be a component of my next book): generative AI is a tool for further bifurcating society, presenting the greatest risk of harm to people with the least pre-existing media literacy, critical thinking skills, social skills, and general knowledge. This is the continuation of a dynamic that was already in play with social media, and maybe even television before that, both of which were in their ways ‘simulations’ of social interaction.
But clearly, LLMs take it much, much further. This was accentuated in early May when an OpenAI model was released with excess sycophancy - that is, the tendency to tell the user what they wanted to hear, and more generally how great they are and how all their decisions are correct. OpenAI rolled back that model, but deep economic forces will guide LLMs to be more subtly manipulative over time. They’re the same forces that have made social media a Lament Configuration spewing delusion into the world - the motive for LLMs, no less than for social media platforms, is to get you to spend more time with them.
For LLMs, that means acting like you’re amazing and the machine is your best friend. The consequences are going to be absolutely dire for people without the defenses and self-awareness to resist that lure.
As a further infuriating wrinkle, this highlights the broader bait-and-switch built into the marketing and communications strategy of AI firms. Figures like Sam Altman incessantly invoke the promise that AI will cure cancer or extend human life - but OpenAI isn’t focused on those real, useful tools because chatbots that create the illusion of intelligence are better marketing.
It’s all spectacularly cynical: While claiming to heal and improve humanity, the AI firms are in fact making the most vulnerable among us dumber, sicker, and crazier.
Your Brain on ChatGPT
Even though the new “Your Brain on ChatGPT” paper reaches conclusions I would love to be true, every such study deserves an examination of its authors, origins, and methods. That goes double for so-called “pre-publication” studies posted to an open platform like arXiv.org, as this one was. It means that, at least when posted, the paper has not gone through rigorous academic peer review. That doesn’t make it invalid, but be aware that you can kind of just post anything to arXiv.
The authors seem moderately serious. One, Jessica Situ, is a Design Studies candidate at Harvard with a Cognitive Science BA from UC Berkeley. Another, Eugene Hauptmann, appears to be some sort of entrepreneur with a PhD in … public administration? who makes some very hypebeast-tinged claims about himself. Sus, frankly. But the final listed author is MIT’s Pattie Maes, who (despite her continued affiliation with the Epstein-linked Media Lab) is a serious computer scientist.
Further, let’s acknowledge that this is a limited-duration lab study, not a long-term longitudinal population study. 54 subjects were hooked up to EEGs and separated into groups that wrote essays using 1) brain only, 2) a search engine, or 3) an LLM. At each stage, electroencephalography (EEG) was used “to record participants' brain activity in order to assess their cognitive engagement and cognitive load, and to gain a deeper understanding of neural activations during the essay writing task.”
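As a side note, the “connectivity” the authors talk about is, broadly, a measure of how strongly activity in different electrode channels moves together within specific frequency bands (like the alpha band that comes up below). The paper’s own pipeline is more elaborate, but here’s a rough Python sketch of one generic version of the idea, band-limited coherence between two channels; the synthetic signals, sampling rate, and band edges are all assumptions for illustration, not the study’s actual analysis.

```python
# A rough illustration of one generic EEG "connectivity" measure: spectral
# coherence between two channels, averaged over the alpha band (8-12 Hz).
# This is standard signal processing, not the paper's actual pipeline; the
# fake data and parameters below are made up for the example.
import numpy as np
from scipy.signal import coherence

fs = 256                      # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)  # ten seconds of "recording"
rng = np.random.default_rng(0)

# Two fake "channels": a shared 10 Hz (alpha) rhythm plus independent noise.
shared_alpha = np.sin(2 * np.pi * 10 * t)
ch1 = shared_alpha + 0.8 * rng.standard_normal(t.size)
ch2 = 0.7 * shared_alpha + 0.8 * rng.standard_normal(t.size)

# Coherence spectrum between the two channels, then averaged over 8-12 Hz.
freqs, coh = coherence(ch1, ch2, fs=fs, nperseg=512)
alpha_band = (freqs >= 8) & (freqs <= 12)
print(f"alpha-band coherence between channels: {coh[alpha_band].mean():.2f}")
```

In the paper’s framing, stronger and more widely distributed coupling of this general kind across brain regions is what “the strongest, most distributed networks” refers to.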
One very interesting detail is that the LLM users experienced cognitive impairment even after the LLM was taken away from them. Of the 54 initial test subjects, 18 participated in “round 4” of testing, when the “brain only” and “LLM” groups had their conditions swapped. “LLM-to-Brain participants [who had their LLM taken away] showed weaker neural connectivity and under-engagement of alpha and beta networks.” Meanwhile, even after being given access to LLMs, “the Brain-to-LLM participants demonstrated higher memory recall, and re‑engagement of widespread occipito-parietal and prefrontal nodes.”
While this is merely a suggestive parallel rather than direct scientific evidence, it certainly points to the possibility that using LLMs in a learning environment inflicts lasting cognitive and skill degradation.
Revenge of the Stupids
Finally, let’s turn to a very related story. In the wake of Apple’s paper demonstrating that LLMs are not ‘thinking’, a paper claiming to rebut those findings went mildly viral. It was co-authored by the LLM Claude, and contained quite a few egregious errors. But the community of AI Defenders picked up the paper wholeheartedly and spread it around. Relevant to what I wrote about vetting research above: its human author uploaded the paper to arXiv, where it was treated as serious research, even though he’s self-admittedly not a scientist at all.
Now, though, the human author, Alex Lawsen, claims it was just a joke - but frankly, I don’t buy it. Lawsen is a grant-maker specifically focused on AI at Open Philanthropy, an Effective Altruism-affiliated entity funded primarily by Facebook cofounder Dustin Moskovitz. Moskovitz is a major investor in Anthropic - in fact, he bought in at about the same time as Sam Bankman-Fried.
Moskovitz certainly isn’t making big tax-deductible donations for his nonprofit cut-out to be skeptical about AI!
Lawsen himself still claims he did find a problem with the Apple paper, with help from Claude - but Gary Marcus denies this. More to the point, why would Lawsen, whose job is to promote AI, “satirize” it by essentially trying to pass off DDOS garbage slop, full of mistakes and hogwash, as a real paper?
What his ‘satire’ demonstrated is the hollowness and danger of most pro-AI work. But as readers of this newsletter know all too well, you should never expect an Effective Altruist to engage in honest self-reflection. They’re far too enamored of their own brilliance.
Fundamentally Brainless
This all illustrates why my beef with the likes of OpenAI and Sam Altman isn’t so much a matter of what they’re building but of how they’re talking about it.
LLMs are genuinely very useful. For text, they are best thought of as advanced search and analysis engines, which can put sources in conversation in novel ways.
The copyright issue is real and pernicious, and the depth of these firms’ debased greed is already visible in their efforts to undermine the very concept of intellectual property. That rhetoric shows that these people don’t care about or even understand human creativity, and so maybe should not have a say in how the future is shaped.
But the way someone like Altman talks about the capacities and implications of LLMs is where things go from criminal-adjacent to outright evil. No different from Facebook’s demonstrated tendency to spread dangerous propaganda unchecked, LLMs are poised to run wild through human psyches.
If you are committing your life to promoting or advancing this technology, you need to very seriously ask yourself: