Why Artificial Intelligence Can Never Exist
On Berkeley philosopher Alva Noe's "The Entanglement"
I send this with apologies. At 3,700 words, it is probably more than you’ll want to tackle in one day - but I didn’t have the heart to split it across two emails. Lord forgive me. - David
Over the past three years, and especially since the first public demos of the ChatGPT large language model (LLM) on November 30, 2022, public discourse about “artificial intelligence” has been infested with an array of dangerously misleading (and sometimes outright deranged) claims. Specifically, I’m talking about the related claims that LLMs might be developing consciousness, and that they might escape their silicon containment and destroy humanity.
The reality, to keep it simple, is that LLMs and similar systems are incredibly interesting and powerful mimics, but have nothing resembling minds. These limitations become very clear in the course of actually using the systems, which is why the initial ChatGPT buzz (“My God, it’s just like talking to a person!”) has settled into a quieter trough of disillusionment (“This is potentially useful for summarizing and organizing existing work.”)
But failing to see past the hype is, for investors, a potential death sentence. It’s vital to expunge the lazy (and frankly sometimes corrupt) thinking that would have us believe LLMs have “minds,” or “souls,” or are so transformatively creative that they should be allowed to rip off humans.
Because those, to put it bluntly, are siren songs trying to steal resources from real applications.
University of California, Berkeley philosophy professor Alva Noe has produced a hugely powerful weapon for separating reality from bullshit in his recent book “The Entanglement: How Art and Philosophy Make Us What We Are.” That might not sound much like a book about A.I., but the experts certainly seem to think it is – Noe has been invited to speak with the staff of OpenAI, for instance.
Noe’s book is fundamentally about human consciousness, and its central claim is deceptively simple: that all human activities, whether “thinking,” or “perceiving,” or “writing,” or even “walking,” are “entangled” with an enormous breadth of other aspects of the human experience – and that any of these acts becomes either meaningless, or simply impossible, when those “entanglements” are lost.
To oversimplify only slightly, Noe argues that there is no functional equivalent to human intelligence absent the parameters of actually being human. Noe explores many aspects of this. One is simply embodiment and action as necessary correlates of thought: “It is not what we cogitate that we know best,” Noe writes; “it is what we do.”
This simple dictum he draws from Giambattista Vico, an early opponent of René Descartes’ long-dominant mind-body dualism. Cartesian dualism begins from the dictum “I think, therefore I am,” and can be reduced to the idea that the essence of a human is a mind that just happens to be trapped in a body – but could just as easily be in a jar sitting on a shelf somewhere.
Cartesian dualism is essential to the scientistic worldview that even allows for the concept of ‘artificial intelligence.’ But Noe is among those contemporary philosophers who largely reject Descartes’ separation of the mind and body. And he is just one index of the precipitous decline in Descartes’ reputation among serious philosophers over the course of the 20th and 21st centuries.
Moreover, Noe largely rejects any deep opposition between “thought” and “action,” between “ideal” and “existence,” and between “self” and “culture.” All of these, he argues, are “entangled” with each other, by which he ultimately means constitutive of each other – figure and ground, in constant conversation, forces unable to exist separately.
One imperfect way to sketch Noe’s concept of entanglement is to say that a robot can’t *make* coffee because a robot can’t *taste* coffee. The movements involved in brewing a cup are entangled in a perceptual chain from the smell and texture of the beans to their final taste in the cup, a chain of sensory significance that informs the making itself, but is decades away from even being crudely developed for robotics. More important still, the taste of coffee is a signifier rooted in culture – Turkish coffee tastes different from American diner coffee, which is different from Starbucks.
Every cup of coffee is entangled in the meaning of every other cup of coffee, and this a robot can simply never grasp – at least not until it has been created with the full suite of human-like perceptions, and fully integrated into human culture and society. For Noe, implicitly, “the singularity” can absolutely never happen on some server where a hacker has created a novel algorithm for grasping the nature of reality – it can only happen, his reasoning seems to imply, after androids as whole and robust as humans have lived among us for decades.
Mother and Child: The Problem of Seeing
Or put another way, a Tesla can’t be trusted around children because it has never held one. A Tesla is not part of a species that reproduces by creating kids. And so it will run them over - not out of hostility, but out of complete and utter indifference, to its very core. Because an artificial mind cannot be anything other than indifferent, and even that is somehow giving it too much credit.
Noe spends a lot of time on vision, in fact, working to deconstruct what he calls the “snapshot conception” of human vision, which he says “informs a great deal of cognitive science.” This is the idea that our eyes are essentially cameras, constantly taking pictures that our brain then “interprets.” It is part of the Cartesian conception that human beings are “brains in jars” with various tools plugged into them (bodies as pure extension).
But Noe is unflinching: “Vision is not a process in the brain whereby the brain produces an internal picturelike representation … The retinal image is not a picture. It is not made. No one can see it.”
This is in part, he says, because the body is implicated in seeing. “Eye, head, neck, body, movement. We see with all that, and then only thanks to our impulses, curiosity, feeling, and drive.”
To spell this out, “seeing” for the vast majority of people involves movement. We constantly move our heads to see different angles of an object or situation. We move our bodies when we need to see even more. And this is not simply a matter of convenience – movement helps us answer questions about what we are seeing. Seeing is interactive, not passive.
The fact that a Tesla’s vision system doesn’t control the “body” in which it resides is, if we think about it for a moment, already a huge barrier to making its “vision” work anything remotely like a human being’s. It’s operating with one hand tied behind its back, more or less literally.
But Noe also argues human vision relies on the human tradition of pictorial culture, including visual art. He argues that the making of pictures is entangled with the way that humans see, that it is a conversation about seeing, and also a way that all humans are trained to see.
“We live in a picture world, and … we have lived in a picture world our species life long. We have learned to use our fluency with pictures not only to think about what seeing is, but to see.” One way to think about this is that both simple pictures and visual art can teach us to pay attention to features of the visual world that were there all along – features we only learn to notice once we have seen them depicted with mind-expanding attentiveness and focus.
To repeat myself, you can’t trust a Tesla with the safety of a child because it has never seen the heart-rending tenderness of Klimt’s “Mother and Child.”
What we may be discovering through the very frustrations of Tesla’s camera-based, 2D vision system is that even with the richness of high-definition visual input, an AI is an immensely isolated thing. It is missing swathes of context and meaning so vast it may be more difficult for us to truly understand the depths of its ignorance than to grasp the small acts of mimicry that constitute its “intelligence.”
A self-driving system ultimately comes down to the question of “knowing what to pay attention to,” which is another way of saying “knowing what is important.” It is not a lack of vision, but a lack of seeing, a deep lack of knowledge about the meaning and significance of what appears before the cameras, that produces the “edge cases” of failure that continue to hold back self-driving in particular.
So while it’s nominally about what a human is, Noe’s book is also, inevitably, about what an AI is. And its ultimate implication seems quite clear: that a “brain in a box” as envisioned by the AI cabal can never actually exist.
The Axes of Entanglement
The Entanglement is a usefully sprawling conversation of a book. It is only very tentatively that I pick out four different kinds of “entanglement” that Noe describes: Entanglement by Body, by Culture, by Action, and by Self-Reflection, which could also be called “Rule Challenging.”
“There could be no straightforward and direct use of language for any purpose at all if there were not also a possibility of taking up a playful, or a subversive, or a questioning attitude to language.” - Alva Noe