Succession and the Bias of Ambiguity
Artificial Intelligence is going to be Very, Very Bad for Non-Readers
It’s spring in Brooklyn, and we’ve been enjoying the hell out of it. I don’t know if I’m behind the curve, but I finally feel like the pandemic has truly released its grip on my spirit.
The past couple of days have also felt like the return of better, or at least less anxious, times. The online chatter about the finale of HBO’s Succession recalled the halcyon days of digesting and debating Mad Men or (yes, it actually happened, however we feel in the cruel light of morning) Game of Thrones.
*SPOILER WARNING FOR SUCCESSION FINALE*
Succession, for a writer, is an immense treat, and some of the most heated discussions of the series finale hinged on its most writerly moments. Probably the biggest treat was thinking back through the episode, and season, for all the moments leading up to Tom’s apotheosis. Apparently this included a moment in the funeral episode when Matsson clocks that Tom is not at the funeral because he’s working – just the sort of dogsbody he’s in the market for.
But the most interesting discussion revolved, naturally, around ambiguities. There was the question of Shiv – specifically, whether she really decided to betray Kendall in a last-moment emotional crisis, as it seemed; or whether she had in fact already decided that Tom was her best bet, however joyless or compromised.
There may be clear tells in the show that collapse that ambiguity, but in my mind it’s intentional. By the same token, there was debate over whether the Kendall-Roman hug was Kendall intentionally hurting Roman to dominate him, or Roman masochistically using Kendall to break his stitches.
I think maybe in both cases the answer is *both*. The ambiguity IS the truth, because these characters are just that deeply compromised and alien to themselves and enmeshed with everyone around them.
*SPOILERS END*
How to Spot an AI? It sucks at posting.
It’s a hack observation, but the rich ambiguity of the Succession finale is the sort of thing an AI writing program can’t do. It’s a really obvious, extreme example, but it scales downwards. Specifically, an AI pretty clearly can’t capture the liveness and ambiguity of discourse on Twitter dot com.
Three times in the past week I’ve spotted accounts that seem to be using AI text generators to write Tweets. While their tweets are diverse enough to get past Twitter’s bot and spam filters, their actual output is weird dreck like these replies to tweets of mine:
It’s even easier to tell in full context, but these tweets both basically rephrase or just rearrange the content of the original tweet. Both have a palpable emptiness – they’re not actually saying anything. They’re just plausible sequences of words, which is exactly what LLMs produce. Sure, maybe one or both of these is just written by a person with flawed language skills, but I don’t think so.
But what I do believe is that there are a lot of people out there who can’t spot that kind of nuance very easily. Now, the above tweets don’t seem to be part of any direct scam, or to be spreading anything that quite qualifies as disinformation. They’re bots aimed at generating generic “engagement,” because engagement is the metric that people who get paid to buy or grow Twitter accounts sell to businesses and individuals who don’t really understand what Twitter is for. So I can’t say I particularly fear that this will give anyone unfair power or influence: an algorithm can’t solve for actual communication, because it has nothing novel, and therefore nothing interesting, to say.
What is a problem is spam, in two senses. Even for a pretty savvy reader like me, the above are a distraction – they’re noise, a waste of milliseconds of my time even if I hadn’t decided to write some dumb essay about them. Over time, they’ll overwhelm any anti-spam system they can sneak past, which apparently includes Twitter’s. That has crazy implications for social media usability.
But the even more interesting and nefarious implication of this semi-convincing spam is for people who aren’t particularly media literate – who might spend the same amount of time and attention absorbing AI-generated tweets as they would the equivalent written by a human being. I’m citing tweets, but the same goes for images, deepfake or simulated videos, fake paintings, etc.
For a certain kind of person, this sort of stuff will have the outlines and contours of the cultural products they’re used to, and habit may well lead them to consume it the same way they once consumed real, human-crafted images and words and videos. There will be a hollowness at the center of it all – the void of intent and meaning that is the mark of AI. But that may in the end only be detectable by a fairly refined human palate.
We already know the ultimate maddening deadness of simulation, even when we choose it. We look on the men with their Replika AI relationships and laugh, or we shake ourselves out of a depressive video-game binge to find that the bleak hangover of the not-quite-real has devoured a piece of our souls. Now imagine the same hollowness infesting your every digital moment, just outside the edge of conscious experience, constantly gnawing at your very sense of being a human in the world.
It’s not going to be a fair fight out there. The deindustrialization of the United States has already been devastating for the less educated U.S. population, and we’ve seen the consequences in the form of a significant bump in far-right populism. We’ve seen the consequences when everyone and their crazy uncle is online, sharing regular old made-up bullshit, to say nothing of the algorithm-remixed collective unconscious.
The amount of raw id that’s going to be set loose to bubble up to the surface after the Ego has been swept away in a cleansing bath of Large Language Models is going to put the past six years to shame.