The partisans beyond the filter bubble
Plus the chatbot that got hired and fired double quick, and other tsunami harbingers
Ever since Eli Pariser published his terrific 2011 book “The Filter Bubble”, which argued that web companies, by tailoring what they show you to what they infer your interests to be, would trap you in an echo chamber, there’s been a lot of concern about polarisation via algorithm. How does Facebook choose what it shows you? How does Twitter pick content to go in the “For You” tab? Is it to confirm your interests and strengthen the bubble around you?
The surprise is usually that no, it’s not. Twitter’s For You tab has, pretty much from its inception (which dates back to 2015; the name is new, not the concept), been all about engagement. And the thing about attracting our attention is that two kinds of content work: stuff that we agree with, and stuff that we really, really disagree with, or find surprising and/or shocking. The latter is better at grabbing and holding our attention than the former: if you imagine yourself reading a newspaper, ideally in paper format, you can see that a story about a topic you find encouraging will give you a little dopamine hit, but an extreme one will do a lot more.
Let’s say your favourite political party is The Purples. (Look, political parties have really cornered the colours market.) So here are two headlines you might find in your paper:
» Purples gain five points in polls
A nice little dopamine hit, right? Five points is always good. But the headline’s told you pretty much everything. OK, let’s try a different story:
» Purples leader condemns party for losing touch with voters
This is quite different, isn’t it? Now we’ve got your attention. The irony is that this story has actually happened a few times in the past 20 years: the first was Theresa May (OK, not then the leader) telling the 2002 Tory conference they were thought of as the “nasty party”, and more recently, in 2021, an aide to Keir Starmer pointing out that yes, Labour had lost touch with voters. Liz Truss, who lasted less time as PM than an incandescent light bulb, has indulged in similar critiques, I think.
There’s another version, of course, which would probably get even more attention:
» Purples leader found naked with three prostitutes and suitcase of cocaine in Trafalgar Square
but that one is rather less common.
My point is that we’re attracted to the outrageous, the surprising, the shocking. What you find comforting, someone else will find shocking. Or possibly you’ll both be shocked.
But the real point is that filter bubbles aren’t really a thing. Though the sites think that we want to see the same stuff all the time, we actually like to experience something outside the warm bath of our existing views, because that gives us something to react against.
In that context, a fascinating study came out last week, with the snappy title “Users choose to engage with more partisan news than they are exposed to on Google Search”. Only the abstract is publicly available, but I’ve read the full paper, and it’s full of fascinating nuggets.
It’s by a team from the Stanford Internet Observatory, Northeastern University and Rutgers University, who looked at what sort of news American voters chose to engage with around the 2018 midterm and 2020 presidential elections. Using a browser extension, they monitored desktop browsing, and in particular which news sources were shown to people after a search, and which they chose to follow. By giving each of the news sites in the SERP (search engine results page) an “unreliability” score, leaning on the NewsGuard ratings (which in effect tell you how wildly partisan a site is), they were able to calculate how partisan the results shown by a Google search were. Then they looked at which link(s) people chose to follow, and gave them a follow-on unreliability score. If 9 of the 10 links were “unreliable” (aka partisan) and you followed the 1 reliable link, you’d get a different score than if only 1 of the 10 was partisan and you followed that. Of course it’s more subtle than that, because there are gradations of partisanship (Infowars is further into the weeds than Fox News, Mother Jones further left than MSNBC). But you get the general picture.
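If you prefer it as code: here’s a minimal sketch of that exposure-versus-engagement scoring idea. The pretend SERP, the scores and the single click are all invented for illustration; the paper’s actual method (NewsGuard-derived scores and so on) is more sophisticated than this.

```python
# Minimal sketch of the exposure-vs-engagement scoring idea described above.
# The scores and the "clicked" list are invented for illustration; the
# paper's actual NewsGuard-based method is more sophisticated.

# Unreliability score per result in a hypothetical SERP:
# 1.0 = unreliable/partisan, 0.0 = reliable (gradations are possible).
serp_scores = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0]  # 9 of 10 unreliable
followed = [9]  # index of the one link this user actually clicked: the reliable one

def exposure_score(scores):
    """Average unreliability of everything shown on the results page."""
    return sum(scores) / len(scores)

def engagement_score(scores, clicked):
    """Average unreliability of only the links the user followed."""
    return sum(scores[i] for i in clicked) / len(clicked)

exp = exposure_score(serp_scores)               # 0.9: a very partisan results page
eng = engagement_score(serp_scores, followed)   # 0.0: this user picked the reliable link

# The study's headline finding is the gap running the other way:
# engagement scores tended to sit ABOVE exposure scores, i.e. people
# clicked the more partisan links they were offered.
print(f"exposure={exp:.2f}, engagement={eng:.2f}, gap={eng - exp:+.2f}")
```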
What emerged was this: people tended to go for the more partisan/unreliable content being offered in the SERP. There were a few wrinkles in this, too:
• people who identified as “strong Republican” (this is America, after all) were more prone to follow partisan content; this tendency reduced as you moved from “strong Republican” to “not very strong Republican” to “lean Republican” to “independent” to “lean Democrat”, “not very strong Democrat” and “strong Democrat”. In other words, right-wingers are more likely to go for the crappy news sites written to outrage.
• to quote the study, “age was also associated with greater overall engagement with unreliable news in 2020; compared with the 18–24 age group, those aged 45–64 and 65+ both engaged with significantly more unreliable news”. (They found no significant differences by age in 2018.)
• this one seemed to go relatively unremarked, but struck me as the most interesting: in 2018, 90% of all unreliable news “exposures” (ie page loads) came from just under a third of participants; in 2020, the same 90% came from just a quarter of them. As the researchers say, the findings “suggest that interactions with unreliable news are driven by a relatively small number of individuals”.
So if we put those three findings together, what do we get?
• Small groups of
• ageing
• right-wingers
• on their desktop computers (because this study wasn’t—couldn’t be—carried out on mobile, only desktop)
…get their information from unreliable, partisan news sites. The study doesn’t say whether they then go on to share it on Facebook or on their Twitter account grumpyboomer032945231, but it’s not hard to imagine that’s what happens.
This isn’t to let the search algorithms off the hook either, but it does go to show that the real problem, as ever, lies with the humans.
And that point about the small group who interact with the junk is the social warming crux. It doesn’t matter that the vast majority of people want to use social networks in a positive way, and aren’t looking for trouble. There’s a relatively small group—perhaps a third but more likely substantially smaller—who are attracted to trouble, and very probably spread what they find around, creating friction even with those who broadly agree with them. You may think you’re in a filter bubble, but you’re going to find extremes intruding because of those partisans. It’s a fact of life; the researchers note that their findings back up those from a number of other studies. We tend to think that the problem is what the machines are doing, in what they’re serving up to us, but really it’s about what we pick. Social warming is a human phenomenon. We have to accept that.
Glimpses of the AI tsunami
(Of the what? Read here.)
• “Artificial intelligence could lead to the extinction of humanity, experts - including the heads of OpenAI and Google DeepMind - have warned.” The Center for AI Safety website suggests a number of possible disaster scenarios: 1/ AIs could be weaponised; for example, drug-discovery tools could be used to build chemical weapons; 2/ AI-generated misinformation could destabilise society and “undermine collective decision-making”; 3/ the power of AI could become increasingly concentrated in fewer and fewer hands, enabling “regimes to enforce narrow values through pervasive surveillance and oppressive censorship”; 4/ enfeeblement, where humans become dependent on AI, “similar to the scenario portrayed in the film Wall-E”.
Well, OK, though we already have 1/, we’re already good at 2/, we already have 3/, and since lots of people struggle a bit without their phone you could say 4/ is here too. And where are, or were, these folks on that other topic, “global heating”? Anyhow, Chicken Little is on the white courtesy telephone.
• Is avoiding extinction from AI really an urgent priority? Seth Lazar (philosopher), Jeremy Howard (fast.ai co-founder), and Arvind Narayanan (AI impacts researcher) think: not so fast.
• AI chatbot to take over from humans on eating disorder helpline. It’s not that the chatbot is particularly better at the job than the humans, but unlike the four staffers doing that work at the National Eating Disorders Association, it hasn’t joined a union. Once again: technology as a social weapon. And, notice, not a far-future event.
• *puts finger to earpiece* uh, this just in: “US eating disorder helpline takes down AI chatbot over harmful advice”:
activist Sharon Maxwell posted on Instagram that Tessa offered her “healthy eating tips” and advice on how to lose weight. The chatbot recommended a calorie deficit of 500 to 1,000 calories a day and weekly weighing and measuring to keep track of weight.
“If I had accessed this chatbot when I was in the throes of my eating disorder, I would NOT have gotten help for my ED. If I had not gotten help, I would not still be alive today,” Maxwell wrote. “It is beyond time for Neda to step aside.”
*Executive suit voice* Looook, there are bound to be some problems along the way to cost reductions, OK?
• “Man commits suicide after talking with AI chatbot, widow says”:
The [Chai] app’s chatbot encouraged the user to kill himself, according to statements by the man's widow and chat logs she supplied to the outlet [La Libre in Belgium]. When Motherboard tried the app, which runs on a bespoke AI language model based on an open-source GPT-4 alternative that was fine-tuned by Chai, it provided us with different methods of suicide with very little prompting.
Not sure if that falls under 1/, 2/, or 4/, but again—the bad effects are already here. No need for “extinction”, unless you think the AIs are going to kill us one by one.
• Adobe Photoshop’s Firefly-powered generative fill has reached a lot of people, who have thought of all sorts of things to do with it. Brian Roemmele (and others) put together a quote-tweeted thread of “AI completion of an iconic album cover”, of which the Abbey Road one is outstanding. There’s also Girl With a Pearl Earring, Expanded, a short video that will leave you gasping, or aghast.
See? It’s the tsunami. It’s everywhere.
• You can buy Social Warming in paperback, hardback or ebook via Oneworld Publications, or order it through your friendly local bookstore. Or listen to me read it on Audible.
You could also sign up for The Overspill, a daily list of links with short extracts and brief commentary on things I find interesting in tech, science, medicine, politics and any other topic that takes my fancy.
• I’m the proposed Class Representative for a lawsuit against Google in the UK on behalf of publishers. If you sold open display ads in the UK after 2014, you might be a member of the class. Read more at Googleadclaim.co.uk. (Or see the press release.)
• Back next week (and then taking a break for a couple of weeks in June). Or leave a comment here, or in the Substack chat, or Substack Notes, or write it in a letter and put it in a bottle so that The Police write a song about it after it falls through a wormhole and goes back in time.