The question of whether social media actually influenced the outcome of the 2016 US presidential election is one that just won’t go away. That election might—as I wrote in Social Warming—now feel like ancient history. But we like to revisit what the Romans got up to when we can find a new wrinkle on it, and the same applies to more recent events.
Thus there’s been a lot of attention to a new paper in Nature Communications with the snappy (SEO-friendly!) title “Exposure to the Russian Internet Research Agency foreign influence campaign on Twitter in the 2016 US election and its relationship to attitudes and voting behavior”.
The authors (from Denmark, Ireland, Germany and the US) lay it out in the abstract:
We demonstrate, first, that exposure to Russian disinformation accounts was heavily concentrated: only 1% of users accounted for 70% of exposures. Second, exposure was concentrated among users who strongly identified as Republicans. Third, exposure to the Russian influence campaign was eclipsed by content from domestic news media and politicians. Finally, we find no evidence of a meaningful relationship between exposure to the Russian foreign influence campaign and changes in attitudes, polarization, or voting behavior.
So! Move along, nothing to see, right? This seems like a pretty important finding. If foreign influence doesn’t change people’s minds or votes, then basically there’s absolutely nothing to worry about. Who cares? Let the Internet Research Agency (IRA) buy blue ticks (“verified” status) and tweet for all they’re worth, because it won’t make the tiniest bit of difference.
You only have to express it like that to feel that this somehow doesn’t tell the story. If this stuff makes no difference, advertising is a con, isn’t it? All political effort is wasted, surely? Social warming isn’t a thing, and that vague annoyance you get from those “For You” posts is just imaginary, right?
Well, except. There are a few points in the “Results” section:
The main avenue for exposure to these posts was not through users directly following foreign influence accounts. Exposure was mostly incidental, primarily via retweets from ordinary accounts that users followed. The share of retweets as an exposure pathway grew steadily over time, reaching 75–80% by election day.
So the IRA’s tweets tended to reach people through others’ accounts.
Despite the seemingly large number of posts from Internet Research Agency accounts in respondents’ timelines, they are overshadowed—by an order of magnitude—by posts from national news media and politicians. While, on average, respondents were exposed to roughly 4 posts from Russian foreign influence accounts per day in the last month of the election campaign, they were exposed to an average of 106 posts per day from national news media and 35 posts per day from US politicians. In other words, respondents were exposed to 25 times as many posts from national news media, and 9 times as many posts from politicians, as from Russian foreign influence accounts.
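As a quick sanity check, those ratios follow from the daily averages quoted above. (They come out slightly higher than the paper’s rounded “25 times” and “9 times”, presumably because the paper computed the ratios from unrounded averages; the figures below are the rounded ones quoted in the text.)

```python
# Rounded daily exposure averages quoted from the paper
ira_posts = 4          # posts/day from Russian foreign influence accounts
news_posts = 106       # posts/day from national news media
politician_posts = 35  # posts/day from US politicians

print(news_posts / ira_posts)        # 26.5  (paper rounds to ~25x)
print(politician_posts / ira_posts)  # 8.75  (paper rounds to ~9x)
```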
Again, this feels like it misses the point of those influence posts. People tune out posts from the news media and from politicians (certainly they did back in 2016, when most politicians, with one big exception, were quite bland on social media). But those IRA posts were intended to inflame. Riling people up, getting people to take a side, was the point.
But don’t take it from me. Here’s Jennifer Mercieca, professor in the department of communication and journalism at Texas A&M University, whose thread (in full here) on this is instructive:
Folks who think that public persuasion is a mechanism & you can pretest/post-test & arrive at verifiable conclusions have false expectations about how persuasion works. The assumption is a media consumer (of tweets or fb posts or ads or speeches, etc) has no opinion about something, then sees the thing and then grabs hold of that opinion and makes it their own. That's not at all how this works.
In reality, public persuasion works by controlling the discourse, by flooding the media spaces with the same idea/content and drowning out other ideas, by reaffirming ingroup outgroup identities, and (above all perhaps) by repetition.
That accumulates. It's about definitions. It's about controlling the news agenda, the platform agenda, it's about framing and narrative. There isn't a solid "hypodermic needle" effect for persuasion/media, but there are other effects.
If you're trying to find a causal mechanism you probably won't unless you look at a ton of different things, over time. You'd need to track what people saw, how they feel about the people/accounts who post it, and what they saw & didn't see and how often.
You'd need to know about how their brains are processing that information. I'm sure it's doable, but maybe not how it's being done currently.
I know that everyone got very excited about Facebook and Cambridge Analytica, and the latter’s role in the Brexit referendum and the US presidential election. In researching Social Warming, I grew less persuaded that Cambridge Analytica had made any difference (its claims were being discounted anyway), but increasingly persuaded that social media (principally, then, Facebook) had an incremental or reinforcing effect: in the Philippines earlier that year, then in the Brexit referendum, and then in the US election.

For Brexit, Vote Leave spent £2.7m of its advertising money putting targeted ads on Facebook, all tuned to rile people up against the EU (naturally), via a data harvesting scheme that used an unwinnable lottery to identify potential anti-EU users who wouldn’t normally vote. It’s the last bit that’s important. Getting people steamed up about a subject is a crucial part of getting them to turn up and vote: if you could vote just by ambling into the kitchen and pressing a button, things would be (very) different. But lots of people are indifferent. They don’t vote.

Brexit got a 72% turnout, which is amazingly high. The 2016 US presidential election got 55-60%, depending how you calculate eligible voters. A key factor was disaffected voters not turning out for Hillary Clinton in key states, while energised voters did turn out for Trump in those same states. Don’t underestimate the effect those Facebook ads could have had. They don’t have to change people’s minds. They have to energise (or antagonise) them.
That’s why I keep saying about social warming, as a phenomenon, that it doesn’t have to be large to have an effect—even a big effect. Tipping points happen. Ice melts. Electoral shocks happen. Hopefully, not too often.
Glimpses of the AI tsunami
(Of the what? Read here.)
• Inside CNet’s AI-powered SEO machine - terrific piece by Mia Sato and James Vincent, looking at how a private equity company’s takeover of the once-venerable CNet has led it to put out a ton of AI-generated “content” whose only real aim is to rank highly on Google:
Viewed cynically, it makes perfect sense for [private equity owner] Red Ventures to deploy AI: it is flooding the Google search algorithm with content, attempting to rank highly for various valuable searches, and then collecting fees when visitors click through to a credit card or mortgage application. AI lowers the cost of content creation, increasing the profit for each click. There is not a private equity company in the world that can resist this temptation.
The problem is that there’s no real reason to fund actual tech news once you’ve started down that path.
• Getty, the image licensing company, is suing Stability AI for using copyrighted images in the training of the Stable Diffusion system. The argument seems to be that Stability AI must have copied and (however briefly) retained images from Getty’s site without a licence in order to train the system. Getty doesn’t seem to be arguing that the finished system that users interact with stores any copyrighted images. (Getty press release.)
• Stability AI, Midjourney and DeviantArt sued in a class action for, well, all sorts of things (mainly copyright); more usefully, an analysis by Andres Guadamuz of whether there’s a case there. He thinks not.
• AI-created comic could be deemed ineligible for copyright protection. The US Copyright Office denied copyright on “A Recent Entrance to Paradise”, an AI-generated image claimed by Stephen Thaler, because he’d generated it with AI. It then granted copyright to an AI-produced comic, then moved to reverse that decision, and it’s now up to the artist to try to retain the copyright. Makes sense to me that AI work isn’t copyrightable.
• You can buy Social Warming in paperback, hardback or ebook via One World Publications, or order it through your friendly local bookstore. Or listen to me read it on Audible.
You could also sign up for The Overspill, a daily list of links with short extracts and brief commentary on things I find interesting in tech, science, medicine, politics and any other topic that takes my fancy.
• Back next week! Or leave a comment here, or in the Substack chat.