"The lamps are going out all over Europe, [and] we shall not see them lit again in our life-time", British Foreign Secretary Sir Edward Grey remarked to a friend on the eve of the United Kingdom's entry into the First World War. First published in Grey's memoirs in 1925, the statement earned wide attention as an accurate perception of the First World War and its geopolitical and cultural consequences.
To be precise, Grey made his remark on the evening of 3 August 1914; Britain entered the war the next day. Between the war, which lasted until November 1918, and the flu pandemic which came hot on its heels and lasted to April 1920, Grey’s comments were quite astute; a lot of people didn’t make it, even though of course he did. (Hence the memoirs.)
Grey’s remark¹ surfaced in my mind this week when I heard that a big new study had been published, examining the question of Facebook’s effect on us all. “Maybe Facebook’s Ruthless Ascent Didn’t Make the World More Depressed, Study says” (quoth Gizmodo); “Why Facebook may actually be GOOD for you: Social media site is not linked to psychological harm and could even have a positive impact on wellbeing, study suggests” offered the ever-wordy Daily Mail website; “Facebook might actually benefit mental health, new study suggests”, said Sky News.
The study was carried out by a team at the Oxford Internet Institute (OII), which is a well-respected group; it was led by Professor Andrew Przybylski and Professor Matti Vuorre. Przybylski (whose name is, I think, pronounced Pri-shuh-bill-skee) was one of the people I talked to at length when I was writing Social Warming. My experience was that he tended to think that most social science studies were overinterpreting their findings, which struck me as an unusual attitude for someone working in the social sciences, but equally, not unusual among the many scientists I’ve contacted down the years: very frequently, when I was seeking comment on a story about something claimed by one scientist or group of same, and asked an independent scientist in the same field, they’d be cautious. And often they’d be right to be. (Just look at all the excitement about superconductor-oh-no-it’s-not LK-99 in the past week: any materials scientist who was popping the champagne at that looks pretty foolish now, while the ones who said “wellllllllll” look wise.)
My purpose here isn’t to critique the study. Prof Przybylski is far better at that stuff than me, and anyway offers all sorts of hedging in the press release:
The research paper states, ‘Although reports of negative psychological outcomes associated with social media are common in academic and popular writing, evidence for harms is, on balance, more speculative than conclusive.’
Professor Przybylski explains, “We examined the best available data carefully – and found they did not support the idea that Facebook membership is related to harm, quite the opposite. In fact, our analysis indicates Facebook is possibly related to positive well-being”.
Oh well OK so—
But, says Professor Przybylski, ‘This is not to say this is evidence that Facebook is good for the well-being of users. Rather, the best global data does not support the idea that the expansion of social media has a negative global association with well-being across nations and different demographics.’
As with a lot of my discussion with Przybylski, this leaves me a bit nonplussed. So it’s not bad, but it’s not good either? But they’re the scientists, and for their study they say they looked at “massive amounts of” data covering the period from 2008 to 2019. Facebook provided the data, but didn’t get to oversee the research, and didn’t know what the findings would be ahead of time, the OII says.
This period was critical because popular commentators have claimed, without evidence, that trends in social media use and well-being during this period are linked.
Is there a scientific equivalent of a subtweet? Anyway.
Professors Przybylski and Vuorre tackled this idea head-on, ‘To better understand the plausible range of associations, we linked data tracking Facebook’s global adoption with three indicators of well-being: life satisfaction, negative and positive psychological experiences. We examined 72 countries’ per capita active Facebook users in males and females in two age brackets (13-34 years and 35+ years).’ They found no evidence for negative associations and in many cases, there were positive correlations between Facebook and well-being indicators.
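For concreteness, the kind of analysis the press release is describing (per-country Facebook adoption set against well-being indicators, split by sex and age bracket) boils down, in spirit, to something like the toy sketch below. The file name and column names are my own invention for illustration, not the OII’s data or code, and the actual study uses far more careful statistical modelling than a raw correlation.

```python
# Toy sketch only: hypothetical file and column names, not the OII's data or code.
import pandas as pd

# Expected (hypothetical) columns: country, year, sex, age_bracket,
# fb_users_per_capita, life_satisfaction, positive_affect, negative_affect
df = pd.read_csv("facebook_wellbeing_panel.csv")

indicators = ["life_satisfaction", "positive_affect", "negative_affect"]

# For each demographic group, how does Facebook adoption co-vary with each
# well-being indicator across countries and years?
for (sex, age), group in df.groupby(["sex", "age_bracket"]):
    for indicator in indicators:
        r = group["fb_users_per_capita"].corr(group[indicator])
        print(f"{sex}, {age}: corr(adoption, {indicator}) = {r:.2f}")
```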
Again, I’m not going to critique the study—that’s for far better qualified scientists to do. What I want to point out is this: how long is it since you heard of an in-depth scientific study into Facebook? Or Twitter? Or Instagram, or Snapchat, or TikTok? (Has there ever been an in-depth scientific study into TikTok? Easy: no. “It’s time to study this”, researchers said in November 2021. No dice.) Yes, there was a set of studies published at the end of July, carried out in cooperation with Facebook/Instagram owner Meta, which found that people are ideologically split (fancy!), that Pages and Groups are more deeply polarised than the rest of Facebook, but that it doesn’t really change people’s political views.
Yet a few years ago it felt as though you couldn’t turn round without another study into one of the two most influential social media sites—Facebook and Twitter—being published. Snapchat had its share: here’s one from 2016, and a search on “Instagram scientific study” turns up a ton of them.
Now, though, the volume is being turned down; quite deliberately, I think. Facebook cooperated with the OII, but the data being sought seems to have been relatively superficial, and not like the in-depth internal studies that Facebook itself carried out to discover that it was driving people to extremist groups. Twitter has turned off the free API read access that used to help researchers find misinformation, and the cost of getting that feed is now prohibitive: $500k annually for access to 0.3% of tweets. And even for that, the API is reportedly janky.
Which is what brought Sir Edward Grey’s remark to mind. You’re never again going to get a sensible study out of Twitter (or whatever it’s called next week). Never. Just look at Elon Musk’s reaction to the Center for Countering Digital Hate, which asserted that Twitter is allowing hate speech and misinformation to thrive under its current owner. In response, Musk’s X Corp is suing the CCDH, alleging among other things that it “scraped data from X’s platform in violation of the express terms of its agreement with X Corp.” Since you wouldn’t need to scrape the site if you had an API deal, this implies that the CCDH has been trying to get its data by the sort of web scraping that ordinary mortals have to resort to. (The insistence by Twitter, on p2 of its civil complaint, that the CCDH publishes its reports pointing to hate speech “not for the goal of combating hate, but rather to censor a wide range of viewpoints” is quietly hilarious: how on earth is publication censorship?)
It’s not as if Facebook was ever very eager to cooperate with scientific studies it couldn’t control, but at least it did sometimes. My suspicion is that social scientists’ attention will in any case now turn to the effects of AI-generated content on what we read and do, and how we react to it: the new shiny. If you want to generate headlines and get news spots, you’ll want to go with what’s hot. That’s the way of the world.
This doesn’t of course mean that there’s nothing to find out about social media, no new observations to be made, no news. Just that with Facebook entering its 20th year, the circus of social media is a background hum, not a shocking new entrant.
Operational note: I’m taking a two-week break. Back in a fortnight.
Glimpses of the AI tsunami
(Of the what? Read here. And then the update.)
• Google’s search box changed the meaning of information. Wonderful essay which points out that adding AI to answers isn’t necessarily going to solve “the soft apocalypse of truth”.
• The number of AI-generated “news” websites has shot up from 50 in May to more than 300 now, says NewsGuard. This doesn’t seem a good development: the sites are barely coherent, and they’re siphoning advertising revenue away from human-created sites which actually deserve the attention.
• AI travel guides are infesting Amazon, and they’re as junky and numerous as you’d expect. But: Amazon today, retail outlets tomorrow? What if someone thinks there’s money to be made offering something that goes on shelves?
• YouTube thumbnails—which are a big part of why some people click on videos—are big business for humans, but now AI is taking over.
• YouTube is going to experiment with AI-generated video summaries. All we need now is for AI to generate the videos, and we’ve closed the loop. Can’t be long now. (And there were those weird videos aimed at kids; weren’t those as near as can be?)
• You can buy Social Warming in paperback, hardback or ebook via One World Publications, or order it through your friendly local bookstore. Or listen to me read it on Audible.
You could also sign up for The Overspill, a daily list of links with short extracts and brief commentary on things I find interesting in tech, science, medicine, politics and any other topic that takes my fancy.
• I’m the proposed Class Representative for a lawsuit against Google in the UK on behalf of publishers. If you sold open display ads in the UK after 2014, you might be a member of the class. Read more at Googleadclaim.co.uk. (Or see the press release.)
• Back in a fortnight! Or leave a comment here, or in the Substack chat, or Substack Notes, or write it in a letter and put it in a bottle so that The Police write a song about it after it falls through a wormhole and goes back in time.
¹ Which I first came across via the very old Radio 4 quiz game My Word, which had two teams of two people; the respective captains were Frank Muir and Denis Norden. If you don’t recognise the names, they were comedy writers. (Norden later became far more visible through “It’ll Be Alright On The Night”, the TV series of bloopers.) One element of the game was that each team would be given a famous quotation near the start of the programme; at its end, the non-comic would say where the quote came from, and then their comic captain would tell a short story for which the punchline was a strangled version of the quote. For this quote, the story was about a would-be animal handler called Oliver Yallop, who—unfortunately for him—was always attacked and/or maimed by the innocent-sounding animals he wanted to examine and help. One day the news came that Oliver had died—an unfortunate accident involving an animal—but the narrator imagined that now he was in a better place where his desire was met. Or, as he put it, “The lambs are going ‘ah’ to Oliver Yallop.” (I told you it was strangled.)
My father and I always used to wonder whether the comics wrote these during the programme, or if they did it in advance. I’m certain now that they had advance warning (the stories were too good). Perhaps my father was trying to get me to apply a bit of thought to how difficult their work was. Got there in the end, Dad.
I used to love listening to Muir and Norden as a teenager in the 70s—the convoluted puns and the general delight in word play had a big influence on me. And 'Frank Muir Goes Into…' too. I've just found a website which has the old recordings and I'm reluctant to click on the links because I don't think I want to find out they weren't as funny as I thought they were at the time: https://www.radioechoes.com/?page=series&genre=OTR-Quiz&series=My%20Word
Same with 'I'm Sorry I'll Read That Again', 'What Oh Jeeves', 'The Navy Lark' and the rest—best to let sleeping heroes lie, I think.
Thanks for a typically interesting article!