Too much, too young
Molly Russell, Jeremy Vine, the Women's March: the collateral damage of social media
The inquest into the death of Molly Russell began this week. Russell was 14 when she took her own life in November 2017, to the utter shock of her parents. They thought she was just a normal teenage schoolgirl, but after her death they discovered that she had been viewing content on Instagram, Pinterest, YouTube and Twitter that advocated self-harm, offered depressing quotes, suggested suicide.
Even more awful for the Russell family: after her death they found that Pinterest had kept emailing her, and that the suggested content was more of the same dire, depressing material.
The testimony of Ian Russell, her father, and his statements in the intervening years, make clear how Molly’s smartphone was able to steer her mood better than her parents and family could. She spent hours and hours on her phone, as teenagers do: she looked at her Instagram account more than 120 times a day, she liked more than 11,000 pieces of content, and she shared material more than 3,000 times, which included more than 1,500 videos.
On Thursday, the inquest heard from Judson Hoffman of Pinterest, who was challenged about the content sent to her:
He was taken through a number of images the company had sent to Molly via email before her death, with headings such as "10 depression pins you might like" and "depression recovery, depressed girl and more pins trending on Pinterest".
The emails also contained images about which Mr Sanders asked Mr Hoffman if he believed they were "safe for children to see".
Mr Hoffman replied: "So, I want to be careful here because of the guidance that we have seen.
"I will say that this is the type of content that we wouldn't like anyone spending a lot of time with."
But she did. The algorithms saw that she spent a lot of time looking at it—she engaged with it—and so they pushed more and more of the same stuff to her. The outcome, it seems, is that this pushed her to take her own life; though that is what the inquest is there to decide. (It’s expected to last two weeks.) In her own way, Molly was a sad case of what happens when the algorithms tune in to something deep in a person’s soul, and that something is not good.
• In a similar vein, a man who “weaponised the internet” to harass Jeremy Vine and a number of other BBC staff was jailed for five and a half years, a substantial sentence by any standards. (Grievous Bodily Harm, GBH, tends to have a maximum of five years.) The man had harassed people for nine years for reasons that haven’t been adequately explained; he used YouTube and Twitter to marshal people against his targets, which led Vine to fear for his own and his children’s safety. It’s not that the stalker ever looked like he would do something himself, but he was trying to amp up his followers so that they, conceivably, might.
Even worse, when Vine and others complained to the social media companies, nothing was done—or else the very least possible. YouTube would take a video down, but the stalker had already encouraged his followers to copy the video and put it up themselves. Vine is concerned that the stalker will continue to harass people from jail; he still has a Twitter account, and has put out a video promising to be back.
This is absolutely classic social platform behaviour: ignore, ignore, ignore. Don’t do anything that would cost you. Don’t moderate. Even when people complain that they’re being driven mad by harassment, well, who’s to say who’s right and who’s wrong? Much easier to let the algorithms funnel users this way and that, and let the courts sort out any problems that emerge.
• The New York Times had a fascinating piece about how a Russian disinformation farm used Twitter to amplify content from right-wing American sites to harass (that word again):
One hundred and fifty-two different Russian accounts produced material about her. Public archives of Twitter accounts known to be Russian contain 2,642 tweets about [Linda] Sarsour, many of which found large audiences, according to an analysis by Advance Democracy Inc., a nonprofit, nonpartisan organization that conducts public-interest research and investigations.
As the piece points out, social platforms have made it much easier for Russia to amp up social division among Americans, something that it has sought to do for half a century or so.
Under these auspicious conditions, their goals shifted from electoral politics to something more general — the goal of deepening rifts in American society, said Alex Iftimie, a former federal prosecutor who worked on a 2018 case against an administrator at Project Lakhta, which oversaw the Internet Research Agency and other Russian trolling operations.
“It wasn’t exclusively about Trump and Clinton anymore,” said Mr. Iftimie, now a partner at Morrison Foerster. “It was deeper and more sinister and more diffuse in its focus on exploiting divisions within society on any number of different levels.”
There was a routine: arriving for a shift, workers would scan news outlets on the ideological fringes, far left and far right, mining for extreme content that they could publish and amplify on the platforms, feeding extreme views into mainstream conversations.
And this is the point. The divisions already existed. But the Russians successfully used the social media platforms to amplify them. You couldn’t ask for a clearer example of social warming: the process by which social platforms make their users more and more aggrieved and divided, and less and less happy.
But all three of the above are examples of that. Molly Russell was taken too far. Jeremy Vine feared that some social media users would go too far. And the Russians got broad swathes of Americans to become even more partisan, even more entrenched, and split up a burgeoning women’s movement in the process.
Sure, social media has its benefits. I use Twitter a great deal. But I’m always wary of the people there who don’t seem to realise how easily they are drawn into confrontation and polarisation of views. Maybe Molly Russell’s death has some lessons for us. But I worry that we’ll ignore them, too eager to move on to the next bit of outraging content.
Glimpses of the AI tsunami
(Of the what? Read here.)
• Lexica is a search engine for AI-generated images. Delightful. Enter a phrase, get lots of pictures, see what the prompt was which generated that picture. Then pursue that down a rabbit hole.
• Getty Images has banned the upload and sale of images generated by AI art systems including Dall-E, Stable Diffusion (and Diffusion Bee), and Midjourney. The reason given: “There are real concerns with respect to the copyright of outputs from these models and unaddressed rights issues with respect to the imagery, the image metadata and those individuals contained within the imagery,” as Getty Images CEO Craig Peters told The Verge.
Getty’s concern is that the AI systems have been trained by scraping the web, including copyrighted material. As evidence, James Vincent at The Verge points to a Lexica search relating to D— T—, though you can get the same (and, weirdly, quite a lot of similar pictures) just by entering “getty images” as the Lexica search.
My initial reaction to this was that Getty was overreacting: that the systems don’t actually contain copyrighted content, just a set of mathematical representations of what pictures look like, and that it would be like complaining that someone who walked around a gallery of works by a well-known artist (say, Roy Lichtenstein) and then mimicked their style was infringing.
In response, on Twitter, Nicole Miller pointed me to Charlie Warzel’s writeup of the blogpost by Andy Baio and Simon Willison about the training data used to train Stable Diffusion. And yes, the Stable Diffusion system (and presumably the other systems too) has been trained on lots of copyrighted work. But the important word here is trained. I’ve got the downloadable Mac app version of Stable Diffusion, called Diffusion Bee, running locally. I can inspect the whole package (which takes less than 2GB on disk), with its nested folders inside folders, and there’s absolutely nowhere you can point to as STORE_OF_PICTURES_TO_REFER_TO. I browsed the entire package and found no images at all except the app icon; just loads of tiny text files and a few Python executables.
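If you want to repeat that check yourself, here’s a minimal sketch of it in plain Python: walk a model package’s folder tree and count anything with an image-file extension versus everything else. (The path you point it at is up to you; nothing here is specific to Diffusion Bee.)

```python
import os

# File extensions we treat as "images" for the purposes of the check.
IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".gif", ".webp", ".icns"}

def count_files(root):
    """Walk the tree under `root` and return (image_count, other_count)."""
    images, others = 0, 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            ext = os.path.splitext(name)[1].lower()
            if ext in IMAGE_EXTS:
                images += 1
            else:
                others += 1
    return images, others
```

Run it against a model package and, if the training images were actually stored inside, you’d expect the image count to be enormous rather than (as I found) essentially just the app icon.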
It’s the same as if you were to open up an artist’s head and peer inside (please don’t do this): you learn nothing of how the final product comes out. Human artists look at loads of copyrighted work in the course of learning their craft, but you can’t point to the part of the brain that says “pointillist”.
Miller says (I paraphrase) “what about when the AI generates an image of a copyrighted/trademarked object”, such as Mickey Mouse or Baby Yoda. Being no lawyer, I’ve no idea. That might be where Getty worries that the legal ground is shaky, which could be reasonable. If you have knowledge on this topic, let me know in the comments.
• AI is “here to stay” in DevOps (development and operations programming), according to a GitLab survey which found that use of AI/machine learning has doubled, and that AI will have “an increasingly important role”.
• You can buy Social Warming in paperback, hardback or ebook via One World Publications, or order it through your friendly local bookstore. Or listen to me read it on Audible.
You could also sign up for The Overspill, a daily list of links with short extracts and brief commentary on things I find interesting in tech, science, medicine, politics and any other topic that takes my fancy.