Sit right there while we berate you for running your platform within the law
Plus AI-generated junk starts spreading
Once again this week, the chiefs of the big social media companies were hauled into a geriatric daycare centre in Washington—sorry, the headmaster’s office—sorry, Congress—to be berated by a group of somehow-elected representatives over their perceived failings. As ever, it wasn’t a pretty sight, nor a very helpful one.
This is a scene that we’ve seen repeated multiple times over the past six years or so, ever since April 2018 when Mark Zuckerberg made his first appearance before the House [of Representatives, the slightly younger group] Committee on Energy and Commerce over the Cambridge Analytica scandal. The formula is pretty consistent, particularly when it comes to the Senate: the people from the companies do their level best to look contrite and not give blunt answers to stupid questions, while the congresspeople ask a set of leading questions that would be thrown out if they were posed in a courtroom. The aim is frequently not to throw any light on the subject, but to generate plenty of media heat and especially short clips that can then be used in subsequent political campaigns.
And yet, they can also produce some telling moments. On Wednesday, the theme of the hearing was that the big tech platforms have been facilitating child sexual abuse. There to be ritually beaten up were five chief executives: Zuckerberg (on his eighth appearance), Linda Yaccarino of eX-Twitter (no doubt wondering more than ever about her life choices; she was there after being subpoenaed), Singapore-born Shou Zi Chew of TikTok, Evan Spiegel of Snap (also subpoenaed) and Jason Citron of Discord (also subpoenaed). Though as the Washington Post pointed out, there’s one very big, very obvious site missing from that list: YouTube, which is the most widely used tech platform in the US among those aged 13-17: 93% say they use it. (The other 7% are probably lying. And for those aged 1-13, it’s probably also 100%, due to weary parents handing over the iPad when they need a break.)
The senators, ahead of time, said that they wanted to use the hearing to (quoting the Washington Post) “galvanise support for legislation to curtail child sexual abuse material [CSAM] online and require tech platforms to expand their safety offerings.”
The first part of that is puzzling: owning or distributing CSAM is already illegal in pretty much any country you care to name, and the platforms go to great lengths to try to stamp it out. However, the volume has continued to grow: 90% of it is uploaded outside the US, though the platforms that host it are American-owned.
Lindsey Graham, a Republican representing South Carolina (population 5.1m), kicked the charm session off by saying to Zuck and the others that “I know you don’t mean for it to be so, but you have blood on your hands.” Things went downhill from there, with Graham claiming that TikTok is being used “to basically destroy the Israeli state”, which takes the concept that “words are violence” a lot further than you might have expected.
But the Democrats got in on the daftness too. The committee chair Dick Durbin (rep Illinois, pop. 12.8m) was next up, and said that the real fault was Section 230 of the 1996 Communications Decency Act (which shields internet services from liability for what their users post, while letting them remove some of that content). This, he said, means “the most profitable industry in the history of capitalism” can function “without fear or liability for unsafe practices.” I think he’s wrong on two counts there. First, the most profitable industry in the history of capitalism is either the iPhone, or possibly Welsh slate mining. Apple certainly has to worry about liability via the iPhone, though the owners of Welsh slate mines in the 1860s didn’t, at all. By contrast eX-Twitter and Snap aren’t profitable, and TikTok probably isn’t.
Anyway: Zuckerberg grasped the moment by standing up, turning round and apologising to the families who had been sat behind him: “I’m sorry for everything you have all been through. No one should have to go through the things that your families have suffered, and this is why we invested so much.”
(Note the little inclusion of how, actually, Meta has been trying really hard. Which is doubtless true.)
The most notable element is the attack on Section 230, because for all the noise about “passing laws”, there’s no sign that the politicians are going to do any such thing. The only one who had anything that looked like legislation was Chris Coons (a Democrat from Delaware, pop. 1.0m), who co-authored the Platform Accountability and Transparency Act. When asked, none of the five CEOs was in favour of it (“Mr. Chairman, let the record reflect a yawning silence”, Coons said), but it hasn’t advanced out of committee, so it seems other politicians don’t much like it either. Yaccarino said she would support a Durbin scheme which would make companies more liable for “facilitating child abuse”. This is something of a turkey-voting-for-Christmas move, given how dramatically eX-Twitter has cut back its moderation staff. Maybe she judged that it’ll never happen, given how glacial lawmaking is there, and that it would be good PR.
So what was this round of hearings meant to find out? As the Washington Post noted,
Senate Judiciary Committee leaders said they hoped the hearing would help build momentum for a package of bills aimed at curbing child abuse material online, including by allowing victims to sue companies for facilitation and by making it more difficult for platforms to dismiss such lawsuits. The latter seeks to narrow industry protections afforded under Section 230, the besieged legal shield that immunizes digital services from lawsuits for hosting and moderating user content, which senators repeatedly attacked Wednesday.
The focus on Section 230 is, frankly, concerning. It’s one of the very, very few pieces of legislation passed that deals with the internet as it is, rather than with the internet as some badly informed people (politicians) want it to be.
Section 230 really is essential for anyone providing any sort of forum online. If people can post on your forum, and if you are jointly liable with them for what they say, then it’s a very short step from that to a lawsuit. That’s why S230 exists: it gives platforms limited legal protection. Specifically,
The ‘Good Samaritan’ clause, §230(c), conferred legal immunity for ‘any action taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected’.
In other words: you can moderate how you like—heavily, lightly, not at all. But you can’t be sued for what people post, unless (and not necessarily even then) you’re put on notice that it’s in some way offensive to someone. If the platform thinks the case is defensible, it can reject the complaint.
Imagine if you didn’t have that: then, as a platform owner, you might be liable for anything anyone posts, along with the person who posted it. That is not a comfortable position to be in. It is, in fact, the situation that newspapers face: they can be held liable for what they print, along with the writer. The difference with platforms is that they allow a far, far greater amount of content, which they haven’t directly commissioned, to be put on their sites. That’s how the internet functions. If the senators want Facebook and Instagram and Snap and TikTok and eX-Twitter to resemble newspapers, where every piece of content remains invisible until it has been examined and passed fit to publish, they’re going to destroy a lot of value, because Facebook and the rest will be turned into near-literal newspapers. Perhaps that does mean we would only see jolly, uplifting pieces of content, but it’s not what you might think of as The Internet. (In reality you’d just see a huge leak of business overseas to sites that were perfectly happy to host content. Maybe Facebook would be a European company.)
There’s a strange self-destructive element to American politicians at present. They don’t like the power that the tech companies have. They don’t like some of what happens to users (one family at the hearing had a daughter who died after buying a fentanyl tablet from a drug dealer on Instagram; the drug dealer was still on there months later), but they don’t have a clear idea of how their proposed changes would really play out. Yes, it would be good if the platforms took their responsibilities more seriously (especially eX-Twitter, which has fired a lot of its moderators). But tearing down the structures that have actually got the internet to where it is? That isn’t the way to do it. Meta especially feels the weight of failure, I think—hence Zuckerberg’s little address to the crowd. He didn’t have to do it, after all.
He did, however, have to turn up to the hearing; but by now he has learnt that these occasions are full of sound and fury, signifying nothing. Indeed, tales told by idiots. In hours of “interrogation”, the only significant moment was one that Zuckerberg himself had planned well ahead of time. It’s enough to make you think that politicians just don’t get the internet.
Glimpses of the AI tsunami
(Of the what? Read here. And then the update.)
• Kempelen’s Mechanical Turk was a forerunner of today’s systems of artificial intelligence, not because it managed to play a game well, as with IBM’s Deep Blue or Google’s AlphaGo, but because many AI systems are, in large part, also technical illusions designed to fool the public. Jathan Sadowski not holding back at Real Life magazine.
• In 2018, the indie women’s website The Hairpin stopped publishing, along with its sister site The Awl. This year, The Hairpin has been Frankensteined back into existence and stuffed with slapdash AI-generated articles designed to attract search engine traffic. (Sample headlines: “What Does It Mean When You Remember Your Dreams?” and “White Town’s ‘Your Woman’ Explained.”) Kate Knibbs on turning a well-loved site into a clickbait farm.
• Based on related searches, like “subway accident,” Mr. Khan could surmise how Mr. Sachman had died. Mr. Khan could then conduct a cursory search of his own around the internet for any biographical information, leading him to a LinkedIn page detailing Mr. Sachman’s work history. And finally, he could prompt an artificial intelligence tool called a large language model to create a short article. - Andrew Keh and Stuart A. Thompson at the New York Times on the “obituary pirates” who capture short-lived interest in sudden deaths to drive traffic to LLM-generated “obituaries”.
• You can buy Social Warming in paperback, hardback or ebook via One World Publications, or order it through your friendly local bookstore. Or listen to me read it on Audible.
You could also sign up for The Overspill, a daily list of links with short extracts and brief commentary on things I find interesting in tech, science, medicine, politics and any other topic that takes my fancy.
• Back next week! Or leave a comment here, or in the Substack chat, or Substack Notes, or write it in a letter and put it in a bottle so that The Police write a song about it after it falls through a wormhole and goes back in time.
The challenge with social media platforms is that their algorithms promote the most morally outraged content, the kind that pits two groups against each other, because that content keeps people engaged.
The platforms need to be held accountable for promoting the toxic content, not for having it on the site.
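The mechanism this commenter describes can be shown with a deliberately toy sketch. Nothing here resembles any platform’s actual code: the field names and the outrage weighting are invented for illustration. The point is that if a feed is ranked purely on predicted engagement, and outrage predicts engagement, divisive posts rise to the top without anyone explicitly choosing to promote them.

```python
# Toy feed ranker: scores posts on predicted engagement alone.
# "outrage_score" stands in for whatever real signals correlate
# with moral outrage; it is not a real platform metric.

def rank_feed(posts):
    """Return posts ordered by predicted engagement, highest first."""
    def predicted_engagement(post):
        # Baseline interactions, amplified by how much outrage
        # the post provokes.
        return post["base_interactions"] * (1 + post["outrage_score"])
    return sorted(posts, key=predicted_engagement, reverse=True)

posts = [
    {"id": "cat-photo",       "base_interactions": 100, "outrage_score": 0.0},
    {"id": "holiday-snaps",   "base_interactions": 120, "outrage_score": 0.1},
    {"id": "us-vs-them-rant", "base_interactions": 80,  "outrage_score": 2.5},
]

ranked = rank_feed(posts)
```

Despite having the fewest baseline interactions, the divisive post tops the feed, which is the commenter’s point: the bias is a side effect of optimising for engagement, not a deliberate editorial choice.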
I think combining cyberbullying and CSAM in the same hearing creates confusion, which is why nothing gets done.
There are regulations for CSAM and lots of tech to weed it out.
There are no regulations for designing algorithms that promote division, and the latter is the bigger issue.
The politicians were doing what they do best: grandstanding. Expecting anything else is…err…fanciful.