The outrage machine finds its nirvana
Plus: don't get AI to write a poll based on news stories (especially about unexplained deaths)
Think back to yesterday. How many events did you witness, in your physical realm (ie not on TV or some other device), which you found outraging? So shocking that you paused for a moment to say “Did they really do that?” The number might not be big. In fact it might well be zero.
OK, then, expand your horizon: let’s include all of today (so far) and the preceding week. How many truly outraging, outrageous events happened in the physical space you share: perhaps a pushing fight on the Underground, or someone shoving someone aside to get up stairs, or shouting in someone’s face? (Any of those counts: they’re dramatic events, and would count as such in a film or TV script.)
Even with that enlarged view, it’s likely that your outrage count isn’t very high (unless you work in one of the emergency services, or perhaps at a train station or a vehicle fuelling stop). A few?
But now let’s change the limitation: now you’re allowed to include anything you’ve seen on TV or the internet, or read there.
Suddenly, it’s a lot easier, isn’t it? Social media, the outrage machine, won’t let you down in your moment of need. Here are a couple of examples that went past on Twitter the other evening. Both come via the same person, the left-hand one being a retweet:
For future historians, the context is: on October 7 2023, the terror group Hamas broke through barriers into Israel and massacred around 1,400 people, including children, the elderly, and people attending an open-air dance rave. They took more than 200 hostages back to the Gaza Strip, their home base on Israel's south-western border. In response, the Israeli government cut off fuel, water and power supplies to the Gaza Strip, and told the population to abandon the northern area or face the consequences of any future incursion by the Israel Defence Forces (IDF) as it sought to find the hostages and kill all of Hamas's fighters and leaders. The Israeli armed forces also carried out hundreds of air strikes on locations all over the Strip, and for days refused to allow trucks to cross from Egypt with any sort of supplies, despite many calls from the United Nations and others for humanitarian aid to be permitted across.
Amid the fog of war, social media provides the perfect outlet for outrage. The stark facts set out in the paragraph above are what we can all agree on. My purpose here, by the way, is absolutely not to determine who’s right and who’s wrong in the decades-long fight; if you want a useful perspective on that, you should read Gurwinder’s post.
What I do want to do here is to point out that your emotions are being manipulated on social media, sometimes by people who feel emotional about something, sometimes by people who delight in fomenting outrage. Welcome to the world of “outrage tweets”.
So let’s look at the two tweets above. To emphasise: I don’t diminish how strongly the two people there feel about the topic. What I’m fascinated by is how attuned their tweets are to making you feel a particular way. The first one starts with an outrage word: Sickening. Take a quick look through Twitter, and you’ll find it’s not an uncommon word used to gin up outrage. And indeed the same applies on Threads. It’s sickening! It’s a legit vomitarium! (Also, as happens when you look at a single word too often, such as in an unfurling of search results, you begin to doubt that it is a real, actual, meaningful word and not just a random collection of letters.
Anyway, moving on to the rest of the tweet: "As Hamas was losing ground in the fight with the IDF in [the city of] Jabalia yesterday, they sent around 100 women and children towards the Israeli troops as human shields—to try to protect the terrorists." I can't find any independent verification of this; it's a claim made by Israeli soldiers who were trying to capture a compound in Jabalia. I'd imagine that it's pretty difficult as a soldier to figure out what is happening when you see a stream of people coming towards you; if they're not armed and you're in a conflict, the presumption that they're "human shields" might feel reasonable. Though objectively, and with the important proviso that there aren't bullets whizzing past my head, I'd say that the women and children aren't doing much shielding if they're out in the open.
And the last part of the tweet: “Expect Amnesty, the BBC & UN to remain silent about these war crimes.” This completes the perfect outrage confection: here’s an outrageous event, and now you aren’t even going to be told about it by those who should be expressing outrage about it!
As it happens, I pretty easily found a BBC report in which Amnesty condemns the use of human shields in Gaza… though that was about Israeli troops forcing Palestinian families to remain in their homes “after taking them over as sniper positions or bases”, and happened back in January 2009. For the latest claim, I could only find the Times of Israel story linked above.
Thing is, we don’t know if that’s what the civilians were being used for; we know barely anything. Yet in one tweet we’re encouraged both to be sickened and to distrust the media and NGOs because they won’t tell us about something we’re not sure actually happened. You can see how this feeds on itself: in a few days if/when none of Amnesty, the BBC or the UN mentions this, then that creates the conditions to say “You see? Just as I told you, they’re too biased to tell the truth.”
This sort of outrage/followup technique is how some of social media’s biggest grifters have built up their followings: hook enough people on what you point out they’re not being told by the (ugh) MSM, and just keep going from there.
The second tweet is very neat in its distillation of outrage: three weeks after the Hamas attack, Angelina Jolie called for a ceasefire in Palestine. This was not popular on the Israeli side; hence the comparison that the tweet makes with her preceding silence. Our obvious conclusion: she’s a hypocrite, or a sympathiser with Hamas, convicted by her lack of response. Join in the silent outrage!
At which point I want to tell you: stop. Recognise the outrage tweets, and don’t be taken in by them. Words like sickening (is it really a word? Are we sure any more?) and horrific and vile and of course disgusting are all signals to outrage. The insistence that “you won’t read this anywhere”, along with its corollary “spread this as far as you can”, are all signs of outrage tweets. Once you spot them, you can learn to ignore them.
Yet people like getting worked up over things they can't change. Otherwise, why social media? It's been amazing in the past week to see people getting worked up over whether Keir Starmer should be asking for—nay, demanding—a "ceasefire" in Gaza, or, instead, a "humanitarian pause". First of all, how many of the people demanding the one or the other know the difference? Second, what difference do they think it makes if the Leader of the Opposition, someone who literally does not currently wield power, calls for one thing or the other? There's a strange sort of masochism involved in yelling about this online: you're only going to frustrate yourself when nothing changes, which will probably magnify your sense of outrage, which will make you perfect prey for outrage tweets, which will make you feel more frustrated, which… you see how it works.
Don’t take the bait, plentiful though it is (and despite the best efforts of social media algorithms to put it in front of you repeatedly). Resist the outrage tweets. Resist the outrage. The world is confusing and maddening, but that doesn’t mean you have to be that way too.
Glimpses of the AI tsunami
(Of the what? Read here. And then the update.)
• Guardian complains to Microsoft over AI-generated poll on cause of death of Australian woman. A truly horrendous bit of AI accidentalism.
• Scarlett Johansson sues over AI advert. As reported in The Guardian, an AI app used “her name and likeness in an AI-generated advertisement without her permission.”
Of course the advert was on careful advertising platform X, aka Twitter, and generated by an app called Lisa AI: 90’s Yearbook & Avatar. It used real footage of Johansson, but generated a fake image and dialogue.
• South Korean Christians are using chatbots for prayers; parsons are using them for sermons. What I didn’t know before this story was that Christianity is the biggest religion in (South) Korea.
• Biden seeks to rein in AI with an Executive Order (which isn't actual legislation) that rests on the Defense Production Act (which is). Casey Newton has a useful "what it is and what it isn't" writeup of Joe Biden's EO from earlier this week. (Personally I think such stuff authored by politicians has minimal impact on what actually happens.)
• OpenAI has an internal tool which is really good at spotting AI-generated images, apparently. But it doesn't want to release it because of the bad publicity around its last tool for spotting AI-generated text, which wasn't that good. According to TechCrunch,
A draft OpenAI blog post shared with TechCrunch revealed this interesting tidbit:
“[The classifier] remains over 95% accurate when [an] image has been subject to common types of modifications, such as cropping, resizing, JPEG compression, or when text or cutouts from real images are superimposed onto small portions of the generated image.”
• You can buy Social Warming in paperback, hardback or ebook via Oneworld Publications, or order it through your friendly local bookstore. Or listen to me read it on Audible.
You could also sign up for The Overspill, a daily list of links with short extracts and brief commentary on things I find interesting in tech, science, medicine, politics and any other topic that takes my fancy.
• I’m the proposed Class Representative for a lawsuit against Google in the UK on behalf of publishers. If you sold open display ads in the UK after 2014, you might be a member of the class. Read more at Googleadclaim.co.uk. (Or see the press release.)
• Back next week! Or leave a comment here, or in the Substack chat, or Substack Notes, or write it in a letter and put it in a bottle so that The Police write a song about it after it falls through a wormhole and goes back in time.