Sorry, ma'am, but your meteors are arousing our robots
The trouble with AI moderation is that we have no idea what it's actually moderating
Mary McIntyre is an astronomer: a Fellow of the Royal Astronomical Society (FRAS), no less, who describes herself on her Twitter profile as an “astronomy communicator, astrophotographer, astronomy sketcher, Comet Watch co-host + 4 UKMON meteor cameras.” That’s a lot of astronomifying.
Back in August, McIntyre posted a short video clip to her Twitter account, which has roughly 6,000 followers. Have a look for yourself:
Can you see the problem? No? Well, neither can any other human who has looked at it. Trouble is, the video tripped something—nobody knows what—in an automated system at Twitter, and it was flagged as pornographic. (Or, as it told her, “intimate content”.) McIntyre received a message telling her that her account had been locked, but that all would be fine if she would just acknowledge that the tweet was Not Appropriate by deleting it.
Most people, faced with this scenario, would just laugh about the stupidity of robots and delete the tweet. People get these messages all the time over tweets that are identified as “containing hate speech” or death threats or being pornographic or homophobic or transphobic or any of the panoply of ways in which you can step out of line on a social network. As we’ve discussed before, you need guardrails on social networks; otherwise the worst users will actively ruin life for everyone else, creating a toxic environment where simply staying becomes an ever greater challenge.
However, moderation also needs to be consensual, to some degree. If you lock people out of their accounts because of something you allege they’ve done, and they don’t agree with you, the impasse is only resolved when one side decides where its maximum future value lies. Does the platform bend the rules and let the person back on? Or does the unyielding user move off to pastures new?
Shoot for the moon
In McIntyre’s case, the standoff lasted a long time. Because, as she later explained (on the BBC), she wasn’t prepared to put her name to anything that suggested she’d tweeted pornography: she gives presentations in schools, and that means DBS (Disclosure and Barring Service, formerly CRB) checks. It’s not clear whether those involve asking Twitter if someone has tweeted pornography, but better safe than sorry. So: impasse.
As you’ll have noticed, all this happened back in August, well before Elon Musk took over Twitter, so it can’t be blamed on some lack of staff. But it can be blamed on the AI moderation systems. McIntyre’s understandable intransigence meant her complaint languished, waiting to reach the top of a very deep queue as complaints were reviewed one by one, manually, by humans. Twitter has about 250 million daily users; if they each tweet on average 2.6 times a day, and 1% of those tweets are flagged by an AI or another Twitter user, that’s 6.5 million flagged tweets a day. If 1% of the people who get flagged say it’s wrong and demand review, that’s 65,000 tweets a day needing human eyes. (Both of those 1% figures are likely much too low.) It only takes a surge or two and you’re miles behind. And the tweets will keep on coming.
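To make the scale concrete, here’s that back-of-envelope arithmetic as a few lines of Python. Every rate in it is the assumption from the paragraph above, not a published Twitter figure, and the reviewer throughput at the end is a further guess of mine.

```python
# Back-of-envelope sizing of the moderation appeal queue.
# All rates are assumptions from the text above, not Twitter figures.
daily_users = 250_000_000      # rough daily active users
tweets_per_user = 2.6          # assumed average tweets per user per day
flag_rate = 0.01               # assumed share of tweets flagged (by AI or other users)
appeal_rate = 0.01             # assumed share of flagged users who contest the decision

tweets_per_day = daily_users * tweets_per_user    # 650,000,000
flagged_per_day = tweets_per_day * flag_rate      # 6,500,000
appeals_per_day = flagged_per_day * appeal_rate   # 65,000

# A further guess: how many humans does that take just to keep pace?
reviews_per_moderator_per_day = 200
moderators_needed = appeals_per_day / reviews_per_moderator_per_day

print(f"{appeals_per_day:,.0f} appeals/day -> ~{moderators_needed:,.0f} full-time reviewers")
```

Even at those conservative rates you need hundreds of reviewers just to stand still; one surge, or one round of layoffs, and the backlog runs away from you.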
Finally, earlier this week, despite Musk having fired almost all of the content moderators, her complaint reached the top of the pile. And—wonders! Her account was reinstated. Strangely, it coincided with her story bubbling up to national media and the BBC. (Mainstream media and its influence, eh.)
Nor was she the only person hit by this weird AI moderation in relation to space. John Kraus, a photographer who takes pictures of spaceflights, was locked out in the same way:
This is the problem with having AI moderation: it will do absolutely bizarre things, and nobody will have a clue why. (McIntyre says that someone else faced the same problem as her, except their picture was of a goose.) Again, most people will shrug and delete the “offending” tweet, but the problem only gets bigger as networks rely more and more on AI for tasks essential to their functioning.
Facebook relies hugely on AI to detect false account creation, because if advertisers think they’re paying for their ads to be shown to fake accounts then they’ll get annoyed and demand their money back. It also relies on AI to suggest people join Groups—Zuckerberg’s grand idea a few years back for how to increase engagement—and to detect terrorist content and accounts. The latter is of course a much tougher task, and (as I detail in Social Warming) it turned out that the “get people to join Groups” AI system worked a lot better than the “root out terrorists” AI system. Long story short, people were being recommended extremist Groups far faster than those Groups were being banned.
As for Twitter and its AI’s peculiar dislike of astronomy photos, we simply don’t know why it flagged that post. Quite probably the team who oversaw it didn’t know why either. Maybe they would shrug each time another one came up. Or maybe they didn’t ever see them, because the people who received the automated flagging email just deleted the tweet. The system is a black box: tweets go in, yes/no decisions come out, at a rate that puts humans to shame. The humans are left to tidy up the bad decisions. AI obscurity is now embedded in social networks. (For TikTok, there’s nothing else.)
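If you want to picture why nobody can say what tripped the flag, a minimal sketch helps. The classifier, threshold and label below are all illustrative assumptions, not Twitter’s actual pipeline:

```python
import random

THRESHOLD = 0.9  # assumed operating point, tuned for some precision/recall trade-off

def model_score(media: bytes) -> float:
    """Stand-in for an opaque learned classifier: P('intimate content')."""
    # A real model is millions of learned weights; nobody can point to the
    # pixel that pushed a meteor clip over the line. Here it's just random.
    return random.random()

def moderate(media: bytes) -> str:
    score = model_score(media)
    # Only the yes/no decision survives downstream; the score and any
    # notion of "why" are thrown away before a human ever sees the case.
    return "locked: intimate content" if score > THRESHOLD else "ok"

print(moderate(b"meteor-video-frames"))
```

The randomness isn’t the point: even with the real model in place of `model_score`, the output would be just as unexplainable.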
But now that team is almost certainly gone. Twitter, under Musk, has fired a ton of engineers and moderators, so that system is probably just going to run for as long as it can until some buffer overflows and the machine stops.
Then the problems will really start. AI moderation is bad enough. But no moderation is going to be much, much worse. As the air crew are trained to shout as the aircraft heads for the ground: brace, brace, brace.
Glimpses of the AI tsunami
(Of the what? Read here.)
• The online art community DeviantArt (I always pronounce that wrong in my head) is introducing an AI text-to-image art generator, despite the strong opposition to such generators in most art communities. What has enraged some is that it will, of course, have built up its repertoire from art created by humans in the community, which is exactly what annoys artists about these generators in the first place.
Neat possible solution: creators can label their images “NoAI” so they won’t be imported. Clever idea, like the “nofollow” tag that was added to web links to beat spam. (There’s a sketch of how a scraper might honour such a label after these links.)
• Want to look good on Tinder? (Of course you do.) Photo.ai will, for $19, style you up in various, er, styles. Vice reports that
“The results speak to an AI trend that seems to regularly jump the shark: A “LinkedIn” package will generate photos of you wearing a suit or business attire, while the “Tinder” setting promises to make you “the best you’ve ever looked”—which apparently means making you into an algorithmically beefed-up dudebro with sunglasses.”
One for the ladies to avoid, maybe. (Also, Tinder obeys the Pareto principle: 20% of the men are pursued by 80% of the women. I’d imagine that 90% of the men pursue 1% of the women. We await data.)
• There’s a version of Stable Diffusion for the iPhone. Note: might make it a bit warm.
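On the DeviantArt “NoAI” idea above: here’s a rough sketch, in Python, of how a well-behaved scraper might honour such a label. The exact markup (a “noai” value in a robots meta tag, by analogy with rel="nofollow") is my assumption for illustration, not DeviantArt’s published spec.

```python
# Sketch: a polite scraper checking for an assumed "noai" opt-out directive
# in a page's robots meta tag before ingesting its images for training.
from html.parser import HTMLParser

class NoAICheck(HTMLParser):
    allowed = True  # default: no opt-out found

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name") == "robots" and "noai" in (a.get("content") or ""):
            self.allowed = False

page = '<html><head><meta name="robots" content="noai"></head><body>art</body></html>'
checker = NoAICheck()
checker.feed(page)
print("ok to ingest" if checker.allowed else "skip: creator opted out of AI training")
```

Like “nofollow”, of course, it only protects against crawlers that choose to respect it.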
• You can buy Social Warming in paperback, hardback or ebook via One World Publications, or order it through your friendly local bookstore. Or listen to me read it on Audible.
You could also sign up for The Overspill, a daily list of links with short extracts and brief commentary on things I find interesting in tech, science, medicine, politics and any other topic that takes my fancy.