Suddenly, everybody loves Community Notes. Fact-checkers always did.
The truth about the changes everyone's so exercised about
Even Joe Biden seems to have heard about it. “Social media is giving up on fact-checking,” he mumbled at us through his dentures in his farewell address. And indeed, everyone has been very exercised about Meta CEO Mark Zuckerberg’s announcement last week that Meta’s properties—Facebook, Threads, Instagram—will over the next few months stop paying full-time fact-checkers, and instead move to a Community Notes-style feature like the one used on what I suppose we’re now required to call X, formerly Twitter.
I’ve said previously that I like Community Notes: I think the principle is an excellent one, harnessing the knowledge and focus of the multitude to correct errors. I remarked back in May 2023:
Musk is very keen on it. (That doesn’t mean it’s a bad idea.)
I like the feature: I only wish it appeared on more tweets, and more visibly. (Maybe a big red border on top of the tweet? Some sort of icon on the top right?) You can see which tweets it’s been put on and rated “Helpful”. Seeing them being added is like watching Wikipedia on fast-forward.
But plenty of people on social media seem absolutely incensed by Zuckerberg’s decision. Boycotts of Meta properties were called for (this would be the 1,594th boycott of Meta/Facebook since its inception, and as we can see every single one of them has been 100% effective in burning it to the ground). He was figuratively burnt in effigy. People said mean things about his haircut and clothes. Yes, the people on social media really showed Zuckerberg what happens when you vex them.
Part of this is that people have forgotten quite why fact-checking was introduced, and what it did and didn’t do, and whether even the fact-checkers thought it was effective.
For this, we have to roll back to 2016, when there were elections in the Philippines, the Brexit vote in the UK, and a US presidential election fought between Hillary Clinton and Donald Trump. Each successive event saw worse misinformation spreading all over Facebook, including fake news spread by foreign sites looking for a quick buck and by Russian-backed sources looking to get Trump, their pick, installed.
Everyone else found the way that Facebook had allowed itself to be used astonishing and infuriating, but Zuckerberg professed to be astonished at their astonishment. As I noted in Social Warming:
‘The idea that fake news on Facebook, which is a very small amount of the content, influenced the election in any way, I think is a pretty crazy idea,’ said Zuckerberg, then thirty-two, at the Techonomy conference in San Francisco. ‘There is a profound lack of empathy in asserting that the only reason someone could have voted the way they did is because they saw fake news.’
That was just four days after the election result. But a week after that, Facebook decided that maybe fact-checking was a good thing after all, and in December the Facebook executive Adam Mosseri, in charge of the News Feed, announced that
in future users would be able to report content as ‘fake news’, an addition to the previous options of ‘annoying/not interesting’, ‘shouldn’t be on Facebook’ and ‘spam’.
Mosseri explained that ‘We’ll use the reports [of fake news] from our community, along with other signals, to send stories to these [fact-checking] organisations.’ Those would then decide if the stories were untrue; if so, Facebook would flag them as ‘disputed’, and downrank them in the News Feed algorithm.
The first fact-checks began appearing in March 2017. But they were hardly a panacea. As Brooke Binkowski, then working at the fact-checking organisation Snopes, explained to me, fact-checking Facebook was a life sentence in the content mines with no chance of parole:
‘They would provide us with this list, for each of us personally, that was hooked into our personal Facebook accounts. They showed you a URL to a story, and we would mark whether we had found it true or false, and put a link to our own debunking of it. Then move on to the next one.’ There was no quota, though there were usually at least 250 stories to be refereed, divided into ten pages of twenty-five. The list of stories was unending, since you’d never run out of people being wrong on the internet.
‘The cup was continually full,’ Binkowski recalls.
This is the thing that everyone overlooks about fact-checking on Facebook: it was hellish. It was a source of revenue. But also a source of near-despair:
Facebook wasn’t forthcoming with [Binkowski] about the criteria by which stories were chosen, and seemed powerless to prevent variations on the same story being presented again and again.
The problem, Binkowski came to realise, was that this was an asymmetric war. Within the parameters Facebook had set, there was no way to stem the flood of misinformation, disinformation and untruths. The penalties for posting fake news were minimal, because the content wasn’t deleted from Facebook, only ‘downranked’. By contrast, the potential rewards from spreading it were substantial, and generating new versions of the same story (which looked to Facebook’s algorithms like entirely new content) by writing a new headline and tweaking a few words and a picture was cheap.
Facebook brought on four fact-checking organisations as partners. They stood as much chance as Canute. Looking back on the experience later in an article for BuzzFeed News, Binkowski described it as ‘the world’s most doomed game of whack-a-mole, or like battling the Hydra of Greek myth. Every time we cut off a virtual head, two more would grow in its place.’
But but but! Fact-checking was a really good source of income for the fact-checkers. Before Facebook came along, Snopes had been reliant on its reputation (which emerged, weirdly, from alt.folklore.urban, the Usenet newsgroup devoted to debunking urban myths back when the internet was young) to get visitors who wanted to determine the truth or falsity of some claim or other. The revenue came from adverts. But with Facebook demanding an outside source which could do that for millions of posts per day, the opportunity for Snopes and similar organisations burgeoned. In 2018, one-third of Snopes’s $1.22m revenue came from Facebook’s fact-checking program.
But of course Facebook won both ways:
Facebook benefited from the engagement that the fake-news sites produced, as bamboozled users clicked on stories with headlines such as ‘Babysitter Transported To Hospital After Inserting A Baby Into Her Vagina’ and ‘Morgue Employee Cremated By Mistake While Taking A Nap’, while also burnishing its reputation by pointing to the fact-checkers’ efforts, and getting to show adverts to users.
Not only that:
In fact, the fact-checkers’ efforts probably had the opposite effect from what was desired. In September 2017, a team at Yale, MIT and the University of Regina had found that while putting a ‘disputed’ tag on stories made people only marginally less likely to believe them, the lack of a tag on other stories would make them assume these stories had been checked and were correct—what the researchers called ‘implied truth’. Given the average three-day lag in picking up on fakes, that could have boosted rather than quelled the fake-news boom.
Three days! Hell of a long time for a fake news post to be up, unmarked.
Yet even the fact-checkers could see that this was far from the ideal method of searching out and identifying junk. Binkowski, who had been on the internet for years before Facebook (b. 2004) even existed, knew there was a better way:
She suggested, in at least one of the conference calls with Facebook, that the site could do just as well or better by recruiting more of its own users to moderate content: some people like the status of being a moderator, and by crowdsourcing to cross-check their accuracy, the best moderators would emerge naturally. ‘I remember when I was on IRC [internet relay chat, one of the earliest interactive internet forums] we had moderators, and we did it for fun, and the community moderated itself. I kept telling Facebook that. I was all, like, “Hey guys, listen to the old person, listen to the forty-year-old, the Gen Xers who have been on there this whole time, about moderators.”’
That sounds kinda like Community Notes, doesn’t it? People are chosen from the wider community, their opinions are cross-checked to see if they agree, and if they do then the Note appears on the post.
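For the curious, here’s a toy sketch in Python of that cross-checking idea. To be clear, this is not Meta’s or X’s actual code: X’s real, open-sourced scorer is a matrix-factorisation model over the full rating history, and the raters, clusters and threshold below are all invented for illustration. The core rule it captures is the “bridging” one: a Note is only shown once raters from opposing viewpoint clusters both call it helpful.

```python
# Toy sketch of the "bridging" idea behind Community Notes-style scoring.
# Simplified illustration only: X's real open-source scorer uses matrix
# factorisation over the full rating history; the raters, clusters and
# threshold here are invented for this example.
from collections import defaultdict

# rating: 1 = "helpful", 0 = "not helpful"
ratings = {
    # (rater, note_id): rating
    ("alice", "n1"): 1, ("bob", "n1"): 1, ("carol", "n1"): 1,
    ("alice", "n2"): 1, ("bob", "n2"): 0, ("carol", "n2"): 1,
}

# Assume raters were already assigned to viewpoint clusters (in the real
# system this falls out of their past rating behaviour). Hard-coded here.
cluster = {"alice": "A", "bob": "B", "carol": "A"}

def note_status(note_id, min_helpful_share=0.8):
    """Show a note only if every viewpoint cluster rates it mostly helpful."""
    by_cluster = defaultdict(list)
    for (rater, nid), rating in ratings.items():
        if nid == note_id:
            by_cluster[cluster[rater]].append(rating)
    if len(by_cluster) < 2:  # agreement must come from *across* clusters
        return "needs more ratings"
    agreed = all(sum(rs) / len(rs) >= min_helpful_share
                 for rs in by_cluster.values())
    return "show note" if agreed else "keep hidden"

print(note_status("n1"))  # both clusters rate it helpful -> "show note"
print(note_status("n2"))  # cluster B dissents -> "keep hidden"
```

The hard part, of course, is everything this sketch waves away: how raters get clustered, how many ratings are enough, and where the threshold sits. Which is exactly the list of unknowns we’ll get to below.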
Let’s just sum this up: for the fact-checkers, the work was nightmarish and never-ending. They could never catch up, never get ahead of the game. Facebook rigged the game through the algorithms it used to pick posts for examination; the “demotion” (note: not deletion, no matter how wrong!) of posts benefited Facebook rather than the fact-checkers; the incentives were all perverse. And it was clear that Facebook’s own users would be more motivated to do such checking: the idea of the community policing itself is popular with users and checkers alike.
So now the idea is to move to a “Community Notes” model. Incidentally, Google has told the EU that it is going to do much the same for some videos, and that it’s not going to sign up for the EU’s Disinformation Code of Practice. Everyone seems to have decided almost simultaneously that checking facts is less fun than it used to be, especially now that the New Hotness is AI chatbots (Meta has chatbots which are being used to generate content on its networks—for engagement, naturally; Google has one; Apple has one for the iPhone) where “facts” are the most liquid of ideas.
All this would be fine, except for the key problem with the whole Community Notes model: while it’s very good for accuracy, it’s too slow to display a Note, and most Notes are never seen. The LSE blogpost from 14 January linked above is very good at pulling together the most important elements, and it’s worth reading in full. Overall, we can’t entirely trust Community Notes because we don’t know how many people have to add their weight to a draft Note before it gets published, we don’t know how those people are selected, we don’t know how many Note writers and approvers there are, we don’t know what their biases are… it’s a very long list of unknowns. And that’s just on X, which is perhaps one-fifth the size of Facebook or Instagram. How many people do you need writing Notes on Facebook or Instagram or Threads? How many to approve them? It’s all up in the air.
So yes—Zuckerberg’s networks had created an intolerable situation for the fact-checkers. But there’s no promise that the new situation, even with the best will in the world (and, at first, the most eager users), will actually create a better system where fake news, mistakes or outright lies get labelled and demoted (or even better, deleted—why keep stuff you know is wrong?). In fact it’s very easy to imagine that Facebook will quickly degenerate into a morass of AI-generated slop vigorously mixed with lies and fake news.
Sure, Community Notes are great. And to quote CP Scott, the former editor of the Guardian, “comment is free, but facts are sacred.”
The trouble is, everybody’s lost that religion.
• You can buy Social Warming in paperback, hardback or ebook via One World Publications, or order it through your friendly local bookstore. Or listen to me read it on Audible.
You could also sign up for The Overspill, a daily list of links with short extracts and brief commentary on things I find interesting in tech, science, medicine, politics and any other topic that takes my fancy.
• Back next week! Or leave a comment here, or in the Substack chat, or Substack Notes, or write it in a letter and put it in a bottle so that The Police write a song about it after it falls through a wormhole and goes back in time.
I don't think it's that "people have forgotten". Rather, there are huge centralized control points which people are fighting over. For example, simply for the sake of discussion, just saying, use-mention, is it a "fact" that in humans sex is binary - and immutable? Or who won the 2020 Presidential election? Do Israel's actions in Gaza qualify as "genocide"? It's not quite "control the fact-checkers, control the world", but there's certainly a flavor of that.
By the way, it seems really insular to me when the chattering class makes such a fuss over the supposed "nightmarish" horrors of a fact-checking job. Ever worked on an assembly line? In a warehouse? As a cashier? "Here's a pile of stuff, deal with it. When you get done, here's another pile of stuff." Repeat. Endlessly. That's the essence of many jobs. "You load sixteen tons, what do you get? Another day older and deeper in debt." Yes, people burn out from it - but that's a characteristic of many types of labor, not something unique and special here.
Digression: I have a Comment is Free mug from when I worked on the site, and the words comment and facts have rubbed off.