How do you tell the community it's wrong?
Plus why we need the WGA strikers to succeed against AI
I’ve been wrong about various things over the years. The biggest one, I think, was Wikipedia. Early on, I thought it was never going to become useful, because the fact that anyone could write edits and updates meant that any entry would be vulnerable to the Idiots of the Internet, who are plentiful and seem to have a surplus of time on their hands. Back in 2005, when the Encyclopaedia Britannica (in physical form) was still a thing—a highly profitable thing—the science journal Nature published a study comparing the two for accuracy in science entries. They came out pretty much even. In 2012, Britannica gave up publishing its physical edition; Wikipedia had grown to 3.9m articles, against about 120,000 for Britannica.
Certainly the question of which one was seen as reliable was driven home to me by Larry Page, who came to visit the Guardian offices in 2008. I was one of the people in the room when he was asked what media he read online: which were the sources he relied on? “I’m finding out what’s going on in the Ossetia conflict by reading Wikipedia,” he replied. “It’s updated much faster than news sites.” You might guess that many jaws dropped a bit at that. But of course Wikipedia’s contributors can draw on whoever reports something first, and change things that are wrong. News sites tend to pass up that opportunity.
And now Wikipedia is preeminent: even if it can be toxic in some ways, even if there are edit wars, for the most part it works, and what is there is a body of work that can’t—and won’t—be rolled back; it will only accrete more and more distilled knowledge over time. Even if by some catastrophe all the servers were wiped, there would be other copies around the world, held by sites that put ads around the content and re-use it legally.
What this shows is that online communities can self-correct; Wikipedia has held out against many scandals and attempts to manipulate it.
So what should we see with social networks? How could they correct themselves?
It’s well-known, and demonstrated, that misinformation and disinformation travel far further and faster than correct information on social networks. That’s because untrue stories are easier to manipulate so that they appeal to the reader, who will have an incentive to pass them on. There are always plenty of examples, but James O’Malley did a nice job the other day looking at a couple: the SpaceX rocket and (in lieu of sticking his fingers in a plug socket) the online opinions of JK Rowling.
On SpaceX, most people—including me, as someone paying no attention—thought that the rocket blowing up not long after its takeoff, along with the comment about “rapid unscheduled disassembly”, meant it was a failure. Not so; apparently just getting away from the liftoff pad was as much as SpaceX wanted in the first place, and most of the rest after that was a bonus. But that didn’t stop a zillion people snarking about it on Twitter (I may have been one, though I can’t find anything relevant). James was pretty much the only person I saw rowing against the tide and trying to say that in reality, this wasn’t bad at all.
(On JK Rowling—ah, you’ll have to read what he’s written, but suffice to say that a point he makes is that her views are pretty much plumb in the middle of British public opinion on the topic, and not aligned with far-right American politicians. This doesn’t stop a lot of people saying she is.)
The question with the SpaceX incident is, how do you stop all the wrong takes from flooding the place? What’s the best way of preventing people getting the wrong information, and getting them the right stuff?
Twitter began a scheme in January 2021 called Birdwatch, which had a simple aim:
Birdwatch allows people to identify information in Tweets they believe is misleading and write notes that provide informative context. We believe this approach has the potential to respond quickly when misleading information spreads, adding context that people trust and find valuable. Eventually we aim to make notes visible directly on Tweets for the global Twitter audience, when there is consensus from a broad and diverse set of contributors.
In other words, a sort of Wikipedia but for the fast-moving content of Twitter. Quite a challenge, with millions of tweets zipping by every day; how was the community of contributors meant to know which ones to focus on?
Even so, Twitter persisted, and in November 2022—post-Musk—the feature was renamed Community Notes (a much better name), and in December 2022 began rolling out globally. Musk is very keen on it. (That doesn’t mean it’s a bad idea.)
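(For the curious: Twitter has open-sourced the ranking side of Community Notes, and its core idea, as I understand it, is “bridging”: a note only earns the “Helpful” label if raters who usually disagree with one another both rate it well. Below is a toy sketch of that idea in Python, using a small matrix factorisation in which a note’s intercept, rather than its raw vote count, decides its fate. This is not Twitter’s actual code; the ratings, the single latent factor, the learning rate and the rest are all invented for illustration.)

```python
# Toy sketch of "bridging-based" note ranking, loosely inspired by public
# descriptions of Birdwatch/Community Notes. NOT Twitter's actual code;
# all data and parameters here are invented.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ratings: rows = raters, columns = notes.
# 1 = rated helpful, 0 = rated not helpful, NaN = rater never saw the note.
ratings = np.array([
    [1.0,    0.0,    1.0,    np.nan],
    [1.0,    0.0,    np.nan, 0.0],
    [0.0,    1.0,    1.0,    np.nan],
    [np.nan, 1.0,    1.0,    0.0],
])

n_raters, n_notes = ratings.shape
k = 1  # one latent "viewpoint" dimension

# Model: rating ~ mu + rater_intercept + note_intercept + rater_factor.note_factor
mu = 0.0
rater_b = np.zeros(n_raters)
note_b = np.zeros(n_notes)
rater_f = rng.normal(scale=0.1, size=(n_raters, k))
note_f = rng.normal(scale=0.1, size=(n_notes, k))

lr, reg = 0.05, 0.03
observed = [(u, n) for u in range(n_raters) for n in range(n_notes)
            if not np.isnan(ratings[u, n])]

for _ in range(2000):
    for u, n in observed:
        pred = mu + rater_b[u] + note_b[n] + rater_f[u] @ note_f[n]
        err = ratings[u, n] - pred
        # Stochastic gradient step with L2 shrinkage, so the viewpoint
        # factor, not the intercept, soaks up purely partisan voting.
        mu += lr * err
        rater_b[u] += lr * (err - reg * rater_b[u])
        note_b[n] += lr * (err - reg * note_b[n])
        rater_f[u], note_f[n] = (
            rater_f[u] + lr * (err * note_f[n] - reg * rater_f[u]),
            note_f[n] + lr * (err * rater_f[u] - reg * note_f[n]),
        )

# Rank notes by intercept. The top note should be the one liked by raters
# on both "sides" of the viewpoint factor (note 2 in this toy data), not a
# note boosted by a single faction. A real system would apply a threshold.
for n in np.argsort(-note_b):
    print(f"note {n}: intercept {note_b[n]:+.2f}")
```

The appeal of this over simple vote-counting is that a note pushed hard by one like-minded group has its score explained away by the viewpoint factor, so only notes with cross-viewpoint support float to the top.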
I like the feature: I only wish it appeared on more tweets, and more visibly. (Maybe a big red border on top of the tweet? Some sort of icon on the top right?) You can see which tweets it’s been put on and rated “Helpful”. Seeing them being added is like watching Wikipedia on fast-forward. (Personally, I tried to join the community of noters—which still has “birdwatch” in the URL—on Thursday. Verify your phone number by letting us send you a text message and then entering the code from it, Twitter demanded. I received the message, put in the number, pressed the Verify button and… was bounced back to the “verify your phone number” screen. Tried it again, three times, and then got a message saying “you’ve exceeded the number of SMS messages you can receive”. Ah, New Twitter. Seems I’m not going to be in your Noting Community.)
I think this is preferable to Facebook’s approach, which is to set fact-checkers the Sisyphean task of checking some of the more viral content that flies around the place and, if they don’t think it’s right, to lower the volume knob on its virality. Not, notice, to delete, even if shown to be wrong. As I noted in my book, from talking to Brooke Binkowski who worked as one of those fact-checkers, it’s a totally thankless task—and the determination that something is false usually comes long after its wave of virality has broken. So the damage has been done, and the fact-check (frequently ignored) is just so much wasted effort. On the whole, Community Notes are a far better solution. The exhausted fact-checkers on Facebook frequently suggested something like this to replace them, because it’s easier to find motivated people on the network itself who will police it than to get paid ones: the Wikipedia v Britannica lesson in a nutshell. (Twitter called Community Notes “like Wikipedia” back in 2020 when news of the project first leaked.)
The only problem with Community Notes is, as I said, Twitter’s velocity. There’s no way to keep up with all the wrong out there. But that’s the reality we have to deal with. You might look skywards and wonder whether AI could take over the task of determining if something’s true. But look again at the list of newly Noted tweets: there’s no way in the current environment that you’d want an AI to try that job, nor would you trust it. In fact, if every time you tweeted there was the chance that it would get jumped on by an all-reading AI and tutted over, you’d probably avoid doing anything.
Looks like we’re stuck with humans for the moment. Which is really a reason to celebrate. We’ve got this far. Let’s keep the machines out of it for a bit longer. After all, it worked for Wikipedia.
Glimpses of the AI tsunami
(Of the what? Read here.)
• Amnesty International criticised for using AI-generated images. Fake photos of Colombia’s 2021 protests were used; Amnesty said it was to avoid using identifiable photos of protesters. Except there’s this technique called “pixellation”, or even “blurring”, which might not be whizzy and new, but works and doesn’t give people nine fingers.
• “Unions are our only defence against the robots”: Ryan Broderick makes the excellent point that the WGA (Writers’ Guild of America) strike against Hollywood may be the last chance to prevent the spreadsheet jockeys from giving us endless AI-generated junk. If you’re trying to imagine it, just think of the recent run of Star Wars and Marvel Cinematic Universe movies, only even worse.
• Google is going to keep its AI research to itself until the results are in commercial products. Concern about ChatGPT has it freaked out.
• “We don’t have a moat, and neither does OpenAI”: an anonymous Google researcher posts internally about the trouble with trying to build a business model that relies on having “the better” AI model.
• Apple’s AI efforts don’t really include public-facing LLMs, and Siri is a bit clunky. OK, tell us something we didn’t know. But you wouldn’t really expect Apple, of all companies, to be rushing to include a public-facing LLM in its offerings.
• You can buy Social Warming in paperback, hardback or ebook via One World Publications, or order it through your friendly local bookstore. Or listen to me read it on Audible.
You could also sign up for The Overspill, a daily list of links with short extracts and brief commentary on things I find interesting in tech, science, medicine, politics and any other topic that takes my fancy.
• I’m the proposed Class Representative for a lawsuit against Google in the UK on behalf of publishers. If you sold open display ads in the UK after 2014, you might be a member of the class. Read more at Googleadclaim.co.uk. (Or see the press release.)
• Back next week! Or leave a comment here, or in the Substack chat, or Substack Notes, or write it in a letter and put it in a bottle so that The Police write a song about it after it falls through a wormhole and goes back in time.
Perhaps no one wants to hear about Mastodon, but I think one could make the argument that it has something like Wikipedia style moderation already.
Not in the sense of a full hierarchy, but a composite of instances, cultures, norms.
I personally use a standalone instance, and my view is of content shaped that way. Urbanists here, auto enthusiasts over there.
I remain, alas, an unreconstructed Wikipedia critic, for reasons I sadly know will get me no karma/clout/etc. I realize many intellectuals love it nowadays, and I understand why. But it has horrible costs, and is no overall solution.
When you say: "What this shows is that online communities can self-correct" - no, sorry, I think that's a deeply mistaken view of what happens. I'd say more that it shows it's possible to get massive amounts of free work out of people under certain conditions. But this isn't unknown in human history, see e.g. the Catholic Church.
Basically, any social media which has - well, "free speech" is really not quite the right phrase - more like "country-wide political debate", is going to have problems. Wikipedia "solves" this issue by very clearly severely restricting its truth-model. Now, that truth-model greatly aligns with yours, so this may not be so visible. But to anyone who finds themselves on the other side of it - wrongly or rightly - it's very evident.
Case in point: What "Community Notes" will get put on JK Rowling's tweets?