Years and years and years ago, in the dim and distant past (by which I mean 2010), a guy called Eli Pariser had an idea which he turned into a book (so quaint!) called The Filter Bubble, with the helpful subtitle “What the Internet Is Hiding from You”. It came out in 2011, and caught a feeling that a lot of people already had: that the rise of algorithms on social networks, and the personalisation of all sorts of recommendations, covering what to listen to or read or buy, was funnelling everyone down individual tracks, so that they never saw or heard views that might challenge their picture of the world.
As Pariser said in an interview with Salon in October 2010,
Since Dec. 4, 2009, Google has been personalized for everyone. So when I had two friends this spring Google "BP," one of them got a set of links that was about investment opportunities in BP. The other one got information about the oil spill. Presumably that was based on the kinds of searches that they had done in the past. If you have Google doing that, and you have Yahoo doing that, and you have Facebook doing that, and you have all of the top sites on the Web customizing themselves to you, then your information environment starts to look very different from anyone else's.
Why would that be a problem? Simple, he explained:
We thought that the Internet was going to connect us all together. As a young geek in rural Maine, I got excited about the Internet because it seemed that I could be connected to the world. What it's looking like increasingly is that the Web is connecting us back to ourselves. There's a looping going on where if you have an interest, you're going to learn a lot about that interest. But you're not going to learn about the very next thing over. And you certainly won't learn about the opposite view. If you have a political position, you're not going to learn about the other one.
If you Google some sites about the link between vaccines and autism, you can very quickly find that Google is repeating back to you your view about whether that link exists and not what scientists know, which is that there isn't a link between vaccines and autism. It's a feedback loop that's invisible. You can't witness it happening because it's baked into the fabric of the information environment.
People could see that he had a really good point. Pariser did allow that there were legitimate reasons why these filters existed: the sheer volume of information was overwhelming (for us poor 2009-10 people, anyway; there’s a whole generation that has grown up completely used to the turned-up-to-11 white noise of the modern internet). You needed filters to pick out the useful stuff.
But equally, those filters were dumb, and took shortcuts. Google’s personalisation of search might seem useful, but it was very questionable. Why should one person’s “BP” results be about the Deepwater Horizon spill, while another’s are about investment opportunities? The question of whether that gaping box that accepts your text is the maw of a universal oracle which tells the truth, or an eager personal assistant that wants to help you out, is in effect answered by this test. Google doesn’t want to, and can’t, direct you to “the truth”. It wants you to be satisfied with the results it serves up.
In retrospect one can now see what it was doing as the first steps up the hill towards its present implementation of AI answers. Which naturally raises the question of whether those too are personalised: do different people get different answers for the same questions? Based on the “eager to please” metric, one would have to think they are.
So Google and all the rest were creating filter bubbles (even if Google did quietly dispute the extent of its tailoring of results, saying the search query was more important than the user. Uh huh: so the user matters a bit, then?).
The question was, what could you do about it? Fortunately, the rise of social media turned out to be quite a good antidote. In their search for “engagement”, a lot of the networks discovered that content which creates outrage, whether mild or excessive, is a great way to get people to spend time there. Hence Twitter, even before Musk, had an algorithm which would pick out tweets that were getting large amounts of engagement and show them to users who hadn’t interacted with them, in the expectation that “this seems to have got other people worked up; might it do the same for you?” (That process has only intensified since Musk’s takeover.) This was a way that people could be exposed to different ideas, even if they found them objectionable.
But some people didn’t want to see those different ideas. Quite where the boundary between discussion, argument and harassment lies is always subjective, and for some people there was too much of the latter. Their response was the creation of “blocklists”: shared lists of accounts, compiled and applied through the Twitter API, that subscribers would preemptively block.
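For a sense of the mechanics, here is a purely historical sketch of how a subscriber’s client could have walked such a shared list and applied it through Twitter’s old v1.1 API. The credentials, IDs and the list itself are placeholders, and since the API’s effective shutdown none of this works any more.

```python
# Purely historical sketch, not a working tool: roughly how a blocklist client
# could apply a shared list via Twitter's old v1.1 REST API. The endpoint
# (POST blocks/create) and OAuth 1.0a user auth were real, but this no longer
# works today; all credentials and IDs below are placeholders.
import requests
from requests_oauthlib import OAuth1

auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET",      # placeholder app keys
              "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")  # placeholder user tokens

def apply_blocklist(user_ids):
    """Block every numeric account ID on a shared list, one API call each."""
    for uid in user_ids:
        r = requests.post(
            "https://api.twitter.com/1.1/blocks/create.json",
            params={"user_id": uid, "skip_status": "true"},
            auth=auth,
        )
        r.raise_for_status()

# A shared blocklist was typically just a published file of user IDs.
apply_blocklist(["12345678", "87654321"])  # placeholder IDs
```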
There’s an interesting 2018 study of Twitter blocklists by a team at the Georgia Institute of Technology, which found that those who subscribed to blocklists felt the networks should have taken action sooner. Probably, though, what they perceived as harassment never rose to the level the moderators could or would deal with. The research came not long after Gamergate, the 2014 upheaval that saw endless spats online and plenty of genuinely serious behaviour, including bomb and death threats, offline.
Here’s what one blocklist user said:
“I was getting constant mentions from GamerGate accounts, and I was wasting tons of energy replying. When someone asked ‘what have you been up to?’, all I could think of was: arguments with anonymous neofascists. I saw a few people discussing preemptive blocking and looked into it.”
It was a different age, and one would have to think now: have you tried not having arguments with anonymous neofascists, and just using the mute button so you can ignore them? It was in response to this problem that, in 2022, Twitter introduced the “Leave Conversation” and “Mute Conversation” options.
On the other side of the fence, those who were put on blocklists often felt their presence was unfair, often based simply on who they followed on the network rather than anything they might have said. Following someone doesn’t have to be a statement of adoration, after all. Sometimes you follow people you disagree with because you want to see what their opinions are—either to explain to them why they’re wrong, or to ponder whether you’re wrong. Then again, who’s going to go into a university study and say “sure, I totally deserved to be on that blocklist! I was as obnoxious as I could manage!”
However, if we think that filter bubbles are a problem, then the logical corollary is that preemptive blocklists are too. You’ve never interacted with someone, but you think already that they’re not worth listening to ever? This is a big part of my objection to the current incarnation of Bluesky, where blocklists based on “who do you follow” abound. Even more than that, blocked posts are invisible, people don’t show up in searches, and you may not know you’re on such a list. The filter bubble is in place: chunks of the network vanish. How many chunks, how big? We can’t know.
But we can get a glimpse. Clearsky uses the Bluesky API to find out how many people are blocking how many people. The numbers are astonishing: just under 52% of users are blocked by at least one other user, while 36% are blocking someone. (Which implies that each user who blocks is blocking, on average, at least 1.44 others: 52 ÷ 36 ≈ 1.44.) The two most-blocked people are Jesse Singal, a journalist, and Brianna Wu, who was one of the targets of Gamergate. One can’t know how far their two sets of blockers overlap, so somewhere between 80,000 and 140,000 accounts (out of 30.8m active ones) are blocking them: at worst about 0.45% of the network.
Then you look at who’s busiest with the blocking, and there’s an account called Palomar, which is blocking 1.5 million accounts, or nearly 5% of everyone there. I’d tell you more about them, but I’m blocked.
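How is that kind of counting possible at all? Because block records on Bluesky are public AT Protocol data, so anyone can list the accounts a given user blocks. Below is a minimal sketch, not Clearsky’s actual pipeline: it assumes the account is hosted on the main bsky.social PDS, and the handle at the bottom is hypothetical.

```python
# A minimal sketch, not Clearsky's actual pipeline: block records on Bluesky
# are public AT Protocol data, so anyone can list the accounts one user blocks.
# Assumes the account is hosted on the main bsky.social PDS; the handle at the
# bottom is hypothetical.
import requests

PDS = "https://bsky.social/xrpc"

def resolve_did(handle: str) -> str:
    """Turn a handle like 'someone.bsky.social' into a DID."""
    r = requests.get(f"{PDS}/com.atproto.identity.resolveHandle",
                     params={"handle": handle})
    r.raise_for_status()
    return r.json()["did"]

def count_blocks(handle: str) -> int:
    """Count the public app.bsky.graph.block records in one account's repo."""
    did = resolve_did(handle)
    total, cursor = 0, None
    while True:
        params = {"repo": did, "collection": "app.bsky.graph.block", "limit": 100}
        if cursor:
            params["cursor"] = cursor
        r = requests.get(f"{PDS}/com.atproto.repo.listRecords", params=params)
        r.raise_for_status()
        data = r.json()
        total += len(data.get("records", []))
        cursor = data.get("cursor")
        if not cursor:
            return total

print(count_blocks("example.bsky.social"))  # hypothetical handle
```

Clearsky’s value is doing this across the whole network and keeping it current; the point here is only that the underlying data is readable by anyone, which is also how you can discover you’re on a list you never knew existed.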
All this is in stark contrast to Elon Musk’s changed implementation of “block” on X, where if someone has blocked you then you can see what they have said; you just can’t interact with it. The filter bubble is popped, at least in one direction. (OK, the metaphor doesn’t make complete sense.) The effective shutdown of the X API also means that creating a blocklist becomes impossible.
The case for blocklists in the face of harassment is obvious. But when our biggest problem is polarisation and refusal to listen to facts we don’t like, I don’t see how taking refuge behind a big blocky wall helps. If filter bubbles were a problem in 2009, they’re even more of a problem now. It’s hard to imagine anything will persuade Bluesky users to tear down their blocklist walls. But I feel they might, in the longer term, benefit.
• You can buy Social Warming in paperback, hardback or ebook via One World Publications, or order it through your friendly local bookstore. Or listen to me read it on Audible.
You could also sign up for The Overspill, a daily list of links with short extracts and brief commentary on things I find interesting in tech, science, medicine, politics and any other topic that takes my fancy.
• Back next week! Or leave a comment here, or in the Substack chat, or Substack Notes, or write it in a letter and put it in a bottle so that The Police write a song about it after it falls through a wormhole and goes back in time.