Substack versus the users: guess how this one goes
Plus the row over LLMs using copyrighted content gets hotter
At the end of November, an article by Jonathan Katz appeared at The Atlantic, with the foreboding title “Substack has a Nazi problem”. (It seems more portentous with the original Random Headline Capitals, but you’re at a British English publication now, so suck up the lowercase.) Katz began:
The newsletter-hosting site Substack advertises itself as the last, best hope for civility on the internet—and aspires to a bigger role in politics in 2024. But just beneath the surface, the platform has become a home and propagator of white supremacy and anti-Semitism. Substack has not only been hosting writers who post overtly Nazi rhetoric on the platform; it profits from many of them.
“Profits from many of them.” This is quite a big claim, and you need to read pretty closely to see whether Katz manages to stand it up.
An informal search of the Substack website and of extremist Telegram channels that circulate Substack posts turns up scores of white-supremacist, neo-Confederate, and explicitly Nazi newsletters on Substack—many of them apparently started in the past year. These are, to be sure, a tiny fraction of the newsletters on a site that had more than 17,000 paid writers as of March…
A casual reading makes you think: wow, there are scores of paid-for explicitly Nazi newsletters on Substack! But look more closely: that’s not what it actually says. Only that there are lots of paid writers, and there are some Nazi newsletters.
He continues:
At least 16 of the newsletters that I reviewed have overt Nazi symbols, including the swastika and the sonnenrad, in their logos or in prominent graphics.
Bad, certainly. People who celebrate Nazism are not good people; we can all agree on that. So, how many paid-for Nazi newsletters are there? Over to Katz:
Some Substack newsletters by Nazis and white nationalists have thousands or tens of thousands of subscribers, making the platform a new and valuable tool for creating mailing lists for the far right. And many accept paid subscriptions through Substack, seemingly flouting terms of service that ban attempts to “publish content or fund initiatives that incite violence based on protected classes.”
OK, so how much—
Several, including Spencer’s, sport official Substack “bestseller” badges, indicating that they have at a minimum hundreds of paying subscribers. A subscription to the newsletter that Spencer edits and writes for costs $9 a month or $90 a year, which suggests that he and his co-writers are grossing at least $9,000 a year and potentially many times that. Substack, which takes a 10 percent cut of subscription revenue, makes money when readers pay for Nazi newsletters.
So, Substack may collect at least $900 per year from Spencer’s newsletter. This, obviously, isn’t good. To those who would minimise its importance, Dave Karpf (in his Substack) made the comparison to a cereal factory which turns out to have a small problem with mice. The question of “how much is too much mouse poo in your cereal?” is put to a class of politics students, who bat the question around for a bit:
The reality settles in for the class (who, mind you, have just eaten breakfast!). The maximum amount of mouse crap that we actually think the government should permit in our breakfast cereal is greater than zero. Preferably not much greater than zero. But, when we really pause to think about it, there is a ceiling on the amount of resources we think ought to be spent on this problem.
This is an excellent framing. Ideally, you want zero Nazis on your newsletter product. But getting there might be really expensive.
Anyway, the upshot was that scores of Substack writers wrote an open letter to the Substack founders, threatening to leave Substack. And this is where I could see what was going to happen, because it’s happened countless times on the web: on the one side, you have a large platform; on the other, groups of users who are angry over something or other that the platform has done or hasn’t done.
See if you can figure out what the pattern of play is.
Instance: Facebook introduced the algorithmic, rather than strictly chronological, News Feed. More than a million people joined a Facebook group protesting the decision and demanding a reversion. As this was October 2009, and Facebook was only five years old, that was a significant proportion of the user base.
Outcome: nothing. Facebook ignored them, and suffered no ill effects.
Instance: YouTube was found to be putting advertising beside videos made by extremists. This prompted a number of big brands to withdraw their adverts, because:
Fifteen minutes of browsing YouTube by the Guardian was enough to find T-Mobile ads on videos about abortion, Minecraft banners on videos about snorting cocaine and pre-roll ads for Novartis heart medication running on clips titled “Feminism is cancer”.
(Wonderful search history you’re building up there, Guardian person.)
Outcome: YouTube tweaked its algorithm a bit. (Money is money, after all.)
Instance: in April 2023 Reddit decided to start charging for its API, which had been free for years. This would put the main third-party app out of business, and affect some moderation tools which relied on the API; the changes would go into effect on June 30. Administrators of a number of subreddits blacked them out, and then only allowed postings about the comedian John Oliver. The (tenuous?) idea was that so many subreddits going dark would cut advertising to the extent that Reddit would relent.
Outcome: the user revolt fizzled and Reddit’s CEO Steve Huffman was able to declare effective victory by the beginning of August.
You can find other examples on the Wikipedia page for “user revolts”, though it’s far from comprehensive. What you notice, though, is that the same story keeps repeating: users have minimal leverage against platforms. Certainly, a number of high-profile (to me, at least) people left Substack because they were dissatisfied with the presence of any mouse poo—Nazis. But most writers have remained. It’s quite possible that many didn’t even know about the open letter signed by 250 or so authors protesting against the existence of the objectionable content. And because Substack makes it possible to export your mailing list, including, it seems, the details of your paid subscribers (so that Today in Tabs, for example, has managed to move seamlessly to a different platform; I’m a subscriber and saw no hassles), it’s quite possible that the readers didn’t notice either.
Sure, Substack shifted from its original position of “we don’t see any Nazis here” to “we’re going to get rid of some of these objectionable publications” (though the number seems to have been tiny: as in, somewhere between six and 16). But it would be a stretch to say that this has been anything more than a storm in a journalistic teacup. As cereal boxes go, Substack is gigantic, and there really aren’t many mice. (Jesse Singal has a writeup; he’s not impressed by what he sees as a moral panic.) The contrast with Twitter, where unlike Substack you can buy algorithmic visibility, is pretty dramatic: you’re a lot more likely to have racist or antisemitic or conspiratorial content thrown at you than on Substack, which doesn’t have an algorithmic system for promotion. (Its Notes product, a sort of Twitter-lite, is very hit and miss.)
In its way, the ineffectiveness of user revolts reminds me of the effectiveness of user defaults. If you write a program which has various settings that users can change, 95% of users won’t alter a single thing¹. If you have something that people are briefly furious about but which doesn’t actually have a direct effect on their lives, they’ll be angry for a bit, and then move on. Thus it is with user revolts: a certain number of people fume and maybe do something, but most people don’t; perhaps 95%. If you run a platform and you’re prepared to tough it out for a bit (as at Reddit) then you can prevail.
Glimpses of the AI tsunami
(Of the what? Read here. And then the update.)
• OpenAI insists you can’t build LLMs solely on non-copyrighted content, and that its use of copyrighted material falls under the “fair use/fair dealing” provisions of copyright laws
• Gary Marcus is having none of it
• The NY Times “goes inside the uneasy negotiations with OpenAI” under way with the, er, NY Times. Might as well use an edge if you’ve got one.
• AI-created “virtual influencers” are taking business away from (human) ones. Predictable.
• You can buy Social Warming in paperback, hardback or ebook via One World Publications, or order it through your friendly local bookstore. Or listen to me read it on Audible.
You could also sign up for The Overspill, a daily list of links with short extracts and brief commentary on things I find interesting in tech, science, medicine, politics and any other topic that takes my fancy.
• Back next week! Or leave a comment here, or in the Substack chat, or Substack Notes, or write it in a letter and put it in a bottle so that The Police write a song about it after it falls through a wormhole and goes back in time.
¹ Yes, that links back to one of my own pieces from a decade ago, but that links to proper research, and humans haven’t changed in the intervening period.
Nice article.
I have a Substack, and was trying to understand the excitement. After all, my Substack work is not appearing next to Nazi propaganda. Nothing I do on my Substack, as far as I am aware, provides a boost to Nazis.
So how is the world a worse place because I run a Substack?
Do these people get this excited about software? Would they, for example, stop using MS Office because Microsoft does nothing to prevent sales to politically far-right groups?
Another factor to consider: Substack creators – on whom the entire model relies – are largely overthinkers and worriers. After all, that's why a lot of us feel compelled to write – to unpick our fears and connect with those who share them. So, while I totally agree with your overall projection, I do think this user base isn't precisely comparable to other platforms. I'm writing more on this myself at the moment.