What happened when the orange man went away
Plus the latest on OpenAI and ChatGPT
One pleasure while researching Social Warming was discovering how much scientific research had been done into social networks and the interactions on them. I spent a lot of time on ResearchGate, one of the key repositories of papers and links to papers: that’s where, for example, I found a link to Dr Molly Crockett’s 2017 Nature article “Moral outrage in the digital age” (here’s a PDF). This was one of the key papers that bolstered my belief that there were clear mechanisms behind what kept on happening as social networks grew bigger and more difficult to handle: people were getting angrier and angrier, because they were being constantly prodded by outrage-inducing content which found its way to them. Even now, I wish there were a way to label some social network posts as “outrage attempts”, because you can see they’re trying to rile you up.
But what happens when a big generator of outrage tweets, or someone with a gigantic following who specialises in them, is kicked off a network? In a new research paper, Karsten Müller and Carlo Schwarz (from the National University of Singapore and Bocconi University’s department of economics respectively) puzzle over one of the crucial questions of our time: did deleting Donald Trump’s Twitter account actually make any difference to people’s behaviour online?
This is, after all, a pretty key question to answer. If people behave exactly the same after you get rid of the most disruptive elements on a network, then why bother? You could save yourself a lot of trouble by allowing those disruptive elements to remain on the network, but shadowban, heavenban or hellban them.
But Twitter removed Trump from the network in one fell swoop—no temporary suspension—on January 8 2021, citing “the risk of further incitement of violence”, and so created the ideal conditions for a before-and-after study.
Müller and Schwarz reckoned it might make a difference.
To test this conjecture, we use a difference-in-differences methodology that compares the toxicity of tweets posted by people who followed Trump before his account was suspended with those of Twitter users who did not. This analysis is made possible by a unique dataset we collected during 2020 and 2021 that combines all tweets and the profile information of a representative sample of Twitter users with information on who followed @realDonaldTrump before the account was removed. This dataset allows us to estimate how Trump’s account deletion affected the behavior of users that followed him vis-a-vis other users.
You won’t be surprised to hear that it did indeed make a difference.
We find that the number of toxic tweets sent by Trump followers relative to other Twitter users declined by around 22-26% following the account deletion. Consistent with a causal effect of the policy, this drop in toxicity occurs precisely after the account deletion, and we find no differences in the trends of toxic tweets sent by Trump followers compared to other users before. Using the 2016 election as a placebo test, we find no similar change in toxicity of the tweets sent by Trump followers. This makes it unlikely that we are picking up electoral dynamics on Twitter.
They also found that the total number of tweets sent by Trump followers dropped, though the drop in toxic tweets was sharper, at about 25%, making the network generally less unpleasant. There was also a second-order effect among people who followed someone who followed Trump, which, as the researchers say, “suggests that content moderation may have wide-reaching effects on such [toxic] sentiments.”
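The difference-in-differences logic they describe is simple enough to sketch in a few lines of code. Everything below is illustrative: the numbers are made up (chosen so the result lands near the paper’s 22–26% range), and this is not the authors’ code or data — just the shape of the estimator they used.

```python
# Difference-in-differences on synthetic data: the change in toxic tweets
# among Trump followers, net of the change among other users over the
# same period. All counts here are invented for illustration.

def cell_means(panel):
    """Mean toxic-tweet count for each (group, period) cell."""
    sums, counts = {}, {}
    for group, period, toxic in panel:
        key = (group, period)
        sums[key] = sums.get(key, 0.0) + toxic
        counts[key] = counts.get(key, 0) + 1
    return {key: sums[key] / counts[key] for key in sums}

def did_estimate(panel):
    """(follower after - before) minus (other after - before)."""
    m = cell_means(panel)
    treated = m[("follower", "after")] - m[("follower", "before")]
    control = m[("other", "after")] - m[("other", "before")]
    return treated - control

# Made-up monthly toxic-tweet counts per user: (group, period, count)
panel = [
    ("follower", "before", 0.40), ("follower", "before", 0.20),
    ("follower", "after",  0.30), ("follower", "after",  0.15),
    ("other",    "before", 0.20), ("other",    "before", 0.14),
    ("other",    "after",  0.20), ("other",    "after",  0.14),
]

print(did_estimate(panel))  # ≈ -0.075, a ~25% drop from the 0.30 baseline
```

The “placebo test” in the quote above is the same calculation run on a period where nothing happened (the 2016 election): if the estimator returns roughly zero there, the real drop is harder to dismiss as electoral noise.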
All told, they say,
“this evidence suggests that the drop in toxicity following Trump’s account deletion is explained by individuals sympathetic to his views, rather than his critics.”
One piece of data really leaps out, even though it’s not directly relevant to toxicity or Trump. The dataset had 46,511 users who followed Trump, and 452,390 who didn’t. Excluding retweets, the average number of tweets is 3.92. That’s not per day, or week: that’s just under four tweets per month. According to the researchers,
Compared to other Twitter users, Trump followers send more tweets (5.8 compared to 3.3 per month) and produce a much higher share of toxic content (0.3 compared to 0.17 toxic tweets per month).
It’s slightly tempting to say “is that all?”. But don’t forget that this then gets multiplied by millions, and the algorithm looks for tweets that it thinks will engage (ie that will outrage) people and thrusts them into your timeline, at least if you use the algorithmic feed. Even so, the extent to which people who use Twitter really don’t tweet much—that activity is a dramatic ski-slope of a power law—is solidly borne out there. It’s the cause of so much of Elon Musk’s current anxiety: if people don’t tweet, they’re not really engaging. If they’re not engaging with the network, he can’t show them adverts, and it’s hardly as if people seem to be falling over themselves to get Twitter Blue subscriptions.
One point that Müller and Schwarz concede: if you suspend people all over the network, you’ll suppress the thing that makes it work, and potentially squash freedom of speech. Quite where you draw the dividing line is the tricky part. Elon Musk basically doesn’t seem to draw it at all (except for Alex Jones of InfoWars, because of his fabulism over Sandy Hook). But for someone trying to get out from under a $44bn price tag, that’s not surprising.
Of course, the fact that Musk unsuspended Trump, who then completely ignored him, is a delicious irony. For now, we’re safe from his amplified toxicity, though plenty of others have been reinstated and are trying to get the flywheel of outrage going again. Whether that will spin fast enough to generate any useful cash for Musk is a different question.
Glimpses of the AI tsunami
(Of the what? Read here.)
• You can get a version of Stable Diffusion (the AI art generator) for iOS: El Pintador, which is free but offers in-app purchases (no indication how much, or for what, which is always annoying. Do better, App Store).
• Get ChatGPT to write your phishing attacks! It’s much better at spelling and much more polite.
• Microsoft is looking to funnel $10bn into OpenAI, which would value it at $29bn, with a peculiar revenue/profit-sharing deal:
Microsoft’s infusion would be part of a complicated deal in which the company would get 75% of OpenAI’s profits until it recoups its investment, the people said. (It’s not clear whether money that OpenAI spends on Microsoft’s cloud-computing arm would count toward evening its account.)
After that threshold is reached, it would revert to a structure that reflects ownership of OpenAI, with Microsoft having a 49% stake, other investors taking another 49% and OpenAI’s nonprofit parent getting 2%. There’s also a profit cap that varies for each set of investors — unusual for venture deals, which investors hope might return 20 or 30 times their money.
Usually, venture investors like their 20x payouts to keep on doing 20x payouts, so it remains to be seen if this will stick.
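That waterfall is easier to see with numbers. A toy model of the reported terms, with big caveats: how OpenAI’s Azure spending counts toward the recoupment, and the per-investor profit caps, aren’t public, so this ignores both, and the figures are only the reported ones.

```python
# Toy model of the reported Microsoft/OpenAI profit split: Microsoft takes
# 75% of profits until its $10bn is recouped, then 49% thereafter. The
# recoupment mechanics and profit caps are not public; this ignores them.

RECOUP = 10e9          # reported Microsoft investment
PHASE1_SHARE = 0.75    # Microsoft's reported share until recouped
PHASE2_SHARE = 0.49    # Microsoft's reported ownership stake afterwards

def microsoft_take(total_profit):
    """Microsoft's cumulative cut of total_profit under these terms."""
    # At a 75% share, Microsoft needs 10/0.75 ≈ $13.3bn of total profit
    # to flow before it has recouped its investment.
    threshold = RECOUP / PHASE1_SHARE
    phase1 = min(total_profit, threshold)
    phase2 = max(total_profit - threshold, 0.0)
    return PHASE1_SHARE * phase1 + PHASE2_SHARE * phase2

# On $20bn of cumulative profit: $10bn recouped, plus 49% of the rest.
print(microsoft_take(20e9) / 1e9)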
• This Virtual Twitch Streamer is Controlled Entirely By AI. Great, I suppose? Now we just need to teach ChatGPT to encourage it every so often, and we can get on with more important things.
You got this far - is there a tick in the box?
heavenbanning is “the hypothetical practice of banishing a user from a platform by causing everyone that they speak with to be replaced by AI models that constantly agree and praise them, but only from their own perspective”
hellbanning, as you might guess, means a user is “invisible to all other users, but crucially, not himself. From their perspective, they are participating normally in the community but nobody ever responds to them. They can no longer disrupt the community because they are effectively a ghost. It's a clever way of enforcing the "don't feed the troll" rule in the community. When nothing they post ever gets a response, a hellbanned user is likely to get bored or frustrated and leave.”