Cut it up: how scissor statements divide us
And how social media breeds this odd class of assertion
Scott Alexander, the pseudonymous author of the Slate Star Codex blog, seems to be the first to have come up with the phrase “scissor statement”. He used it in a short story called “Sort By Controversial”, which he wrote on his blog while posing as an anonymous contributor (try to keep up). It’s a tale about designing a machine learning system with a rather particular aptitude:
If you train a [neural] network to predict Reddit upvotes, you can run it in reverse to generate titles it predicts will be highly upvoted. We tried this and it was pretty funny. I don’t remember the exact wording, but for /r/politics it was something like “Donald Trump is no longer the president. All transgender people are the president.” For r/technology it was about Elon Musk saving Net Neutrality. You can also generate titles that will get maximum downvotes, but this is boring: it will just say things that sound like spam about penis pills.
Reddit has a feature where you can sort posts by controversial. You can see the algorithm here, but tl;dr it multiplies magnitude of total votes (upvotes + downvotes) by balance (upvote:downvote ratio or vice versa, whichever is smaller) to highlight posts that provoke disagreement. Controversy sells, so we trained our network to predict this too.
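The scoring idea in that quote is simple enough to sketch in a few lines. Purely for illustration, here is a toy version in Python; the function name and exact arithmetic are mine, following the quote’s description of magnitude weighted by balance, not Reddit’s actual source code, which may differ in its details:

```python
def controversy_score(upvotes: int, downvotes: int) -> float:
    """Toy version of a 'sort by controversial' score, as described above.

    Magnitude (total votes) is weighted by balance (the smaller of the two
    vote ratios), so a post only scores highly if it is both heavily voted
    on and close to evenly split.
    """
    if upvotes <= 0 or downvotes <= 0:
        return 0.0  # a unanimous post isn't controversial at all
    magnitude = upvotes + downvotes
    balance = min(upvotes / downvotes, downvotes / upvotes)
    return magnitude * balance


# Same number of total votes, very different controversy:
print(controversy_score(800, 200))  # 250.0  -- popular, mildly contested
print(controversy_score(500, 500))  # 1000.0 -- a perfect even split
```

The point is that a post needs both a big pile of votes and a near-even split to score highly, which is exactly the quality the fictional engineers then train their network to maximise.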
The story goes on to show how they used the system for some internal work, and the controversy that created: show its recommendations on how to proceed to two different people and you’d get two opposed views.
Things get worse, and people quickly get fired:
At no time in our five hours of arguing did this occur to us. We were too focused on the issue at hand, the Scissor statement itself. We didn’t have the perspective to step back and think about how all this controversy came from a statement designed to be maximally controversial.
When I came across this story, I realised I’d found something that beautifully described what happens on social networks. (I asked for, and received, Alexander’s permission to use the phrase, with attribution.) Except in reality we don’t use machine learning to generate the statements; we use hundreds of millions of people, because they are the best ones to test them on, and the best at generating them. Scissor statements abound on social networks, and they’re a key element of social warming.
The key thing about a scissor statement is that it divides, and does so absolutely. You can’t be on the fence about it, so to speak. The absolute classic, which I quote when I talk about the topic, is “trans women are women”. Ignore for a moment whether you think the statement is true or untrue; instead admire its construction. There’s absolutely no room for “maybe” or “in some ways”. It’s absolute. You agree, or you disagree. You can’t equivocate. In that way, scissor statements are a key ingredient of social warming, creating points of friction between users that can’t be reconciled.
Social networks are the ideal breeding grounds for scissor statements. On the infinite monkeys principle, if you get enough people typing 280-character phrases into a blank text box, over time you’re going to start discovering which ones rile people up by dividing them immediately into opposing groups. The algorithms powering the sites will help here, because they’ll amplify the phrases that get the most “engagement” (i.e. annoyed readers), creating the evolutionary process necessary for the best ones to come through. People don’t even have to understand the phrase; all you need is a reaction. It’s been interesting, therefore, to see how over the past year or so the words “Critical Race Theory” have become a scissor statement to people who don’t have the faintest idea what CRT is. They just know that they don’t like it (usually) or that they’re fine with it (less commonly). Its use in the US by Republicans as a strange means of smearing educators has become commonplace, but the phrase was hardly in use before May 2021, as Google Trends shows.
There was even (as I note in Social Warming) a jokey attempt by someone to create scissor statements one day on Twitter, via the hashtag #startanargumentinfourwords. Plenty of offerings came back: ‘Hot dogs are sandwiches’, ‘Gun control doesn’t work’, ‘Firearms are for militias’, ‘Jaffa Cakes are biscuits’, ‘Jaffa Cakes are cakes’ (a UK court was once asked to rule on this difference, for taxation purposes. The court ruled that because over time the confection gets harder, not softer, it is a cake, despite it being sold in biscuit-sized packets on shelves alongside biscuits). ‘There’s only two genders’, ‘Best president ever – seriously!’, ‘Most memes aren’t funny’, ‘Women belong in kitchens’, ‘Evolution is a religion’, ‘Republicans are ALL racists’, ‘Hitler was a socialist’.
The key thing about them (apart from the Jaffa Cake ones) is not provable truth or falsity; it’s divisiveness. If you believe one side or the other, it’s going to be very hard to dislodge you from that side, because to change your mind you have to flip completely to the other. It’s not like shifting from the Conservatives to the Liberal Democrats because voting Labour feels a step too far. It’s all or nothing.
Once you start watching for scissor statements, you’ll see them all over the place - not just on social media. Tabloid newspaper headlines make plentiful use of them, because winding people up is part of their business model. In addition, once you start to recognise them, you’ll learn to avoid being sucked in by them—to avoid their social warming effects—and realise that what is needed is a reframing of the debate into a phrase or phrases that do allow for compromise, uncertainty and fewer absolutes. One of the best ways to beat social warming is to recognise how it happens. And scissor statements are one of its key carriers.
Glimpses of the AI tsunami
(Of the what? Read here.)
• GPT-3 as investigative journalist. Presently not very good, but no less bad than an army of Reddit users, all things considered.
• Per the Dithering podcast of September 6, people are starting to describe as “spells” the prompts that get AI illustration systems to produce particular outputs. I think the credit for that coinage should go to Alex Hern, from 16 August 2022. (He deletes his tweets, hence the Internet Archive screenshot.)
• This hilarious 11-tweet thread about “fixing famous paintings with AI”. The Picasso one made me absolutely howl. (And once you’ve read the thread, try this alt for the Mona Lisa and tell me that people wouldn’t say yes, that’s the girl.)
• You can buy Social Warming in paperback, hardback or ebook via One World Publications, or order it through your friendly local bookstore. Or listen to me read it on Audible.
You could also sign up for The Overspill, a daily list of links with short extracts and brief commentary on things I find interesting in tech, science, medicine, politics and any other topic that takes my fancy.