Spam was never the problem at Twitter
Plus will AI write films, perhaps with you in? Or destroy patents? Or what?
At this point it turns out that I’ve been writing about spam on the internet for nearly 30 years. In November 1994, when at New Scientist magazine, I wrote about a pair of American lawyers, Laurence Canter and his wife Martha Siegel, who figured out a way to spam 6,000-odd Usenet newsgroups[1] with a message offering their company’s services to foreign nationals who wanted to take part in the US “Green Card” lottery.
As you might expect, this abrupt influx of spam (a word that had been in internet use since 1990) was highly unwelcome, and prompted technological and other protests aimed at Canter and Siegel’s ISP, which dumped them. They found another ISP, which also dumped them after Canter said on a TV show that sure, he’d do the same thing again, perhaps soon. None of this put them off:
Canter and Siegel are unrepentant. In their book, How to make a Fortune on the Information Superhighway, to be published in January, the couple pour scorn on the concept of a “community” on the Internet. “Someone may tell you that in order to be a good Net ‘citizen’, you must follow the rules of the Cyberspace community. Don’t listen,” they advise. “The only laws and rules with which you should concern yourself are those passed by the country, state and city in which you truly live. The only ethics you should adopt as you pursue wealth on the [information superhighway] are those dictated by the religious faith you have chosen to follow and your own good conscience.”
And they made the point that a spam run was incredibly cheap compared to advertising on TV, radio or newspapers. (Plus of course if you’re advertising Green Card services, you’re probably going to want to navigate foreign advertising, which could be challenging.)
Usenet’s server administrators came together and gradually got a handle on Usenet spam, because after the lawyerly duo came the deluge of companies and individuals realising that ohhh, you can do this now.
After Usenet came email spam, which ramped up rapidly in the mid-1990s as more and more people got onto the internet. The first US Congressional bill attempting to regulate it was proposed in 1997/8. At some points, it seemed as though our inboxes would be overwhelmed—back in 2004 Bill Gates vowed he had a plan that would get rid of spam within two years. That didn’t happen, but in the same year Gmail launched, and Google’s far better algorithmic filtering meant that its inboxes looked pretty spam-free. The Microsoft-owned Hotmail upped its game, and so did Yahoo, and efforts such as Spamhaus also significantly reduced the volume of spam that reached mail servers. Email spam didn’t go away. It just came under control.
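The statistical filtering that Gmail and its rivals built on can be sketched in a few lines. This is my illustration of the classic naive Bayes approach, not Google’s actual (and far more elaborate) system; the toy corpora and word-counting are assumptions for the example:

```python
from collections import Counter
import math

# Toy training corpora; a real filter learns from millions of labelled messages.
spam_msgs = ["win money now", "free money offer", "claim your free prize"]
ham_msgs = ["meeting at noon", "lunch tomorrow", "project update attached"]

def train(msgs):
    counts = Counter(word for msg in msgs for word in msg.split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam_msgs)
ham_counts, ham_total = train(ham_msgs)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(msg, counts, total):
    # Laplace smoothing (+1) so a word unseen in training doesn't zero everything.
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in msg.split())

def is_spam(msg):
    # Assume equal priors; compare the two class likelihoods in log space.
    return log_likelihood(msg, spam_counts, spam_total) > \
           log_likelihood(msg, ham_counts, ham_total)
```

The point of the design is that it learns from whatever mail users actually receive and mark, which is why a provider the size of Gmail, seeing billions of messages, could filter so much better than any individual inbox.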
As the internet grew, I came to realise that spam in whatever forum was inevitable. In fact, I came to view it as a positive for what it said about the platform or system in which it appeared: like E. coli, the bacterium that appears pretty much anywhere that humans survive, the presence of spam is a sign of the underlying health of a platform. If there’s some (but not enough to continually annoy the average user), then the platform is doing just fine. If the platform is overrun with it, then the platform is dying. (Usenet reached this stage.)
And if there’s none at all, then either the platform is doing an incredible job (unlikely), or it’s just starting up. Hence back in March 2008 I could commission an article “Why are there no spam or trolls on Twitter?”—you have no idea how long we agonised over whether it shouldn’t be “Why are there no trolls or spam”, and I don’t think we got it right—and not be totally wrong, even though the article mentions how people can spam you.
But Twitter back then was tiny compared to now: it had about a million total users when that article was written, with about 200,000 weekly active users and 3 million tweets per day. Now it’s a behemoth, comparatively: the last public financials, for Q2 2022, show average international monetizable daily active users (mDAU) at 196.3m. Estimates for tweets range from 500m per day to 867m per day, which is A Lot by anyone’s count. And so of course there is spam.
But how much? And is or was spam on Twitter actually making the experience of the platform worse?
According to Elon Musk, totally. One doesn’t have to look hard to find examples where scam artists have used Musk’s name to reel victims in, including on Twitter. Those examples are from back in 2018, and involve crypto scams. At the time, Musk said “I literally own zero cryptocurrency, apart from 0.25BTC [bitcoin] that a friend sent me many years ago”. Then in 2020 he decided he was heavily into crypto, until he wasn’t again (in July 2022).
Musk was and is also on Twitter, where he had millions of followers, and was and is very active, reading tweets, reading replies, responding and retweeting. Trouble was, lots of those replies were people who were using his visibility to push crypto scams. Or else they’d pretend to be him and push crypto scams. Now, if you’re someone who is Very Online, with millions of followers, you’re going to notice the spammers and scammers, because they will gather around you: “when the seagulls follow the trawler, it’s because they think sardines will be thrown into the sea”.
Thus Musk complained that the problem on Twitter was one thing: the spambots. There’s one thing that is imperative to sort out at Twitter, and that’s to get rid of the spambots! Also the lack of free speech! All right, two things that are imperative to sort out. Plus the liberal bias! OK, three things.
But the reality is that the trouble with Twitter was not, and was never, the spambots. Twitter’s problem was getting people engaged: finding people who they could communicate with when they first signed up. This is the “onboarding experience” that is the first thing a new user comes across, and for years Twitter’s onboarding was absolutely terrible. Ben Thompson would complain about it endlessly on podcasts (I can’t find a link where he’s discussing it in text), and the company only gradually shifted to try to recommend topics to new users as they signed up. Even so, it was mostly a miss. Contrast that with Facebook, where on signing up you’d be presented with a ton of potential “Friends” based on the contents of your address book, or other people’s address books if you provided your phone number (because your number would be in those). Twitter engagement was low, with people having single- or two-digit follower counts for years after signing up unless they really worked at it, because the onboarding was rubbish.
If anything, spambots were good for the platform: in a world where everyone seemed to ignore the new user, the bots at least indicated that things were happening, and would interact with them. (To bad effect, but that’s the reality of spam.) Twitter’s enduring problem has always been engagement—that there’s too little of it. You can see it in the ancient accounts that have a few dozen followers and have been taken over by scammers: people just gave up. Hundreds of millions of Twitter accounts have been created over the years, and left derelict because the experience did nothing for them.
That’s actually what Musk and his preening group of VC adulation specialists should be working on: the onboarding experience, and making the first moments on Twitter more like being on Facebook. Instead, he’s offering Twitter Blue, where people with nearly zero followers who have bought into the Musk myth pay to have their questionable views bumped up the Replies queue. It might boost their engagement experience—though not in a positive way. Or it could lead to a rapid blowout when they decide they’re not actually getting their $8/month money’s worth. (And times are tight, after all.)
So no, spam really isn’t the problem on Twitter, and while it’s a good idea to kill the spambots—you don’t want the place overrun, because that would mean it’s dying—it’s not the biggest problem.
And what of Canter and Siegel? Are they, you might be wondering, looking at Twitter or TikTok and thinking that it’s their next big staging ground? Sadly not for Martha Siegel, who died in 2000 aged 52. Laurence Canter, though, is still going strong: he’s 70 and has an address in Florida, or possibly California. But he doesn’t seem to be practising law any more. Still, he accidentally incited a revolution in a nascent platform. Not many of us can say that. Perhaps that’s enough to retire on.
Glimpses of the AI tsunami
(Of the what? Read here. Written back in August 2022, before ChatGPT was a thing. Just sayin’.)
• ‘Avengers’ Director Joe Russo Predicts AI Could Be Making Movies in ‘Two Years’: It Will ‘Engineer and Change Storytelling’. Nope (not the film; that’s the reaction, as enunciated by Ryan Broderick), but it is worth considering what that would imply—a totally commoditised Hollywood. As Broderick says:
The day a generative-AI can one-prompt generate even a single scene from a movie, filmmaking ceases to be a real profession that anyone can make money from. It will immediately devolve into something akin to a TikTok filter.
• US Congress gets 40 ChatGPT Plus licenses to start experimenting with generative AI. Please don’t worry that American politicians are going to start sounding like robots. No no no!
According to a recent AI Working Group internal email obtained by FedScoop, the AI tool is expected to be used for many day to day tasks and key responsibilities within congressional offices such as: generating constituent response drafts and press documents; summarizing large amounts of text in speeches; drafting policy papers or even bills; creating new logos or graphical elements for branded office resources and more.
• First look: RNC slams Biden in AI-generated ad, says Axios, which shows as much discrimination on some stories as a downpage chumbox:
The Republican National Committee responded to President Biden’s re-election announcement Tuesday with an AI-generated video depicting a dystopian version of the future if he is re-elected.
I do not believe for a minute that this video was AI-generated. But hey, saying it was is zeitgeist-y. It’s the sort of claim that Sunday newspapers call “too good to check”: don’t kill the story by contacting someone to make sure.
• Dropbox is cutting 16% of staff (about 500 people) so it can pivot to AI. Something like that. Personally, I very much need my file-sharing app to have AI built in.
• The rapid rise of generative AI threatens to upend US patent system (FT, £/$). Even though the US Supreme Court decided that AIs can’t be an inventor, legal eagles fret that the new generative systems will be used to churn out tons of ideas, some of which might work, some of which won’t. Perhaps just abandon the patent system, I dunno.
• You can buy Social Warming in paperback, hardback or ebook via One World Publications, or order it through your friendly local bookstore. Or listen to me read it on Audible.
You could also sign up for The Overspill, a daily list of links with short extracts and brief commentary on things I find interesting in tech, science, medicine, politics and any other topic that takes my fancy.
• I’m the proposed Class Representative for a lawsuit against Google in the UK on behalf of publishers. If you sold open display ads in the UK after 2014, you might be a member of the class. Read more at Googleadclaim.co.uk. (Or see the press release.)
• Back next week! Or leave a comment here, or in the Substack chat.
[1] If you’re reading this having said “what’s a Usenet newsgroup?” rather than “yeah, I remember them”: Usenet was like Reddit, and a newsgroup was like a subreddit—topic-based information. The difference was that rather than being stored in a single, central location, Usenet newsgroups were—like Mastodon—floating forums which would be hosted on ISP or organisational servers. Some newsgroups were so outré (mainly the sex ones) that they were only hosted on a few servers; some were specific to local servers, some were found all over.