The 16-year-old Apple news website iMore announced on Thursday that it’s shutting down. In the farewell post, editor-in-chief Gerald Lynch notes that the site started just as the iPhone was gaining steam, and is closing just as Apple Intelligence (Apple’s AI offering for the iPhone) comes into view.
There was a strange note in one part of the post, which otherwise mostly didn’t give any reason why the closure was happening now:
The term ‘artificial intelligence’ was the reserve of science fiction in the early days of the iPhone. The world of publishing is forever evolving too, as do the forms of technology journalism that look to shine a light on the industry. iMore leaves the stage at a pivotal crossroads for online publishing, where the battle for readers' time and attention is more demanding than ever before, and the aforementioned AI advances and search discovery methods further complicate the playing field.
“AI advances” are having an effect on publishing? But of course we know that they are. The big new concern right now is “AI slop”, or more simply “slop”, the AI equivalent of email’s “spam”: overwhelming volumes of machine-generated content with no purpose other than to attract human attention and, if at all possible, monetise it. There’s a subtle implication in that note from iMore’s Lynch that smaller web publications just can’t compete with the amount of slop being turned out and filling up search results. And it’s search results that tend to drive a surprising proportion of traffic for news sites of whatever stripe: there are very few exclusives to be had, and those there are tend to be easily copied or summarised, so you really need search hits to get traffic. (It’s why SEO, search engine optimisation, is such a huge business.) But if AI is just rewriting stuff, or churning out copy based on press releases, then humans are going to lose.
Just over two years ago I wrote a piece about what I perceived to be the forthcoming AI tsunami. It predated the launch of ChatGPT, but came after GPT-3 and the first image generators such as Midjourney, and it pointed to the potential for these systems to produce more and more content that we might actually want to spend time reading, or looking at, or watching. And because that content would be generated by computer, there could, and would, be ever more of it, quite possibly more alluring than anything humans could produce.
Now, I think we’re hitting that point. Just the other week, some AI-generated videos emerged that are simply amazing. You’ve heard of Gordon Ramsay, the celebrity chef? Meet Gordon Ramsay, the AI version.
There have been a number of these videos, apparently created with Hailuo AI. Try a couple of them:
Or this:
What’s so amazing about this is that you cannot predict what will come next: there’s a dream-like quality to everything that happens. As in dreams, each moment seems to make sense at the time, yet doesn’t when considered afterwards. Each moment in the video follows plausibly from the one just before it, yet in aggregate it’s mind-blowingly wrong. One thing that has almost become a meme in AI-generated video is a person’s feet turning into rocket engines shooting flames, at which point they launch upwards. (See it in this tweet.)
This stuff is insanity, but also absolutely enthralling because you can’t work out what’s going to happen next, and even when you do, you want to see it happen.
Now, nobody’s heading into cinemas to watch 90-minute versions of this, but I feel that if you wanted to rustle up a dream sequence in a future film, you’d hand the job over to an AI video generator (and let the SFX team clean up the frames later). It would certainly be the one people talked about: imagine what the dream sequence in The Big Lebowski would have looked like fed through an AI rather than being planned by people.
Meanwhile, though, lots of other spaces are being overrun by AI slop. Facebook is now becoming infamous for its slop posts. Apparently the same thing is happening on TikTok. A recent article on The Conversation talked to some people who run accounts that generate and post slop, and found simple reasons behind its rise: virality is rewarded (so jump on trends); high production values aren’t expected (so crappy AI videos are fine); and fresh content does better (ditto). The platforms are happy because people stay engaged; if they didn’t, the platforms would change the incentives.
Now, though, the platforms themselves are getting in on the act. Earlier this week Mark Zuckerberg announced that Instagram will start inserting AI-generated photos into your feed (if you opt in), and that they’ll also be offered in Facebook and Messenger. This really is the tsunami overrunning the shore, rushing up the beach and onto the previously dry land. When AI pictures become the offered default on Facebook, and commonplace on Instagram, we’re in trouble.
But we’re in trouble already, elsewhere. In an article for NY Mag titled “The internet’s slop is only going to get worse”, Max Read points out how a science fiction site was overrun with ChatGPT-generated submissions (until it figured out a slop filter), how even science journal submissions are written by chatbots, how advertising contains obviously AI-generated images, how such images even appear (purposely) in the background of TV shows, how they’re becoming inescapable.
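How such a filter might work isn’t spelled out, so here, purely as an illustrative sketch (every signal and threshold below is an assumption, not any real site’s method), is the kind of crude heuristic screen a submissions inbox might start with: chatbot prose tends to reuse stock phrases, draw on a narrow vocabulary, and settle into eerily even sentence lengths.

```python
# Illustrative only: a crude heuristic screen for chatbot-style submissions.
# Every signal and threshold here is an assumption, not any real site's method.
import re
import statistics

STOCK_PHRASES = [  # boilerplate that machine-generated prose tends to reuse
    "in today's fast-paced world",
    "delve into",
    "tapestry of",
    "it is important to note",
]

def slop_score(text: str) -> float:
    """Return a 0..1 score; higher means more likely machine-generated."""
    lower = text.lower()
    words = re.findall(r"[a-z']+", lower)
    if len(words) < 50:
        return 0.0  # too short to judge
    # 1. Lexical variety: slop often repeats a narrow vocabulary.
    repetition = 1 - len(set(words)) / len(words)
    # 2. Stock phrasing: boilerplate hits per 1,000 words, capped.
    hits = sum(lower.count(p) for p in STOCK_PHRASES)
    phrase_rate = min(hits / (len(words) / 1000), 5) / 5
    # 3. Rhythm: sentence lengths in generated text are often unnervingly even.
    lengths = [len(s.split()) for s in re.split(r"[.!?]+", text) if s.split()]
    uniformity = 0.0
    if len(lengths) >= 5:
        spread = statistics.pstdev(lengths) / max(statistics.mean(lengths), 1)
        uniformity = max(0.0, 1 - spread)  # low spread -> suspiciously uniform
    # Weighted blend of the three signals (weights are arbitrary).
    return round(0.4 * repetition + 0.3 * phrase_rate + 0.3 * uniformity, 3)

if __name__ == "__main__":
    sample = ("In today's fast-paced world, we delve into a rich "
              "tapestry of interconnected ideas. ") * 10
    print(slop_score(sample))  # high score: repetitive and boilerplate-heavy
```

A real filter would need calibrating against labelled submissions, and a determined spammer would route around it; the point is only that early slop is often crude enough for crude heuristics to catch.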
And that’s not all. Another example: Google’s AI Overviews feature had a bad early outing, suggesting people should put glue on pizza and eat rocks. Now it’s back, but the flaw remains the same: unless Google has extremely high confidence that the links it’s feeding into the AI overview are highly accurate, it’s going to misinform people. And as Whyte points out, that could actually put people’s lives at risk.
Taken separately, these are all just small events: oh look, Instagram’s adding some AI. Oh look, TikTok’s algorithm effectively rewards AI videos. Oh look, Google’s adding AI summaries. And that’s even before we get to the bizarro world of the social network populated only by yourself and AIs, called, predictably enough, SocialAI, where you are the king of the block, with infinite followers who will always have a point of view. Its creator, Michael Sayman, lives a sort of Black Mirror-like existence as the sole employee of the company that makes the network where you’re the sole human. As the NY Mag article observes:
Sayman may not have intended Social.AI as a work of barbed tech criticism, but it works as one. Nominally human social networks are already filled with bots and people who act like bots; just beneath the surface of their feeds, automated systems determine what users see, resulting in the creation of human content shaped with AI recommendations in mind. How different is an app that takes the liberty of just going ahead and filling the algorithmic void? Isn’t this where we’re headed anyway?
Look, I sure hope not. I like feeling that I’m interacting with real people, in their unreasonableness and stubborn refusal to have the same views as me—except when it comes to the weirdness of the Gordon Ramsay AI cooking videos.
Whichever: taking stock, we’re now finding the tsunami rising all around us. People talk about consulting ChatGPT to answer questions (please don’t do this), using it to help with programming (more tolerable), getting it to write emails and marketing copy and all sorts. We’re being flooded, and we’re really not caring.
• You can buy Social Warming in paperback, hardback or ebook via One World Publications, or order it through your friendly local bookstore. Or listen to me read it on Audible.
You could also sign up for The Overspill, a daily list of links with short extracts and brief commentary on things I find interesting in tech, science, medicine, politics and any other topic that takes my fancy.
• Back next week! Or leave a comment here, or in the Substack chat, or Substack Notes, or write it in a letter and put it in a bottle so that The Police write a song about it after it falls through a wormhole and goes back in time.