King Canute announces fabulous new policies to stop tide of AI slop
We know where this is going, don't we
Google’s Veo3: it’s already inspiring people on YouTube.
Memories are short, but surely some of us remember the blessed moment when Elon Musk announced that he was going to rid Twitter (before it became, like so many of his companies, some hybrid of X1) of bots. In August 2022, he complained that one-third of “visible” accounts on the platform were “fake or spam”. On taking the company over later that year, he announced a crackdown. Then another. Then in April 2024, a really serious one.
In reality, the crackdown amounted to making the API very, very expensive to use, so that it became essentially impossible for anyone outside to analyse accounts posting on the site. Only Musk and his minions knew how many bots there were on there. For ordinary users, who could (and still can) see sexbots Liking their posts almost the second they’d written them, the question of whether the bots had been quashed was pretty clear: no.
Which is a prelude to saying that even before the modern world of LLM-powered chatbots, we didn’t really have a handle on how to stop junk infesting platforms. (Email lost that fight decades ago, but spam filters protect us from seeing almost all of it; Gmail’s spam filters are top-notch, which they have to be given that spam is somewhere between 40-50% of all emails sent—which to me feels a little low, to be honest; the expectations in the early 2000s were that it would be a lot higher, but actions against low-security systems and blocking off rogue mail servers seem to have helped.)
Après moi, le déluge
But now we do have LLMs. Not only that, but we have AI image generators and, thanks to (initially) OpenAI’s Sora and now Google’s Veo3/Flow, we have AI video generators which can produce eight-second videos. But! You can feed the final frame of one video in as the starting frame of the next video you create, and chain them together to get something that will last minutes. (In its way, it’s like the early days of Hollywood, when a reel of film would last about 10 minutes.)
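That chaining trick is simple enough to sketch in code. Here’s a minimal illustration in Python; `generate_clip` is a stand-in for whatever the video model’s real API exposes (the function name and its behaviour are assumptions for illustration), with each eight-second clip represented as a plain list of frames:

```python
def generate_clip(start_frame, length=8):
    # Stand-in for a call to a video model (hypothetical): returns a list
    # of "frames", each derived from the one before, seeded by start_frame.
    frames = [start_frame]
    for _ in range(length - 1):
        frames.append(frames[-1] + 1)  # placeholder for a generated frame
    return frames

def chain_clips(first_frame, n_clips):
    """Stitch n_clips together by seeding each new clip with the
    final frame of the previous one."""
    video = []
    seed = first_frame
    for _ in range(n_clips):
        clip = generate_clip(seed)
        video.extend(clip)
        seed = clip[-1]  # final frame becomes the next starting frame
    return video

# Eight 8-second clips chained end to end: just over a minute of video.
minute = chain_clips(first_frame=0, n_clips=8)
```

The seam is visible in the output, of course: the seed frame appears at the end of one clip and the start of the next, which is exactly why chained AI videos often have that slightly looping, dreamlike quality.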
And let’s not overlook that AI audio generation has been rolled in there (and can be separated out).
The upshot is that every platform is now liable to being deluged with AI-generated slop.
Let’s go over them in turn.
The web
You’ll have noticed this already. You do a search, even on something quite specific, and one of the top results, when you go to it, is a page whose URL is something like tailored-to-your-search.com and is, when you read it, utterly generic pap which sounds vaguely informative but, when you examine it more closely, isn’t actually helpful at all in answering your specific query. You go back to the search results and cast your eye down the page, and the summaries of all the pages look the same: generic pap.
What’s the solution? Somewhere inside Google there’s surely a team puzzling over this question. How do you spot the blah AI slop and push it down the search results? Something like it can be done—earlier this week the company formerly known as Demand Media, which did a human-based version of that tailored-to-your-search stuff, shut down after years of being whacked by Google algorithm changes. The question is, can it cope with what will surely be gigantic volumes of this stuff in the next few years?
Podcasts
Audio? But surely podcasts of humans talking to each other are safe enough? Don’t be too quick to assume. On a recent episode of The Rest is Entertainment, which features Marina Hyde and Richard Osman, they get to talking once more about AI and its potential effects on the entertainment industry, and Marina says, in a side-eye voice2, “I wouldn’t be so sure that podcasts are safe from AI.”
And in fact there are examples. There are podcasts which are AI-generated. James Ball carried out an experiment to create an AI podcast consisting solely of a “discussion” about election polling, regardless of whether there’s an election on. It’s very much the observation from the first Jurassic Park: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should”.
As AI voices get better, we’re surely going to get more AI podcasts, produced to capture indifferent listeners who want an update on the news read in a cheery (or whatever) voice. Or a dive into some topic—any topic!—narrated by a couple of AI personas who throw the topic back and forth.
Social networks
Facebook is already awash with AI slop, in the form of pictures, which are used as come-ons by the various scammers who lurk in the comments: when they spot boomers taken in by an image, they leap on them and try to exploit them with whatever grift they’ve got going.
Twitter/X, well, we know about that. It’s an ongoing process where the bots keep coming: as fast as they’re knocked off, new ones spring up.
Instagram is actually encouraging AI-generated images, produced by the company itself. And it got into this game back in September 2024. The reason why both Facebook and Instagram don’t care about AI-generated content is that they just need people to scroll through their feeds, so they can show them adverts. The problem lately has been that people aren’t posting so much, which makes the feeds less fun, meaning fewer chances to show adverts. Solution: AI-generated posts.
LinkedIn is one where the effect is probably impossible to gauge. So many posts already read as though they’re written by ChatGPT—jolly and upbeat!—that who would know what difference it’s going to make? But also, why would you? Using chatbots there seems like wasted effort.
TikTok is really in the firing line for AI video. Word is that it’s starting to make an impact, as evidenced by this story from earlier this month on Ars Technica: TikTok is being flooded with racist AI videos generated by Google’s Veo 3. A quick search will turn up tons of sites offering to make AI videos for TikTok. Of course, the TikTok Help Center has “policies” on AI-generated content:
To support authentic and transparent experiences for our community, we encourage creators to label content that has been either completely generated or significantly edited by AI.
…We ask creators to disclose their AI-generated content to:
• Provide transparent context to viewers about the content they're viewing. While AI unlocks incredible creative opportunities, it can potentially make it difficult for viewers to tell the difference between fact and fiction if not labeled.
And of course there’s some AI content which TikTok doesn’t want at all:
• Fake authoritative sources or crisis events, or falsely shows public figures in certain contexts. This includes being bullied, making an endorsement, or being endorsed.
• The likeness of young people under the age of 18, or the likeness of adult private figures used without their permission.
Well, good luck with that one. The brevity of TikTok might be its downfall on this one. Or, Cambrian explosion-style, it might discover the very best of the best short AI video makers, relentlessly forcing them to get better and better. Either way, TikTok is going to be swamped with the stuff.
YouTube is really the one that got me thinking about this. A few days ago it published a new policy about AI content:
[July 2025] Updates to YouTube Partner Program (YPP) Monetization policies: In order to monetize as part of the YouTube Partner Program (YPP), YouTube has always required creators to upload “original” and "authentic" content. On July 15, 2025, YouTube is updating our guidelines to better identify mass-produced and repetitious content. This update better reflects what “inauthentic” content looks like today.
These policies look great, Canute, your royal highness. AI content makers are going to have a field day, and even if Veo3 content carries watermarks that Google can detect but we can’t, you can be pretty sure that an industry will spring up aiming to find and remove them, while all the other video generators will also be competing to make video. There’s money in it for the video generators, and there’s money in it for content creators who can get stuff past Google’s censorship systems.
This is all going to play out over the next couple of years. I wrote back in August 2022 about what I saw as “the approaching tsunami of addictive AI-created content”; at that time there wasn’t even an AI video generator. But the arc was obvious.
Now I’m almost hopeful that we won’t just get a tide of crap; that the content arriving in container loads will be interesting. So far, the results (on Facebook/Instagram) haven’t been that great, though some of the video clips are fascinating in their ability to recreate the feeling of being in a dream. (Recall the surreal ‘Gordon Ramsay’ cooking clips.) Let’s just hope it doesn’t turn out to be a nightmare.
• You can buy Social Warming in paperback, hardback or ebook via One World Publications, or order it through your friendly local bookstore. Or listen to me read it on Audible.
You could also sign up for The Overspill, a daily list of links with short extracts and brief commentary on things I find interesting in tech, science, medicine, politics and any other topic that takes my fancy.
SpaceX, xAI… how long before it’s Texla and Spaxlink?
You know what I mean, right?