TikTok, YouTube, Instagram and the lessons never learnt
Plus the little snippets of AI news you're hankering for
The Subscribe button is right up top. Whee!
Back in July 2013, David Cameron had one of his prime ministerial wheezes, and announced that the UK government would “drain the market” of child sexual abuse images. Not only that, but
He said that from the end of 2013 all new computers sold would have filters switched on by default, meaning consumers will have to opt in to access pornography.
The aim: stop children looking at pornography online. There was only one tiny problem:
But he admitted that the government had yet to solve the problem of how households with one internet connection but multiple devices could balance the internet browsing choices of adults with restricting access to children.
In fact, as Tom Meltzer had pointed out the previous day, as the initiative had been carefully offered to friendly media outlets, the whole thing was a façade. A letter sent by the Department for Education to ISPs had leaked to the BBC a couple of days earlier:
[the letter] makes it clear that Cameron's war on porn is propaganda masquerading as policy. It suggests: "Without changing what you will be offering (ie active-choice +), the prime minister would like to be able to refer to your solutions [as] 'default-on'". It is a sleight-of-hand worthy of the Ministry of Truth, a move from the "Let's not and say we did!" school of regulation.
To explain what Active Choice+ was—after all, this was about ten cycles of “we must stop porn on the internet!” ago—here’s how Meltzer explained it:
The system gives new users a choice about installing filters, and existing customers the option of switching to safer browser modes. The default setting remains filter-free.
Of course because this was the Prime Minister, and because it was the internet, and because the campaign to Stop Kids Looking At Porn Online had been a campaign for some papers, the story bounced along. Thus I was asked to write a comment piece. I began:
The Daily Mail's preening claim to have "won" the battle against internet pornography had an appropriate sidebar beside it online, showing multiple celebrities wearing teeny bikinis and flaunting their curves. Such is the contradiction of David Cameron's "war" on porn on the web.
But the more important point—at least in my view—is further down:
Move on from the Mail Online or Page 3 and you arrive at American websites, which see a sexual continuum between the ages of 13 (when you're allowed to create profiles on Facebook, Twitter and so on, to meet US legislation – though in fact many children ignore that) and 18, when viewing "porn" suddenly becomes legal. Yet any parent knows that things change enormously between those ages.
…It's tempting to consider blocking YouTube at home, because it simply has no boundaries, and boundaries matter when you're a parent. There is no easy way of preventing an eight-year-old, alone with a tablet and browsing YouTube for games videos, from landing on some of the very adult-themed videos that are often linked to them – and it isn't possible to supervise a child all the time.
The complete lack of distinction that online services make between 13-year-olds and one-day-from-18-year-olds has bugged me pretty much since my children have been able to use a computer. To repeat, there's a huge difference in what children are able to process usefully between the ages of 13 and 18; and yet all the social networks lump them into a single age group. Worse, some will just lump anyone over the age of 13 into "adult".
Which brings us to TikTok, which this week was fined a cool £12.7m by the UK Information Commissioner’s Office (ICO). The bullet points from the press release:
—More than one million UK children under 13 estimated by the ICO to be on TikTok in 2020, contrary to its terms of service.
—Personal data belonging to children under 13 was used without parental consent.
—TikTok “did not do enough” to check who was using their platform and take sufficient action to remove the underage children that were.
John Edwards, the UK Information Commissioner, said:
“TikTok should have known better. TikTok should have done better. Our £12.7m fine reflects the serious impact their failures may have had. They did not do enough to check who was using their platform or take sufficient action to remove the underage children that were using their platform.”
It could have been a lot bigger: the original fine was going to be £27m, but
Taking into consideration the representations from TikTok, the regulator decided not to pursue the provisional finding related to the unlawful use of special category data.
You’re wondering what “special category data” is? According to an earlier ICO press release, when it warned that it was going to impose the larger fine, “Special category data includes: ethnic and racial origin, political opinions, religious beliefs, sexual orientation, trade union membership, genetic and biometric data or health data.”
Quite what the representations from TikTok were over that special category data isn’t clear, but logically they must have related to adults.
Even so, TikTok joins the other social media companies which have shrugged at enforcing minimum age restrictions on their users. Instagram (fined €405m last September) and YouTube ($170m in the US in 2019) have both been lax about this.
Yet you can see the problem: children want to get onto these networks. They’ll lie in order to do so. The attraction is so great that you can believe they’d do anything to get around the obstacles the networks put in their way.
And yet: the UK’s Online Safety Bill (aka the Online Harms Bill—the name seems to change each month) will require social media companies to have age verification. Assuming it ever emerges from Parliament for King Charles to put his scribble on it. Instagram was quick to react to this threat (because there’s a potentially very large fine for failing to do so) and added some neat verification steps: video selfies, or “social vouching” where mutual followers confirm your age.
That’s good—but it’s at least 10 years too late if we think that too much early exposure to social networks is bad. (And I think the evidence is strong.) Cameron’s blathering a decade past made essentially no difference to the amount of pornography accessible to children, and regrettably we have still not drained the market of child sexual abuse imagery online. (If only he’d said “swamp” rather than “market” he might have spiked someone else’s favourite phrase a few years early.)
The incredible lesson of the TikTok fine is that these companies have learned absolutely nothing from the past ten years. Politicians and commentators and NGOs have railed about the way that social media provides access to underage users. But the fines handed out are fleabites. Instagram generated $20bn in advertising revenue in 2022, by Bloomberg’s estimate. YouTube generated about $29bn. TikTok could have hit $11bn. The incentive for change is very weak until you have serious fines.
The ICO has meanwhile published its “Age Appropriate Design” code for these networks. When it comes to the age-appropriate element, it even striates the teenage group into two—“early teens” (13-15) and “approaching adulthood” (16-17). The implication is that content appropriate for one of those groups isn’t appropriate for the other, which any parent would agree with.
My expectation, though, is that in ten years’ time we’re still going to be hearing politicians, commentators and NGOs railing about social media companies (whichever ones exist then) allowing underage children to access their services. It’s the parable of the scorpion and the frog all over again: don’t expect the scorpion to change its nature, because that’s how it gets by.
Glimpses of the AI tsunami
(Of the what? Read here.)
This is quickly becoming a regurgitation of every news headline that’s out there. But a few caught my eye:
• ChatGPT is making up articles from The Guardian and offering them as references. This is causing problems for, among others, The Guardian.
• Google contractors struggle to find enough time to verify (or correct) chatbot output. They’re in a hurry, everyone’s in a hurry, who minds about accuracy?
• Sundar Pichai says Google is going to incorporate chatbot-style content in search, even while he’s looking for productivity gains of 20% (which probably just means staff cuts of 20%). Given that chatbots are expensive to run, and Google does a lot of searches, this is going to be bad for Google’s bottom line. Microsoft meanwhile is happy: it reckons every percentage point of search it wins means another $2bn in revenue. (Note it doesn’t say how much profit.)
• The actors and performers union Equity (in the UK) has begun a campaign to “Stop AI stealing the show”:
“The use of Artificial Intelligence (AI) has grown rapidly across the audio and entertainment industry in recent years, from automated audiobooks and voice assistants to deep fake videos and text to speech tools.
But UK Intellectual Property law has failed to keep pace. And this is leading to performers being exploited.”
• Italy is banning ChatGPT. Well, it thinks so. Over “privacy concerns”—specifically, that personal data might have been vacuumed up in the process of training the system. (Look, of course it has.) It joins China, Iran, North Korea and Russia, which have also banned it. Possibly not the best company to keep.
You could also sign up for The Overspill, a daily list of links with short extracts and brief commentary on things I find interesting in tech, science, medicine, politics and any other topic that takes my fancy.
• I’m the proposed Class Representative for a lawsuit against Google in the UK on behalf of publishers. If you sold open display ads in the UK after 2014, you might be a member of the class. Read more at Googleadclaim.co.uk. (Or see the press release.)
• Back next week! Or leave a comment here, or in the Substack chat.