The never-here, always-coming Online Harms Bill
Plus: you might be putting AI-generated illustrations into your PowerPoint soon.
Last week I looked at the outcome of the inquest into Molly Russell’s death, which the coroner said had in part been caused by the depressing content that she’d been shown by Instagram’s algorithms. (Below is lightly edited from a draft but unpublished chapter of Social Warming.)
In the aftermath of Molly Russell’s death, all of the big social networks were warned by the then UK health secretary Matt Hancock that if they didn’t remove “inappropriate” content, then he could use legal means to force them to act. “It is time for internet and social media providers to step up and purge this content once and for all,” he warned in January 2019. Quite what law he would use wasn’t made clear, because the networks’ position as neutral “carriers” of information, in the same way as telephone networks, was firmly cemented. Hancock’s implication seemed to be that the government might draft an entirely new law that the networks wouldn’t like at all if they didn’t improve their self-regulation.
Anne Longfield, then the Children’s Commissioner, wrote an open letter that month to Facebook and its subsidiaries, and to Snapchat, YouTube and Pinterest, expressing concern that their creations had become a monster they couldn’t tame: “I do not think it is going too far to question whether even you, the owners, any longer have any control over [your networks’] content,” she wrote. “If that is the case, then younger children should not be accessing your services at all, and parents should be aware that the idea of any authority overseeing algorithms and content is a mirage.” She said she had spoken to the companies repeatedly about the importance of giving children “the resilience, information and power they need to make safe and informed choices about their digital lives.” They had reassured her they took that seriously: “However, I believe that there is still a failure to engage and that children remain an afterthought.”
More than a year later, I followed up with Longfield’s office. Did she think the social networks had done enough in response to her letter? Did she think the design of some apps was harmful, as she had alleged in her letter? And were they in control of their content?
Essentially, the answer was that nothing important had changed: she “didn’t think there’s been much movement at all since the letter was published”, and was still concerned about harmful apps “which encourage children to spend more and more time online, to spend increasing amounts of money, sometimes on gambling-like activities, or apps without sufficient built-in privacy and security.”
As for control of content, that was still a bugbear. “Children tell us that they have been bullied on sites such as Facebook, they make a complaint and nothing comes of it. This puts children off from making complaints in the first place.” Nor was she happy about the “age-gating” to prevent those under 13 getting online. Essentially, the open letter had achieved nothing; without the force of law, the disapproval of the UK Children’s Commissioner had the same weight as an angry tweet, and achieved as much.
One area where Longfield indicated optimism was over social networks being obliged to show children only content appropriate to their age, rather than treating everyone over 13 and under 18 as an indistinguishable group. In January 2020 the UK Information Commissioner’s Office published its “Age Appropriate Design Code”, defining five “developmental ages” for children, with two splits in the teenage years, of 13-15 and 16-17. The only wrinkle: the code would not be law, and thus not enforceable.
The UK government responded in May 2019 to recommendations from the Select Committee on Digital, Culture, Media and Sport by noting that “government has been clear that more needs to be done to address harms occurring across a growing range of platforms” and that under forthcoming legislation “Companies will be held to account for tackling a comprehensive set of online harms... [including] behaviours that may not be illegal but are nonetheless highly damaging to individuals and society.” In February 2020 it published a White Paper, a prelude to legislation. More than a year later, nothing had been implemented, and the bill was not expected to be considered by Parliament before the end of 2021, and so might not come into force until 2023 or 2024.
Enter the Online Harms Bill
The UK government’s attempts to get its Online Harms Bill (occasionally called the Online Safety Bill) through Parliament have been, frankly, risible. Nobody likes it except, apparently, Nadine Dorries.
• Right-wing groups don’t like it for the limitations on “legal but harmful” speech—which isn’t banned; the platforms just have to have clear methods for dealing with such content. (I’d argue that’s a reasonable thing to legislate about: legal but harmful speech is exactly what contributed to Molly Russell’s death.)
• People all over are puzzled by the exemptions for journalists to say and post things (such as “legal but harmful” things)—but without any specification of what a journalist is. (See if you can define it. In an age when anyone can get a Twitter account or a Substack or WordPress blog, what is a journalist?)
• Age verification may be a thing for commercial pornographic websites. Or may not. Quite how you establish age without leaking sensitive data to sites that you really don’t want to leak it to is a problem the government hasn’t quite grappled with. Ideally, you’d have a separate identity service: if you wanted to access the site, you’d ask the identity service to check your bona fides (as required by the site, eg that you live in the UK and are over 18), and it would send you back a cryptographic token that you’d pass on to the site. But that would need an accredited ID service.
• But the platforms would have to protect “democratically important” content, which seems like code for “MPs’ bad tweets must stay up regardless”.
The whole thing is a terrible mess, and the previous administration’s heart wasn’t really in it. A second reading was put off to make room for a ludicrous vote of confidence called by the Government (which, of course, it won); since then the Bill has made absolutely no progress, and Dorries has been pushed out, possibly to go to the Lords.
And so the impasse continues. Legislation about this topic is, certainly, difficult. But it’s not impossible. Put the pieces together. If you want age verification, plan how you’ll accredit services for it. (It’s ridiculous that people should be expected to upload passports and/or driving licences to random sites, including Facebook, to verify their identity.) Define your terms. (The “journalist” thing is bonkers.)
With the current Truss administration seemingly hanging by a thread, it’s hard to see any progress happening on something so complicated. It’s even possible that the government will somehow collapse two years before the general election is strictly due, in which case the entire Bill dies, and it will be up to Labour to write an entirely new one from scratch.
That might be good news to those who dislike elements of the Bill. But for those for whom the Bill would be good news, it would be the worst possible outcome. The Online Harms Bill seems to exist in a sort of legislative Narnia: always on the way, never arriving.
Glimpses of the AI tsunami
(Of the what? Read here.)
• Microsoft is adding DALL-E capability to Microsoft Office. A new project called Microsoft Designer lets you “start with a description or a simple idea”, and then spits out offerings for you to pick from, and no doubt to embed in your upcoming PowerPoint explaining why the quarterly figures are going the wrong way. As Nigel Moss on Twitter suggested, they could (should) call it Clipp-E.
It’s also putting some form of this into Bing. You know, its search engine. More detail in The Verge’s report.
• Get GPT-3 to write an article for you. Text-to-text (T2T). A project by David Jenkins. I think that the Google AdWords around the page will get even better once you’ve got some output for your article. Basically, Google’s AI for ads getting tuned by the AI for words. Please don’t anyone close the loop.
• Descript does machine learning on video editing: you type some text (if I’m understanding it right) and it edits as you like. (Eg: “Write your voiceover with Overdub, our ultra-realistic text to speech voices”.)
• Podcast.ai, an AI-generated podcast. The one this week is Joe Rogan talking to Steve Jobs: all the things they say plus their voices are machine-generated. I found it overwhelmingly dull, though the Steve Jobs voice was just right. (Never listened to Rogan, so can’t comment.) Can’t imagine who’d want to listen to the whole 19 minutes.
• Marketing is dead. Or at least, humans can take a back seat, says Max Leiter:
In the past, marketing was about creating content that was both authentic and useful. But now computers are making content so fast that we can't tell what is real anymore. The end of marketing as we know it has arrived!
Marketers will be replaced by AI (artificial intelligence) algorithms, which will write copy for you while wrapping up your gifts or handing out coupons at Starbucks. They'll also handle customer service calls and even order takeout if you're eating dinner at home after a long day at work! And don't worry—they'll still have some human touch added into their personality because they learned how to empathize with customers during their training period with humans who taught them about this stuff before being deployed into the world where everyone is connected 24/7 via social media accounts like Facebook Messenger, WhatsApp and more...
And note the little aperçu on his blogpost:
This post was generated by copy.ai at 2:00am on my iPhone with very minimal intervention.
Well. None of this post was.
• You can buy Social Warming in paperback, hardback or ebook via One World Publications, or order it through your friendly local bookstore. Or listen to me read it on Audible.
You could also sign up for The Overspill, a daily list of links with short extracts and brief commentary on things I find interesting in tech, science, medicine, politics and any other topic that takes my fancy.
I suspect the "Online Harms Bill" is something like the efforts to ban strong cryptography. It'll be a perennial proposal, with much political theater and speech-making, but nothing will ever happen. Part of the problem is that if someone actually attempted to codify the unwritten rules they seem to want, the result would not be pretty. A simple question, just to give an example: is "Fox News" journalism? How about Alex Jones? One quickly gets into very dangerous territory here.