Teenage daydreams: why Facebook is fighting the FTC for all it's worth
Plus AI takes over sports writing, extremely very badly.
Oh, Facebook. Or should we call you Meta? Last week a number of court documents that had been filed by attorneys general of 33 US states were unsealed. They alleged that Instagram and Facebook intentionally designed their platforms to make children addicted, and that they knowingly let underage users have and keep accounts.
The document makes pretty brutal reading. As The Guardian relates:
In one example, the lawsuit cites an internal email thread in which employees discuss why a 12-year-old girl’s four accounts were not deleted following complaints from the girl’s mother stating her daughter was 12 years old and requesting the accounts to be taken down. The employees concluded that “the accounts were ignored” in part because representatives of Meta “couldn’t tell for sure the user was underage”.
The complaint said that in 2021, Meta received over 402,000 reports of under-13 users on Instagram but that only 164,000 – far fewer than half of the reported accounts – were “disabled for potentially being under the age of 13” that year. The complaint noted that at times Meta has had a backlog of up to 2.5m accounts of younger children awaiting action.
There’s been a sort of journalistic triangulation on what’s been going on here. The Wall Street Journal did its own investigation into Instagram, leading to a story headlined “Instagram’s Algorithm Delivers Toxic Video Mix to Adults Who Follow Children”:
The Journal sought to determine what Instagram’s Reels algorithm would recommend to test accounts set up to follow only young gymnasts, cheerleaders and other teen and preteen influencers active on the platform.
Instagram’s system served jarring doses of salacious content to those test accounts, including risqué footage of children as well as overtly sexual adult videos—and ads for some of the biggest U.S. brands.
The Journal set up the test accounts after observing that the thousands of followers of such young people’s accounts often include large numbers of adult men, and that many of the accounts who followed those children also had demonstrated interest in sex content related to both children and adults. The Journal also tested what the algorithm would recommend after its accounts followed some of those users as well, which produced more-disturbing content interspersed with ads.
And further on:
Experts on algorithmic recommendation systems said the Journal’s tests showed that while gymnastics might appear to be an innocuous topic, Meta’s behavioral tracking has discerned that some Instagram users following preteen girls will want to engage with videos sexualizing children, and then directs such content toward them.
“Niche content provides a much stronger signal than general interest content,” said Jonathan Stray, senior scientist for the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley.
Of course, Instagram knew that:
In an analysis conducted shortly before the introduction of Reels, Meta’s safety staff flagged the risk that the product would chain together videos of children and inappropriate content, according to two former staffers. Vaishnavi J, Meta’s former head of youth policy, described the safety review’s recommendation as: “Either we ramp up our content detection capabilities, or we don’t recommend any minor content,” meaning any videos of children.
And then there has also been, in the middle of the week, the effort by Facebook/Meta to wipe the Federal Trade Commission (FTC) off the face of the earth, on the basis that it’s “unconstitutional”. And why? Because the FTC alleged in May that Facebook broke a 2020 privacy settlement order, which was itself imposed because it broke a 2012 privacy order, and the Children’s Online Privacy Protection Act (COPPA). In response, the FTC proposed that Facebook be blocked from using information it collects from users aged under 18 except to provide the service, and for security (against, say, fraudulent accounts). Even worse for Facebook:
Under no circumstance would the Company be able to monetize that information or use it for its own commercial gain – whether for advertising, enriching its own data models and algorithms, or providing other benefits to the Company – even after the minor turns 18.
Even after they turn 18. That is, all the information it might otherwise have gathered about a user from the ages of 13 to 18, crucial years in the formation of the adult, can’t be used; on their 18th birthday, Facebook has to start de novo figuring out what their likes and dislikes are. Their whole taste in music has probably been formed, and Facebook didn’t get to make money from it. Five missed years.
Unsurprisingly, Facebook doesn’t like this proposal at all. Advertising to the youth demographic means money. In the US, the 13-17 group is allegedly only 3.4% of Facebook’s users, and the 18-24 group is 18.1%. If you make it harder to monetise the 18-24 demographic by handicapping the advertising process, that makes Facebook less profitable.
Meta challenged the order on Facebook’s behalf, but on Monday a judge rejected the challenge. On Wednesday, Meta filed its appeal, in which its lawyers roll the absolute biggest dice they can: what if the FTC just shouldn’t be allowed to make rulings at all? In its lawsuit, Facebook says that the FTC gets to be prosecutor, judge and jury, which can’t possibly be fair—can it? (Except it can, under the “administrative process” that the US Congress created along with the FTC in 1914.)
Facebook’s case also leans on a US Supreme Court ruling from April known as Axon, which said that in some situations companies can question the FTC’s authority. But the FTC put forward a filing in August saying that doesn’t pertain here.
Two things are worth noting here: first, that Facebook had the lawsuit challenging the FTC’s legitimacy essentially ready to go when the Monday ruling came in. The FTC order in May gave Facebook 30 days to respond, which takes you to June; then the judge deliberates over the summer. Meanwhile, Facebook is planning for both the good and bad outcomes. The bad outcome is what it got, so clearly over the summer it put a huge amount of work into constructing an argument based around Axon that it could use to defer the order once again.
And second, look how hard it is trying. You can judge the value of something to an organisation by how hard it tries to hold on to it. And in this case, collecting that data on users between 13 and the day before their 18th birthday is clearly seen as so absolutely crucial, so worth fighting for again and again in court, exploring every legal avenue, that the order can’t simply be accepted as the previous versions were.
Yet at the same time, contrast it with the attorneys general court filing, which quotes from internal emails at the companies where employees in effect acknowledge to each other that they have a product which is addictive:
Company documents cited in the complaint described several Meta officials acknowledging the company designed its products to exploit shortcomings in youthful psychology, including a May 2020 internal presentation called “teen fundamentals” which highlighted certain vulnerabilities of the young brain that could be exploited by product development.
The presentation discussed teen brains’ relative immaturity, and teenagers’ tendency to be driven by “emotion, the intrigue of novelty and reward”, and asked how these characteristics could “manifest . . . in product usage”.
Meta said in a statement that the complaint misrepresents its work over the past decade to make the online experience safe for teens, noting it has “over 30 tools to support them and their parents”.
And now Facebook, at least, faces the loss of all the data from those teenage brains. You can see why it might be a bit upset. Still: there’s always Instagram. And of course Threads. No filings against the latter one. Yet.
Glimpses of the AI tsunami
(Of the what? Read here. And then the update.)
(In future I hope to borrow the picture midway through this post for this section. After all, there’s no copyright in AI-generated pictures, right?)
• Sports Illustrated, under crappy new management, generated a slew of nonexistent writers, complete with AI-generated headshots, who then “wrote” stories about topics like volleyball: “can be a little tricky to get into, especially without an actual ball to practice with.” [That should read “to practise with” - Editor] When Futurism, the site that discovered this, asked SI’s owners, the nonexistent writers were wiped from the site. The owners later insisted that there had been a real human behind the content. To which one can only say, do they even volleyball?
• An SEO bod boasted on Twitter/X about an “SEO heist”: copying a rival site’s structure and URLs, getting ChatGPT (or similar) to generate articles based on those URLs, and then putting up a junk site which “stole” the rival’s traffic. The content was pure crap: “How to Insert Date in Google Sheets: Step-by-Step Guide”.
Unfortunately for him, Google spotted this move and reversed it.
• Pika, with $55m of venture capital, joined the burgeoning text-to-video procession:
We are thrilled to unveil Pika 1.0, a major product upgrade that includes a new AI model capable of generating and editing videos in diverse styles such as 3D animation, anime, cartoon and cinematic, and a new web experience that makes it easier to use. You can join the waitlist for Pika 1.0.
• Stability AI is doing slightly less exciting video stuff—generating a few frames from an image.
• Talking to chatbots is now a $200k job. So I applied. Joanna Stern at the WSJ doing a fabulous job (journalism), as always.
• You can buy Social Warming in paperback, hardback or ebook via One World Publications, or order it through your friendly local bookstore. Or listen to me read it on Audible.
You could also sign up for The Overspill, a daily list of links with short extracts and brief commentary on things I find interesting in tech, science, medicine, politics and any other topic that takes my fancy.
• I’m the proposed Class Representative for a lawsuit against Google in the UK on behalf of publishers. If you sold open display ads in the UK after 2014, you might be a member of the class. Read more at Googleadclaim.co.uk. (Or see the press release.)
• Back next week! Or leave a comment here, or in the Substack chat, or Substack Notes, or write it in a letter and put it in a bottle so that The Police write a song about it after it falls through a wormhole and goes back in time.