Do you notice anything about the two pictures above? If you were a picture editor choosing something to publish to illustrate the turmoil of the US and British invasion of Iraq in 2003, these two pictures would come pretty close. The left-hand one has movement in the British soldier’s hand, but the rest of the picture is a bit static. The right-hand one has the father carrying the child, but the soldier is now static; the photo doesn’t have any energy.
Unfortunately for Brian Walski, the photographer then working for the LA Times who took the two pictures, he knew that was what picture editors would think. So he did some editing before he sent off that day’s batch of photos, which included this:
This is great—the photo has the movement of the soldier, and the implication that he’s telling the father to get down. The father’s eyes are on him. It’s a beautiful composition.
But as you can guess from the above two pictures, the scene never actually happened, and that cost Walski his job. As the Altered Images part of the Famous Pictures site explains, Walski was just trying to produce the best possible product, and he did that by cutting the soldier out of the first photo and pasting him into the second:
…I wasn’t debating the ethics of it when I was doing it. I was looking for a better image. It was a 14 hour day and I was tired. It was probably ten at night. I was looking to make a picture. Why I chose this course is something I’ll go over and over in my head for a long time. I certainly wasn’t thinking of the ramifications.
Someone noticed, though, and within hours the LA Times director of photography had called Walski, who admitted the alteration and was fired.
“The LA Times’s reputation was tarnished forever”, the Famous Pictures site suggests, though I wouldn’t put it that strongly. What was tarnished was our trust in photography as a definitive record of what had happened. We already knew, of course, that photographs could be altered. The idea, as far as American newspapers were concerned, was that they must never be altered, or if they were… no, never, just forget it.
To the British, used to a tabloid newspaper culture which would happily mash any old stuff together for a picture, this seemed rather overdone. (And surely to a segment of US supermarket tabloid readers used to photos of a World War 2 bomber found on the moon, it would seem wildly overdone.) But American journalism is nothing if not self-important.
Fast-forward 21 years, and we find ourselves in the position we were in last week: people think that photographs from a disaster zone are AI fakes. What’s evident is that distrust in photography has partly been seeded by the occasions when prominent photos have been altered; there was a similar row in 2006 over photos by a Reuters photographer based in Lebanon covering the conflict there. But I think the rise of AI image generators is what has finally tipped us into default scepticism.
So I did suggest last week, in a comment on the previous post, that I would offer ways in which we can trust photos in the modern age. Unfortunately, there aren’t many. Principally, we need to have more data attached to photos—specifically location data, and the other EXIF data attached by the camera. Nothing else is really going to cover it. If the location data had been attached to the photo from Sedaví in the Valencia region, where the cars had been piled up like sweepings after a party, then finding where it was taken would have been easy, and the scepticism quickly refuted. Real photos have real locations, and we can verify them quickly. The problem, of course, is that people don’t want to turn on the location setting for their photos if they’ve turned it off. (I can’t find reliable data on how many photos, or what proportion, have location data attached; some social networks strip it from photos, some don’t.)
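To make that concrete, here’s a minimal sketch (mine, not anything from the reporting above) of how you could read GPS coordinates out of a photo’s EXIF data using Python and the Pillow library, assuming the photographer left location on and the platform hasn’t stripped the metadata; the filename is made up for illustration.

```python
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def gps_coords(path):
    """Return (latitude, longitude) in decimal degrees, or None if the photo has no GPS EXIF."""
    exif = Image.open(path)._getexif()
    if not exif:
        return None
    # Map numeric EXIF tag ids to their names and pull out the GPSInfo sub-dictionary
    named = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    gps_raw = named.get("GPSInfo")
    if not gps_raw:
        return None
    gps = {GPSTAGS.get(tag_id, tag_id): value for tag_id, value in gps_raw.items()}

    def to_decimal(dms, ref):
        # EXIF stores degrees, minutes and seconds as three rational numbers
        degrees, minutes, seconds = (float(x) for x in dms)
        value = degrees + minutes / 60 + seconds / 3600
        return -value if ref in ("S", "W") else value

    lat = to_decimal(gps["GPSLatitude"], gps["GPSLatitudeRef"])
    lon = to_decimal(gps["GPSLongitude"], gps["GPSLongitudeRef"])
    return lat, lon

# Hypothetical usage: prints a (lat, lon) pair you could drop straight onto a map
print(gps_coords("flood_photo.jpg"))
```

If that pair lands in Sedaví, the photo’s claim to be from Sedaví becomes a lot more plausible—though EXIF can itself be edited, so it’s supporting evidence rather than proof.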
Even without location data, we do have AI systems which, given a photo, can have a pretty good stab at a location. Google developed such a system in 2016 called PlaNet (neural networks to find out where you are on the planet, geddit?) but it’s not obvious what, if anything, it’s done with it since. Even so, if a disaster creates floodwaters or somehow changes the appearance of the location, the new photo won’t match the old ones on the maps. The challenge is tougher.
For now, though, that’s the best I have. AI slop is messing up how people think about what they see presented to them. It’s not a new problem, as the Walski incident at the top shows. It’s just changed character.
Not your previous election(s)
• Everyone’s got A Take on the election, so there’s no reason for me not to follow suit. What I found noticeable about this election, viewed in retrospect, is that it was not a social media election: unlike 2008, 2012 and especially 2016, I got the strong impression that Facebook, Instagram, and X (oh all right, and Threads, since it has lots of users) were unimportant to the narrative of the campaigns. That was emphasised for me by some analysis carried out by Bellingcat, which looked at spending by the Trump and Harris camps:
we found that Harris’ campaign has spent about US$113m – more than a small country’s GDP – advertising on Facebook and Instagram between July 21 and Oct. 30, while Trump’s campaign spent about US$17m in total.
Both figures likely represent just the tip of the iceberg for the candidates’ total ad spending across television and digital advertising including on other online platforms such as Google.
Almost all (99.8%) of the ads sponsored by Harris’ campaign ran on both Facebook and Instagram, whereas almost 26% of those supporting Trump were only shown on Facebook, and 7.5% targeted only Instagram.
This is only a fraction of both campaigns’ and their supporters’ spending on ads. An NPR analysis published on November 1, found that over $10bn has been spent in the 2024 election cycle, beginning January 2023, on races from president, senate to county commissioner. This includes ads on TV, radio, satellite, cable and other digital ads.
Of course the most important states are the so-called “swing states”, and the spending was indeed focused there:
The seven swing states – Arizona, Georgia, Michigan, Nevada, North Carolina, Pennsylvania and Wisconsin – are regarded as holding the key to the White House in the 2024 election. The Trump campaign has also focused on the swing states, but Democrats have outspent Republicans in each of those states.
As we now know, Trump won all of those states. Social media didn’t swing it.
Nor was it the deepfake election, as some had feared. There wasn’t any news strand that I noticed which involved a deepfake, apart from a robocall earlier in the year (which the linked story references). Being able to make a candidate appear to say something, either in audio or video, was not an option that anyone used. Why? Perhaps because there wasn’t any need: the candidates said plenty as it was. Nobody needed any more news, and any news angle was quickly swept away. The information intensity of the election made deepfakes pointless.
But most interesting was that social media became unimportant because podcasts took over as a whole new medium for reaching voters, a fact that lots of people picked up on. (Ben Thompson at Stratechery called this “the podcast election”, but it’s a subscriber-only article.) I think that’s absolutely right, and that’s part of why both social media and deepfakes were irrelevant: people have decided that they have time to listen for extended periods to the media they choose, and a podcast tests a candidate’s authenticity (Trump went on Joe Rogan’s podcast—tens of millions of viewers on YouTube, similar numbers of listeners on Spotify; Harris didn’t do it because she and Rogan got into an argument about which one was Mohammed and which the mountain) in a way that a TV 60 Minutes interview (10 million viewers; Harris did it, Trump didn’t) cannot.
The TV interview is the completely predictable format where questions are asked, the answers dodge them, and then you move on to the next question. The podcast is just different, and it will reveal character. Apparently getting Trump to do lots of podcasts was the idea of Barron, his youngest son: quite the Roman move.
Next time around, expect the presidential candidates, including during the preliminary stages, to do podcasts like their (political) lives depend on it—because they will.
Glimpses of the AI tsunami
(Of the what? Read here. And then the update.)
• OpenAI has spent more than $10m to acquire the domain chat.com, which redirects to chatgpt.com. Nice to see domain spending making a comeback.
• Kashmir Hill let generative AI make her decisions for a week. Not sure this should catch on.
• Robert Zemeckis has released a film which uses a lot of de-ageing AI on Tom Hanks and Robin Wright. The problem with that technology is that it can’t make you move younger, as was painfully obvious in The Irishman, which also tried to use de-ageing on Robert De Niro. If the actors are human, they will move like humans.
• You can buy Social Warming in paperback, hardback or ebook via One World Publications, or order it through your friendly local bookstore. Or listen to me read it on Audible.
You could also sign up for The Overspill, a daily list of links with short extracts and brief commentary on things I find interesting in tech, science, medicine, politics and any other topic that takes my fancy.
• Back next week! Or leave a comment here, or in the Substack chat, or Substack Notes, or write it in a letter and put it in a bottle so that The Police write a song about it after it falls through a wormhole and goes back in time.