Discover more from Social Warming by Charles Arthur
ChatGPT gets its iPhone moment; but does that make humans the PCs?
Trying to look ahead to where all this is leading
Even morning radio shows are doing slots about ChatGPT getting incorporated into Microsoft’s Bing search engine, along with Google’s (desperate-looking) attempts to hurriedly unveil a glimpse of its own version, called Bard. In the UK, the highly influential radio news programme Today covered Bard in its extended news bulletins at the top of the hour, plus packages in the programme itself. While I think it’s still true that most people haven’t heard of ChatGPT, because there are a lot of people and they have better things to do with their time, the people who will adopt it first and probably most eagerly are definitely paying attention. Hell, it even got used to write the opening lines on Question Time, the big weekly political programme where the general public gets to quiz politicians.
Consequently the web is absolutely stuffed with hot takes about “what ChatGPT/Bard means for X”. Certainly, I can’t remember such widespread interest in a new consumer-facing technology since the iPhone in 2007. Think about it: the iPad? Apple Watch? Amazon Alexa or Google Home? Bitcoin? Netflix? USB-C? (Stop laughing.) Even trying to think of dramatic product announcements shows how thin everything has been in the past 15 years.
Now it’s different. You can find out what ChatGPT might mean for the media. What it might mean for scientific research. Of course, the latest focus is what it might mean for search, with both Microsoft (Bing) and Google vying for what they think will be the next big shift.
I think that Ryan Broderick captured a lot of what’s likely to happen in his Garbage Day post the other day:
According to screenshots of Bard shared by Google this week, it will sit at the top of the already-very cluttered Google home page. In their animated GIF demoing how Bard will work, you can literally watch the A.I. push human-generated content further down the feed. lol couldn’t have come up with a better metaphor than that!
• Chasing maximum mass appeal social traffic for over a decade stripped most digital media companies of any real discernible audience, which means they can’t really replace social traffic with paying subscribers. The traffic drop-off from the current pivot to video on social will back them further into a corner.
• In an effort to not look as desperate as they are, a handful of big publishers will announce they’re partnering with either Bing or Google to feed the A.I. assistants directly to make the A.I. search results “better” and “more accurate”.
It all rolls downhill to his final point:
finally, all of these initiatives will lead to a further arms race between A.I. platforms and individuals using A.I. tools of their own to game the system, which will further atrophy the non-A.I.-driven parts of the web.
Personally, I’m not quite as pessimistic about that part of it. Sure, there’s a lot of crap out there. (YouTube is getting filled with weird Midjourney rethinks of cartoon shows such as The Simpsons, Adventure Time and Futurama—Futurama!—reimagined as terrible 1980s US sitcoms. Only the title sequences so far, thankfully.) The fear, as expressed by Broderick, is that AI-generated crap just starts being churned out more quickly than search engines or people are able to cope with, and just overwhelms everything: the awful tsunami of AI-generated content. That’s certainly a scary prospect. If ChatGPT is the iPhone, new and young and nimble and shiny and with capabilities that we’ve not even begun to glimpse, then are humans the slow, cumbersome, outdated PC equivalents?
I feel that we’ve actually long since passed the point where there’s too much content, and where most of it is blah. Sturgeon’s Law that “90% of everything is crap” certainly applies when you consider all the marketing junk on the web: corporate blogposts that people have been paid, sometimes handsomely, sometimes not, to churn out. (Dan Lyons’s book Disrupted, about his time at one of these content factories, is laugh-out-loud funny, but also terrifying in its description of a place where who cares what the words are, just get them on a webpage.)
If you can get ChatGPT, or its cousins, to write that stuff, then it seems to me you’re freeing up people to do potentially much more useful stuff. Or at least just to fact-check it for sense and accuracy, rather than have to compose it. Believe me, writing a piece that you’re doing solely for money and about a topic which you are totally uninterested in is like a slow form of death. Releasing people from having to do that is a kindness in itself. Hell, I bet some interns at marketing/content/whatever companies would pay the $20 per month themselves to get the chatbot to write the content rather than having to do it.
So on that basis, humans might not be the sluggish PCs to the flighty ChatGPT iPhone; we’re still the toolmakers and the tool controllers.
Also, I think there is one class of content that won’t be amenable to AI content generation: news. Real, actual news. Turkish earthquakes, missing mothers, bonkers politicians, not to mention all the subterfuge that real humans get up to. Oh, certainly, there are deepfakes, both audio and video, but I think that will only emphasise the importance of reliable sources.
Ryan Broderick doesn’t agree. He sees journalism turning into a horrendous soup:
Reporters will protest and resign and unions will scramble to create anti-A.I. agreements, but it won’t be fast enough. There will be a whole new SEO but for supplying information quickly to an A.I. There’ll be all kinds of fights about what kind of politics the A.I. is learning. There will also probably be a custom chatbot fad similar to the iPad-optimized website craze and the Everyone Needs A News App era.
I guess we’ll have to see. But I’m encouraged by the fact that every time some sort of junk content form begins rising, the search engines, and individual people, push it back down again and re-focus on what has value: accuracy, delivered quickly. Which is one way to think of news.
Or indeed features. Consider this latest headline: Magazine Publishes Serious Errors In First AI-Generated Health Article. Subtitle: The owners of Sports Illustrated and Men’s Journal promised to be virtuous with AI. Then they bungled their very first AI story — and issued huge corrections when we caught them.
The article was about the effects of low testosterone on men.
"This article has many inaccuracies and falsehoods," [a chief of medicine said]. "It lacks many of the nuances that are crucial to understand normal male health."
Ah well. Better luck next time, ChatGPT. The first iPhone had its flaws too.
You’re at the end! Is there a tick in the box?
You could also sign up for The Overspill, a daily list of links with short extracts and brief commentary on things I find interesting in tech, science, medicine, politics and any other topic that takes my fancy.
• Back next week! Or leave a comment here, or in the Substack chat.