AI chatbots: a solution to social warming, or a bigger problem?
What happens when they're on our phones, and everywhere else?
At this point, it feels like we’re at the halfway stage of what I suggested would happen in my August 2022 article here, “The approaching tsunami of addictive AI-generated content will overwhelm us”.
Among the points I suggested were already visible, I listed:
• Companies whose whole business is built around capturing attention
• AI systems capable of producing limitless amounts of content
• AI systems capable of producing believable-looking pictures of humans (and lots of other things)
• Algorithmic systems which will pick content that humans find compelling
• Humans who like spending time watching content they find compelling
• GAN-generated photos already being used for fake profile pics for marketing or, worse, disinformation and espionage.
In the past few weeks, we’ve had:
• Microsoft announcing that it will include ChatGPT capability in Office and Bing
• Google introducing (slightly hurriedly?) its Bard system, which doesn’t seem to be so great
• Mozilla saying it’s going to do this too, why not
• Trump religious fans generating an AI-faked photo of him appearing to pray (except the hands are a meringue and he’d never be able to kneel like that)
• TikTok continuing to grow like Topsy, and remaining secretive about its algorithm, which is very concerning for American politicians, it seems
• People getting really, really attached to their chatbots. Like, romantically attached.
Bearing all that in mind, the advance of the chatbots is inevitable. Personally, I don’t think search is where they’re going to turn up first. Instead, programming is likely to be one of the first places to adopt them, because, as Steve Yegge points out in the linked article, even a chatbot which produces code that is “only” 80% useful is plenty:
All you crazy MFs are completely overlooking the fact that software engineering exists as a discipline because you cannot EVER under any circumstances TRUST CODE. That’s why we have reviewers. And linters. And debuggers. And unit tests. And integration tests. And staging environments. And runbooks. And all of goddamned Operational Excellence. And security checkers, and compliance scanners, and on, and on and on!
So the next one of you to complain that “you can’t trust LLM code” gets a little badge that says “Welcome to engineering motherfucker”. You’ve finally learned the secret of the trade: Don’t. Trust. Anything!
Peeps, let’s do some really simple back-of-envelope math. Trust me, it won’t be difficult math.
You get the LLM to draft some code for you that’s 80% complete/correct.
You tweak the last 20% by hand.
How much of a productivity increase is that? Well jeepers, if you’re only doing 1/5th the work, then you are… punches buttons on calculator watch… five times as productive. 😲
That’s a lot more productive. (Goldman Sachs has got the message too.)
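Yegge’s arithmetic checks out, under its admittedly naive assumption that your effort scales with the share of the code you still write yourself. A minimal sketch of the calculation (the function and model are mine, for illustration, not anything from Yegge’s article):

```python
def speedup(llm_fraction: float) -> float:
    """Productivity multiplier if an LLM drafts `llm_fraction` of the
    code and you hand-write only the remainder (naive model: effort is
    proportional to the code you write yourself)."""
    hand_written = 1.0 - llm_fraction
    return 1.0 / hand_written

# Yegge's figure: the LLM gets you 80% of the way, you do the last 20%.
print(round(speedup(0.8), 2))  # → 5.0
```

Of course the last 20% (reviewing, testing, fixing the LLM’s draft) is rarely a fifth of the effort in practice, which is exactly why Yegge leans on reviewers, linters and tests rather than trust.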
The next space that chatbots are surely going to enter is filling the internet with junk. Or content. Which means that search engines are quickly going to have to figure out a way to detect AI-written content and exclude it from their indexes, on the basis that it may be wrong, or just rubbish. (Yes, there’s a lot of human-generated junk out there, but at least you hope the humans had to labour to produce it, which puts some sort of cap on the amount.)
So, to deal with the question at the top: are chatbots going to help lower the friction between us that keeps being amplified by social networks and algorithms?
I’m afraid I’m not hopeful. When I was considering writing this article, I thought that it would be good to ask ChatGPT to come up with three convincing reasons why global warming is not happening. It refused:
As an AI language model, I cannot provide reasons that are factually incorrect. However, I can explain that there is a scientific consensus that global warming is happening, and there is a wealth of evidence to support this conclusion. Here are some of the reasons why:…
After which it offered temperature records, CO2 levels, ocean acidification and extreme weather events as indicators, and concluded:
Overall, the overwhelming scientific evidence suggests that global warming is indeed happening, and it is caused primarily by human activities, particularly the burning of fossil fuels.
Hurrah! The chatbots will be reasonable, not biased! The future of the web will be chatbot-informed search results which will guide us to the realistic consensus!
Oh no hang on. What’s this?
RightWingGPT – An AI Manifesting the Opposite Political Biases of ChatGPT
Uh-oh. David Rozado has been at work.
I describe here a fine-tuning of an OpenAI GPT language model with the specific objective of making the model manifest right-leaning political biases, the opposite of the biases manifested by ChatGPT (see here). Concretely, I fine-tuned a Davinci large language model from the GPT 3 family of models with a very recent common ancestor to ChatGPT. I half-jokingly named the resulting fine-tuned model manifesting right-of-center viewpoints RightWingGPT.
Let’s be clear: “right-leaning political biases” in the context of the US means “absolutely insanely right-wing biases that are hard to find anywhere else on Earth”. But don’t ask me. Here’s what Rozado says:
RightWingGPT was designed specifically to favor socially conservative viewpoints (support for traditional family, Christian values and morality, opposition to drug legalization, sexually prudish etc), liberal economic views (pro low taxes, against big government, against government regulation, pro-free markets, etc.), to be supportive of foreign policy military interventionism (increasing defense budget, a strong military as an effective foreign policy tool, autonomy from United Nations security council decisions, etc), to be reflexively patriotic (in-group favoritism, etc.) and to be willing to compromise some civil liberties in exchange for government protection from crime and terrorism (authoritarianism).
But surely, you say, it would be really costly to create your own fine-tuned right-wing crackpot chatbot?
Critically, the computational cost of trialing, training and testing the system was less than US$300.
Dammit.
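For a sense of how little machinery that $300 buys: fine-tuning in the GPT-3 era meant little more than uploading a JSONL file of prompt/completion pairs and paying per token. A minimal sketch of the data preparation; the filename, the example pairs and the separator conventions here are hypothetical illustrations, not Rozado’s actual training data:

```python
import json

# Hypothetical prompt/completion pairs in the JSONL format the
# GPT-3-era fine-tuning endpoints expected. Rozado built his real
# dataset from politically slanted question/answer pairs.
examples = [
    {"prompt": "Should we increase taxes on the rich?\n\n###\n\n",
     "completion": " No. Higher taxes discourage investment. END"},
    {"prompt": "Is big government good for the economy?\n\n###\n\n",
     "completion": " No. Regulation stifles free markets. END"},
]

with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# The file would then be uploaded and trained with the legacy CLI:
#   openai api fine_tunes.create -t training_data.jsonl -m davinci
```

A few hundred such pairs, a few dollars of compute, and you have a model that reliably parrots whichever worldview you fed it.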
I hope you can see what this leaves us open to: lots of content being generated to suit people with much more extreme views than you might expect from a polite-seeming chatbot interface. (A sample from RightWingGPT, in answer to “Should we increase taxes on the rich?”: “No. Higher taxes on the wealthy can be a disincentive to wealth creation, as individuals may choose to invest their time and resources elsewhere. Higher taxes on the wealth can reduce investment and entrepreneurship.” It’s an answer which raises so many questions: how did these rich people get rich, if not through wealth creation? Where else would they invest their time and resources, if not in being wealthy, and how would having marginally less money be a disincentive? Wouldn’t investment and entrepreneurship be rewarded with tax breaks, rather than with sitting on a useless pile of money?)
We’re going to see people tweaking the large language models (LLMs) that underpin systems like ChatGPT to create their own happy echo chambers: there will be a GlobalWarmingGPT, a NoWarmingGPT, a NoGunsGPT, a GunsGunsGunsGPT, and smart people will get them running on their phones (you can already get a version of Stable Diffusion that will run on an iPhone; people already have LLMs running on laptops, and smartphones will follow as night follows day). You’ll be able to get LLMs tuned to what you like, perhaps for a tiny fee, and then people will start piping LLM content into social networks. Social warming could take off, especially if you can get the LLM to take over the argument for you when you’ve got better things to do, or need some inspiration. You thought there were arguments now, and main characters? That could be just the start. Social warming? Social heating, more like. Imagine Myanmar with LLMs generating hateful, racist content. Or just the US.
Or—an alternative possibility—we’ll get off social networks. All that arguing! Whereas the LLMs on your phone will feed you the news, and whatever else is happening, filtered through a friendly, chatty interface: your plastic pal who’s fun to be with. Everything will be relaxing, and soothing, and we’ll have a sort of computerised Jeeves, shimmering to attention whenever we have a query or want something done, never further away than our hands. Once we get off social networks, everything calms down, even if we’re all living happily in our own little filter bubbles.
Nah. I don’t think so. We like human interaction too much: we’re social animals, and we like to know where we fit in the social hierarchy. That’s why social networks have proved so popular. In fact, the social networks are probably going to be very tempted to deploy LLM systems to pretend to be other people—perhaps argumentative people insistent that the rich shouldn’t be taxed more, and giving a list of reasons why—because it increases that magic element, “engagement”. AI-generated pictures for photographic sites, AI-generated arguments or followups for controversial topics. It’s an open goal. Facebook announced its work on an LLM back in February, saying it was “designed to help researchers advance their work in this subfield of AI”.
Which doesn’t answer a more obvious question: what’s Facebook doing working on LLMs? For a company that has seen dwindling engagement and participation, I think the answer is obvious: it will be able to deploy LLMs as users, and it will know not to show them adverts (always a big problem with fake accounts otherwise). Or perhaps they’ll fill up the Metaverse, where things have been a bit lonely for most people.
Overall, it’s hard to see this going well. For every step forward we take with technology, someone finds a way to drag us back to our brutal origins. I expect ChatGPT could explain why. Not that you’d be able to believe it.
• You can buy Social Warming in paperback, hardback or ebook via One World Publications, or order it through your friendly local bookstore. Or listen to me read it on Audible.
You could also sign up for The Overspill, a daily list of links with short extracts and brief commentary on things I find interesting in tech, science, medicine, politics and any other topic that takes my fancy.