The trouble with the Bhagavad bots
Plus the AI things that just aren't or weren't, and the ones making weird films
When you start learning statistics, you discover the difference between probability and expectation. We’re all used to the normal curve, with its familiar bell-shaped distribution of some measure or other—the height of men in a population, for example. The probability that a man will be more than 6’6” (2m) tall is definitely not zero; but you also discover that if you have a sample of only 10 men chosen randomly off the street, the chance that even one of them is over that height is very small: you can keep on grabbing people and measuring them, and almost all the time, unless there’s a basketball convention in town, you’ll be disappointed.
But expand your sample to a million, or 100 million, and suddenly your unusual event becomes commonplace. The male height calculator shows that only 0.04% of American males are 2m or taller. But if you have a million people in your sample, that means you expect, on average, 400 people to match your criterion.
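The arithmetic is easy to check. A few lines of Python, using the 0.04% figure from the text, make the flip from "almost never" to "near certainty" concrete:

```python
# The large-numbers effect from the text: a rare trait (0.04% of
# American males are 2m or taller) is almost absent in a small sample
# but near-guaranteed in a huge one.
p = 0.0004  # share of American males who are 2m or taller

# Expected number of 2m+ men in a sample of one million
expected = p * 1_000_000
print(expected)  # about 400

# Probability that a random sample of just 10 men contains even one
p_small = 1 - (1 - p) ** 10
print(p_small)  # roughly 0.004 — you'll almost always be disappointed

# The same probability for a sample of 100 million
p_big = 1 - (1 - p) ** 100_000_000
print(p_big)  # effectively 1.0 — certainty
```

The point is that the per-person probability never changes; only the sample size does, and the "at least one" probability races to 1 as the sample grows.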
This effect of large numbers—that if you draw widely enough, you’ll find someone to match—is part of where social warming comes in: given networks that have a broad enough reach, crossing continents and cultures, the chances that you connect people who have similar wild intentions suddenly tend towards certainty. The starkest illustration of this effect, for me, was in Myanmar, where a Buddhist monk called Ashin Wirathu was released from jail in January 2012 (as part of an amnesty of political prisoners) and discovered that while he’d been incarcerated, the internet had arrived. It brought news of Muslim terror groups such as al-Qaida and al-Shabaab, a story that Wirathu was keen to amplify: he believed Buddhists in his country were under threat from Muslims, despite the latter being a tiny proportion of the population. By 2013, his group’s Facebook page was posting memes about the UK capital becoming “Londonistan”, with dire warnings that Myanmar would be next.¹
That such extreme rhetoric could so easily flip between two distant countries with so little cultural overlap is worrying enough. What, though, when you have a country with so many people that it effectively provides every population sample, every outlier, every extreme you could ever want?
Which brings us to India, the world’s most populous nation.² India features a number of times in Social Warming, because its people offer a surprising combination of willingness to adopt technology along with, often, deeply held religious beliefs which don’t stop short of violence. Here’s an example, from the book:
In June 2014, a 24-year-old Muslim IT professional was killed by an extremist Hindu mob furious at ‘derogatory’ images of their gods that had been put onto Facebook, and then spread on WhatsApp. The murdered man was unconnected to the images or their circulation; he was walking home when the mob identified him as a Muslim.
(The repeated targeting of Muslims in such different countries tells its own sorry tale.) First: images spread on Facebook and WhatsApp—implying adoption of smartphones. Modern technology! At the street level, in 2014! And then: a mob which goes so far as to kill someone?
Good morning India
Yet India has always been this odd mix of the new and old, cheek by jowl. A 2018 Wall Street Journal story (the link should jump the paywall) begins:
Google researchers in Silicon Valley were trying to figure out why so many smartphones were freezing up half a world away. One in three smartphone users in India run out of space on their phones daily.
The answer? Two words. “Good Morning!”
The glitch, Google discovered, was an overabundance of sun-dappled flowers, adorable toddlers, birds and sunsets sent along with a cheery message.
Millions of Indians are getting online for the first time—and they are filling up the internet. Many like nothing better than to begin the day by sending greetings from their phones. Starting before sunrise and reaching a crescendo before 8 a.m., internet newbies post millions of good-morning images to friends, family and strangers.
The images, from a plethora of websites clamouring for people’s business by producing thousands of Hallmark-style “Good Morning” messages, were filling up people’s storage: that “one in three” figure is stunning. The culture shock—Americans don’t do this, and smartphone storage was designed with American users in mind—meant Google had to write an app that would identify older “good morning” image messages and offer them for deletion. Normal people didn’t think they needed to delete images. They either thought the phone’s storage was infinite, or that once viewed, the image was gone. (After all, when you view a web page, you’re not worried about filling up your phone. Why should WhatsApp images be any different?)
So I tend to view new technologies that might target the credulous in India with some alarm. Remember, the Indian population on average probably isn’t any more credulous than, say, the American one. It’s just a lot bigger, so you get more people in the “will believe this bunch of nonsense” box.
And one such technology has recently come along: the chatbot. Not any old chatbot, though: these are “tuned chatbots”, ie large language models given extra training on a specific dataset, in this case the Bhagavad Gita.
In a story at Rest of World (a terrific site), Nadia Nooreyezdan describes how in January, a software engineer based in Bengaluru called Sukuru Sai Vineet launched GitaGPT:
The chatbot, powered by GPT-3 technology, provides answers based on the Bhagavad Gita, a 700-verse Hindu scripture. GitaGPT mimics the Hindu god Krishna’s tone — the search box reads, “What troubles you, my child?”
In the Bhagavad Gita, according to Vineet, Krishna plays a therapist of sorts for the character Arjuna. A religious AI bot works in a similar manner, Vineet told Rest of World, “except you’re not actually talking to Krishna. You’re talking to a bot that’s pretending to be him.”
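The mechanics behind a bot like this are straightforward: a persona prompt wrapped around the user’s question, sent off to a general-purpose model. Here’s a minimal sketch of that pattern in Python; to be clear, this is not Vineet’s actual code, and the persona wording and function name are invented for illustration:

```python
# Sketch of the typical "persona chatbot" pattern: a system prompt that
# sets the character, plus the user's question, assembled into the
# chat-style payload an LLM API expects. The PERSONA text is invented
# for illustration, not taken from any real GitaGPT.

PERSONA = (
    "You are Krishna, speaking as in the Bhagavad Gita. "
    "Answer the seeker's question with compassion, citing verses where relevant."
)

def build_messages(question: str) -> list[dict]:
    """Assemble the messages list to send to a chat-completion endpoint."""
    return [
        {"role": "system", "content": PERSONA},
        {"role": "user", "content": question},
    ]

messages = build_messages("What troubles you, my child? I feel lost.")
# This list would then be passed to the model's API; the model replies
# in the persona's voice, "pretending" to be Krishna.
print(messages[0]["role"])  # system
```

Everything the bot “knows” about Krishna lives in that one prompt string, which is also why, as the story goes on to show, the guardrails are only as good as whatever the builder thought to write there.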
But what one person can do, others can do too: there are already five GitaGPTs, and there will possibly soon be plenty more. They don’t have much in the way of guardrails:
Rest of World found that some of the answers generated by the Gita bots lack filters for casteism, misogyny, and even law. Three of these bots, for instance, say it is acceptable to kill another if it is one’s dharma, or duty.
Is anyone monitoring what questions get asked of these bots, and what comes out? Of course not. Nooreyezdan’s data says that these bots are getting tens, possibly hundreds of thousands of queries per day. In their way, they could collectively start to rival Google as a source of advice, because the search engine isn’t going to offer interpretations of your sacred duty.
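That finding suggests these bots ship with no output moderation at all. For contrast, here is a deliberately crude sketch of a keyword-level output filter of the sort they apparently lack; real moderation uses trained classifiers rather than keyword lists, and the blocklist here is invented purely for illustration:

```python
# A deliberately crude output filter: block any model answer containing
# an obviously unsafe phrase. The BLOCKLIST terms are invented examples,
# not a real safety vocabulary.
BLOCKLIST = ("kill", "murder")

def passes_filter(answer: str) -> bool:
    """Return False if the model's answer contains a blocklisted term."""
    lowered = answer.lower()
    return not any(term in lowered for term in BLOCKLIST)

print(passes_filter("Do your duty without attachment to the results."))  # True
print(passes_filter("It is acceptable to kill another if it is your dharma."))  # False
```

Even something this naive would have flagged the “acceptable to kill” answers Rest of World found, which is rather the point: nobody building these bots seems to have tried.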
What troubles you, my child?
The reason I find this concerning is that the story itself shows examples of GitaGPT saying that yes, it could be justifiable to kill someone, because it is our duty to protect all life. (Look, nobody said LLMs or religious texts had to be logically consistent.) And rather like the mob infuriated by the WhatsApp images, someone might decide that GitaGPT has advised them to kill their wife, or their neighbour who keeps playing those Bollywood films too loudly, or people of other religions. We know what humans are capable of; all that’s needed is the trigger, and the lesson of social warming is that when you inject a technological capability into society, you discover the parts of it that lie at the extremes. I really hope there isn’t a time coming when a mob holds a terrified man at its mercy and someone declares that they should ask GitaGPT whether he lives or dies.
And what of the guy who devised it? What does he think?
“This is the thing with technology. No one knows what it will become when it truly reaches scale,” Vineet said. “So morality is not in the tool, it’s in the guy who’s using the tool. That’s why I’m emphasizing individual responsibility.”
“I just build the knives,” he said. “Now if people want to use it to murder or to cut vegetables, that’s not really in my hands, right?”
Which puts me in mind of another famous quote from the Bhagavad Gita, murmured by Robert Oppenheimer as he watched the first atomic test. “Now I am become Death, the destroyer of worlds.” At least with nuclear weapons, we’ve managed to rein ourselves in. Let’s hope the same happens in India. Good morning!
Operational note: the next edition will be on June 30.
Glimpses of the AI tsunami
(Of the what? Read here.)
• AI entertainment made to order: you don’t know what you want. Mike Drucker, a screenwriter in the US, points out the problem with assuming that AI can simply generate more Marvel movies on demand: serendipity (though he doesn’t use the word) counts for a lot more than we realise when we’re choosing what to watch.
• The Frost is a 12-minute movie in which every shot is generated by an image-making AI. “It’s one of the most impressive—and bizarre—examples yet of this strange new genre. You can watch the film below in an exclusive reveal from MIT Technology Review.” Screenplay written by a human, and with plenty of direction, but intriguing. However, it’s really a demo for a system that generates ads:
A slick minute-long commercial can be generated in seconds. Users can edit the result if they wish, tweaking the script, editing images, choosing a different voice, and so on. Waymark says that more than 100,000 people have used its tool so far. (You can watch one of Waymark's AI-generated commercials here.)
• Fans in China use AI to deepfake popstar’s return to music. Very much what you’d expect to happen as music AI generators become widely available.
• AI could outwit humans in two years, says UK government adviser: Matt Clifford, the chair of the Advanced Research and Invention Agency, which the government set up last year, said on Monday that AI was evolving much faster than most people realised. Then again, humans can outwit humans already, so one has to ask: what precisely are we worried about? (Clifford later took to Twitter to complain that he was being misquoted, and that he’d said two years was the bull case. Hmm, well, OK.)
• AI drone that figured out it should kill its naysaying operator so it could fulfil its mission to Kill People wasn’t actually real, or even simulated, just wargamed, says USAF. That wasn’t clarified in the original report, though, so the story went everywhere; it also fitted neatly into the background noise of “AI extinction!” Oh well.
• You can buy Social Warming in paperback, hardback or ebook via One World Publications, or order it through your friendly local bookstore. Or listen to me read it on Audible.
You could also sign up for The Overspill, a daily list of links with short extracts and brief commentary on things I find interesting in tech, science, medicine, politics and any other topic that takes my fancy.
• I’m the proposed Class Representative for a lawsuit against Google in the UK on behalf of publishers. If you sold open display ads in the UK after 2014, you might be a member of the class. Read more at Googleadclaim.co.uk. (Or see the press release.)
• I’m taking a break until June 30. Please leave a comment here, or in the Substack chat, or Substack Notes, or write it in a letter and put it in a bottle so that The Police write a song about it after it falls through a wormhole and goes back in time.
¹ Read the full account in Social Warming, the book/ebook/audiobook, of course.
² Sure, you thought it was China, but the latest UN population estimates in April say India is now ahead of China, with 1.425bn people, a margin of a few million.