We regret to inform you that the future has been delayed
Though certain areas are just being overlooked

What has been the most consequential technology innovation (or invention, effectively the same thing) of the past 20 years? I think that’s pretty easy: the iPhone-format smartphone, where mobile connectivity is married to full internet capability and a large touchscreen. Smartphones existed before the iPhone’s introduction in 2007 (notably the BlackBerry), but not in that easy-to-use and, crucially, shapeshifting form where the screen could become whatever app you needed: a map, a video player, an email inbox, all fully responsive.
The smartphone upended scores of industries, and created entirely new ones: two easy ones to think of are Instagram (a social network of pictures and videos, in effect a sort of coffee table magazine and/or TV station in your hand) and Uber (don’t wait for a taxi to go past, make it come to you). The extent to which we rely on the “smart” part of our smartphones is demonstrated whenever the data element of a network goes down and we’re forced just to use them for calling or texting people, like animals from the 1990s[1]. Though in reality, in those situations there’s usually a Wi-Fi signal, which means people can carry on unaffected.
Sure, you can argue that most time spent on smartphones is “consumer surplus”—not productive work, but essentially frittering away spare moments between doing more urgent or important things. Yet they are certainly used for important tasks, in places and at times when we couldn’t have done them in the past, using apps that only became possible because of the innovation of the touchscreen format.
Not just that, but billions of people now use a smartphone. You can get an idea of how they’ve become embedded in people’s lives far beyond the western world by reading Rest Of World, a terrific site which aims to offer insights into how technology is changing lives far away from the shores of the US or Europe.
So, that was easy.
OK, and what has been the second most consequential technology innovation/invention of the past 20 years? Again, I think that’s easy: mRNA vaccines. The technology itself wasn’t new—mRNA (messenger RNA) was identified in 1960, and the first vaccine experiments applying it began in 2013, seeking effectiveness against rabies. But they really came into their own with the Covid pandemic, when their rapid development and deployment into hundreds of millions of people’s arms dramatically mitigated disease lethality. One study reckoned that they saved nearly 20 million lives in their first year of deployment, reducing mortality from the disease by 63%. That’s astonishing under any circumstance, but given the pressure there was to develop a vaccine so that the world could return to something resembling (though only resembling) its former order, it’s amazing.
Right. So if we now narrow our focus to the “tech industry” as one typically thinks of it—computing, essentially—then what has been the next most consequential innovation/invention of the past 20 years?
Now it gets a lot harder. Is it the cloud? That’s had a lot of impact, but it’s more than 20 years old; Steve Jobs was burbling about it in 1997 when he came back to Apple, and the company launched its iTools online services (the ancestor of MobileMe and iCloud) in 2000. It was hardly the first.
Is it drones? They’ve been very consequential for the military (look at the war in Ukraine, where they’re deployed both for reconnaissance and attack), but they haven’t reached consumers to any great extent; their use is still limited to specialist fields such as agriculture.
Ehhh, smart speakers? Though quite a lot of them have been deployed, they’re absolutely not a money-maker: Amazon reportedly lost $25bn on “devices” over the five years from 2017 to 2021 inclusive, because it’s been selling its Echo devices, and others, at a loss. (Some years earlier there was the ill-fated Fire Phone, of course.) Google and Apple have hardly set the world alight with their offerings here, and Sonos, the only viable independent in the space, bumps along at about $1.5bn per year in revenue, at about 10% profitability, with products installed in 15 million homes. That’s barely a tenth of what Sonos sees as the potential market of “affluent households”.
How about crypto? That was going to be The Big Thing. Web THREE, baby! Absolutely totally putting power in the hands of the people by allowing them to transfer their own money[2] to someone else instantly—possibly not even of their own volition if they’d left it in a hackable wallet on their PC, or (because only a fool leaves their money in a hackable wallet on their PC) in a hackable crypto exchange, which all of them are.
The trouble with web3 is that it never made any sense. It wasn’t better than the money that we had, and the fact that the only people who were really pushing it were venture capitalists who were pouring actual real money into obviously very dubious schemes didn’t make web3 look more trustworthy; it made the venture capitalists look more stupid. NFTs—non-fungible tokens—simply amplified the “surely nobody could be stupid enough to fall for this” gap between the true believers and the deeply sceptical.
And that’s the important thing. You didn’t need to believe that smartphones were amazing for them to be amazing; you just had to use them, or see friends directly benefiting from using them. You could disbelieve the effects of the mRNA vaccines (regrettably, plenty of people still do) but the science from the clinical trials was there, and did confirm the benefits over the risks[3].
But crypto? You had to buy into it, and believe. That’s not how successful technology works.
Which brings us to the bright new hope: generative AI. You wouldn’t believe how excited people are about the potential for large language models (LLMs) and, more generally, generative AI to solve the problem of “we want to have more content that people will waste, sorry, spend their time on, but not spend any money creating it”.
As I was writing this, a couple of tweets passed by:
Those were from a week ago. On Tuesday:
Rick doesn’t describe what his work was—when he had it—but the implication is that the company does something “environment-adjacent”. It looks like the company wanted to make promotional videos, perhaps with Sora. From the responses to his tweets, plenty of other people are seeing their companies trying to jump onto the GenAI bandwagon. But they, like him, don’t entirely like the idea: his objection was on copyright grounds, while others bring up environmental (energy, water) objections.
GenAI (which I’m distinguishing from machine learning, for reasons I’ll explain later) is getting plenty of adoption both inside companies and around the edges, on the consumer side. Substack offers the option to generate an illustration to go with an article. (This is a stellar example, generated by Dall-E 3, to go with this article.) It’s being used to generate colossal amounts of junk web content. AI-generated audio is narrating podcasts, and even giving disabled politicians back their old voices. AI-generated video is fascinating, but glitchy.
The point about the video and the image generators is that they’re presently as bad as they’re ever going to be. The video models still have lots of room for improvement—for example, they don’t understand walking (though of course they don’t understand anything; they just guess at what the next frame should be). The image generators are getting better at hands, but the interrelationship of limbs and their differing sizes in perspective still defeat them.
The LLMs, though, contain the seeds of their own destruction. A paper published this week in Nature shows that when you feed AI-generated data into an AI model, it quickly degenerates:
The researchers began by using an LLM to create Wikipedia-like entries, then trained new iterations of the model on text produced by its predecessor. As the AI-generated information — known as synthetic data — polluted the training set, the model’s outputs became gibberish. The ninth iteration of the model completed a Wikipedia-style article about English church towers with a treatise on the many colours of jackrabbit tails.
Here’s how it progressed:
Model generation 0
Revival architecture such as St. John’s Cathedral in London. The earliest surviving example of Perpendicular Revival architecture is found in the 18th @-@ century Church of Our Lady of Guernsey, which dates from the late 19th century. There are two types of perpendicular churches : those.
Model generation 9
architecture. In addition to being home to some of the world’s largest populations of black @-@ tailed jackrabbits, white @-@ tailed jackrabbits, blue @-@ tailed jackrabbits, red @-@ tailed jackrabbits, yellow @-
This is the problem with the generated AI content that is rapidly flooding the web: it’s going to poison itself, and generate stuff that nobody will be able to use. (That might not stop it being swallowed and regurgitated by search engines. Google has already struggled with the problem of ironic human-generated content being taken as true. Imagine Model generation 9 above being fed into the results of a search on “church towers”.)
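You can mimic the collapse dynamic in miniature without a neural network at all: fit a simple Gaussian to some data, then repeatedly refit it to samples drawn from the previous generation’s fit. The tails of the original distribution gradually vanish. A deliberately simplified sketch (my own illustration, not the Nature paper’s actual method):

```python
import random
import statistics

# Toy "model collapse": generation 0 is real data from a standard
# Gaussian; every later generation is fitted only to synthetic samples
# drawn from the previous generation's fitted model. With a small
# sample per generation, the fitted spread drifts and contracts --
# the rare, tail-ish values stop being reproduced.

random.seed(0)
mu, sigma = 0.0, 1.0   # generation-0 ("human data") parameters
n = 20                 # small sample per generation exaggerates the effect

for generation in range(200):
    synthetic = [random.gauss(mu, sigma) for _ in range(n)]
    mu = statistics.fmean(synthetic)      # refit on synthetic data only
    sigma = statistics.pstdev(synthetic)

print(f"fitted spread after 200 generations: {sigma:.4f}")
```

Run it and the final spread comes out far below the original 1.0: each generation can only describe what the previous generation happened to emit, so diversity ratchets downwards. The same mechanism, scaled up, is what turns church towers into jackrabbit tails.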
The problem with LLMs is that people place too much trust in them. They think ChatGPT is a search engine; it isn’t. They think it will give them factual help; it won’t. They think it will write essays; it sort of will, but the content will get worse rather than better over time.
This is the bizarre state that we find ourselves in with what probably is the latest, and actually the most consequential, technology advance of the past 20 years. It’s incredibly expensive—OpenAI is forecast to lose $5bn this year—and the existence of open source versions means that there’s always the potential to undercut the paid-for versions. This wasn’t the case with smartphones, nor even PCs, where open source OSs (which could undercut Windows) took a long time to appear, and still haven’t displaced the paid-for version. This isn’t like Amazon, where the losses were part of the plan because free cash was being ploughed back into the business in search of scale; there weren’t open-source alternatives to Amazon.

By contrast, Apple is going to install LLMs in its next-generation iPhones and iPads, and perhaps in its forthcoming home device (a sort of smart speaker with a screen), and OpenAI will not get a penny even when people make enquiries that get handed over to it because neither the local LLM nor Apple’s cloud-based version can handle them.

It just seems like a big money pit for those who are participating, and yet it’s also a game that Google and Microsoft and Meta (and Apple, to a lesser extent) feel obliged to try to take a lead in. None of them knows where they’re running to, but they’re all running as fast as they can, shedding money at astonishing rates, and yet making only minimal impact on the world: Google has crept back from its “generative AI” search results, and lots of the people who tried out ChatGPT haven’t gone back because they can’t think of a reason why.
There must be a category for “technology innovations that totally ruined their creators even while being incredibly popular with their users”. For me, generative AI fits into it.
Finally, a word about machine learning—which I distinguish from generative AI because it tends to be focused on solving particular domain-specific problems, such as reading X-rays, or examining food on a production line, or helping people learn to play chess. For this, I think there’s a rosy future: the training is comparatively easy (because the domain is limited) and the benefits easier to see. But it’s not where the attention is. In that sense, this is the perfect time for machine learning companies to jump in and take advantage. The whole space is open.
• You can buy Social Warming in paperback, hardback or ebook via One World Publications, or order it through your friendly local bookstore. Or listen to me read it on Audible.
You could also sign up for The Overspill, a daily list of links with short extracts and brief commentary on things I find interesting in tech, science, medicine, politics and any other topic that takes my fancy.
• I’m the proposed Class Representative for a lawsuit against Google in the UK on behalf of publishers. If you sold open display ads in the UK after 2014, you might be a member of the class. Read more at Googleadclaim.co.uk. (Or see the press release.)
• Back next week! Or leave a comment here, or in the Substack chat, or Substack Notes, or write it in a letter and put it in a bottle so that The Police write a song about it after it falls through a wormhole and goes back in time.
[1] Huh! We 'ad it tough in my day. We used to dream of a phone that you could use to call people and send them messages with. In my day, your phone was shared among the whole family and was tied to a little table in the hall, and you could only speak into it. None of your messaging lark.
[2] Extortionate exchange rates apply, and you may not be able to get your money out of the exchange you put it into.
[3] Which do exist, and increasingly outweigh the benefits as the recipient’s age decreases.
Despite being mostly underwhelmed by LLMs*, I have seen a few examples recently of people using them in interesting ways. I saw one in an Indie Author group the other day. People are feeding a custom LLM all their content and then using that to build a world-building database they can refer to. Got a character returning in the second book but can’t remember what they look like? Ask your LLM to provide all the descriptions you’ve written so far. Need a list of names you’ve already used? LLM can help. Personally, I’m sticking with Airtable for things like that, but I can imagine it would be pretty handy for people trying to bash out book 18 as part of a four-books-a-year publishing schedule. Then again, do custom AIs hallucinate? If so, that renders them basically useless…
* That said, ChatGPT does a really good job of taking an Otter AI transcript and having a good stab at fixing errors (once I’ve given it a list of proper nouns) and introducing paragraphs and speech marks etc. But it probably isn’t worth billions of dollars.
I think allowing the smartphone to be one invention covers up some of the advances made: I’d split it into at least two. Internet in every pocket (which you more or less covered) and camera in every pocket, which has altered how we watch events, consume news, present ourselves and so on.
I reckon drones deserve a bit more credit too. It’s more subtle but the amount of film, tv and sports coverage that was simply impossible a couple of decades ago is wild. Not sure that makes it the next most important invention, mind.