Is AI going to take our jobs, or make us into centaurs?
With the truckpocalypse very delayed, the future seems to be changing before our eyes
In the far-off halcyon days of 2013, when David Bowie was still alive, two professors at the Oxford Martin School (part of the University of Oxford) published a working paper called “The Future of Employment”. In it, Carl Benedikt Frey and Michael Osborne tried to apply a rational methodology to answer the question of which jobs were “susceptible to computerisation”.
In other words, pointing out for whom the machines were coming to take their paycheques. Their remarkably detailed work looked at 702 occupations, and by their calculations, “about 47% of total US employment is at risk”. But for the white-collar workers reading it, there was some comfort: “wages and educational attainment exhibit a strong negative relationship with an occupation’s probability of computerisation.” In other words, the more you’re paid and the smarter you are, the less likely your job can be computerised.
The paper makes for interesting reading now. At the time, the buzzphrases at the bleeding edge of computing were “big data” and “IBM’s Watson”. Machine learning was just starting to rear its head; we were still a year or two away from Google’s DeepDream software, which used convolutional neural networks (CNNs) to create lurid, bizarre images that were part dream, part nightmare. The introduction of large language models (LLMs) was still years off: the foundational “transformer” paper in 2017 led to the first models in 2018.
So with Frey and Osborne’s analysis, we’re really looking at what feels now like the early days of machine learning/artificial intelligence (ML/AI). The broad sweep of the report is: things that are repetitive and manual where image recognition and object manipulation matter are probably in trouble, but those requiring “tasks not susceptible to computerisation” are probably safe.
Thus the broad outline suggests, for example, that
while it seems counterintuitive that sales occupations, which are likely to require a high degree of social intelligence, will be subject to a wave of computerisation in the near future, high risk sales occupations include, for example, cashiers, counter and rental clerks, and telemarketers. Although these occupations involve interactive tasks, they do not necessarily require a high degree of social intelligence.
And the lack of need for “social intelligence”—dealing with humans making variable demands—is what sinks those jobs as employment sources in the future, they argue.
The appendix to their paper, starting at page 61, is where you get the roll call of “computerisable” occupations, from the least to the most. Safest are occupational therapists; next up are “first-line supervisors of mechanics, installers, and repairers” (a nice way of saying “foremen”, I suspect), emergency management directors, mental health & substance abuse social workers, and audiologists. (Choreographers are also in the top 20 of “safe” jobs.) All have a “computerisability” score less than 0.05 (on a 0-1 scale).
But zoom down to the bottom of the list: “telemarketers” are reckoned to be an endangered species (C=0.99), along with tax preparers and “cargo & freight agents”. You might wonder where computer programmers fit: they’re at No.293, with a C-score of 0.49.
The paper set off a lot of pondering, doomsaying, and of course counter-doomsaying. Because self-driving cars seemed at the time to be just around the corner, an impressive analysis in 2015 forecast that American truckers would be made redundant in their thousands as self-driving trucks took over America’s freeways—and when their jobs went, so would all the truckstops and petrol stations and the people reliant on their passing traffic:
According to the American Trucker Association, there are 3.5 million professional truck drivers in the US, and an additional 5.2 million people employed within the truck-driving industry who don’t drive the trucks. That’s 8.7 million trucking-related jobs.
OK, but how soon were these self-driving trucks going to arrive?
According to Morgan Stanley, complete autonomous capability will be here by 2022, followed by massive market penetration by 2026 and the cars we know and love today then entirely extinct in another 20 years thereafter.
The author did point out that this was one of the more rapid estimates, and that others put the timescale at around 2035. His conclusion: “we’re looking at a window of massive disruption starting somewhere between 2020 and 2030.”
Looking at that article now, we’re not yet seeing the glimmer of that window. Trucking.org reports that there were 8.4 million people employed throughout the economy in jobs that relate to trucking activity in 2022, excluding the self-employed, and 3.54 million truck drivers employed in 2022 (an increase of 1.5% from 2021). It looks very much status quo ante.
But since that paper came out, and the truckpocalypse has so far failed to occur (in fact, self-driving cars have remained stubbornly just around the corner), Other Things have happened. LLMs have happened: ChatGPT, Microsoft’s Copilot, Google’s Bard (now Gemini), Dall-E, Midjourney, and, just on Thursday, Sora, a text-to-video product from OpenAI that produces short videos at up to 1080p resolution from a (relatively) simple text prompt.
These have changed things a lot. Artists are complaining about Dall-E and Midjourney taking work away from them. Programmers are getting worried about the potential for ChatGPT to take work away from them. Screenplay writers are worried about LLMs taking work away from them. Actors too: generative video is worrying, especially when you see how Samuel L Jackson and Bruce Willis and Robert De Niro and Carrie Fisher and Harrison Ford have all been manipulated, in one way or another, to manifest in places or at ages they absolutely weren’t. After a bit you start wondering which white-collar spaces aren’t at risk.
Thus the Wall Street Journal suggested earlier this week that “AI Is Starting to Threaten White-Collar Jobs. Few Industries Are Immune”, saying
While the total number of jobs directly lost to generative AI remains low, some of these companies and others have linked cuts to new productivity-boosting technologies such as machine learning and other AI applications.
That article also links back to a May 2023 WSJ article, which observed:
For the year ended in March [2023], the number of unemployed white-collar workers rose by roughly 150,000, according to an analysis from Employ America, a nonpartisan research group. That included workers in professional services, management, computer occupations, engineering, and scientists.
“I can’t think of any job where it’s like AI by itself,” said Rodney McMullen, chief executive of grocery chain Kroger, which has about 430,000 employees. “I can think of a lot of jobs that are being affected by AI.”
Sure, you can think of jobs that are being affected by AI, but are they being wiped out by AI? If AI becomes incorporated into the job, rather than taking it over, then you have what Garry Kasparov calls a centaur: the AI enhances the human’s ability, reaching further than before.
This is certainly what we’re seeing in a lot of cars. Does your car have ALKS (an automated lane keeping system)? If you’re on a motorway (freeway in the US), you can set the car to a cruise control speed, and it will keep the wheels inside the lane, and also make sure that if the car in front of you slows down, yours will brake and won’t hit it. This takes a lot of the tedium out of driving long distances; you can sit behind a lorry and slipstream in its wake, saving fuel and requiring only minimal concentration. Or ultrasonic parking sensors, to tell you how close objects are? Or even self-parking? You and your car, a centaur.
Hence Noēma magazine this week suggested that “AI could actually help rebuild the middle class”. There’s a neat line in the piece: “All the people who will turn 30 in the year 2053 have already been born and we cannot make more of them.”
But the writer, David Autor, points out that computerisation (as opposed to AI) has hardly been the worker’s friend:
Information, it turns out, is merely an input for a more consequential economic function, decision-making, which is the province of elite experts — typically the minority of U.S. adults who hold college or graduate degrees. By making information and calculation cheap and abundant, computerization catalyzed an unprecedented concentration of decision-making power, and accompanying resources, among elite experts.
Simultaneously, it automated away a broad middle-skill stratum of jobs in administrative support, clerical and blue-collar production occupations. Meanwhile, lacking better opportunities, 60% of adults without a bachelor’s degree have been relegated to non-expert, low-paid service jobs.
He sees potential benefits from AI through, well, let me call it centaurisation:
The unique opportunity that AI offers humanity is to push back against the process started by computerization — to extend the relevance, reach and value of human expertise for a larger set of workers. Because artificial intelligence can weave information and rules with acquired experience to support decision-making, it can enable a larger set of workers equipped with necessary foundational training to perform higher-stakes decision-making tasks currently arrogated to elite experts, such as doctors, lawyers, software engineers and college professors. In essence, AI — used well — can assist with restoring the middle-skill, middle-class heart of the US labor market that has been hollowed out by automation and globalization.
He does, however, recognise that when it comes to manual jobs there’s a lot of expertise which is difficult, even dangerous, to acquire.
Let’s say that I wanted to replace the fuse box in my 19th-century home with a 20th-century circuit breaker panel. Assume, hypothetically, that I’ve never touched a pair of electrical pliers and I don’t own insulated gloves. But I have a free Saturday and there’s a Home Depot around the corner. Confidence high, I fire up one of the dozens to hundreds of YouTube how-to videos on this subject and get to work. Inevitably, but not immediately, I realize that my 19th-century fuse box is not quite like the one in the video. Whether I choose to reverse course or brazenly carry on, I face a palpable risk of a nasty shock or electrical fire.
This is the sort of situation where the AI can’t make you a centaur, because you lack the ability to interpret what you’re seeing. That’s even assuming you trust the AI sufficiently; don’t overlook the examples where people, including lawyers, thought that ChatGPT would make them a centaur, and it instead made them look like doofuses by citing nonexistent cases for them to plead as precedents.
Ironically, it does seem like those manual tasks such as replacing the fuse box are perhaps the ones most likely to resist the blandishments of AI. Manual jobs which involve a certain adaptability, such as plumbing or carpentry or electrical work or of course building, look very safe from Frey and Osborne’s viewpoint, and from Autor’s. When your pipe joint starts leaking, you might go to YouTube, but unless you’ve actually done a fair amount of manual work yourself, you know that you need to defer to someone with the expertise, because all the YouTube videos in the world won’t save you.
Certainly I wish I’d known that a few years ago, when I tried to remove a radiator in the house I was living in without realising that you need to drain down the entire system first, or else have a strategy for putting a cap on the pipe end that was just attached to the radiator. I didn’t do either, and my punishment was to sit for roughly an hour, plugging the pipe end and its hot, hot water with my thumb while waiting for a real plumber to arrive. When he took over I sprinted out of the room to dunk my thumb in cold water; he sorted it out in seconds. I still don’t know how. Perhaps YouTube would tell me. Even so, paying someone else to take the risk still seems the better way forward. Even centaurs need to know their limits.
• You can buy Social Warming in paperback, hardback or ebook via One World Publications, or order it through your friendly local bookstore. Or listen to me read it on Audible.
You could also sign up for The Overspill, a daily list of links with short extracts and brief commentary on things I find interesting in tech, science, medicine, politics and any other topic that takes my fancy.
• I’m the proposed Class Representative for a lawsuit against Google in the UK on behalf of publishers. If you sold open display ads in the UK after 2014, you might be a member of the class. Read more at Googleadclaim.co.uk. (Or see the press release.)
• Back next week! Or leave a comment here, or in the Substack chat, or Substack Notes, or write it in a letter and put it in a bottle so that The Police write a song about it after it falls through a wormhole and goes back in time.
I tried, I REALLY tried, to get Substack’s AI image generator, and then Diffusion Bee (a desktop app for Stable Diffusion), to conjure up a centaur working in an office. No dice: they can’t even draw a centaur. You could have a) a horse, b) a slightly swarthy human, or c) no, just a horse again. Clearly there are not enough pictures of centaurs around, though I’m not sure how we remedy this problem, since no sensible artist is going to feed that maw and make themself redundant, especially after reading this.