AI’s transition problem
Recent successes in deploying AI point to a crucial challenge the field is facing
I recently read Martin Wolf’s wonderful essay about the challenges facing governments in the light of significant labour displacement. Last week there were two relevant, but distinct, announcements from Babylon Health and OpenAI. I connected the dots between the two in the latest issue of my weekly newsletter Exponential View. (Read the issue | Subscribe)
First, Babylon: the company announced that their AI-based chatbot had performed better than the typical British GP (a GP is a generalist physician rather than a specialist) on the qualifying exams run by the Royal College of General Practitioners. Babylon’s bot scored 81% on a test where humans averaged 72%, although there are some methodology issues. You can read a news story here, and the research paper, which I’ve skimmed, here.
The Royal College of General Practitioners responded with two key points: one silly, the other less so.
The first was a sense of outrage, along the lines that a GP will never be replaced by a machine. The outrage was misplaced. The technology is helpful because, frankly, it can assist GPs in making better and faster diagnoses (as such technologies are already helping others in the medical profession). It could also reduce triage times for patients in some circumstances: doctors are scarce and expensive. (Babylon, to their credit, has been delivering physician services via their AI system in Rwanda, where there is a severe shortage of human doctors.)
The second observation was a more considered one: that Babylon’s services were more likely to appeal to the young, healthy, educated and technology-savvy, allowing Babylon to cherry-pick low-cost patients and leave traditional GPs with more complex, older patients. This is a real concern, if only because older patients often have multiple co-morbidities and are vulnerable in many ways beyond their physical health. Health funding in the UK depends, in part, on pooling patients of different risk levels. In other words, unequal access to technology ends up benefiting the young (and generally healthier) at the cost of those who aren’t well served by the technology in its present state.
Exponential View has repeatedly flagged the risks of unequal access to technology because these technologies are, whatever you think of them, literally the interface to the resources we need to live in the societies of today and tomorrow.
Elsewhere, OpenAI, a research group, announced that it had developed bots that could beat human teams at a collaborative game called Dota. It’s a pretty huge step. Dota is a complex game with delayed rewards that apparently requires a good deal of strategic planning and intuition. Each character in Dota has different skills and capabilities, which introduces considerable uncertainty into the game. OpenAI Five was a team of bots that won 2 out of 3 games against an amateur team.
The bots learnt when to fail in order to help the team, to forgo local reward for global reward, and had no sense of a hero complex […] We as humans aren’t smart enough to see through that fog of complexity and complex interaction, but the systems we write might be. They might help us achieve the objectives which we’ve been lossily and haphazardly walking towards for hundreds of years.
You can read more technical details in this decent breakdown.
A couple of observations: the bots are trained using self-play. They had pretty significant computing resources, about 128,000 CPU cores and 256 GPUs (by comparison, AlphaGo required 1,920 CPUs and 280 GPUs in the match against Lee Sedol). And the machines can train themselves on the equivalent of 900 years of gameplay per Earth day.
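For readers unfamiliar with the term, self-play simply means the agents improve by playing copies of themselves rather than human opponents. Here is a minimal, purely illustrative sketch of the idea (the names Policy, play_match and self_play_training are hypothetical, and the scalar "skill" stands in for a real policy network update; this is not OpenAI's code):

```python
import random

class Policy:
    """A toy stand-in for a learned game-playing policy."""
    def __init__(self, skill=0.0):
        self.skill = skill

    def clone(self):
        return Policy(self.skill)

def play_match(policy_a, policy_b):
    """Toy match: the stronger policy wins more often. Returns True if A wins."""
    p_a_wins = 0.5 + 0.1 * (policy_a.skill - policy_b.skill)
    return random.random() < max(0.0, min(1.0, p_a_wins))

def self_play_training(iterations=1000):
    current = Policy()
    past_opponents = [current.clone()]  # pool of earlier versions of itself
    for _ in range(iterations):
        opponent = random.choice(past_opponents)
        won = play_match(current, opponent)
        # In a real system this step is a gradient update on the policy network;
        # here we just nudge the scalar "skill" up on a win, down on a loss.
        current.skill += 0.01 if won else -0.005
        past_opponents.append(current.clone())  # keep snapshots so play stays varied
    return current

if __name__ == "__main__":
    trained = self_play_training()
    print(f"Toy skill after self-play: {trained.skill:.2f}")
```

The key property this sketch tries to convey is that the opponent improves at the same rate as the learner, so the system generates an endless supply of appropriately difficult training games without any human data.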
So it’s clear that these systems don’t learn as efficiently as humans do: power consumption is higher and the amount of training required is off the charts. But note that the cost of compute is set to get much cheaper in the coming years with the influx of novel architectures to support machine learning. If it followed a Moore’s Law trajectory, the cost to execute something like this would be 100 times cheaper in a decade. I’ll stick my neck out and say that the combination of algorithmic improvements (in how the systems learn), more optimised architectures and simple scale economics will result in improvements far better than 100-fold in ten years.
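To make the 100-fold figure concrete, here is the back-of-the-envelope arithmetic, assuming price-performance doubles roughly every 18 months (the classic Moore’s Law cadence; the exact period is an assumption, not a forecast):

```python
# Rough estimate: how much cheaper does compute get over a decade
# if price-performance doubles every 18 months?
years = 10
doubling_period_years = 1.5
improvement = 2 ** (years / doubling_period_years)
print(f"~{improvement:.0f}x cheaper over {years} years")  # prints ~102x, i.e. roughly 100-fold
```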
Let me connect the dots here. In two very different domains, we’re able to use (software) machines to tackle things that were previously the demesne of Homo sapiens. The progress is rapid: only last year, OpenAI’s Dota bot could win only less complex, single-player games. But there is still a long way to go.
Equally, the real mid-term opportunity of technologies like Babylon’s is not to replace GPs but to enable them. Those technologies of enablement (the stethoscope, the thermometer, the probabilistic graphical model) help them do their job, delivering patient outcomes, more effectively.
However, there is a real transition problem that we have to manage. This is illustrated by the risk of Babylon cherry-picking low-cost patients and leaving the expensive ones to the traditional physician. This may not matter to society when it comes to other goods, but it does when it comes to certain fundamental services.
That transition problem is a cousin of the one that Martin Wolf alludes to in his essay. Many versions of our future may be filled with sunny uplands; figuring out how to navigate that steep climb has to be a priority.
I regularly write about the development of AI and its impact on business, society and culture in Exponential View. Join 29k+ readers from more than 40 countries who are hungry to know what the near future has in store for us. Subscribe