A racially biased healthcare algorithm
James Lawson
Director, Programmes at Helsing - AI to serve our democracies | Chairman at ASI | Former SpAd
Recent news that an algorithm used to guide care for 200 million patients was racially biased is deeply concerning. This cautionary tale offers valuable lessons. A hasty rejection of Artificial Intelligence (AI) would be the wrong response, though: AI has immense potential to improve health outcomes across society.
AI is already transforming health for the better. British researchers have built systems that can accurately diagnose breast cancer and detect eye conditions faster than ever before. Hospitals are using AI to identify sepsis cases proactively, saving lives. Reducing readmissions and producing more accurate staffing forecasts are also saving money – essential for cash-strapped health services.
This story reminds companies to follow AI best practice. Set the right goals – the AI will follow your lead. In this case, the algorithm predicted healthcare costs as a proxy for health needs; because less money was historically spent on Black patients with the same level of need, the algorithm understated how sick they were. Train the AI on your own data and check it for bias. Avoid opaque third-party “black-box” tools. Make sure decisions are easy to explain and to justify. For sensitive areas, keep people in the loop. Despite the recent news, AI can be ethical and trustworthy when used properly.
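The cost-as-proxy failure is easy to see with a toy simulation. The sketch below uses entirely synthetic, hypothetical data (the groups, numbers, and the `access` factor are illustrative assumptions, not figures from the study): two groups have the same distribution of true health need, but one historically receives less care, so its spending – the label the algorithm learns from – understates its need.

```python
# Illustrative sketch with synthetic data: why training on cost rather
# than health need can encode bias. All parameters here are assumptions
# for illustration, not values from the Science paper.
import random

random.seed(0)

def simulate_patient(group):
    need = random.gauss(50, 10)            # true health need: same distribution for both groups
    access = 1.0 if group == "A" else 0.7  # group B historically receives less care
    cost = need * access                   # observed spending understates group B's need
    return need, cost

patients = [(g, *simulate_patient(g)) for g in ("A", "B") for _ in range(1000)]

def mean(xs):
    return sum(xs) / len(xs)

for g in ("A", "B"):
    needs = [need for grp, need, cost in patients if grp == g]
    costs = [cost for grp, need, cost in patients if grp == g]
    print(f"group {g}: mean true need {mean(needs):.1f}, mean cost label {mean(costs):.1f}")
```

A model trained on the cost labels will rank group B as lower-risk than group A even though the underlying need is identical – the goal, not the model, is where the bias enters.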
Recommended further reading
Original article in Science: "Dissecting racial bias in an algorithm used to manage the health of populations" https://science.sciencemag.org/content/366/6464/447/tab-e-letters
GitLab repository to reproduce the authors' analysis: https://gitlab.com/labsysmed/dissecting-bias
Blog by Colin Priest (VP AI Strategy for DataRobot), expanding upon this and other "Data Science Fails": https://blog.datarobot.com/data-science-fails-be-careful-what-you-wish-for
DataRobot's White Paper on AI Ethics: https://www.datarobot.com/resource/ai-ethics/