The legal, regulatory, behavioral and economic challenges of AI
Arlen Meyers, MD, MBA
President and CEO, Society of Physician Entrepreneurs, another lousy golfer, terrible cook, friction fixer
Artificial intelligence has become the backbone of data analytics. There are few industries, including the sick care systems industry, where data scientists are not using it to solve clinical, patient- and doctor-experience, and business problems. The applications are seemingly endless, to the point where medicine itself is turning into a data business that takes care of patients, rather than vice versa.
As AI gains widespread adoption and penetration in sick care systems, it is creating legal, regulatory, social, and economic challenges that regulators and policy makers will have to address. For example:
On Feb. 9, the World Health Organization (WHO) announced a new policy brief entitled "Ageism in artificial intelligence for health." The brief discusses legal, non-legal, and technical measures that can be used to minimize the risk of worsening or creating ageism through artificial intelligence (AI) technologies.
The report suggested that the following eight considerations could ensure that AI technologies for health address ageism and that older people are fully involved in the processes, systems, technologies, and services that affect them.
The FDA has recognized that AI and machine learning technologies pose a number of challenges from a regulatory perspective. A key challenge here is that when FDA regulates software as a medical device, there is a general question about how to determine when changes to a software algorithm are so significant that they merit reevaluation of the software product, its safety, and its effectiveness.
The U.S. Food and Drug Administration released a list of "guiding principles" aimed at helping promote the safe and effective development of medical devices that use artificial intelligence and machine learning.
The FDA, along with its U.K. and Canadian counterparts, said the principles are intended to lay the foundation for Good Machine Learning Practice.
The principles are:
How can we forecast, prevent, and (when necessary) mitigate the harmful effects of malicious uses of AI? A landmark review of the role of artificial intelligence (AI) in the future of global health published in The Lancet calls on the global health community to establish guidelines for the development and deployment of new technologies and to develop a human-centered research agenda to facilitate equitable and ethical use of AI.
Human-human risk homeostasis and automation bias are two potential risks of AI in medicine. Here are several others concerning the use of bots.
Innovators are leading indicators, and policy makers and regulators are laggards. However, those who push forward, ignoring the regulatory, IP, and reimbursement demands of a highly regulated environment, will crash and burn. Asking for forgiveness usually does not work, and until and unless we include policy makers as research and development collaborators, along with payers, practitioners, patients, and product makers, dissemination will crash on the shoals of regulatory and reimbursement sclerosis. As much as entrepreneurs might dislike it, getting permission is a better long-term strategy. Rules create or destroy the innovative ecosystems that drive business models that support innovation. The sooner we educate policy-making partners and lobby for change, the sooner patients will benefit from the deployment of AI innovation.
Arlen Meyers, MD, MBA is the President and CEO of the Society of Physician Entrepreneurs