A pragmatic suggestion for medical AI regulation

There is a growing global consensus that “we need to regulate AI”. The most commonly cited suggestion is to create a new global government agency to issue licenses for developing large AI models. That suggestion comes from brilliant people thinking big, and I have tremendous respect for the scale of their ambition. But I find myself craving more actionable ideas.

In particular, medical AI has the potential to revolutionize healthcare. To build trust in medical AI, we need transparent, deterministic regulation, and we should not wait on a new intergalactic regulatory agency to get started. Getting started requires specific, actionable ideas. I offer a few potential ideas here as a software builder with years of experience shipping regulated software (I managed Microsoft Windows while it was under US and European consent decrees).

When thinking about a new government agency, I am reminded of the “digital transformation” industry has undergone over the last decade. During that time, many Chief Digital Officers were hired, and they guided the adoption of modern applications to build products differently, provide customer support differently, sell differently, and communicate with employees differently. Organizations that transformed their core operations succeeded, while trailing organizations wrongly created sideline digital efforts. I anticipate we’re entering a new era of “AI transformation,” which will undoubtedly come with Chief AI Officers.

In the context of regulation, we will achieve our best results if we embrace AI within our current regulatory agencies rather than creating a new sideline agency. The SEC needs to integrate AI into its regulation of our investments. The FAA needs to integrate AI into the regulations that keep our airplanes safe. The FDA needs to integrate AI into its efforts to keep our medical care safe. With the domain expertise of these agencies and full consideration of AI, we can regulate AI effectively.

In the case of medical AI and the FDA, we can create an approval template inspired by the data-driven approvals we have today from randomized controlled trials (RCTs), but we should also be opportunistic about addressing RCTs’ known flaws: slow timelines, high cost, and unrepresentative populations that cause inequity and quality risk. And we need to recognize that medical AI will be big-data software, not a chemical or a device, and embrace successful software engineering principles to support medical AI as high-quality software. With that in mind, I recommend consideration of these principles in the needed FDA AI regulation:

1. Representative training data. RCTs carefully design the population cohorts that each phase of the trial is evaluated upon. Similarly, we should not approve for use any medical AI when we do not know what data was used to train the system. The models are only as good as the data they train on. To minimize risk to the population, we need to train medical AI models on data that is representative of the full diversity of our country. Only if the data reflects our diversity and the real medical conditions and comorbidities our population faces can we hope to improve patient care in a scalable and equitable way with AI. (A sketch of a simple representativeness check follows this list.)


2. Transparent testability. RCTs isolate the intervention arm from the placebo arm to clearly understand the impact of the new intervention. In software, we create “sandbox” environments to test new features and analyze cybersecurity threats. To maximize patient safety, we should insist that medical AI be deployable within a sandbox, allowing regulators to transparently test the reliability and accuracy of the model’s recommendations. We can build this controlled environment to support regular updates that improve quality over time while giving regulators full transparency. (A sketch of a sandbox evaluation harness follows this list.)


3. Phased controlled deployment. One flaw of today’s regulatory process is that once an intervention passes a small controlled RCT, it is released broadly into US healthcare, with Phase 4 data arriving only over the following years to assess real-world safety, efficacy, and value. Modern cloud software is deployed in steadily increasing rings of scale, with real-time telemetry on performance evaluated before updates reach more users. We should insist on the same data-informed, controlled deployment plan for medical AI: start slowly, learn, scale, learn, scale, learn, and roll back if needed. AI innovators should be accountable for building and supporting these deployment systems and for sharing the data and controls with regulators. (A sketch of a ring-based rollout follows this list.)
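To make principle 1 concrete, here is a minimal sketch, in Python, of the kind of representativeness check a regulator could require before approval. Everything in it is hypothetical: the demographic grouping, the reference shares, and the 0.8 threshold are illustrative placeholders, not an FDA standard.

```python
# Hypothetical sketch: flag groups under-represented in training data
# relative to a census-style reference population.
from collections import Counter

def representation_gaps(training_records, reference_population, min_ratio=0.8):
    """Flag demographic groups whose share of the training data falls below
    min_ratio times their share of the reference population."""
    counts = Counter(r["demographic_group"] for r in training_records)
    total = sum(counts.values())
    gaps = {}
    for group, ref_share in reference_population.items():
        train_share = counts[group] / total if total else 0.0
        if train_share < min_ratio * ref_share:
            gaps[group] = {"training_share": round(train_share, 3),
                           "reference_share": ref_share}
    return gaps

# Illustrative cohort that under-represents group "B" vs. the population.
records = [{"demographic_group": "A"}] * 700 + [{"demographic_group": "B"}] * 300
reference = {"A": 0.6, "B": 0.4}
print(representation_gaps(records, reference))
# -> {'B': {'training_share': 0.3, 'reference_share': 0.4}}
```

A real check would cover age, sex, race, geography, and comorbidity mix, but the principle is the same: the gap report is computed from the actual training data and shared with the regulator before approval.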
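For principle 2, here is a minimal sketch of a regulator-facing sandbox harness, assuming the model under review exposes a simple predict interface and the regulator supplies a labeled evaluation set. The interface names and the stub model are hypothetical.

```python
# Hypothetical sketch: evaluate a model on regulator-held test cases
# inside a controlled environment, reporting accuracy per condition.
def sandbox_evaluate(model, test_cases):
    """Run the model on held-out cases and report accuracy by condition."""
    report = {}
    for case in test_cases:
        stats = report.setdefault(case["condition"], {"correct": 0, "total": 0})
        stats["total"] += 1
        if model.predict(case["features"]) == case["expected"]:
            stats["correct"] += 1
    return {cond: s["correct"] / s["total"] for cond, s in report.items()}

class AlwaysFlagModel:
    """Stub standing in for the medical AI under review."""
    def predict(self, features):
        return "flag"

cases = [
    {"condition": "cardiac", "features": {}, "expected": "flag"},
    {"condition": "cardiac", "features": {}, "expected": "clear"},
]
print(sandbox_evaluate(AlwaysFlagModel(), cases))  # -> {'cardiac': 0.5}
```

Because the harness, not the vendor, computes the report, regulators can rerun it transparently after every model update.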
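For principle 3, here is a minimal sketch of ring-based deployment with telemetry gates and rollback, in the style of modern cloud rollouts. The ring names, the 0.95 quality bar, and the deploy/telemetry/rollback hooks are all hypothetical placeholders for a real deployment system shared with regulators.

```python
# Hypothetical sketch: expand deployment ring by ring, gated on telemetry.
RINGS = [("pilot sites", 0.01), ("regional", 0.10), ("national", 1.00)]

def phased_rollout(deploy, collect_quality_score, rollback, quality_bar=0.95):
    """Widen exposure ring by ring; halt and roll back if real-time
    telemetry shows quality below the agreed bar."""
    for name, fraction in RINGS:
        deploy(fraction)                 # widen exposure to this ring
        score = collect_quality_score()  # regulator-visible telemetry
        print(f"{name}: {fraction:.0%} of sites, quality={score:.3f}")
        if score < quality_bar:
            rollback()                   # start slowly, learn, roll back if needed
            return False
    return True

# Illustrative run: quality dips below the bar at the national ring.
scores = iter([0.99, 0.97, 0.93])
phased_rollout(deploy=lambda fraction: None,
               collect_quality_score=lambda: next(scores),
               rollback=lambda: print("rolled back to previous ring"))
```

The design choice that matters is that the gate is data-informed and automatic: scale only continues while measured quality holds, and both the telemetry and the rollback control are visible to the regulator.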

Let’s pragmatically integrate AI into our existing medical regulatory frameworks. With pragmatic regulation we can earn trust in medical AI and transform healthcare.

If we get this right, we will save lives. If we get this wrong, we will waste a lot of time and money, and lives will be lost.

Let’s convene the pragmatic medical and software experts asap to build a plan.

Shahz Afzal

B2B Marketing & GTM Leader | AI & Data-Driven Strategies | Expert in Partnerships & Revenue Growth | Ex-AWS, Ex-Microsoft

1y

Good point of view, Terry! In the US, regulatory agencies will need help from industry to look at these training and deployment models and put a policy framework around them. Is anyone leading such an initiative in the tech/healthcare industry?

Sid M.

VP Data, Analytics & AI | Data, Digital and Technology Transformer | Strategy, AI for Healthcare | Innovation at Scale, System Transformation

1y

Love the simple and intuitive approach. It is the fringe AI-enabled use cases that will trip us up. Transparency, testability, and phased deployment are great. One more aspect, perhaps: incorporating continual monitoring and human-intervention mechanisms would let us address system anomalies and maintain control in dynamic, complex environments.

John Kahan

Board Member, Chief Data Analytics Officer

1y

I like your pragmatic approach here. Stopping innovation isn’t the answer. Creating the right guardrails that enable innovation via transparent and repeatable algorithms can help speed it.

Micah Voraritskul

Founder at VerifiedHuman | Writer at The Sharp Pencil | AI, Ethics, Education, Marketing

1y

We are developing a values-based, human-centric solution https://www.dhirubhai.net/company/94163384 https://www.verifiedhuman.info

Cheng Cao

Principal Machine Learning Engineer at Truveta

1y

I agree with

要查看或添加评论,请登录

社区洞察

其他会员也浏览了