The legal, regulatory, behavioral and economic challenges of AI

Artificial intelligence has become the backbone of data analytics. There are few industries, including the sick care systems industry, where data scientists are not using it to solve clinical, patient and doctor experience, and business problems. The applications are seemingly endless, to the point where medicine itself is turning into a data business that takes care of patients, rather than the other way around.

Elon Musk is worried.

As AI gains widespread adoption and penetration in sick care systems, it is creating legal, regulatory, social, and economic challenges that regulators and policy makers will have to address. For example:

  1. Jobs and workforce development shifts that make inequality worse
  2. How to educate and train the medical workforce in digital health and AI in particular
  3. Security and confidentiality of massive amounts of data
  4. Data overload and fatigue
  5. Reimbursement changes for electronic services, such as when an avatar or bot responds to a patient request. Here's an example of what I mean.
  6. How to pay for services that require an integration of man and machine
  7. Liability issues when "the computer made me do it"
  8. FDA regulatory standards and compliance issues for AI and future applications. For example, when is AI a medical device?
  9. The economic consequences and costs when hospital systems consider AI applications, integration into legacy systems, or replacing those systems
  10. The technical and systems challenges of updating AI with new data sets
  11. Trust in technology
  12. Transparency about how a particular machine was trained to make a certain decision.

Here are the top ten legal considerations for use and/or development of artificial intelligence in health care.

On Feb. 9, the World Health Organization (WHO) announced a new policy brief entitled "Ageism in artificial intelligence for health." The brief discusses legal, non-legal, and technical measures that can be used to minimize the risk of worsening or creating ageism through artificial intelligence (AI) technologies.

The report suggested the following eight considerations could ensure that AI technologies for health address ageism and that older people are fully involved in the processes, systems, technologies and services that affect them.

  • Participatory design of AI technologies by and with older people
  • Age-diverse data science teams
  • Age-inclusive data collection
  • Investments in digital infrastructure and digital literacy for older people and their health-care providers and caregivers
  • Rights of older people to consent and contest
  • Governance frameworks and regulations to empower and work with older people
  • Increased research to understand new uses of AI and how to avoid bias
  • Robust ethics processes in the development and application of AI

The European Parliament has published its review of the ethical and social impact of AI in healthcare.

The FDA has recognized that AI and machine learning technologies pose a number of challenges from a regulatory perspective. A key challenge is that when the FDA regulates software as a medical device, there is a general question about how to determine when changes to a software algorithm are so significant that they merit reevaluation of the software product, its safety and effectiveness.

The U.S. Food and Drug Administration released a list of "guiding principles" aimed at helping promote the safe and effective development of medical devices that use artificial intelligence and machine learning.

The FDA, along with its U.K. and Canadian counterparts, said the principles are intended to lay the foundation for Good Machine Learning Practice.

The principles are:

  1. The total product life cycle uses multidisciplinary expertise.
  2. The model design is implemented with good software engineering and security practices.
  3. Participants and data sets represent the intended patient population.
  4. Training data sets are independent of test sets (see the sketch after this list).
  5. Selected reference data sets are based upon best available methods.
  6. Model design is tailored to the available data and reflects intended device use.
  7. Focus is placed on the performance of the human-AI team.
  8. Testing demonstrates device performance during clinically relevant conditions.
  9. Users are provided clear, essential information.
  10. Deployed models are monitored for performance, and retraining risks are managed.
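Principle 4 is easy to violate with health data, where a single patient can contribute many records; splitting by record rather than by patient can leak information from the training set into the test set. Below is a minimal sketch of a patient-level split, assuming scikit-learn is available; the feature matrix, labels, and patient IDs are hypothetical toy data, not part of any FDA guidance.

```python
# Minimal sketch of principle 4 (train/test independence) using a
# patient-level split so that no patient appears in both sets.
# Assumes scikit-learn is installed; all data below is hypothetical.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
n_records = 1000
X = rng.normal(size=(n_records, 5))                  # hypothetical clinical features
y = rng.integers(0, 2, size=n_records)               # hypothetical outcome labels
patient_ids = rng.integers(0, 200, size=n_records)   # several records per patient

# Split by patient, not by record, to avoid leakage between train and test.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, test_idx = next(splitter.split(X, y, groups=patient_ids))

# No patient ID should appear on both sides of the split.
assert set(patient_ids[train_idx]).isdisjoint(set(patient_ids[test_idx]))
print(f"train records: {len(train_idx)}, test records: {len(test_idx)}")
```

Splitting on the patient identifier rather than on individual records is one simple way to honor the independence principle; the same grouping idea applies to site-level or device-level splits.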

Here are some other looming issues.

How can we forecast, prevent, and (when necessary) mitigate the harmful effects of malicious uses of AI? A landmark review of the role of artificial intelligence (AI) in the future of global health published in The Lancet calls on the global health community to establish guidelines for development and deployment of new technologies and to develop a human-centered research agenda to facilitate equitable and ethical use of AI.

Human-human risk homeostasis and automation bias are two potential risks of AI in medicine. Here are several others concerning the use of bots.

Innovators are leading indicators and policy makers and regulators are laggards. However, those that push forward while ignoring the regulatory, IP and reimbursement demands of a highly regulated environment will crash and burn. Asking for forgiveness usually does not work, and until and unless we include policy makers as research and development collaborators, along with payers, practitioners, patients and product makers, dissemination will crash on the shoals of regulatory and reimbursement sclerosis. As much as entrepreneurs might dislike it, getting permission is a better long-term strategy. Rules create or destroy the innovative ecosystems that drive business models that support innovation. The sooner we educate policy-making partners and lobby for change, the sooner patients will benefit from the deployment of AI innovation.

Arlen Meyers, MD, MBA, is the President and CEO of the Society of Physician Entrepreneurs.

This is happening already. Check out the first AI summit at the UN here: https://www.itu.int/en/ITU-T/AI/Pages/201706-default.aspx
