Artificial Intelligence in Healthcare
An update by Rahul Chakraborty, LBS, UK, Cohort 22-23

What is AI? A primer for clinicians

Artificial intelligence describes a range of techniques that allow computers to perform tasks typically thought to require human reasoning and problem-solving skills. ‘Good Old-Fashioned AI’, which follows rules and logic specified by humans, has been used to develop healthcare software since the 1970s, though its impact has been limited. More recently, there have been substantial advances in machine learning, particularly artificial neural networks, where computers learn from examples rather than from explicit programming.

Neural networks are built from many interconnected ‘neurons’. The connections between these neurons are strengthened if they help the machine arrive at the correct answer and weakened if they do not. The system itself is made up of an input layer, one or more hidden layers, and an output layer, with a huge number of connections between the layers that can be refined. Over time, these billions of small refinements can hone an algorithm that is very successful at its task, as the sketch below illustrates.
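To make this concrete, here is a minimal sketch of that layered structure in Python with NumPy. The layer sizes, learning rate, and sigmoid activation are illustrative assumptions for this sketch only, not a description of any real clinical system.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Squashes any number into the range 0..1, a common 'activation'.
    return 1.0 / (1.0 + np.exp(-x))

# The 'connections' between neurons are numeric weights. Training
# strengthens or weakens them according to whether they helped the
# network reach the correct answer. The layer sizes are arbitrary.
W1 = rng.normal(size=(4, 8))   # input layer (4 features) -> hidden layer (8 neurons)
W2 = rng.normal(size=(8, 1))   # hidden layer -> output layer (1 prediction)

def forward(x):
    hidden = sigmoid(x @ W1)              # hidden-layer activations
    return sigmoid(hidden @ W2), hidden   # prediction between 0 and 1

def train_step(x, y, lr=0.1):
    # One refinement step: nudge every weight in the direction that
    # reduces the error between the prediction and the known answer.
    global W1, W2
    pred, hidden = forward(x)
    delta_out = (pred - y) * pred * (1 - pred)                 # output-layer error signal
    delta_hidden = (delta_out @ W2.T) * hidden * (1 - hidden)  # passed back to the hidden layer
    W2 -= lr * (hidden.T @ delta_out)                          # refine hidden-to-output connections
    W1 -= lr * (x.T @ delta_hidden)                            # refine input-to-hidden connections
```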


Artificial neural networks are a common type of machine learning inspired by how an animal brain works. They progressively improve at a particular task by considering examples. Early image recognition software was taught to identify images containing a face by analyzing example images that had been manually labeled as ‘face’ or ‘no face’. Over time, with a large enough data set and a powerful enough computer, such systems get better and better at the task, finding connections in the data without being explicitly told what to look for.
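Continuing the sketch above (it reuses the forward and train_step functions), the short loop below shows this learning-by-example process in miniature. The data and the hidden rule the network must discover are synthetic stand-ins for the manually labeled ‘face’ / ‘no face’ images described in the text.

```python
# Synthetic training data standing in for manually labeled examples.
X = rng.normal(size=(200, 4))                  # 200 examples, 4 features each
y = (X[:, :1] + X[:, 1:2] > 0).astype(float)   # the hidden rule to be discovered

# Repeatedly show the network the labeled examples; each pass refines
# the connection weights a little further.
for epoch in range(2000):
    train_step(X, y)

pred, _ = forward(X)
accuracy = np.mean((pred > 0.5) == (y == 1.0))
print(f"accuracy on the training examples: {accuracy:.2f}")
```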


There are three fundamental limitations of these methods:

1. Explainability

Modern machine learning algorithms are often described as a ‘black box’. Decisions rest on the huge number of connections between ‘neurons’, so it is difficult for a human to understand how a conclusion was reached. This makes it hard to assess reliability or bias, or to detect malicious attacks.

2. Data requirement

Neural networks need to be trained on a vast amount of accurate and reliable data. Inaccurate or misrepresentative data could lead to poorly performing systems. Health data is often heterogeneous, complex, and poorly coded.

3. Transferability

Algorithms may be well optimized for the specific task they have been trained on but can be confidently incorrect on data they have not seen before, as the sketch below illustrates.
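A toy illustration of this failure mode, assuming scikit-learn is available: a simple classifier is trained on synthetic data clustered near the origin and then asked about a point far outside anything it has seen. It returns an extreme probability rather than any admission of unfamiliarity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: two tight clusters, entirely near the origin.
X_train = np.vstack([rng.normal(-1.0, 0.3, size=(100, 2)),
                     rng.normal(+1.0, 0.3, size=(100, 2))])
y_train = np.array([0] * 100 + [1] * 100)

clf = LogisticRegression().fit(X_train, y_train)

# A point wildly outside the training distribution still receives an
# extremely confident prediction rather than an 'I don't know'.
far_away = np.array([[50.0, 50.0]])
print(clf.predict_proba(far_away))  # probabilities very close to 0 and 1
```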


Patient safety

Central to the debate about the introduction of AI to healthcare is perhaps the most fundamental question: will patients be safe or safer? Proponents argue that machines don’t get tired, don’t allow emotion to influence their judgment, make decisions faster, and can be programmed to learn more readily than humans. Opponents say that human judgment is a fundamental component of clinical activity and that the ability to take a holistic approach to patient care is the essence of what it means to be a doctor.

Digitized clinical support tools offer a way to cut unwarranted variation in patient care. Algorithms could standardize tests, prescriptions, and even procedures across the healthcare system, being kept up-to-date with the latest guidelines in the same way a phone’s operating system updates itself from time to time. Advice on specialist areas of medicine normally only available through referral to secondary or tertiary services could be delivered locally and in real-time.

Direct-to-patient services could provide digital consultations regardless of the time of day, geography, or verbal communication needs, including language.

However, algorithms could also provide unsafe advice. The tech mantra of ‘move fast and break things’ does not fit well when applied to patient care. As we shall see across the domains, evaluating whether an AI is safe will be challenging. It may be poorly programmed, poorly trained, used in inappropriate situations, or fed incomplete data, and it could be misled or hacked. Worse, dangerous AI could replicate harm at scale.

Clinical considerations:

— Algorithms could standardize assessment and treatment according to up-to-date guidelines, raising minimum standards and reducing unwarranted variation

— Artificial intelligence could improve access to healthcare, providing advice locally and in real-time to patients or clinicians and identifying red flags for medical emergencies like sepsis

— Decision support tools could be confidently wrong, and misleading algorithms are hard to identify

— Unsafe AI could harm patients across the healthcare system.

Ethical issues:

— The widespread introduction of new AI healthcare technology will help some patients but expose others to unforeseen risks. What is the threshold for safety at this scale – how many people must be helped for each one that might be harmed? How does this compare to the standards to which a human clinician is held?

— Who will be responsible for harm caused by AI mistakes – the computer programmer, the tech company, the regulator, or the clinician?

— Should a doctor have an automatic right to over-rule a machine’s diagnosis or decision? Should the reverse apply equally?

Practical challenges:

— Human subtleties may be hard to digitize, and machines may struggle to negotiate a pragmatic compromise between medical advice and patient wishes

— Few clinicians will be able to understand the ‘black box’ that neural networks use to make decisions, and the code may be hidden as intellectual property. Should we expect them to trust its decision?

— A focus on measurable targets could lead to AI ‘gaming’ the system, optimizing markers of health rather than helping the patient

— As clinicians become increasingly dependent on computer algorithms, these technologies become attractive targets for malicious attacks. How can we prevent them from being hacked?

— The importance of human factors and ergonomics risks being overlooked. The public, patients, and practitioners should be engaged in the design phase and not left simply as end-users.


The doctor and patient relationship

The nature of the relationship between clinicians and their patients has evolved alongside medicine itself. For centuries, the doctor held exclusive knowledge and issued ‘orders’. Today, doctors are expected to take a holistic approach, providing care that is tailored to each patient’s wishes and based on shared decision-making. The future use of AI technologies has the potential to cause a further seismic shift in the culture of interactions between clinicians and patients.

Much of this depends on the nature of the interface between the public and AI. Applications could range from a doctor-facing decision support tool, potentially unnoticed by the patient, to an autonomous AI system accessible from the patient’s own devices, diagnosing and treating conditions without human clinical involvement.

As AI systems become more autonomous with a greater degree of direct-to-patient advice, a significant need arises to establish the role of clinicians in maintaining quality, safety, patient education, and holistic support. The psychological impact on both patients and doctors of the presence of AI must be anticipated, including an inherent reluctance to disagree with the recommendations of digital systems.

Clinical considerations:

— The holistic side of consultation would be difficult to replicate with digital tools – doctors are better equipped to detect non-verbal signs, tone of voice, and other subtle cues. Loss of this human contact could lead to reduced awareness of patients’ loneliness, safeguarding, or social needs

— Will the doctor become a second opinion, a step in the quality assurance process, or an interpreter? In what contexts should clinical staff review AI-generated advice for quality assurance and interpretation before it is accessible to a patient?

— There is a risk that lay people unfamiliar with medical data may under- or overestimate the severity of conditions and misunderstand the magnitude of risks.


Ethical issues:

— Can a doctor be expected to act on the decisions made by a ‘black box’ AI algorithm? In deep neural networks, the reasons and processes underlying the decisions made by AI may be difficult to establish, even by skilled developers. Do doctors need to explain that to patients?

— Will clinicians bear the psychological stress if an AI decision causes patient harm? They could feel great responsibility for their role in the process without the power to modify or understand the AI’s contribution to the error

— Could the ready availability of a tool superficially appearing to ‘replace’ a doctor’s advice diminish the value of clinicians in the eyes of the public and therefore reduce trust and degrade the quality of the doctor-patient relationship?

Practical challenges:

— If AI and doctors disagree, who will be perceived as ‘right’? The degree of relative trust held in technology and in healthcare professionals may differ between individuals and generations

— Autonomous health advice and the interface with wearable devices may promote patients’ health ownership and supported self-care but could result in increased health anxiety or health fatigue for some members of the public

— Reduced face-to-face contact could reduce opportunities for clinicians to offer health promotion interventions – this must be factored into systems.


Accountability for decisions

Who should be held responsible when something goes wrong? It is a fundamental question at the heart of the conversation between clinicians, healthcare organizations, policy makers, and AI developers. To what extent do we expect healthcare providers to understand the intricacies of AI technology, and technology firms to understand the realities of clinical practice?

AI is rapidly developing and complex, and there will be errors and unforeseen consequences. Technology companies are currently focusing on AI that supports clinicians rather than replacing clinical judgment – implying that accountability for mistakes remains with the clinician. But a line needs to be drawn between accountability for content and accountability for operation. A clinician might be accountable for not using an algorithm or device correctly; where harm is caused by incorrect content rather than improper use, however, accountability must lie with those who designed and quality-assured it.

However, this line may not be so easy to define. Clinicians may find themselves incorrectly justifying decisions made by AI because of the well-documented concept known as automation bias, whereby humans tend to ‘trust’ a machine more than they might trust themselves. If the clinician is, in effect, ‘rubber stamping’ anything recommended by an algorithm, who is responsible if an error is made?


Machine learning algorithms can be hidden in the much-vaunted ‘black box’, where the reasons behind a decision may not be explainable in a way humans can understand. Combine this with the fact that the software itself may be unavailable for review for intellectual property reasons, and the training data for privacy reasons, and true accountability becomes even more impractical. Crucially, the patient and the clinician may be recommended a course of action or treatment without any real opportunity to check or challenge the approach taken by the machine.

Bias, inequality, and unfairness

Will AI provide fairer and more objective decisions than humans, who are limited by our own personal experiences and biases? Or will they absorb and even amplify human prejudices, embedding discrimination within healthcare systems? If the training data isn’t representative or the goals are inappropriate, the resulting AI tool could be deeply inequitable.

Machine learning algorithms being used outside of healthcare have been criticized for discriminating based on race, gender, age, postcode, and religion, while chatbots have been tricked into propagating hate speech. Artificial intelligence can ‘learn’ the wrong values and even become self-fulfilling – for example, an algorithm for helping with job hiring decisions might simply reward people who have the same background as those in the historical recruitment data, reinforcing its bias with every decision.

The ‘black box’ nature of neural networks makes it particularly hard to assess whether an AI is truly biased. Worse still, machine learning is very good at identifying proxies for characteristics, such as predicting race and socioeconomic group from names and postcodes, as the sketch below illustrates. Tech companies such as IBM, Google, Microsoft, and Facebook are all creating tools to help identify bias in algorithms.
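As a small illustration of proxy learning (synthetic data, scikit-learn assumed available): the protected attribute below is never given to the model as an input, yet it can be recovered almost perfectly from a correlated field standing in for a postcode area.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic population: a protected attribute and a 'postcode area'
# that is strongly correlated with it, as deprivation indices often are.
protected = rng.integers(0, 2, size=n)
postcode_area = protected * 10 + rng.integers(0, 3, size=n)  # correlated proxy
unrelated = rng.integers(0, 5, size=n)                       # irrelevant noise

X = np.column_stack([postcode_area, unrelated])

# The protected attribute was excluded from the features, yet the model
# reconstructs it from the proxy with near-perfect accuracy.
clf = LogisticRegression().fit(X, protected)
print("accuracy recovering the protected attribute:", clf.score(X, protected))
```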


Training and Education

The adoption of AI in clinical practice will inevitably impact the training and education of clinicians, both through enhanced technological opportunities and through a shift in fundamental learning needs as professional working practices change.

Artificial intelligence could underpin sophisticated digital tools to support learning and development:

— AI could be incorporated into high-fidelity simulations generating clinical scenarios across a range of specialties to enhance training and revalidation

— With the pace of advancement of medical knowledge, the sheer volume of new information exceeds what any individual can keep pace with in real-time. Artificial intelligence has the potential to analyze large datasets across multiple sites and condense the information into practical use for the clinician

— Combined with other digital technologies, AI could be used to personalize training by evaluating previous experiences, responses, and outcomes to model the strengths and weaknesses of individual clinicians. Personalized medicine need not be for patients alone.

It is often suggested that AI will play a pivotal role in automating simple clinical tasks to free clinician time for more complex activities. Although attractive in terms of workforce utilization and cost, there is the potential that losing skills in more basic tasks could undermine those needed for more complex work.

The medical profession has long interacted with pharmaceutical companies. Medical students are educated to interpret and critique the output of clinical trials, and strict marketing regulations are in place. As doctors seek the evidence behind pharmaceuticals, perhaps they should similarly be trained to appraise new healthcare technologies for safety and efficacy and understand their technical limitations and risks.

There are models of ‘peaceful co-existence’ – autopilots on planes, for example, which have improved airline safety without compromising the training of pilots. There is little reason why the same cannot be true for medicine.


Medical Research

Artificial intelligence is ideally suited to analyzing the large and complex data sets used in medical research. Pharmaceutical companies are looking to AI to streamline the development of new drugs, researchers can use predictive analytics to identify suitable candidates for clinical trials, and scientists can create more accurate models of biological processes.

But there are challenges as well – for example, what dataset do you test new hypotheses against? And, as data linkage is held by many as the key to unlocking our knowledge of disease, would an algorithm be capable of coming to common-sense conclusions?

There are plenty of questions about how useful machine learning will be in practice. Does this approach lead to the ecological fallacy, where aggregate data provides false answers? Will it overwhelmingly generate multiple instances of correlation without knowledge of causation, wasting researchers’ time and resources and misleading the public? In any case, clinical input will be needed for the foreseeable future to ensure the validity and relevance of the research.


Artificial intelligence and machine learning techniques can allow datasets to be analyzed far more quickly, thoroughly, and inexpensively. There is a risk, though, that this could lead to a shift towards research that focuses solely on analyzing large data sets, skewing the research landscape away from traditional medical studies and diverting funding and effort away from ‘gold standard’ research methods.

Researchers from technological backgrounds will need to be made aware of the key underpinning principles of ethical medical research, including professional standards on maintaining confidentiality, transparency, and minimizing adverse effects.


The Regulatory Environment

At the heart of the development of AI in healthcare are questions about the regulatory environment. As with all regulation, a balance must be struck between protecting the public, clinicians, and the service, and promoting growth and innovation. These are not mutually exclusive concepts, and there are past examples of good practice – the ethical and legal frameworks that underpinned the development of in-vitro fertilization, for example. Indeed, many point out that it was thanks to this early focus on regulation that the science flourished. Lessons can be drawn for the development of AI.

The challenges AI presents to regulators are diverse. Its likely impact on medical systems and devices, on clinical practice, and on relationships between clinicians and patients (and between patients and the providers of health-related applications marketed directly to them) means that regulators will need to work in a complementary way to develop relevant and appropriate regulatory frameworks for AI. While many AI products will meet the definition of a medical device and would therefore fall under the regulatory jurisdiction of the MHRA, there are also implications for:

— General Medical Council – clinicians will need clear guidelines on the appropriate use of AI

— Medical defence organizations – the nature of negligence claims may change as patients adapt to the availability of AI-generated decisions and recommendations

— Care Quality Commission – will need to consider how AI systems are embedded and used in healthcare organizations and their impact on the quality of care

The advent of AI is a potential game-changer for healthcare, and regulatory processes will need to adapt. For example, the current approach to safety relies heavily on a structured approach to foreseeing hazards so that they can be avoided or mitigated. In the ‘black box’ of machine learning, it will not necessarily be possible to foresee potential hazards, so new ways of conducting clinical safety processes may be needed for AI. Similarly, the regulatory framework for medical devices will need to adapt to the world of AI.

Emerging technologies will need to be tested to make sure they are robust – but how? Should the regulation of products be based upon the process for development, such as minimum dataset standards and clinician involvement, or on the output quality (‘real world testing’)? The former would be less labor-intensive but could potentially miss those that have gone through the right process but generated the wrong result due to error or unknown component factors. The reality may be that safeguards need to be built into the whole chain from development through to production.

There is already a plethora of apps providing advice direct to patients. A balance needs to be struck between effective regulation and encouraging innovation. Should products that provide autonomous diagnosis and management require a ‘license to practice’? Could they prescribe? How would indemnity be managed? Would clinicians be left dealing with the aftermath of errors or bad advice from an AI system? It might be argued that the level of regulation should vary according to the risks – psychiatric patients, the young, and the elderly, for example, might be at particular risk from ‘bad advice’ from digitized systems. If so, should systems aimed at such groups be regulated more closely?

Regulators need to focus on two broad issues in tandem – is the process correct, and is the content correct? Both aspects will bring fresh challenges as AI, by its very nature, is dynamic. An algorithm that meets clinical standards on a Monday may be a different algorithm on a Tuesday.

As things stand, the current regulatory environment is only capable of approving or not approving people, procedures, medicines, devices or institutions in a static context. It may be that a ‘light touch’ approach to regulation will move towards approving (or not) the provider of AI and not the AI itself.

Intellectual property and the financial impact on the healthcare system

Healthcare is big business. The development of AI tools requires significant resources and expertise, and those who invest capital, time, and specialist knowledge are likely to expect to reap the rewards of successful products. The development of AI technologies requires access to meaningfully labeled data and strategic clinical design. There is potential for the NHS to profit from selling data, or at least recoup some costs. Indeed, some commentators put the value of the data it holds at £15bn – potentially an attractive sum in the era of a budget-constrained healthcare system.

Technological advancements in AI have the potential to change the landscape of the healthcare system dramatically. They could be used to promote the integration of services and data, leading to more streamlined and efficient care pathways. Direct-to-patient AI technologies can potentially replace the need for medical consultation in some cases, providing reassurance, advice, or direct access to simple treatments.

However, there is also potential to drive new demand through the drastically increased ease of access, leading to a large increase in the number of contacts with the health service – particularly where systems err on the side of caution for reasons of safety. This could improve early detection of serious conditions but could also lead to over-investigation and a vast new source of financial demand.

It remains to be seen on which elements of the system AI will have the greatest initial impact. Medical investigations could be automatically identified and ordered before face-to-face consultations so that results are immediately available, achieving a more rapid diagnosis.

Primary care-like systems could diagnose and triage directly to secondary care, avoiding the need for a GP consultation, while secondary care-like systems equipped with up-to-date treatment algorithms could support GPs to manage conditions traditionally requiring specialist input. We must remain cognisant that the integration of new AI technologies into services will involve parties with a range of financial interests, and manage this with due care to achieve equitable benefits for all.

Impact on doctors’ working lives

At a time of widespread clinician burnout and a shortage of staff, AI offers the potential to automate some of the workload and reduce the burden of routine tasks. This could leave doctors free to engage in more interesting and challenging work and could present opportunities to work more flexibly. Some have feared that certain experts may be ‘replaced’ by AI in the long term, leading to unemployment, although the breadth of skills and attributes required of a doctor cannot be easily replicated.

Artificial intelligence tools supporting clinical decision-making could empower clinicians to work confidently in a wider range of areas, providing ‘as needed’ access to support from a repository of up-to-date knowledge. Underlying this is an implicit trust that the technologies can be relied upon, which will generate tensions if disagreement or loss of faith occurs.

Artificial intelligence could change the type of person who would choose to become a doctor. If sophisticated AI were, in the future, to take on a dominant role in talking to patients, processing information, and making decisions, the resulting reduction in direct patient interaction and shift in professional roles and tasks could significantly alter the day-to-day nature of medicine as a career.

Artificial intelligence could fundamentally change the way doctors work, as well as their relationships with patients. Modern medicine is a necessarily cautious and risk-averse industry. Will doctors be steering the direction of medical AI or be overtaken by the rapid pace of technological development?

Clinical engagement is required to achieve harmony between the professions and the burgeoning healthcare technology market and to shape the advancement and deployment of these technologies for the benefit of patients.


Impact on the wider healthcare system

However it plays out, there are two visions of an AI-enabled healthcare system. We could see a utopian world where health inequalities are reduced, access to care is dramatically improved, and quality and standards of care are continuously driven up as machines learn more about the conditions of the people they are treating. The dystopian but equally feasible outcome is that health inequalities increase, or the system becomes overwhelmed by ‘the worried well’ who arrive at their GP’s surgery or the Emergency Department because they have erroneously been told to attend by their AI-enabled Fitbit or smartphone. Equally worrying is a world where only the wealthy can access the best AI-delivered healthcare, because those providers will be the only ones with pockets deep enough to access the best data and develop the best AI. The reality, as with most revolutionary developments, is that the future will be located somewhere between the two. It is for policymakers, politicians, legislators, clinicians, and ethicists to decide now how the wider healthcare system will be AI-enabled and improved for future generations.







-end-
