Is Artificial Intelligence (AI) in healthcare evil?

Several thought leaders have warned against AI.

“The potential benefits of artificial intelligence are huge, so are the dangers.” ~Dave Waters

“The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” ~Stephen Hawking


An artificial intelligence (AI) product can be defined as a self-reliant ‘thinking’ machine that makes human-like decisions. AI is purported to resemble an advanced form of human intelligence: one that can not only sift through complex information and recognize patterns, but also make correlations and decisions that change outcomes. The idea that a computer can play chess against a human leaves me awestruck. However, incorporating AI into healthcare, where the machine's decisions can be a matter of life or death, is a more challenging concept to accept.

There are a number of terms bandied about under the purview of AI, especially deep learning, machine learning (ML), and Big Data. Deep learning and ML are nested subsets of AI: deep learning is a subset of ML, which in turn sits within the broader field of AI. According to McKinsey & Co., ML is based on algorithms that can learn from data without relying on rules-based programming, and it usually requires structured data from a database. Deep learning also learns from data, but from unstructured data such as videos and conversations. When ML and deep learning are applied to large amounts of data (Big Data), AI becomes achievable.
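To make "learning from data without rules-based programming" concrete, here is a minimal, hypothetical sketch (invented for illustration, not taken from any product mentioned in this article): a single-neuron perceptron that learns the logical OR rule purely from labeled examples, rather than having the rule hand-coded by a programmer.

```python
# Hypothetical sketch: a perceptron learns a decision rule from labeled
# examples instead of following hand-written if/then rules.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights and a bias from (features, label) pairs."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                      # 0 when the guess is right
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err                       # nudge toward the correct answer
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Structured training data: inputs and the desired output (logical OR)
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 1]
w, b = train_perceptron(samples, labels)
```

No rule for OR appears anywhere in the code; the weights that encode it emerge from repeated exposure to the data, which is the essence of the McKinsey definition quoted above.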

Alan Turing is considered the father of AI; he began the earliest substantial work in the field in the mid-20th century. Even today, the Turing test remains the benchmark for machine intelligence: a human evaluator must be unable to distinguish the machine from another human based on the replies to questions put to both. The first implementations of AI in the financial industry, used to assess individual credit risk, appeared about 30 years ago. The introduction of Web 2.0 in 2005, together with cloud computing, encouraged the collection of the vast amounts of data AI needs to succeed. About 10 years ago, healthcare started seeing more AI products. Do these AI products pass the Turing test yet, almost 70 years after the test was first created?

About 25 years ago, I created an artificial neural network (ANN) product to diagnose swallowing disorders and published a few papers on the research. On a small sample of data the ML/deep learning approach worked well, but more data was needed to create a meaningful product. The timing was not right for wider product development: AI has taken over two decades longer to gain acceptance in healthcare than in other industries.

AI is currently used to take over tasks such as scheduling, supply-chain route management, customer-service chatbots, and risk calculation in finance, healthcare, and sports. AI has also evolved toward more advanced uses: self-driving cars, optimizing the trading strategy of an options portfolio, balancing electricity grids across varying demand cycles, stocking and picking warehouse inventory with robots, voice and facial recognition, and image classification. Siri, Alexa, and OK Google are examples of evolved AI that we have adopted in our day-to-day lives. In healthcare, the latest AI products help catch errors, support diagnosis, track patients' movements so caregivers can be notified of potential falls, develop new medicines, and assist in a number of wellness situations.

One of the unspoken rules of AI creation is that complex decisions by a machine require complex algorithms created by advanced scientists. This constraint has not only spurred more universities to offer AI training but has also deterred smaller companies from using AI in their products. Another barrier to adoption has been a lack of trust that AI is ready for prime time, and the belief that AI cannot be allowed to make complex decisions without bad things happening. Lastly, a major concern with AI is that automation is replacing humans.

These concerns stem from the incomplete data sets available for training AI engines and the shortage of talent to build comprehensive AI, both of which introduce inherent bias. A facial recognition algorithm may learn to recognize a white face more easily than a non-white face simply because that kind of data is more plentiful and has been used in training more often. The situation is exacerbated in healthcare, where there is sometimes too much data yet the data sets are still incomplete. For example, huge amounts of medical device data and electronic health records are collected during clinical visits, but other pertinent data, including labs, images, vaccines, genetics, clinical notes, exercise, and well-being data, are not incorporated to complete the patient's health picture. In addition, there is not enough clinical input to create well-grounded AI.
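The mechanism described above can be shown with a deliberately tiny, hypothetical sketch (the feature, the groups, and every number are invented for illustration): a cutoff "learned" only from group A's examples will systematically misread members of an unrepresented group B whose positive cases fall in a different feature range.

```python
# Hypothetical sketch: an incomplete training set bakes bias into a model.
# The cutoff is "learned" as the midpoint between the two class means;
# group B never appears in training, so its cases are misread at prediction time.

def fit_threshold(values, labels):
    """Learn a cutoff: midpoint between the mean of each class."""
    pos = [v for v, y in zip(values, labels) if y == 1]
    neg = [v for v, y in zip(values, labels) if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def classify(threshold, v):
    return 1 if v > threshold else 0

# Training data drawn only from group A (invented values)
train_vals = [2, 3, 4, 8, 9, 10]
train_labs = [0, 0, 0, 1, 1, 1]
t = fit_threshold(train_vals, train_labs)   # midpoint of means 3 and 9 -> 6

# Group A positives cluster near 9; group B positives cluster near 5
group_a_positive = classify(t, 9)   # 1: correctly flagged
group_b_positive = classify(t, 5)   # 0: missed, purely because B was absent
```

Nothing in the algorithm is malicious; the bias arrives entirely through what the training data does and does not contain, which is exactly the concern with incomplete healthcare data sets.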

The issues of bias and trust have prompted governmental and corporate regulation in recent years. The White House released an AI Bill of Rights last month, laying out five broad principles regarding people's rights when AI is used. The European Union has also proposed regulations to make AI human-centric and trustworthy. Additionally, several corporations that use AI have created audit teams to check for bias and produce responsible AI products.

Four tips for building patient trust on the healthcare AI journey:

- Deliver consistent and accurate messages throughout the patient journey

- Offer a digital health experience that is intuitive, interactive, and seamless

- Provide two-way engagement that is fluid and responsive

- Make patients feel cared for by everyone in the organization

Responsible, ethical, and patient-centric AI can go a long way toward improving the patient journey and making it easier to navigate.

Deepa Fernandes is a thought leader, serial CTO, and healthcare tech strategy consultant. She has worked in healthcare tech innovation across the different healthcare silos, developing over 50 cutting-edge products for medical devices, public health, electronic health records, non-profits, and healthcare insurance.
