Ethical AI in healthcare: a framework, a partnership and some examples

Every fifth message I receive on social media or email is somehow connected to AI. It really took a technology invented in the 1950s just a few months to become ubiquitous in our lives: its “Kodak moment” was November 2022, with the launch of ChatGPT. Nowadays there is hardly a tech company that does not offer a similar or competing AI model to the general public or to institutional clients. At the same time, a wave of counter-measures has erupted, trying to contain the phenomenon and ensure AI doesn’t slip out of our (human) hands: an open letter calling for a pause on giant AI experiments for at least six months, co-signed by Elon Musk and thousands of others in March 2023; Italy’s temporary ban of ChatGPT in April 2023; and the EU’s risk-based AI regulatory framework, due to be fully operational by 2024.

Considering the good, the bad and the ugly, we decided at the Crypto Valley Association (CVA) that the topic of AI is very much worth exploring and supporting. We wanted to contribute to the thinking around how to make AI sustainable and where blockchain technology can help. Therefore, in July 2023, we published a report called “Next Step: Sustainable AI. How can blockchain help?”. I was honoured to be part of the authoring group and to shape our recommendations: AI should be ethical, green and open-source. Blockchain technology can help in many ways, from “opening the AI black box” to ensuring an AI model’s data sources are diverse, unbiased and secure.

As part of this report, we also created R-E-S-T-A-R-T: an ethical framework for the development of AI models, which can be used by product teams and investors alike. We applied the framework to different industries and use cases to see how it would work in practice (see the report). My contribution, given my decade of pharmaceutical industry experience, was to apply the framework to the healthcare space. In addition, I compiled a list of companies and products that use AI ethically and sustainably with the help of blockchain technology (or so I assume, based on publicly available sources).

This article expands on the content presented in the CVA report to provide more granularity on how the R-E-S-T-A-R-T framework could be applied in the healthcare space, where blockchain can help along the framework’s parameters, and which use cases within healthcare stand to benefit most from sustainable, blockchain-based AI, for patients and providers alike.

Before we dive into the content, I want to thank my co-authors for the many hours we spent together preparing the CVA report: Maja Kehic, Chief Marketing Officer at DeepSquare, and Michele Soavi, Chief Sustainability Officer at ImpactScope. Secondly, I want everyone to know that I did not use ChatGPT (or any other AI tool) to write or edit this article (my old-fashioned self speaking).


Introduction: The case for ethical AI in the pharmaceutical/healthcare industry

The pharmaceutical industry has always been a data-driven sector. For one medicinal product/drug to hit the market, roughly 4,999 other molecules have to be researched and dropped, with about 2.5 billion USD spent in the process. And once a drug is on the market, several hundred million USD more are spent collecting real-world evidence of how it actually performs in patients, outside of controlled clinical trials. In addition, with the advent of non-medicinal interventions that support the drug (such as health-monitoring devices, self-management apps for patients and decision-support tools for doctors), a whole new range of data emerges for collection and processing in the pharmaceutical space.

AI has entered the industry through the back door, but it is quietly gaining traction and visibility. In R&D, AI can be used to accelerate drug discovery, identify eligible patients for trials and optimise clinical trial operations. In the commercial/downstream space, common use cases include algorithms that diagnose rare diseases or predict disease progression, decision-support tools that help doctors make the best treatment choice for their patients, and algorithms trained on customer relationship management (CRM) data that suggest the next best action for pharma representatives.

However, the pharmaceutical industry is also one of the most regulated, and thus most risk-aware, sectors out there. Letting machines access sensitive health data and make recommendations that can be life-saving (or life-threatening) for patients is clearly a high-risk area in this industry. As a consequence, many pharma/healthcare companies have developed their own guardrails to ensure AI is used ethically.


The framework: R-E-S-T-A-R-T considerations for AI used in medical/clinical care

Many pharmaceutical/healthcare providers ground their ethical considerations around the use of AI models in two established strategies: how they pursue data, digital and technology innovation, and what they have pledged in their sustainable development plans (aligned with the SDGs). Using our R-E-S-T-A-R-T model, we combined the public information available in annual reports and press releases into the paragraphs and table below.

Overall, what seems to be very important in the pharmaceutical / healthcare domain when it comes to using AI is that healthcare professionals (HCPs) remain in control of their medical / clinical decision-making and that patients are well aware of and protected against the risks of AI. If all required safety nets are in place, the use of AI could actually improve the quality of data modelling and accelerate progress in medical care - all for the benefit of patients.

Here are the most important considerations for the pharmaceutical industry when it comes to ethical AI usage, based on our R-E-S-T-A-R-T framework:

  1. Restrainability: healthcare professionals (such as doctors or nurses) should be the ultimate decision-makers when it comes to implementing the recommendations of an AI algorithm/model, the latter being seen as a decision-support tool only (e.g. for diagnosis or treatment selection purposes). Patients should also be informed not to rely solely on the advice generated by AI-driven products (such as smart watches, treatment companion apps or monitoring devices), and should always consult their healthcare professional when in doubt.

  2. Effectiveness: pharmaceutical/healthcare organisations should ensure that the AI models they use consume minimal energy for data processing, and that this energy comes from renewable sources. Additionally, they should include the carbon footprint generated by AI models in their corporate net-zero emissions pledges. According to a 2019 study by the University of Massachusetts Amherst, training a single AI model can emit as much carbon as five cars over their entire lifetimes: more than 300 tons of carbon dioxide equivalent.
  3. Security: given the sensitive nature of the input data for medically focused AI models, it is of utmost importance (ethically and compliance-wise) that such data is securely collected, hosted and transferred, and that privacy is guaranteed throughout the data management lifecycle. For the pharmaceutical industry this might mean proving to healthcare stakeholders that AI data governance is clear, that storage systems/platforms are secured, that personal data is anonymised where possible, that data sharing is limited, and so on (see the pseudonymisation sketch after this list).
  4. Transparency: patients, healthcare professionals and other healthcare system stakeholders should be provided with essential information about AI-based systems, including a clear estimation of risk, explained in layman’s terms and in a contextually relevant manner to support human decision-making (explainable AI). AI-driven models also need to be well documented and available for audit by relevant parties (e.g. regulators). Last but not least, it should be clear who has access to which layers of data used in AI modelling, and at which stage of the process (experimentation, training and inference).
  5. Accessibility: open-source code should be used as much as possible in the AI models built or contracted by pharmaceutical/healthcare organisations. This requirement can be included in procurement bids and due diligence processes. Additionally, the “black box” of how a model has been designed and trained should be opened to peer review to ensure its scientific accuracy. In fact, reliance on black-box models violates medical ethics, as physicians are expected to explain and defend, at any time, the scientific grounds on which they took clinical decisions.
  6. Representativity: the data sources used to create and train AI models should be varied and inclusive, reflecting the characteristics of a large population of patients. In medical care, such diversity refers to age, gender, race, ethnicity and social determinants of health. For illustrative purposes only: an AI model trained mostly on elderly Caucasian males with a sedentary, isolated lifestyle would likely not deliver accurate recommendations for an elderly Asian woman living in a large community and suffering from several diseases (comorbidities). A simple dataset audit, sketched after this list, can surface such gaps early.
  7. Trust: given the critical impact their interventions can have on people’s lives, AI models catering for clinical/medical care need to instil confidence that the process used to build them is robust, that bias has been controlled or eliminated, and that results are correct and reliable. This could translate into requirements such as proving that best practices in machine learning and software engineering have been followed, that independent test sets representative of the intended population yield the same results, that the model has been peer-reviewed, or that well-designed clinical trials have proven the overall AI system safe and effective for use in medical care.
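
To make the Security point more tangible, here is a minimal sketch (in Python, my own illustration rather than anything prescribed in the report) of how direct patient identifiers could be pseudonymised with a salted one-way hash, and quasi-identifiers coarsened, before records ever reach an AI training pipeline. All field names and values are invented:

```python
import hashlib
import os

# Secret salt, kept outside the dataset (e.g. in a key vault). In a real
# system it must stay stable across runs so the same patient always maps
# to the same pseudonym; here it is regenerated on every execution.
SALT = os.urandom(32)

def pseudonymise(patient_id: str) -> str:
    """Replace a direct identifier with a salted, one-way hash."""
    return hashlib.sha256(SALT + patient_id.encode("utf-8")).hexdigest()

def strip_record(record: dict) -> dict:
    """Drop direct identifiers and keep only what the model needs."""
    return {
        "pseudo_id": pseudonymise(record["patient_id"]),
        "age_band": record["age"] // 10 * 10,  # coarsen age into 10-year bands
        "diagnosis": record["diagnosis"],
    }

raw = {"patient_id": "CH-483-221", "name": "A. Example", "age": 67, "diagnosis": "T2DM"}
print(strip_record(raw))  # name is gone, age is banded, the ID is unlinkable
```

A production pipeline would of course also have to handle free-text notes, dates and rare diagnoses, which are the classic re-identification vectors.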
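
To illustrate the Representativity point, a simple audit can compare the demographic make-up of a training set with a reference population before any model is trained. The sketch below (Python again, with invented numbers) reports each group’s share relative to its expected share; negative gaps flag under-representation:

```python
from collections import Counter

def representation_gaps(records: list, attribute: str, reference: dict) -> dict:
    """Difference between each group's share in the training data and its
    share in a reference population; negative = under-represented."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: counts.get(group, 0) / total - expected
            for group, expected in reference.items()}

# Hypothetical training set vs. census shares (illustrative numbers only)
training = [{"sex": "M"}] * 800 + [{"sex": "F"}] * 200
census = {"M": 0.49, "F": 0.51}
print(representation_gaps(training, "sex", census))
# M is over-represented by ~0.31, F under-represented by ~0.31
```

The same check can be run per age band, ethnicity or comorbidity profile; the hard part in practice is obtaining a trustworthy reference distribution for the intended patient population.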

For those of you with visual memory, here is a summary table:

[Summary table: the seven R-E-S-T-A-R-T considerations for ethical AI in medical/clinical care, condensing the list above; see the CVA report for the original visual.]


The partnership: how can blockchain help AI be ethical and sustainable?

For those new to the topic, blockchain is a distributed ledger technology operated and secured by a network of decentralised computers (miners and validators). One remarkable property of this technology is that transactions and network activity can be monitored by anyone, as they are publicly available on most blockchains (e.g. via websites such as BTC.com or Etherscan). Other notable developments in the blockchain space (albeit not necessarily blockchain-native) include zero-knowledge proofs, which protect users’ personal data while proving their identities; tokenisation, which rewards users for their contributions (e.g. sharing their data); and consensus mechanisms based on game theory, which ensure nobody has an incentive to fake or attack the network.

Let’s go deeper and see how blockchain can do wonders to support AI’s R-E-S-T-A-R-T:

(1) Transparency & trust through public blockchain explorers

Transactions on a public blockchain are executed through smart contracts and are visible on its ledger: anyone can see the components of a given transaction, such as the parties (i.e. addresses), the time of execution, the block validators and other details. This means that if an AI model or AI-powered application were developed on a public blockchain, anyone could, in theory, start an audit trail to test, for example, the validity of a data source (e.g. identify deep fakes among the inputs) or the authenticity of the LLM used. Blockchain thus offers an opportunity to bring traceability and understandability to AI and to help overcome AI’s black-box problem (i.e. not knowing how the decision process of AI takes place).
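
As a toy illustration of this anchoring pattern, the Python sketch below simulates what a smart contract on a public chain would provide: the fingerprint of a training dataset is recorded in an append-only, hash-linked log, so anyone holding the same data can recompute the hash and verify it matches, while tampering with any earlier entry breaks the chain. This is a chain-free simplification, not the API of any specific blockchain:

```python
import hashlib
import json
import time

def fingerprint(dataset_bytes: bytes) -> str:
    """Content hash of a training dataset: identical data, identical hash."""
    return hashlib.sha256(dataset_bytes).hexdigest()

class AuditTrail:
    """Append-only log where each entry commits to the previous one,
    mimicking how a blockchain makes tampering detectable."""
    def __init__(self):
        self.entries = []

    def record(self, event: dict) -> str:
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        payload = json.dumps({"prev": prev, "time": time.time(), **event},
                             sort_keys=True)
        entry_hash = hashlib.sha256(payload.encode("utf-8")).hexdigest()
        self.entries.append({"payload": payload, "entry_hash": entry_hash})
        return entry_hash

trail = AuditTrail()
data = b"...anonymised training records..."
trail.record({"event": "dataset_registered", "hash": fingerprint(data)})
trail.record({"event": "model_trained", "model": "triage-v1"})
# An auditor can replay the log and recompute every hash to prove integrity.
```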

(2) Restrainability and security through distributed & balanced power

As mentioned, two major characteristics of blockchain make it particularly useful in the quest to keep AI under the ultimate control of humans, yet outside the purview of a select few: a decentralised network of validators and advanced applications of game theory. The former enables better filtering of AI inputs and outputs (e.g. identifying deep fakes and withholding their release). The latter ensures that bad actors are punished and that no single party becomes too dominant in data ownership (a recurring pattern in Web 2.0).

(3) Representativity with inclusion of new, blockchain-native data sources

Due to its ethos and inherent characteristics, blockchain has been increasingly adopted by populations and in geographic regions that are generally under-represented in the western, English-speaking datasets currently used by AI models. Think of Latin American or African families who use blockchain for remittances outside the traditional banking system, or villagers in South-East Asia who play blockchain-based games to earn money and lift themselves out of poverty. These new data sources, native to the blockchain world, are slowly entering the big data lakes and bringing a more varied and representative view of culture, ethnicity, gender, belief systems and needs.

To conclude this section, I found the words of Garri Zmudze, managing partner at LongeVC, a Switzerland- and Cyprus-based venture capital firm, highly illustrative of the symbiotic relationship between AI and blockchain when it comes to sustainability and ethics in the healthcare/pharmaceutical industry: “Without blockchain, artificial intelligence lacks the ethically sourced and protected biomedical data it needs to find new solutions. Without artificial intelligence, the vast amounts of data protected by blockchain remain secure but unusable for research. Progress happens when these innovations work together, just as critical public health initiatives of past decades succeeded thanks to the advent of the World Wide Web.” (source)


Examples: AI-driven, blockchain-based applications in medical/clinical care

As seen previously, blockchain technology is decentralised, trustless, immutable and well encrypted, which makes it an ideal candidate to support AI-driven models and applications in medical/clinical care. In this section I highlight a few application domains where I believe blockchain can support the ethical use of AI. I would also like to emphasise that neither I nor the Crypto Valley Association endorse any of the projects mentioned below; they are presented for illustrative purposes only. The list is of course non-exhaustive and was compiled in June 2023 based on publicly available information at that time.

  1. AI algorithms to diagnose diseases or improve disease management

Such applications are usually decision-support tools that help healthcare professionals decide on the right diagnosis or treatment regimen for a specific patient. IBM Watson Health (now Merative) was a prominent early example, albeit not blockchain-based: while in existence, it offered healthcare professionals evidence-based information on drugs, diseases, toxicology and alternative medicine, combined with Watson’s accelerated search and information delivery capabilities. Another, more recent example comes from the work at Hadley Labs, led by Dexter Hadley, a professor at the University of Central Florida. They develop community-driven AI models that help clinicians screen, diagnose and manage diseases more precisely, using data shared by patients on the blockchain. Their current projects cover COVID-19 diagnosis, dermatological/skin diseases and open genomic research. Examples like these show the power of blockchain in making AI models more secure, trusted and representative.

  2. Decentralised medical data storage & sharing

Medical data is among the most siloed and unstructured of any science-based industry: an estimated 80% of the medical data created in hospitals remains unstructured and untapped. This is due in part to the fragmented electronic health record (EHR) systems used across healthcare, but also to privacy regulations around data sharing. A promising niche where decentralised, blockchain-based medical storage makes AI more trusted, secure and possibly restrainable is genomics. For example, the Harvard Medical School startup Nebula Genomics plans to use blockchain to store data from users’ genetic test kits (a much more secure approach) and has partnered with Longenesis to run AI-driven life-data economics research on their combined datasets. Patientory is another example of blockchain in healthcare: it enables patients to control their own data as they participate in clinical trials and to receive AI-generated insights in return. Digipharm does similar work in value-based healthcare by “stamping” value-based agreements on the blockchain and allowing patients to report outcomes from treatments covered by such agreements. Lastly, E-HCert, a digital lab-test wallet implemented at Aretaeio Hospital in Cyprus by the sustainability-oriented blockchain VeChain and I-Dante, enables patients to store and own their lab results and decide with whom to share them.
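
The common pattern behind these platforms is a consent registry: a record, typically kept in a smart contract, of who may access which piece of data, checked before every read. Below is a deliberately simplified, in-memory Python sketch of that pattern; real systems add signatures, audit logs and on-chain storage, and all names here are hypothetical:

```python
class ConsentRegistry:
    """Toy stand-in for an on-chain consent registry: patients grant and
    revoke access to named records, and every read must pass the check."""
    def __init__(self):
        self._grants = set()  # tuples of (patient, record, grantee)

    def grant(self, patient: str, record: str, grantee: str) -> None:
        self._grants.add((patient, record, grantee))

    def revoke(self, patient: str, record: str, grantee: str) -> None:
        self._grants.discard((patient, record, grantee))

    def may_read(self, patient: str, record: str, grantee: str) -> bool:
        return (patient, record, grantee) in self._grants

registry = ConsentRegistry()
registry.grant("patient-42", "lab-results-2023-06", "dr-lee")
assert registry.may_read("patient-42", "lab-results-2023-06", "dr-lee")
registry.revoke("patient-42", "lab-results-2023-06", "dr-lee")
assert not registry.may_read("patient-42", "lab-results-2023-06", "dr-lee")
```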

  3. More efficient research & development of drugs

As mentioned at the beginning of this article, pharmaceutical research & development (R&D) is a long and expensive process, measured in years (often more than a decade) and billions of USD. A prominent example of a company using blockchain for data management and AI for insight generation to deliver new drugs is Insilico Medicine. The company uses AI to build an end-to-end, AI-driven drug discovery pipeline, using ageing research to identify disease targets. Among their successes is the first generative-AI-designed drug for COVID-19 and its variants, now entering phase I clinical trials; another AI-generated molecule, tested for several fibrosis indications, is in phase II. Smart Omix from Sharecare is a decentralised research platform built on blockchain and leveraging AI (via Sharecare’s acquisition of doc.ai). The platform makes it easy for researchers to prototype, revise and launch mobile studies. In addition to housing patient records on blockchain, Smart Omix allows researchers to collect patient data via wearable devices and offers features such as e-consent and e-PROs (electronic patient-reported outcomes) for a faster clinical research process.


Conclusions

I firmly believe that AI is here to stay and will only grow in importance over time. Like any new technology, AI is not inherently good or bad: it all depends on how we, humans, use it. We still have a chance to make AI ethical and sustainable, which is why the Crypto Valley Association (CVA) contributed to the debate by publishing the report “Next Step: Sustainable AI. How can blockchain help?” in July 2023. As a co-author of that report, and building on a decade of experience in the pharmaceutical industry, I decided to expand the healthcare use case into this standalone article. While much of the content on the R-E-S-T-A-R-T framework, blockchain’s benefits and the examples comes from the report, here I dig deeper into each of them, expanding the report’s two pages into six.

I hope you enjoyed reading and I look forward to comments, further insights or new examples of ethical AI used in healthcare.

