TOP 10 Dangers Of Digital Health

Thanks to the advent of digital health, the future of medicine is truly exciting. With technological advancements that democratise access to care, better treatments are accessible to more people than ever before. Breakthrough research and medical developments have eradicated deadly diseases and turned others into manageable conditions. But the very developments that propel healthcare into the 21st century bring their own share of hazards to the field.

From the erosion of privacy and hacked medical devices to bioterrorism, there are signs of alarming trends that few take seriously. Nevertheless, we must generate discussion around these issues and help prepare every stakeholder to address them. To this end, we collected 10 of the most important hazards that technology brings about in the digital health age.

1. Regulating adaptive AI algorithms

An adaptive artificial intelligence (AI) is one that can update itself on the go based on new information it receives. In a clinical environment, it would, for example, be able to recommend blood tests more frequently in a population that has a high prevalence of diabetes. And if it sees that people in this population also have a tendency to develop cardiac issues, it will recommend a cardiovascular evaluation as well.


However, such algorithms rely on existing medical data, which is fraught with inherent biases. By evolving on such data, adaptive AI will only reinforce harmful biases, such as discrimination based on one's ethnicity and/or gender. To mitigate those risks, regulatory authorities must adopt new approaches to ensure that the adaptive AI algorithms employed in healthcare facilities remain equitable.
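
To make that feedback loop concrete, here is a minimal sketch with entirely made-up numbers: two groups with identical true disease rates, one of which was historically under-tested. The groups, rates and update rule are illustrative assumptions, not any real clinical model.

```python
# A minimal sketch of a bias feedback loop (all numbers are invented).
# Two groups have the SAME true disease rate, but group B was historically
# under-tested, so its *recorded* rate starts out lower.
TRUE_RATE = 0.10                    # identical underlying risk in A and B
POPULATION = 1000                   # people per group
TEST_BUDGET = 1000                  # tests the system can run per round
recorded = {"A": 0.10, "B": 0.04}   # biased historical record

for round_no in range(1, 6):
    # The adaptive model allocates tests in proportion to recorded rates.
    total = recorded["A"] + recorded["B"]
    tests = {g: TEST_BUDGET * recorded[g] / total for g in ("A", "B")}
    for g in ("A", "B"):
        # Untested people are recorded as "no diagnosis", so a group's
        # recorded per-capita rate is capped by how many tests it received.
        new_diagnoses = tests[g] * TRUE_RATE
        recorded[g] = 0.5 * recorded[g] + 0.5 * new_diagnoses / POPULATION
    print(round_no, {g: round(v, 4) for g, v in recorded.items()})

# Group B's recorded rate settles far below its true 10%: the model keeps
# validating its own biased allocation instead of learning the ground truth.
```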

2. Hacking medical devices remotely

New medical implants and devices increasingly ship with wireless connectivity, whether to monitor metrics or run diagnostics. However, this also makes them vulnerable to hacks, and those hacks could prove fatal to patients.

Back in 2011, a researcher showed that it was possible to hack Medtronic insulin pumps to deliver fatal doses to diabetic patients. Others can hijack pacemakers over Bluetooth, compromise X-ray data, remotely reconfigure CT scans to alter radiation exposure, and the list goes on. As more and more patients and healthcare institutions adopt such solutions, are regulators ensuring that private companies provide secure technologies? What can we do to protect wearable devices that are connected to our physiological system from remote hacks? Patients should demand safer options, companies developing such technologies should make sure they are secure, and users should be as vigilant as possible when using them.
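
Attacks like the insulin pump hack were reportedly possible because devices accepted wireless commands without authentication. Below is a minimal sketch, not any vendor's real protocol, of how a device could authenticate commands with a shared key and reject forged or replayed packets; the command format and key handling are illustrative assumptions.

```python
import hashlib
import hmac
import os

# Device and clinician programmer share a secret key, provisioned securely
# at manufacture time (assumed for this sketch).
SHARED_KEY = os.urandom(32)

def sign_command(key: bytes, command: bytes, counter: int) -> bytes:
    # The counter prevents replaying an old, legitimately signed command.
    msg = counter.to_bytes(8, "big") + command
    return hmac.new(key, msg, hashlib.sha256).digest()

def device_accepts(key: bytes, command: bytes, counter: int,
                   tag: bytes, last_counter: int) -> bool:
    expected = sign_command(key, command, counter)
    fresh = counter > last_counter              # reject replays
    valid = hmac.compare_digest(tag, expected)  # constant-time comparison
    return fresh and valid

tag = sign_command(SHARED_KEY, b"BOLUS 2.0u", counter=42)
# Legitimate command: fresh counter, valid tag.
print(device_accepts(SHARED_KEY, b"BOLUS 2.0u", 42, tag, last_counter=41))  # True
# Attacker tampers with the dose but cannot recompute the tag.
print(device_accepts(SHARED_KEY, b"BOLUS 20u", 43, tag, last_counter=42))   # False
```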

3. Privacy breaches by and on direct-to-consumer devices and services

The relationship between digital health adoption and privacy is a delicate one. In short, there is no digital health without giving up a part of our privacy. In order to provide individual and personalised results, tools such as health sensors and AI need access to our personal health data. But how secure are such data once the companies behind those tools have access to them? For instance, the popular direct-to-consumer DNA testing company 23andMe and pharma giant GSK signed a $300 million deal for drug development. The deal leveraged 23andMe's substantial genetic resources, while most of its customers were oblivious that such a deal was even in the pipeline.

Privacy and security issues pertaining to the digital health era are complex and multifactorial. They aren't likely to get any simpler as more and more advanced technologies get integrated into the field. As such, every stakeholder in the healthcare landscape must contemplate the changes the digital health era requires. To help you navigate these questions, we published an e-book dedicated to the topic.


4. Ransomware attacks on hospitals

When talking about cyber threats in healthcare, the most common ones are ransomware attacks. These involve hackers who infect IT systems with malware to encrypt crucial files. Such attacks paralyse whole infrastructures, as that information remains inaccessible until a ransom is paid, usually in cryptocurrency.

One of the most high-profile incidents happened back in 2017 with the WannaCry attacks on 61 NHS institutions. It led to cancelled operations and clinical appointments, lost internet connections in hospitals and patients being diverted from emergency departments even a week after the incident. The trend has continued in subsequent years: cybersecurity incidents involving patient data hit all-time highs year after year, and experts predict that 2023 likely won't be any better.

Cyber threats in healthcare cannot be overlooked. Patients should demand more security over their data, and medical staff and management should take these demands seriously and familiarise themselves with cybercrime methods in order to counter them better.
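
One of the most practical defences against ransomware is keeping offline backups and verifying their integrity before restoring. Here is a minimal sketch of such a check using file checksums; the paths and manifest format are hypothetical examples, not any hospital's actual setup.

```python
import hashlib
import pathlib

def build_manifest(backup_dir: str) -> dict:
    """Record a SHA-256 checksum for every file in the backup."""
    manifest = {}
    for path in pathlib.Path(backup_dir).rglob("*"):
        if path.is_file():
            manifest[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
    return manifest

def verify_backup(manifest: dict) -> list:
    """Return files that are missing or altered (e.g., encrypted by malware)."""
    problems = []
    for name, expected in manifest.items():
        path = pathlib.Path(name)
        if not path.is_file():
            problems.append(f"missing: {name}")
        elif hashlib.sha256(path.read_bytes()).hexdigest() != expected:
            problems.append(f"modified: {name}")
    return problems

# Typical flow: build the manifest right after a backup, store it offline,
# and re-run verify_backup() before ever restoring from that backup.
# "/mnt/offline_backup/patient_records" is an illustrative path.
manifest = build_manifest("/mnt/offline_backup/patient_records")
print(verify_backup(manifest))
```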

5. Technologies supporting self-diagnosis

Physicians are worried because patients google their symptoms and treatments, and they might wrongly associate the information they find with conditions more serious than what their caregiver diagnoses them with. Researchers, too, are sounding the alarm about patients turning to hospitals due to smartwatch notifications about abnormal ECG readings, many of which are not clinically actionable. But soon, more and more patients will have access to more metrics at home, whether they come from blood tests or genetic analyses; and some of these will come from unregulated companies offering inaccurate services.
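
Why are so many alerts not actionable? Base rates are the culprit. A minimal back-of-the-envelope calculation, using assumed numbers rather than any device's published performance figures, shows how even an accurate sensor produces mostly false alarms when the underlying condition is rare.

```python
# Bayes' theorem applied to a wearable's alert. All figures are assumptions
# for illustration, not real device specifications.
prevalence = 0.02    # assumed: 2% of wearers have atrial fibrillation
sensitivity = 0.95   # assumed: P(alert | disease)
specificity = 0.95   # assumed: P(no alert | no disease)

# Positive predictive value: P(disease | alert)
true_pos = prevalence * sensitivity
false_pos = (1 - prevalence) * (1 - specificity)
ppv = true_pos / (true_pos + false_pos)

print(f"PPV = {ppv:.1%}")  # ~27.9%: roughly 7 in 10 alerts are false alarms
```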


This, together with advanced chatbots offering medical advice, can pave the way for even more serious cases of misinterpretation or self-medication. Will we be able to persuade patients not to further overload hospitals in this way? To avert this, proper guidance from medical professionals can equip patients to better interpret readings from their smart sensors, and authorities can regulate those services so that patients know which companies to trust. This will, in turn, help avoid unnecessary hospital visits.

6. Bioterrorism through digital health technologies

With the convenience and enhanced medical solutions brought about by the interconnected world of digital health technologies, bioterrorism can also reach new levels. We discussed how medical devices can be hacked remotely and how ransomware attacks risk proving fatal for some patients.

Now with Neuralink, Elon Musk wants to embed a “Fitbit in your skull”, while startup NaNotics is developing nanorobots that “mop up” disease- and ageing-triggering molecules from the circulation. With such advanced technologies on the horizon, are we equipped to prevent bioterrorists from hacking into these devices to directly gain control over our health?

7. AI not tested in a real-life clinical setting

We regularly hear news headlines about an AI's incredible prowess in the healthcare setting. But in many cases, these results are obtained in laboratories using selected datasets or ideal settings that aren't fully reflective of actual clinical environments. Google learnt this the hard way.


Medical researchers from the company touted the ability of its AI tool to screen patients for diabetic retinopathy (DR) from images with 90% accuracy. DR is a major cause of vision loss worldwide, and detecting the condition earlier can prevent complications. But Google's AI didn't fare as well in practice as it did on paper. When nurses at a hospital in Thailand used it, they encountered several issues. Sometimes they had internet connection troubles; at other times the quality of the scan didn't meet a certain threshold, so the AI simply didn't give a result. And on some occasions, nurses even had to spend extra time editing some of the images the algorithm didn't want to analyse.
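
The "no result" behaviour typically comes from a hard quality gate in front of the model. Here is a minimal, hypothetical sketch of such a gate; the sharpness metric and threshold are illustrative stand-ins, not Google's actual criteria.

```python
import numpy as np

def sharpness(image: np.ndarray) -> float:
    """Mean squared intensity difference between neighbouring pixels —
    a crude stand-in for a real blur metric."""
    gx = np.diff(image, axis=0)
    gy = np.diff(image, axis=1)
    return float((gx ** 2).mean() + (gy ** 2).mean())

def screen(image: np.ndarray, min_sharpness: float = 50.0) -> str:
    # Below the threshold, the system refuses to grade at all — exactly
    # the frustrating path the nurses in Thailand kept hitting.
    if sharpness(image) < min_sharpness:
        return "ungradable: retake the photo"
    return "graded: (model prediction would go here)"

rng = np.random.default_rng(1)
crisp = rng.integers(0, 255, (256, 256)).astype(float)  # high local contrast
blurry = np.full((256, 256), 128.0)                     # no detail at all
print(screen(crisp))    # graded
print(screen(blurry))   # ungradable
```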

8. Synthetic health data is not entirely accurate in simulating reality

Synthetic data means using AI to create datasets that mimic the real world. We use it for two main reasons: 1. we don't have enough real-world data, or 2. we don't want to use sensitive real-world data. It is useful for feeding any algorithm that needs massive amounts of data to learn and either develop new prediction capabilities or recognise patterns. Synthetic data is widely used across industries and segments: not just in medicine, but also in self-driving vehicles, security, robotics, fraud protection, insurance models, the military, and so on.

However, any dataset we create will be imperfect to some extent. It will contain biases we are not aware of. It will miss important variables we either overlooked or whose importance we didn't recognise. Even in the best of cases, it will be a snapshot of a given moment in a given situation. If we let machine learning and deep learning algorithms develop on these synthetic, imperfect datasets, chances are they will come to conclusions that are more or less false in the real world.
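
One concrete failure mode is easy to demonstrate. In this minimal sketch, with made-up numbers, a naive generator reproduces each variable's averages faithfully while destroying the relationship between variables, so anything trained on it would learn a false conclusion.

```python
import numpy as np

rng = np.random.default_rng(2)

# "Real" data: blood pressure rises with age (correlated variables).
n = 10_000
age = rng.normal(55, 12, n)
bp = 80 + 0.7 * age + rng.normal(0, 8, n)
real = np.column_stack([age, bp])

# Naive synthetic generator: sample each column independently from a
# normal distribution fitted to that column alone.
synthetic = np.column_stack([
    rng.normal(col.mean(), col.std(), n) for col in real.T
])

print("real corr(age, bp):     ", round(np.corrcoef(real.T)[0, 1], 2))       # ~0.72
print("synthetic corr(age, bp):", round(np.corrcoef(synthetic.T)[0, 1], 2))  # ~0.00
# A model trained on the synthetic set would conclude that age tells us
# nothing about blood pressure — a conclusion that is false in reality.
```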


Synthetic data is cheap and easy to come by, much easier and cheaper than collecting huge amounts of messy real-world data. What happens if decisions affecting large groups of people or whole societies are made based on it? This is especially worrying as the world already faces challenges regarding “truth”. Introducing alternative truths based on ‘data’ to back decisions affecting societies, like healthcare funding, insurance models and so on, can have devastating consequences.

9. Face recognition cameras in hospitals

Facial recognition technology combines image analysis and deep neural networks to interpret patterns in facial characteristics, and it can reveal a range of medical conditions. Its algorithms compare the extracted features against existing databases in order to output a result. Its application in healthcare shows promise, from detecting rare genetic conditions to extracting facial blood flow information.
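
The comparison step usually works on embeddings: a network maps each face to a vector, and recognition is a nearest-neighbour search. Below is a minimal sketch in which random vectors stand in for a trained network's embeddings; the database, names and dimensions are all illustrative assumptions.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(3)
# Hypothetical enrolment database: 1,000 people, 128-dim embeddings.
database = {f"patient_{i}": rng.normal(size=128) for i in range(1000)}

# The probe is a slightly perturbed copy of one enrolled embedding,
# mimicking a new photo of the same person.
probe = database["patient_42"] + rng.normal(scale=0.1, size=128)

# Nearest neighbour by cosine similarity is the "recognition" step.
best = max(database, key=lambda name: cosine(database[name], probe))
print(best, round(cosine(database[best], probe), 3))  # patient_42, ~0.99
```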

In the U.S., the controversial company Clearview AI, which built a facial recognition database from social media pictures of unsuspecting individuals, pitched its platform to the government to track those infected with SARS-CoV-2. However, employing the technology of a company with dubious intentions poses new privacy risks. What will they do with the images collected? Will they be shared with law enforcement officials for surveillance, which could further fuel racial discrimination?

10. Health insurance: Dr Big Brother

With the availability and adoption of fitness trackers, direct-to-consumer genetic tests and health sensors, health insurance companies are able to assess applicants for coverage based on their individual needs. For example, if one's genetic test reveals a higher probability of developing ovarian cancer, the insurance company could subsidise the relevant tests and offer them more regularly to that person.

However, such availability of personal data is also a double-edged sword. It could lead to a dystopian, Orwellian scenario. Insurance companies could demand access to individuals' wearables data in order to offer insurance; charge higher premiums if customers aren't maintaining a healthy lifestyle; and even discriminate against applicants based on their genetic risk. The U.S. Genetic Information Nondiscrimination Act was set up to prevent such discrimination, but what if it is watered down due to pressure from industry lobbies?
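
To see how little code such discrimination would take, here is a minimal, entirely hypothetical pricing sketch; the base premium, step threshold and multipliers are invented for illustration and reflect no real insurer's model.

```python
BASE_PREMIUM = 100.0  # monthly, in some currency (illustrative)

def monthly_premium(avg_daily_steps: int, genetic_risk_score: float) -> float:
    """genetic_risk_score: assumed scale from 0.0 (low) to 1.0 (high)."""
    # Penalise a "sedentary" lifestyle as reported by the wearable.
    lifestyle_multiplier = 1.0 if avg_daily_steps >= 8000 else 1.3
    # Penalise genetic predisposition, regardless of current health.
    genetic_multiplier = 1.0 + 0.5 * genetic_risk_score
    return BASE_PREMIUM * lifestyle_multiplier * genetic_multiplier

# Two applicants with identical health today, priced very differently:
print(monthly_premium(avg_daily_steps=9500, genetic_risk_score=0.1))  # 105.0
print(monthly_premium(avg_daily_steps=4000, genetic_risk_score=0.8))  # 182.0
```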

There are, of course, more such hazards emerging from the digital health world, and we will encounter them more frequently in the near future. So let's start discussing these issues at home, at the workplace and on public forums. This way, we can prepare to exploit the advantages technology offers while keeping the potential dangers at bay.

And we shall never forget: “Primum non nocere” (first, do no harm)!
