Preventive Healthcare: Medicine as a Scientific Discipline
Adam Skali
I have experience in healthcare innovation and thrive in diverse, multidisciplinary teams. Together, let's unlock the future of transformative healthcare solutions.
The legacy of Greek medicine
Many are aware of the advice supposedly given by Hippocrates, the ancient Greek physician: "First, do no harm," which emphasizes the physician's duty to avoid worsening a patient's condition. There are, however, two issues with it: (a) Hippocrates likely never uttered these words, and (b) it is not particularly practical in our current context.
Historically, this advice might have been quite helpful for patients; consider some of the medical treatments of the past, which often caused more harm than healing. For headaches, trepanation involved drilling a hole in the skull. For venereal diseases, mercury was applied to the affected areas, causing intense pain. And bloodletting, popular for millennia, was typically more detrimental than beneficial.
The motto "First, do no harm" suggests that the safest treatment, often involving minimal action, is the best. While historically this might have been true, medicine has made tremendous progress in the last 100 years, something every experienced doctor can attest to with stories from their practice.
Healthcare and the scientific method
The transition from belief-based healthcare to a measurement-based one, albeit one where we measure only when the patient visits the hospital or clinic, was driven partly by new technologies like the microscope and by a revolutionary approach to scientific inquiry. This shift began in 1620, when Sir Francis Bacon introduced what we now call the scientific method. It marked a move from mere observation and guessing to forming hypotheses, testing them rigorously through experiments, and evaluating the results.
This new approach allowed scientists and physicians to test treatments systematically rather than relying on anecdotal evidence. Even so, it took three centuries from Bacon's introduction of the scientific method to the discovery of penicillin, a major milestone of modern healthcare. The method let us establish causal relationships between afflictions and treatments, at least where cause and effect play out over a short timeframe.
The discovery of the first vaccine, by Edward Jenner in 1796, is a landmark in medical history. Jenner observed that milkmaids who had contracted cowpox, a relatively mild disease, seemed immune to smallpox, a highly contagious disease estimated to have killed more than 300 million people since 1900 alone.
Building on this observation, Jenner developed the smallpox vaccine by inoculating individuals with the cowpox virus, effectively creating immunity against smallpox. This pioneering work laid the foundation for modern vaccination practices and ultimately led to the eradication of smallpox, declared by the World Health Organization in 1980 and counted among the greatest achievements in public health.
Another great example is the case of antibiotics. Alexander Fleming's discovery of penicillin in 1928 changed medicine and earned him the Nobel Prize in Physiology or Medicine in 1945. Penicillin's introduction transformed the treatment of bacterial infections, saving an estimated 500 million lives and significantly reducing mortality from diseases such as pneumonia, tuberculosis, and bacterial meningitis, among others.
And while there are many more examples of the achievements of the current healthcare system, it is also true that it has been less effective against chronic diseases like cancer: diseases where the lag between root cause and symptoms makes the connection hard to find. While it is often stated that lifespans have nearly doubled since the late 1800s, this increase is mostly attributable to antibiotics and better sanitation.
Healthcare and the concept of risk
Risk can never be avoided entirely; it requires careful evaluation and management. Every decision, in both medicine and daily life, involves some level of risk assessment. Whether it's choosing a meal or deciding to drive, we constantly balance risk against benefit, aiming for the best possible outcome.
In some cases, like a stab wound or an acute infection, urgent action is necessary. However, in less pressing situations, doctors must weigh options carefully, such as deciding whether to perform a colonoscopy, with its small risk of injury, or skip it and risk missing a cancer diagnosis. Essentially, every physician has to face the possibility of causing harm to help patients at some point in their careers. Because, sometimes, the riskiest decision is to do nothing at all.
While modern medicine owes much to ancient practices, the idea of uninterrupted progress since the time of Hippocrates is misleading. There have been distinct phases in medical history, and we might be entering a new one now. The first era began with Hippocrates and lasted nearly two thousand years; medicine was based largely on observation and guesswork. Hippocrates made significant contributions, such as recognizing that diseases stem from natural causes, not the supernatural.
The second era began in the mid-nineteenth century with germ theory, leading to better hygiene and the development of antibiotics. This transition wasn't smooth; it faced significant resistance. Ignaz Semmelweis, who suggested that doctors could transmit infections via unwashed hands, was ostracized even though he showed time and again that many deaths during delivery were caused by doctors moving from one patient to another, or from a corpse to a patient, without washing their hands.
Yet, in the same year as his death, Joseph Lister successfully applied germ theory in surgery, proving Semmelweis's theories correct. This marked a crucial advancement in how we understand medicine.
And now it could be said that we are on our way to a "third era": what some call the era of wellness and others the era of prevention. The reason for this shift is that we have gotten very good at dealing with infectious diseases, even if some still evade us, and now we have to start dealing with a different kind of disease.
An analysis of mortality data from 1900 shows that if deaths from the top eight infectious diseases are excluded, overall mortality rates barely changed over the twentieth century, suggesting that much work remains when it comes to chronic illness.
The gap between healthspan and lifespan
The global population has grown from 2.9 billion in 1950 to 7.8 billion in 2020, and average life expectancy has increased from 47 to 73 years over these seventy years. This growth in human lifespan has led to a shift in demographic structures, particularly highlighting an increase in people over 70 years old. As a result, more countries now see over one-fifth of their population in this older age group.
The rise of chronic illnesses is a relatively recent development, largely due to advances in technology that have transformed many previously fatal diseases into manageable chronic conditions. For instance, before the 1920s, type 1 diabetes was usually fatal within months of diagnosis, but treatments involving animal-derived and later biosynthetic insulin have made it a manageable chronic illness.
Similarly, coronary artery disease, once often undetected until it was too late, has become manageable with the advent of bypass surgeries, stents, and medications. Over the last few decades, diseases like AIDS and some cancers have also become chronic conditions thanks to new treatments. These medical advances are undoubtedly positive, but they come with the challenge of managing an increasing number of chronic disease patients over longer periods, significantly raising healthcare costs.
Chronic diseases are now the leading cause of death and disability worldwide, responsible for 71% of the 56 million global deaths each year and 79% of disability-adjusted life years. The major chronic diseases (cardiovascular diseases, cancer, diabetes, and chronic respiratory diseases) are behind 80% of these deaths. The economic burden is substantial: chronic diseases have been projected to cost the global economy $47 trillion over two decades.
Healthcare systems now face the challenge of a widening gap between lifespan, the total years lived, and healthspan, the years lived in good health; the gap is currently estimated at about nine years. The World Health Organization defines health as "a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity." Bridging this gap requires integrating scientific advances with robust public and social initiatives.
A significant amount of healthcare spending occurs in the last 18 months of a patient’s life, when the complications of chronic diseases become most severe. Any strategy aimed at controlling runaway healthcare costs must include a robust plan for managing chronic diseases; without it, only a minimal impact can be made on the overall issue.
The current healthcare system isn’t enough to solve chronic diseases
In developing a valid theory, it's essential to define its categories accurately. Initially, in the descriptive phase of theory development, researchers establish categories based on the traits of the phenomena under investigation and associate those traits with the outcomes they want to study.
However, descriptive theories generally only suggest average trends. The real advance in a theory's predictive capability occurs when researchers identify the causal mechanisms behind the outcomes they're examining. This deeper understanding lets scholars offer specific, effective recommendations about which actions will or won't lead to desired outcomes in various scenarios.
In medicine, diseases are initially categorized by symptoms; for instance, wheezing might indicate asthma, while high blood glucose suggests diabetes. At this stage, medical practice is largely empirical, and outcomes are probabilistic, because our understanding of the body is limited and the signals it sends when something is going wrong can be misleading. We can't tell the cause of a disease just from knowing the patient has a fever, but we can tell that something is wrong.
Significant progress in medical theory happens when diseases can be classified by their causal mechanisms rather than just symptoms, marking a shift from descriptive to prescriptive medicine. This knowledge is crucial; for example, understanding whether wheezing is caused by an allergic reaction, airway inflammation, a foreign object, or heart-related fluid buildup can be critical for effective treatment.
In healthcare, small risks can grow into major crises over time, and this is especially clear in the development of chronic diseases. For example, atherosclerosis can develop over decades before leading to a critical event like a heart attack, which is often the first time treatment is attempted. While chronic diseases cause considerable cost and suffering, few healthcare systems are optimally designed for their diagnosis, treatment, and adherence to therapy.
Our methods for treatment, prevention, and early detection need to evolve to address these slowly progressing diseases. And while the rapid advances in medicine suggest we are entering a new phase of "personalized" or "precision" medicine, and of "precision prevention", where treatments are customized to our individual genetic profiles, achieving this precision faces challenges both in implementing the technology and in how we currently make healthcare decisions.
The promise of data and personalized medicine
Chronic diseases vary widely, from heart disease to autoimmune disorders like arthritis. Each has unique causes and progresses differently. However, they all worsen with age, leading us to our next focus: promoting healthy aging.
Ideally, our healthspan should match our lifespan as closely as possible. Extending the time we live healthily increases our overall lifespan, allowing for a longer, healthier life. Aging is different from getting older; it's about frailty, the decline in physical and mental abilities. While we can't stop time, we can slow aging by living healthier: eating well, exercising, managing stress, and avoiding toxins.
Emerging research even suggests the possibility of reversing aging, making us biologically younger. Slowing aging could allow people to enjoy a high-quality life into their nineties. To achieve this, we must improve how we track biological aging and assess the aging of vital organs and hormonal balances throughout different life stages. This means optimizing health across all life phases, ultimately leading to a natural and brief decline at life’s end, unlike the prolonged suffering many experience today.
What do we need for a shift towards prevention-based healthcare?
To achieve this vision, we need to establish comprehensive health monitoring networks. These networks should be equipped with advanced knowledge of the complex, interrelated biological systems within our bodies. By leveraging this knowledge, we can gain insights that allow us to anticipate and counteract potential health issues well before they present any clinical symptoms.
Traditionally, doctors diagnose diseases based on subtle symptoms and their clinical intuition. However, in a truly wellness-centered healthcare system, the paradigm shifts dramatically. The mark of excellence in a physician would no longer be their ability to treat disease, but their effectiveness at preventing disease in the first place. A doctor whose patient develops symptomatic illness would consider it a professional failure.
"Precision prevention" is seldom tried due to high costs, limited data, and concerns about incorrect results. Handling medical data is challenging, much like diving into a complicated statistics problem. For centuries, doctors have dealt with varying data, such as heart rates that range significantly and blood pressures that can swing from very low to very high. While common tests like those for cholesterol or blood sugar are routine, there are many more potential tests available, leading to countless data combinations.
Technology, when properly implemented, now allows for extensive collection of health data through devices like fitness trackers, which record everything from steps and sleep quality to heart rate and stress levels continuously. The metrics for wellness that we should be tracking extend far beyond the simplistic notion of "how we feel." They include a comprehensive array of data points such as genomic information, phenomic patterns, and other digital health measures.
These indicators collectively monitor hundreds of different biological systems and processes. If we begin collecting and analyzing these data from a state of health, we can develop predictive models that identify when an individual is likely to transition from a state of health to one of disease. These transitions are often so subtle that they escape our conscious detection.
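As a toy illustration of what such a predictive model might look like, here is a minimal sketch in Python that fits a logistic regression to synthetic biomarker data. Everything here, the chosen markers, the data, and the risk relationship, is invented for illustration; a real model would be trained and validated on curated clinical data.

```python
# A minimal sketch (not a validated clinical model): logistic regression on
# synthetic biomarker data to estimate the probability that an individual
# transitions from "healthy" to "disease". All values are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 500

# Synthetic features: fasting glucose (mg/dL), CRP (mg/L), resting heart rate (bpm)
X = rng.normal(loc=[95, 1.0, 65], scale=[15, 0.8, 8], size=(n, 3))

# Synthetic labels: higher glucose / inflammation / heart rate raise risk
logits = 0.04 * (X[:, 0] - 95) + 0.9 * (X[:, 1] - 1.0) + 0.05 * (X[:, 2] - 65) - 1.0
y = rng.random(n) < 1 / (1 + np.exp(-logits))

model = LogisticRegression().fit(X, y)

# Score one new individual's current readings
new_person = np.array([[118, 2.4, 78]])
risk = model.predict_proba(new_person)[0, 1]
print(f"Estimated transition risk: {risk:.1%}")
```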
Sensor networks are growing, enabling ongoing monitoring of health indicators like blood and saliva, not just occasionally but constantly. As these networks become more widespread, they will collect vast amounts of health data. To keep pace with these advancements, healthcare will require significant support and education so people can make informed choices and benefit from enhanced health systems.
By implementing such measures, we can intervene earlier, more precisely, and in a less invasive manner than current practices allow, enhancing our ability to prevent diseases from developing into more severe, irreversible conditions.
Decision engineering and healthcare
Human factors engineering focuses on optimizing interactions between individuals and their environment or technology. For instance, consider the communication between a physician or nurse and a patient. Without careful attention to this interaction, miscommunications can occur, potentially leading to severe consequences, including patient harm or death due to errors.
To address this, we must integrate human factors principles into various aspects of healthcare, including the utilization of electronic health records and computerized physician order entries. By ensuring that these interfaces are resilient to errors and capable of detecting and preventing them early on, we mitigate the risk of harm caused by human fallibility.
Recognizing that errors are inevitable given our human nature, we must implement strategies to minimize their impact and prevent harm. This involves adopting measures to catch errors before they escalate into adverse events, thereby safeguarding patient safety. In healthcare we seem very reluctant to accept that mistakes can happen, but only once we accept this can we take the steps to build systems that reduce them.
How do we make decisions? And why prevention requires a more quantitative approach
Our intuition often aids swift decision-making, enhancing efficiency. However, relying on it alone, without the deliberate training that grounds it in experience, can lead us to accept higher risks in our decisions than we anticipate.
Decisions involve evaluating available information and using experience to gauge the expected payoff of an action. If intuition, one of our decision-making pillars, rests on an unreliable foundation, the risk in each decision escalates unpredictably depending on the scenario.
This unreliability in decision-making stems from a lack of standard error measurement and the quality and limits of the information we use. In scientific contexts, errors are expected due to instrumental inaccuracies, but these are generally consistent, managed through frequent calibration. Unfortunately, such standardization does not apply to our intuitive decisions. Often, we operate without understanding the errors or biases involved, akin to conducting uncontrolled experiments daily.
Daniel Kahneman categorizes our thought processes into System 1 and System 2. System 1 is automatic, driving us to make quick decisions based on past experiences, useful in critical situations but prone to impulsiveness in daily activities. System 2, in contrast, is deliberate, guiding us through logical thinking and decision-making. While both systems are essential, challenges arise when System 1 dominates scenarios better suited for System 2, which is the case for healthcare and will be even more the case for prevention based healthcare.
An example Kahneman uses to illustrate this concept is "The Bat and Ball Problem."
A baseball bat and a ball together cost 1.10 euros. If the bat costs 1 euro more than the ball, how much does the ball cost?
Take a moment to think about it.
If you answered that the ball costs 0.10 euros, you're mistaken: for that to be right, the total would have to be 1.20 euros, not 1.10.
In reality, the ball costs 0.05 euros. The reason we fail with this type of calculation is that instead of taking our time to check if the answer makes sense or not, we simply let System 1 take over.
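For readers who want the algebra spelled out: let x be the price of the ball. The bat then costs x + 1.00, so x + (x + 1.00) = 1.10, which gives 2x = 0.10 and x = 0.05. Three short steps for System 2, while System 1 simply pattern-matches "1.10 minus 1" and blurts out 0.10.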
When problems seem very simple, System 1 "takes over" and responds on its own. The problem with this is that many times problems are more complicated than they seem, and we end up making stupid mistakes.
The logic behind this seemingly irrational behavior is that our body seeks efficiency and therefore wants to spend the least amount of energy possible for each task. System 1 is faster and "cheaper" because its responses take less time, so whenever it can, unconsciously, it takes the reins. And this also happens in healthcare.
Healthcare professionals work long hours under stress, and their work requires communication among many different stakeholders, often without much time or opportunity to prevent misunderstandings. Healthcare, then, is a field where intuition plays a big role.
Why is statistical thinking essential when dealing with large amounts of information and unclear risks?
Statistical thinking is crucial because it allows us to estimate the probability of events using mathematics and logic, especially in contexts where information is vast but our understanding of the associated risks is limited. This approach is vital due to our inability to foresee all factors and variables in any given situation, which constantly confronts us with the unknown.
The nature of incomplete information and inherent uncertainty about the future make statistics indispensable. If we could predict every event accurately, statistics would be unnecessary. However, the reality is that we always operate with partial knowledge, which compels us to calculate probabilities based on available information and reasonable assumptions.
The utility of statistical thinking lies in its ability to handle uncertainty and calculate the probability of events occurring within a specific range, enabling us to make informed decisions based on calculated risk. This approach is not intuitive and requires practice and dedication to be effectively used in problem-solving.
Within statistical thinking, Bayesian thinking and conditional probability are fundamental tools. Bayesian thinking uses prior information to adjust expectations about future events, based on new and relevant evidence. This method helps us assess risks and make more informed decisions, considering the background and context in which events occur.
Conditional probability, on the other hand, works with events that depend on previous actions or outcomes, which is essential for understanding the connections and sequence of events in a dynamic environment.
Taking these methods into account allows us not only to anticipate future events but also to contextualize and assess the relevance and impact of new data on our previous assumptions and decisions. For example, the use of Bayesian analysis has proven essential in interpreting public health trends, such as the increase in diabetes cases, enabling more targeted and effective health policies. And as we have to deal with more and more information, applying statistical thinking in healthcare will become essential.
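To make the idea concrete, here is a minimal sketch of Bayes' rule applied to a screening test, in Python. All the numbers (prevalence, sensitivity, specificity) are invented for illustration and do not describe any real test.

```python
# Bayes' rule for a diagnostic test: P(disease | positive result).
# All figures below are hypothetical, chosen only to illustrate the math.
def posterior_probability(prevalence: float, sensitivity: float, specificity: float) -> float:
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# A rare condition (1% prevalence) and a seemingly accurate test (90% / 95%):
print(f"{posterior_probability(0.01, 0.90, 0.95):.1%}")  # ~15.4%, far below 90%
```

The counterintuitive result, that a positive result from an accurate test for a rare disease still leaves the patient more likely healthy than sick, is exactly the kind of reasoning System 1 gets wrong and statistical thinking gets right.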
We will never have all the information, and so we should always understand that our decision-making carries an unknown amount of risk for which we have to be prepared. As we get better technology, diagnostics, and prediction models, that risk will become more manageable; we will get closer to the true risk we would measure if we had all the information in the world, but it will probably never be zero.
A simple framework to improve decision making
Healthcare decisions are made based on available data, but this data is often incomplete or limited. Healthcare professionals must be acutely aware of the 'tip of the iceberg'—what they can see—and acknowledge what they cannot—the vast, unseen risks beneath the surface. This understanding is crucial to avoid sinking.
Applying decision-making frameworks to healthcare
Data collection and reporting:
This includes information from clinical trials, diagnostics, real-world data from healthcare providers, and feedback directly from patients. By collecting comprehensive data on treatment outcomes, side effects, and patient experiences, healthcare providers can identify patterns that may indicate underlying issues with specific treatments or practices. For instance, data collection can reveal that a certain surgical technique leads to higher complication rates, prompting a review of procedure protocols.
Signal detection:
Using advanced analytics and other data interpretation tools, healthcare systems can detect early signals of potential problems across various aspects of care. This isn't limited to medication-related issues but includes equipment failures, procedural errors, or unexpected outcomes from standard treatments. For example, if a particular type of medical device frequently fails or causes complications, signal detection tools can alert healthcare providers and regulatory bodies to these anomalies before they affect a larger segment of the patient population.
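A deliberately simplified sketch of how such a signal might be detected statistically: a two-proportion z-test comparing a device's recent complication rate against its historical baseline. The counts below are made up, and real pharmacovigilance systems use considerably more sophisticated disproportionality methods.

```python
# Hypothetical example: flag a device whose recent complication rate deviates
# from its historical baseline, using a simple two-proportion z-test.
from math import sqrt

def z_score(events: int, n: int, baseline_events: int, baseline_n: int) -> float:
    p_recent, p_base = events / n, baseline_events / baseline_n
    pooled = (events + baseline_events) / (n + baseline_n)
    se = sqrt(pooled * (1 - pooled) * (1 / n + 1 / baseline_n))
    return (p_recent - p_base) / se

# 18 complications in 400 recent implants vs. 120 in 6,000 historically
z = z_score(18, 400, 120, 6000)
print(f"z = {z:.2f}")  # |z| > 2 would typically warrant investigation
```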
Another essential aspect of future healthcare will be diagnostics. While there has been remarkable progress in identifying numerous biomarkers associated with various diseases, there remains a notable dearth of independent evaluation of the reliability and accuracy of the tests used to detect these markers in biological samples.
The efficacy of personalized medicine hinges on the availability of precise diagnostic tests capable of identifying patients who would benefit from targeted therapies.
For instance, clinicians frequently rely on diagnostic tests to ascertain whether specific breast tumors exhibit overexpression of the human epidermal growth factor receptor type 2 (HER2), which not only correlates with a poorer prognosis but also predicts a favorable response to trastuzumab therapy. Approval of the HER2 test alongside the drug (as a "companion diagnostic") enables clinicians to tailor treatment to individual patients' needs.
Recent scientific progress is significantly enhancing medical care through personalized medicine, focusing on genetics, transcriptomics, proteomics, spatial transcriptomics, metabolomics, and advanced imaging.
Risk assessment:
Once potential issues are identified, assessing the risk involves analyzing the seriousness, frequency, and reversibility of these adverse events. In healthcare, this could mean evaluating the impact of a misdiagnosis, the prevalence of post-surgical infections, or the side effects of a physical therapy regimen. Risk assessment helps healthcare providers understand the magnitude of the problem and prioritize their response. It also involves considering the vulnerabilities of specific patient groups, such as the elderly or those with pre-existing conditions, who might be more susceptible to certain risks.
Risk assessment also implies understanding the different types of error in medical practice. As a simple definition, a medical error is the failure of a planned action to be completed as intended, or the use of a wrong plan to achieve an aim. Errors can be divided further into:
- Serious error: an error that causes permanent injury, or transient but life-threatening harm.
- Minor error: an error that causes harm that is neither permanent nor potentially life-threatening.
- Near miss: an error that could have caused harm but did not, whether by chance or through timely intervention.
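One hypothetical way this taxonomy could be made operational in an incident-reporting system; all class and field names here are invented for illustration, not taken from any real reporting standard.

```python
# Hypothetical incident-report structure reflecting the error taxonomy above.
from dataclasses import dataclass
from enum import Enum

class ErrorSeverity(Enum):
    SERIOUS = "permanent injury or transient life-threatening harm"
    MINOR = "harm that is neither permanent nor life-threatening"
    NEAR_MISS = "could have caused harm but did not"

@dataclass
class IncidentReport:
    description: str
    severity: ErrorSeverity
    timely_intervention: bool  # relevant for distinguishing near misses

report = IncidentReport(
    description="Wrong dose drawn up; caught at bedside double-check",
    severity=ErrorSeverity.NEAR_MISS,
    timely_intervention=True,
)
print(report.severity.name)
```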
Benefit-risk evaluation:
The final step is to weigh the benefits of a medical treatment, procedure, or practice against its potential risks. This evaluation is essential in deciding whether a current healthcare practice should continue, be modified, or be discontinued. For example, if an analysis reveals that a certain chemotherapy drug offers life-extending benefits but comes with severe side effects, healthcare providers must weigh these factors and discuss them with patients to make informed treatment decisions.
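A deliberately oversimplified sketch of how such a trade-off can be framed as expected utility. The probabilities and utility values are invented placeholders; real benefit-risk evaluation rests on validated outcome measures and, crucially, on patient preferences.

```python
# Hypothetical expected-utility comparison of two treatment options.
def expected_utility(outcomes: list[tuple[float, float]]) -> float:
    """Sum of probability * utility over mutually exclusive outcomes."""
    return sum(p * u for p, u in outcomes)

# (probability, utility in quality-adjusted life years); illustrative only
chemo = [(0.60, 3.0), (0.30, 1.0), (0.10, -0.5)]  # benefit / partial / severe harm
palliative = [(1.00, 1.2)]

print(f"Chemo: {expected_utility(chemo):.2f} QALYs")       # 2.05
print(f"Palliative: {expected_utility(palliative):.2f} QALYs")  # 1.20
```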
What is stopping us from a shift towards prevention-based medicine?
The technological limitations in healthcare
Improved technology now allows for better collection of patient health data and the use of artificial intelligence to analyze it for more accurate health assessments. Future technologies, like nanotechnology, promise even more precise diagnostics and treatments, but these advances are not yet available.
We lack the infrastructure needed for digitization, which is essential to obtain the data needed to personalize treatments, and we lack the interoperability needed to let patients transfer their data from one region to another as they live in a highly interconnected world.
Despite advancements in digital technologies, a significant reliance on traditional data collection methods persists, potentially stifling transformative changes within healthcare processes. Debates continue over which types of data should be prioritized. A clear understanding of healthcare goals is essential to guide these decisions, necessitating a strategic and possibly disruptive approach to data adoption.
The management and ownership of data also pose significant challenges, driving the need for increased data literacy to support societal and legal changes. The media's role in promoting the effective use of data is also recognized as crucial.
The concept of precision medicine is still ahead of the current technology, similar to how the idea of self-driving cars was far ahead of its time decades ago. Just as early attempts at automating vehicles were crude and limited, our early attempts at personalized medicine are still primitive. However, just as automotive technology eventually caught up to the vision of self-driving cars, medical technology is now beginning to allow for truly personalized care.
An example of this progression in medicine is the use of continuous glucose monitors (CGM) that provide real-time blood glucose levels. This technology allows for more specific, immediate dietary adjustments based on individual responses, a level of personalized intervention not possible before. As technology advances, we expect more sensors and tools that will enable even more precise and immediate adjustments in medical treatments, improving patient care significantly.
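As a toy example of the kind of rule a CGM-driven tool might apply, the sketch below flags post-meal readings above a target range. The threshold and readings are invented for illustration and are not medical guidance.

```python
# Hypothetical CGM post-meal check: flag readings above a target range.
readings_mg_dl = [98, 112, 154, 188, 171, 142, 119]  # every 15 min after a meal

TARGET_MAX = 140  # illustrative post-meal threshold, not a clinical value

excursions = [(i * 15, r) for i, r in enumerate(readings_mg_dl) if r > TARGET_MAX]
for minutes, reading in excursions:
    print(f"+{minutes} min: {reading} mg/dL above target")
```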
The infrastructural limitations of a shift towards prevention
It's important to recognize that much of the data available today isn't immediately ready for use. Typically, it's gathered by professionals who may prioritize other aspects of their roles—like direct patient care—over meticulous data collection, often due to heavy workloads.
As a result, although we're collecting more data than ever, the challenge now is to harness it effectively. This requires highly skilled professionals who are specialized enough to navigate the complexities involved in a comprehensive digital overhaul of systems.
From initial data acquisition through to ensuring data safety and its anonymization, not just any professionals, but seasoned experts are needed to manage these transitions smoothly by overcoming potential pitfalls that have tripped up other efforts elsewhere. A robust framework for interoperability is crucial in this context. Interoperability facilitates the sharing of data across different organizations and geographical areas, enhancing the potential for its secondary use.
For instance, while the primary use of patient data might be to aid in their immediate treatment, secondary uses could include aggregating and analyzing large sets of data to improve broader health outcomes. Projects like Infobanco demonstrate attempts to foster such data-sharing frameworks.
The secondary use of data hinges not only on having the right professionals in place but also on clear communication with patients to ensure they understand how their data may be used and to reassure them of its safety. Gaining patients' consent is a foundational step in this process, necessitating robust patient advocacy to alleviate any fears about data misuse.
While public health systems predominate in providing access to healthcare, the role of the private sector is non-trivial, especially when public systems face shortages of healthcare professionals. These gaps are often filled by the private sector, emphasizing the need for collaboration between both sectors to ensure comprehensive patient care.
Implementing digital transformation initiatives, such as the European Health Data Space (EHDS), requires organizations to judiciously use their resources. They must discern which steps in the implementation process are critical and urgent versus those that may seem important but can be prioritized lower.
While the full implementation of such initiatives may take time, the underlying objective is clear: to advance healthcare through more personalized approaches. This includes leveraging innovative technologies like digital twins, which can make healthcare delivery more efficient, cost-effective, and safer. Ultimately, the EHDS is poised to be a cornerstone in the evolution of healthcare systems.
Artificial intelligence as an aid in the decision making process
Artificial intelligence (AI) holds transformative potential in healthcare, promising substantial enhancements in both clinical outcomes and operational efficiencies. The application of AI in healthcare is multifaceted, encompassing diagnostic support, treatment personalization, and administrative optimization. This technology will be essential for the shift towards preventive medicine by leveraging burgeoning data volumes and advancing technological integration.
Understanding AI in Healthcare Contexts
AI in healthcare primarily refers to machine learning, a subset of AI that enables computers to learn from data and improve over time without explicit programming. Machine learning facilitates the analysis of large datasets, allowing for the identification of patterns and trends that may not be apparent to human observers. This capability is critical in diagnosing diseases, predicting patient outcomes, and personalizing treatment plans.
Machine learning operates through algorithms that analyze data at a granular level, extracting actionable insights. For example, in oncology, these algorithms can sift through imaging data to detect signs of cancer at far earlier stages than human review might allow. Similarly, in chronic disease management, machine learning models use historical health data to forecast future exacerbations, enabling preemptive intervention.
AI's Role in Enhancing Diagnostic Accuracy
Deep learning, a more advanced form of machine learning, employs structures known as neural networks. These networks are particularly adept at processing complex data inputs such as medical images. Convolutional Neural Networks (CNNs), a type of deep learning model, are especially effective in image recognition tasks. They analyze medical imagery, such as MRI scans or X-rays, identifying subtle patterns that indicate the presence of specific pathologies.
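For readers curious what such a network looks like in code, here is a minimal CNN sketch in PyTorch. The layer sizes are arbitrary; a real diagnostic model would be far deeper, trained on large curated datasets, and rigorously validated before clinical use.

```python
# Toy CNN for single-channel medical images (e.g., grayscale X-rays), binary output.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, 1)  # for 224x224 inputs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))  # logit; apply sigmoid for probability

model = TinyCNN()
scan = torch.randn(1, 1, 224, 224)  # one fake grayscale scan
print(model(scan).shape)  # torch.Size([1, 1])
```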
As healthcare systems increasingly embrace digital technologies, the volume of available data grows exponentially. AI will be essential in analyzing this vast data trove, enabling a shift from reactive to preventive medicine. By identifying risk factors and predicting disease onset, AI facilitates early intervention strategies that can mitigate or even prevent disease progression. This proactive approach not only has the potential to improve individual health outcomes but also to decrease overall healthcare costs by reducing the need for extensive medical interventions.
Challenges and Future Prospects
Despite its potential, the integration of AI in healthcare faces several challenges, including data privacy concerns, the need for extensive training datasets, and the potential displacement of traditional jobs. However, the continuous advancements in AI technology and growing evidence of its benefits suggest that AI will increasingly become an integral part of healthcare systems.
In conclusion, AI's role in healthcare is rapidly expanding, with significant implications for both clinical practice and administration. As technology and data availability continue to evolve, AI's ability to support preventive medicine and enhance patient care will only increase, marking an essential evolution in the way healthcare is delivered.
The need for a shift in paradigm
An effective approach to less understood chronic diseases requires two things: a prioritization of prevention over treatment, and better diagnostics. Unlike the current medical approach, which focuses on solutions after problems arise, a prevention-based approach aims to predict and prevent issues before they occur, similar to preparing an umbrella before it starts raining.
This approach treats each patient as an individual. While current practice applies the average results of clinical trials to all patients, a prevention-based approach takes these results and adjusts them based on how a specific patient might differ from the average participant in those studies.
And it requires a different way of working with risk. As the amount of information we obtain increases, we will be able to make better decisions, but we shouldn't forget that all these decisions are probabilistic in nature.
We always have to contend with the limitations of our tools, of the information we have, of the context of each patient, and of the solutions they can access. We need to understand that there are many kinds of risk: not only those due to action, but also those due to inaction.
An example of misjudging risk was the use of hormone replacement therapy (HRT) for postmenopausal women. The Women’s Health Initiative Study in 2002 linked HRT to a 24 percent relative increase in breast cancer risk, which led to widespread condemnation of HRT. However, the actual increase in breast cancer was minimal, moving from four to five cases per thousand women, which translates to a 0.1 percentage point increase in absolute terms.
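The arithmetic behind the two framings, spelled out with the approximate figures quoted above:

```python
# Relative vs. absolute risk, using the approximate figures cited above.
baseline, treated = 4 / 1000, 5 / 1000

relative_increase = (treated - baseline) / baseline  # 0.25, i.e. ~the reported 24%
absolute_increase = treated - baseline               # 0.001, i.e. 0.1 percentage points

print(f"Relative: {relative_increase:.0%}, absolute: {absolute_increase:.1%}")
```

Same data, two very different headlines: "risk up a quarter" versus "one extra case per thousand women".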
Despite this small increase, the perceived risks were deemed to outweigh the benefits, leading to the avoidance of HRT, which left many women to suffer symptoms of menopause unnecessarily, and potentially increased their risk of other conditions such as Alzheimer’s disease.
Current practice is inclined to dismiss a therapy entirely on the basis of one clinical trial rather than explore its complexities, partly because of how the reimbursement system works and the regulations that apply in healthcare.
A more personalized approach poses a crucial question: despite a slight increase in risk for older women, could hormone replacement therapy still benefit a specific patient with unique symptoms and risks? And do we have the information needed to make the right decision?
This patient's condition, and how it compares to the study's demographic, is crucial, especially since the women in the study were not symptomatic and were significantly past menopause. This raises questions about the study's relevance to younger or symptomatic women, and about other possible explanations for the minor risk increase observed.
The focus should be on evaluating, at the individual level, the likelihood of developing certain diseases and the risks, benefits, and costs of this therapy and of alternative treatments.
References:
Attia, P., & Gifford, B. (2023). Outlive: The science and art of longevity. Harmony.
Belue, M.J., & Turkbey, B. (2022). Tasks for artificial intelligence in prostate MRI. European Radiology Experimental, 6(1), Article 33. https://doi.org/10.1186/s41747-022-00287-9
Ricci Lara, M. A., Echeveste, R., & Ferrante, E. (2022). Addressing fairness in artificial intelligence for medical imaging. Nature Communications, 13, 4581. https://doi.org/10.1038/s41467-022-32186-3
Obermeyer, Z., & Emanuel, E. J. (2016, September 29). Predicting the Future - Big Data, Machine Learning, and Clinical Medicine. New England Journal of Medicine, 375(13), 1216-1219. https://doi.org/10.1056/NEJMp1606181
Fabes, J., Avşar, T. S., Spiro, J., Fernandez, T., Eilers, H., Evans, S., Hessheimer, A., Lorgelly, P., & Spiro, M.; Health Economics Survey Group. (2022). Information asymmetry in hospitals: Evidence of the lack of cost awareness in clinicians. Applied Health Economics and Health Policy, 20(5), 693-706. https://doi.org/10.1007/s40258-022-00736-x
Bari, A., Khan, R. A., & Rathore, A. W. (2016). Medical errors: Causes, consequences, emotional response and resulting behavioral change. Pakistan Journal of Medical Sciences, 32(3), 523-528. https://doi.org/10.12669/pjms.323.9701
Rodziewicz, T. L., Houseman, B., & Hipskind, J. E. (2023). Medical error reduction and prevention. In StatPearls. StatPearls Publishing. https://www.ncbi.nlm.nih.gov/books/NBK499956/
Interview of Dr. Abdulelah Alhawsawi: https://www.youtube.com/watch?v=Ulh5311A4oQ
Hood, L., & Price, N. D. (2023). The age of scientific wellness: Why the future of medicine is personalized, predictive, data-rich, and in your hands. Belknap Press.
Bartelds, G. M., et al. (2011). Development of antidrug antibodies against adalimumab and association with disease activity and treatment failure during long-term follow-up. Journal of the American Medical Association, 305, 1460-1468.
Alzheimer's Association. (2019, October). Alzheimer's disease treatment horizons.
Rinke de Wit, T. F., Janssens, W., & Antwi, M. (2022). Digital health systems strengthening in Africa for rapid response to COVID-19. Frontiers in Health Services, 2, 987828. https://doi.org/10.3389/frhs.2022.987828
Artiga, S., & Hinton, E. (2018). Beyond health care: The role of social determinants in promoting health and health equity. Kaiser Family Foundation. Retrieved from https://resource.nlm.nih.gov/101740257
Hamburg, M. A., & Collins, F. S. (2010). The Path to Personalized Medicine. New England Journal of Medicine, 363, 301-304.
Schork, N. J. (2015). Personalized medicine: Time for one-person trials. Nature, 520, 609-611.
Puska, P., & Jaini, P. (2020). The North Karelia Project: Prevention of Cardiovascular Disease in Finland Through Population-Based Lifestyle Interventions. American Journal of Lifestyle Medicine, 19, 495-499.
Katsoulakis, E., Wang, Q., Wu, H., et al. (2024). Digital twins for health: A scoping review. npj Digital Medicine, 7, 77. https://doi.org/10.1038/s41746-024-01073-0
Garmany, A., Yamada, S., & Terzic, A. (2021). Longevity leap: Mind the healthspan gap. NPJ Regenerative Medicine, 6(1), Article 57. https://doi.org/10.1038/s41536-021-00169-5
Control of Infectious Diseases. (1999). JAMA, 282(11), 1029-1032. https://doi.org/10.1001/jama.282.11.1029
Editing aided by ChatGPT.