AI Won’t Replace Human Workers, But People Who Use It Will Replace Those Who Don’t
Brighton Chireka
Founder and Medical Director DOCBEECEE Leadership Academy. Transforming health and social care professionals into effective leaders and change agents, bridging the gap between leadership knowledge and practical skills.
Technology is advancing at a rapid pace, and artificial intelligence (AI) is becoming increasingly influential in our professional lives. As AI continues to evolve, it’s understandable that some see this fast-paced growth as a threat to job security, while others view it as an opportunity for future job enrichment. Andrew Ng, a prominent AI expert, famously noted, “AI will not replace people, but people who use AI will replace those who don’t.” I agree with Ng—AI has the potential to transform our jobs, not by replacing us but by empowering us to work more efficiently and effectively.
In the health and social care field, my role as a practitioner isn’t threatened by AI; rather, it is enhanced by it. Embracing AI allows me to leverage technology to provide better care, streamline processes, and focus on what truly matters in healthcare—compassionate, individualized care. The future will belong to those who harness AI’s capabilities while maintaining the unique human touch that AI cannot replicate.
Learning from the Past: A Red Flag for Every Advancement
When cars first hit the roads, early laws required a person to walk ahead of each vehicle waving a red flag to alert pedestrians. Known as the Red Flag Act of 1865 in the UK, this law also restricted vehicles to 4 mph in the country and 2 mph in towns, and required a crew of three for each vehicle. Today, it seems almost laughable, a quaint relic of a time when society hadn’t fully grasped the potential of motor vehicles. Similarly, AI is now in its early stages, where misconceptions and hesitations are common. But as our understanding and integration of AI deepen, future generations will likely view our cautious, sometimes skeptical attitudes in a similar way—as natural, but ultimately outdated.
This mindset shift requires knowledge and openness. Ignorance often breeds fear. Like many, I was initially skeptical of AI until I enrolled in a course offered by Harvard Medical School, AI in Health Care: From Strategies to Implementation. The course has been an eye-opener, helping me see both the potential and the challenges of AI in healthcare.
The Human Touch in Healthcare: Why AI Can’t Replace Us
Large language models (LLMs) like ChatGPT are powerful tools, especially for tasks involving documentation and data summarization, but they face several limitations in healthcare. Below are some of these limitations, which highlight why healthcare professionals will remain essential.
1. Dependency on Training Data Sources
LLMs are trained on publicly available data, which means they may not include the latest medical advancements or specialized databases. This reliance can result in outdated recommendations, especially in fast-evolving fields like medicine. For example, if a new guideline is issued for managing COVID-19, an LLM trained on older data may not include this information. As a healthcare professional, I regularly update myself with the latest guidelines and research, ensuring that my patients receive care based on the most current knowledge.
2. Bias and Data Limitations
LLMs are limited by the data they are trained on, often leading to biases. If an LLM is trained primarily on data from Western countries, it may lack insights into conditions prevalent in other regions, such as tropical diseases like malaria. As a practitioner, I’m trained to consider factors like travel history and regional epidemiology, which allows me to identify conditions an LLM might miss.
3. Inability to Update in Real-Time
The COVID-19 pandemic showed us the importance of real-time information. New treatments, vaccine data, and management guidelines were being updated almost daily. An LLM, however, is typically updated only periodically through retraining, meaning it may lag behind crucial, real-time medical advancements. My ability to stay current through continuous learning is indispensable, especially in fast-moving situations.
4. Lack of Specialized Medical Judgment and Clinical Nuance
Healthcare is not just about diagnosing and treating; it’s about understanding each patient’s unique context. An LLM might recognize common symptoms but lacks the clinical judgment to make nuanced decisions. For instance, a model might list causes of fatigue like anemia or hypothyroidism but might not pick up subtle signs indicating something more serious, like cancer. As a doctor, I can interpret subtle cues and understand the complex interplay of symptoms that AI often overlooks.
5. Limited Ability for Physical Examination Interpretation
Physical exams are crucial in healthcare, and no amount of text processing can replace this. An LLM can suggest a diagnosis based on described symptoms, but it cannot observe physical signs like pallor, jaundice, or a specific rash pattern—details that can dramatically change a diagnosis. My role as a healthcare provider includes not only interpreting these physical signs but also understanding their context, something AI cannot do.
6. Difficulty with Complex Ethical and Emotional Judgment
Communicating difficult news, such as a terminal diagnosis, requires empathy, sensitivity, and a deep understanding of human emotions. An LLM may generate supportive text, but it cannot perceive a patient’s emotional state or adjust its responses accordingly. I tailor my approach to each patient’s needs, providing comfort and support that only a human can deliver.
7. Risk of Over-Reliance and Misinformation
Anyone can access an LLM, which means patients might consult AI for a self-diagnosis. However, AI cannot assess the full context of a person’s health. If a patient experiencing chest pain consults an LLM and is advised to rest, they might ignore symptoms of a serious condition like a heart attack. My role is to provide a comprehensive assessment and guide patients based on an in-depth understanding of their unique medical history.
The Future of Healthcare: A Collaboration Between AI and Humans
While LLMs and AI tools are invaluable for documentation, decision support, and other tasks, they lack the adaptability, clinical reasoning, and empathetic judgment that define human healthcare. Rather than seeing AI as a threat, we should view it as a tool that complements our work. AI can streamline administrative tasks, leaving more time for patient care and complex problem-solving that require a human touch.
The integration of AI will ultimately make healthcare more accessible, efficient, and precise. But this future depends on skilled practitioners who can work alongside AI, leveraging its capabilities while bringing the empathy, adaptability, and critical thinking that only humans can provide. My job isn’t disappearing—it’s evolving, and with AI, I can deliver better, more effective care than ever before.
If this article resonates with you, please subscribe, like, comment, and share it with your network.