AI & Us: A genial GP on what the AI revolution will (and won't ever) change in medicine
Shauna Hurley
Freelance writer, researcher & digital content strategist | Writer of AI & Us | Former Adviser to Cochrane's Editor in Chief | Evidence enthusiast | Podcast producer |
AI & Us is back for 2025 with a great and unconventional lineup of guests set to explore all things technological, sociological and philosophical. From roving reporters to prominent politicians, hairdressers to health workers, each edition will explore a very different and deeply human perspective on how AI is set to reshape life and work as we know it.
This week we meet a genial GP who offers the sort of insightful answers ChatGPT just can’t give us. With 30+ years as a family physician, Dr Richard Blanch deftly switches from fielding my usual questions about kids’ sore throats and soccer injuries, to tackling somewhat more abstract questions like: Is general practice the most deeply human of professions? How will AI reshape the doctor–patient relationship? And what does the robot revolution mean for the future of primary health care?
True to form, Dr Richard offers considered answers that combine the practical, the amusing and the profound...
Shauna: So, Richard can we start by going back in time? Why did you decide to study medicine in general, then General Practice in particular?
Richard: Well, I grew up in the far southern suburbs of Sydney, went to a public school and no one in my family had studied medicine. So, to be brutally honest about why I chose to get into it, I was just academically very good at my school work. And when you find yourself dux of your high school, it’s a bit like that comedy skit where you only have two choices: be a doctor or be a lawyer. I looked at both and thought doctoring sounded more interesting than lawyering. So that’s pretty much the real reason I first studied medicine.
Shauna: Did your parents like the idea and had they gone to uni themselves?
Richard: They really promoted the idea to me, but neither of my parents had been to university at that stage. My dad was an autodidact, who just grumbled through high school and became a carpenter, working his way up to become quite senior in project management. Mum had really conservative parents who didn't believe in educating women, so she was denied that route initially, despite the fact she was smart as a whip. She actually ended up going to uni to do social work at the same time as I was studying medicine. She's such an interesting cat, my mum.
But no, my parents definitely weren’t university-affiliated types, and I was a bit of a fish out of water when I first lobbed into Sydney University in the early ‘90s. Eventually I found my little niche there though.
Shauna: How did you go from a reticent medical student to a much-loved family GP?
Richard: Well again, choosing general practice early on was based on pretty banal thinking. It was this ‘well I don't want to do X, so I'll just do Y’ sort of insipid reasoning. I just really, really didn't like hospitals, and I wasn't that enthusiastic about launching into the brutality of specialist training. So that was what really shaped my choice to become a GP. But I look back now and think it's funny how life moulds you. I went from this quite standoffish sort of kid from the suburbs and an outsider in the medical professions, to becoming a kind of pillar of the establishment. And by that I mean I just really love my colleagues; I love general practice and I love the kind of infinity of medicine. I wouldn't do anything else for the world now.
Shauna: That’s quite an odyssey. What do you mean by the infinity of medicine?
Richard: It’s like the GP sits in the centre of an ever-expanding sphere of knowledge. And it's just a ludicrously impossible task to keep on top of it all. And on one level, it's just so much fun in terms of the intellectual challenge.
But when it really comes down to it, what I love most about general practice is the relationships you build with your patients with all the to and fro over time. The social challenge of creating a collaborative approach is complicated, as every conversation is different – even with the same person across time. The reasons people walk in the door are always different as well. They might seem really obvious and mechanical at first – like ‘I need a script’. But peel back a couple of layers, and away you go.
I'm not sure that most people fully understand the depth of the intellectual challenge of general practice, but in broad terms I think they do appreciate that GPs listen really carefully, care about what happens to them, and treat them with kindness and respect the vast majority of the time. I know my patients appreciate the intent that I bring to all our conversations, and that real build-up of trust and familiarity over time.
Shauna: I didn’t realise just how much of GPs’ work involves mental health. The 2024 General Practice: Health of the Nation report says that over 70% of GPs listed psychological issues in their top three reasons for patient consultations last year. And even since 2017 – well before the spectre of COVID mightily challenged our collective mental health – psychological issues were right up there as the main reason for GP visits. This is the work of a deeply human profession, isn’t it?
Richard: That’s a good way to encapsulate it all.
If you look back it always has been – and if you look forwards it always will be – a deeply human profession. Getting a clear-cut diagnosis would account for only about one in 10 cases – things like hypertension or pneumonia or broken bones. Down that end of the spectrum you test, diagnose and treat. But you're not just dealing with these kinds of cases. If you were, I’m sure a robot could one day fix the majority of them without a person being involved.
The reality is that most problems we deal with throughout our lives are much more complex and complicated than that. Many people have depression or fatigue or chronic health problems or diabetes – where there are multiple interwoven sorts of things feeding that problem, and also multiple potential solutions. The GP’s task is not to know or identify some exact truth; it's to help the person navigate from one point in their lives to the next. That takes a while to understand. Knowing what ‘the truth’ is, is only an occasional experience in medicine.
After your first 10 years or so in general practice, you realise what you really need to work out is a range of different approaches and strategies and come up with a philosophy. You grapple with the human condition. My own philosophy evolved over the years. Now everything I do, think and say as a GP has to come from a position of kindness and respect. Those two things.
I know that some people have deeply traumatised souls. And your job as a GP is not always to ‘fix them’. Your job is to be there, to listen closely, and to occasionally hold their hand as they try to get through their lives. Just having someone that's safe, and that is kind to them, is enormously valuable for some people.
Shauna: That’s an especially humanist and philosophical view of patient care, and all the more extraordinary when you consider you have only 15 minutes or so at a time to make that kind of difference. How and where does or will AI fit into this picture? Are human-centred care and automation set to co-exist and complement each other?
Richard: I'm not sure but I think so. My main thesis for the question, ‘How does AI fit into the emerging medical landscape?’ is just to restate the question. AI will fit into the emerging landscape of medicine. But it's AI fitting into the landscape, rather than the current landscape disappearing entirely. Those other much more important things we’ve just been talking about around communicating with people, seeing and understanding things from their point of view, and walking with them through all kinds of challenges – those sorts of nuanced things have nothing to do with AI. They don't get solved by AI.
Shauna: A recent study suggests 20% of UK GPs are now using AI tools like ChatGPT – not just for admin tasks or writing the odd email but also for diagnosis and treatment. Anecdotally do you think it’s a similar situation here in Australia and what are your thoughts about this as a development?
Richard: Firstly, I think ChatGPT is a truly terrible tool for resolving health questions. It's a language algorithm that really just translates one set of words for another set of words, and it's designed so that the answer it gives you appears to be comprehensive, authoritative, and deeply correct. And that is incredibly problematic with health stuff.
I've experimented with ChatGPT a few times to give me a differential diagnosis for a clinical scenario. It comes up with really confident assertions and lists that are actually complete garbage. I’d come up with better and more accurate ones off my own bat.
I know there are other programs being developed that use medical inputs that are screened. Universities, major hospitals and international peer-reviewed journals are trying to narrow the field and quality of sources that these tools would draw from. But they can't do that yet, and ChatGPT is definitely not designed to do that. And it's not intelligent, it's just algorithmic. I have to say the fact that it’s even called artificial ‘intelligence’ drives me a bit bonkers!
Shauna: Most GPs would presumably know that ChatGPT isn’t designed for diagnostics and poses all these issues with evidence, bias, privacy, hallucinations and inaccuracies. Do you find it surprising any health professional would be using it to make diagnoses or inform proposed treatment plans at this stage?
Richard: I think you've got to delve into those studies and statistics about doctors using it. Because if the question was ‘Have you used ChatGPT to research a medical question?’, I would have to say yes. Despite the fact that I'm profoundly against doing just that in resolving any clinical problem to do with any of my patients. I’d say the consensus is that the technology needs to mature, and a raft of medico-legal implications need to be considered and addressed no matter what tools are developed and how sophisticated they become.
For now, I don't have a whole lot of faith in AI products as they are today. But fast forward five years, and I’d be extremely surprised if I wasn’t using some sort of decision support. I'd be anachronistic if I wasn't. I think it's inevitable that that's going to happen because it's really just a powerful Google search in another form. If you could get the answer in a single document or source with a good outline and structure that would be helpful.
Shauna: But where it differs from Google is this kind of black box idea where you don't know the evidence base or logic behind the answer it gives. That has so many implications for healthcare decisions, doesn’t it?
Richard: Yes, and that’s why I’d absolutely not expect that I'll be using ChatGPT. Because at the moment it's the wild west. To me it’s like asking, ‘What does everyone on the internet think about this health question?’ And I'm not interested in what the answer to that is. If I was, then I should hang up my shingle and go home, because I'm meant to be adding value.
And this black box issue is much bigger than that. You do really need to know the structure of the black box that gave you the answer in order to really be able to interpret and trust the answer. With Google searches you can see for example the University of Liverpool's web page that tells you about drug interactions. I know what information I'm getting and who's compiled it. I know they're a great bunch of dudes that really know their stuff. It’s not a single answer from an unknown source. So there needs to be some sort of iteration that's appropriate, reliable and functional. But I still don't think it's going to fundamentally change what I do, just like doing internet searches doesn't fundamentally change what I do. But I certainly do it often. Not a day goes by that I don't access online resources numerous times.
Shauna: What about AI tools to summarise patient consultations? Are they more promising?
Richard: Yes, we use an AI transcription tool called Lyrebird at our practice. After each patient consultation it maps the conversation to a preset format. It’s pretty amazing really, and given the computing involved is so gobsmacking it’s hard not to think it's intelligent. But it's not. It's just using language mapping architecture to produce a very detailed summary of over-inclusive information that you need to go back through and edit.
It’s useful and better in one sense, but definitely not better in another. On the upside it’s more thorough in documenting people's symptoms. It’s about 90% accurate, but there’s occasionally wildly inaccurate stuff, so you definitely have to read and edit it. But it's pretty good. You do lose a lot of the nuance you’d naturally find in your own notes. In their place you have Lyrebird’s notes, which are very robotic and formulaic. You wouldn't be able to tell that it was me consulting compared with one of my colleagues, but that said, if it’s rapidly documenting more thoroughly what was actually said then that's good.
Shauna: The nature of General Practice and the way you spend your time will change as a result of these tools, as will the way GPs of the future learn and fulfil the different demands of the profession. Will future GPs have entirely different jobs?
Richard: The nature of the job is always changing. It's always evolving like any complex system. By definition, it doesn't remain the same. So, of course the new generations coming through are always different from the last, and they'll invent their own way of approaching this whole thing and it will involve solutions that we currently don't understand.
So, what are we going to see in the future when today’s younger generations are using AI and how will that change the medical experience for people, and the way doctors go about doing their thing? Of course, I don't know but I think it'll be much more of the same than it's going to be earth-shatteringly different. They’ll have different tools and different cultural ideas in their head and go about it in different ways, but I think the heart of it will remain the same. I think what you can guarantee is that they're just as smart as we are and that they will slowly develop increasingly sophisticated insight into these things, and they'll have their own debates about it and work it out for themselves. And yes, the world will be different. Like it or not.
And doctors will always continue to interact with patients. And they’ll continue to learn from them and grow and change sort of like I did. The kids of today will do that too. They'll solve problems in different ways. I wouldn't say that I had an affinity or even very good skills in empathy, kindness, respect and those sorts of things when I started out in medicine. As quite an introvert, the whole business of doctor–patient relationships and things like that didn’t come naturally to me in my 20s. You evolve and learn and change through experience. That will continue, whatever the technological landscape.
Shauna: It doesn’t sound like you think a robot will ever take your job entirely?
Richard: No. Never. You know I love Star Trek as much as the next person. But no, I don't think so. If you look at how the brain is structured – the billions of neurons and trillions of connections it has – it'll be a long time before we can create a computer that even comes close to that. I think quantum computing will need to go through its lifespan, and we'll need the next paradigm after that, or the next one, before we get anything that comes close.
Shauna: If you had to wrap up all you think about the ways AI will change how we live, work and interact with each other in just one word, what would it be?
Richard: Embedded.
Shauna: Why embedded?
Richard: Because just look at mobiles and the internet. Look how we lived our lives when we were young and grew up in the total absence of them. Now think how powerfully embedded that technology is in our lives. We’ll just become more connected and more interwoven with technology as time goes by. You can see our kids have grown up with their screens which are almost like an extension of their mind for them, whereas it wasn't that for us. So you know, I'm just cutting and pasting onto what I see happening right now culturally, and posing the question. How on earth could it not become more and more embedded in people and the way they use and interact with information?
Shauna: And what do you think we stand to lose or gain in the process?
Richard: I do feel concerned about, and probably threatened by, AI because of my natural prejudice towards its current iteration. Think about our relationship with the internet now, as the source of all knowledge and therefore the source of truth – AI takes that an order of magnitude further. It's going to be more of a threat in the sense that it could distort the truth, subvert the truth, and interfere with people's ability to think clearly about complicated problems. So I'd be pessimistic about the role AI will play in the way our minds develop as time goes by. But you know, people have always been pessimistic about new innovations and what they're going to do.
Shauna: True, but you’ve shared a mixed bag of optimism and pessimism here, so let’s go with that. I really like the idea that it arises from 30 years in amongst a constant stream of humanity, dealing with the realities and mysteries of life, death, illness, health, fortunes and misfortunes… It’s a lot.
Richard: That's actually the thing I'm most thankful for in life – apart from my wife and my family: that kind of educational experience, and the developmental changes I've gone through as a human being as a result of years and years and years of just constantly interacting with other people. Sometimes GPs are known for being a little bit fond of disappearing into their own navels, banging on about the doctor–patient relationship, and how special everything is. I don't think it's anything to do with doctoring so much as we're just so lucky that we get this gold pass into people's lives.
It's hard to imagine many other professions – maybe journalists in some sense – or just any other avenues of life where you get so many genuine interactions with people. It's a very privileged kind of space to be in. People share all these deep and usually hidden aspects of their lives, and you get this profound appreciation for how fallible people are. The human experience is so deeply fallible. Suffering is unavoidably embedded in the act of living a good life, and pain is probably the thing that teaches you the most. And so, you know, this simplistic idea of doctors ‘fixing’ patients is not the point. It’s often the point, but not always.
Whichever way you look at it there’s this continuum. You go back 200 years, you find doctors trying to solve problems with their patients. Go back a hundred years, doctors were trying to solve problems with their patients. That's what we're doing now. And I'm pretty sure that's what they're going to be doing in 50 and 100 years’ time, whatever happens with artificial intelligence.
An audio aside...
Not long after interviewing Dr Richard, I tuned in to my favourite podcast hosts Alastair Campbell and Rory Stewart (The Rest Is Politics) for their latest interview on Leading. It happened to feature Google AI aficionado James Manyika in conversation about whether AI is ‘the answer to human suffering’. Having just talked to Richard, it was interesting to note the kind of blanket claims made, like ‘when doctors are assisted by these tools [like Google’s Gemini] they do better’. This seems quite characteristic of the lack of nuance in discussions around AI in healthcare, and the primacy that's automatically given to technology over uniquely human capabilities whatever the case, discipline, or clinical setting.
That said, it’s a fascinating discussion with an interesting snippet about quantum computing (if AI isn’t already enough to get your head around…). But spoiler alert, the answer to whether AI will save us from or in fact create more human suffering does remain elusive. Alastair Campbell’s suggestion to read the Digitalist Papers for different perspectives on future possibilities is a good one.
Interested in more AI & Us?
Catch up on earlier conversations with a philanthropist, a physio and a photographer, and subscribe here on LinkedIn or on my website for upcoming profiles – including interviews with an amazing Australian journalist and a former state premier, among others...
#AIinHealthcare #GeneralPractice #HumanCentredAI #DoctorPatientRelationship #HealthcareInnovation #MedicalEthics #FutureOfWork #EmpathyInMedicine #DigitalTransformation #ArtificialIntelligence #GPs #PrimaryHealthCare #ChatGPTinMedicine