A Nobel Laureate's vision of AI in health care

One of the godfathers of artificial intelligence (AI), Dr. Geoffrey Hinton, was awarded a 2024 Nobel prize for his seminal research on the use of artificial neural networks in AI. (Dr. John Hopfield shared the award.) His remarks at a press conference celebrating the award are an informed commentary on the current state and future prospects of AI in health care.

Background to the press conference

We have witnessed an explosion of interest in health care applications of generative AI, which are based on large language models built with the deep neural networks that Dr. Hinton's research made possible. But there is no Nobel prize category for computer science or artificial intelligence! Hinton was awarded the Nobel prize in physics because he used statistical physics and concepts related to energy states to create models that mimic the way biological neurons process information. The Turing Award from the Association for Computing Machinery (ACM) has been called the "Nobel prize for computer science", and Hinton had already received that award in 2019, along with two other "Godfathers of AI", Yann LeCun and Yoshua Bengio (of NYU and Université de Montréal, respectively).
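
For readers curious about what an "energy-based" neural network actually looks like, here is a minimal illustrative sketch (my addition, not part of the article or the press conference) of a tiny Hopfield network in Python, assuming NumPy is available. The essential idea is that the network stores patterns in symmetric connection weights and then updates one neuron at a time so that an energy function can only decrease, which is the statistical-physics analogy the Nobel committee recognized.

```python
# Illustrative sketch only: a tiny Hopfield network (assumes NumPy is installed).
import numpy as np

def store_patterns(patterns):
    """Hebbian learning: build a symmetric weight matrix from +/-1 patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / patterns.shape[0]

def energy(W, s):
    """The Hopfield energy of state s; asynchronous updates never increase it."""
    return -0.5 * s @ W @ s

def recall(W, s, steps=100, seed=0):
    """Flip one randomly chosen neuron at a time toward lower energy."""
    s = s.copy()
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

if __name__ == "__main__":
    stored = np.array([[1, -1, 1, -1, 1, -1]])   # one stored pattern
    W = store_patterns(stored)
    noisy = np.array([1, -1, 1, 1, 1, -1])       # the same pattern with one bit flipped
    recovered = recall(W, noisy)
    print("energy before:", energy(W, noisy), "after:", energy(W, recovered))
    print("recovered the stored pattern:", np.array_equal(recovered, stored[0]))
```

Hinton's own Nobel-cited work (Boltzmann machines) adds stochastic, temperature-dependent updates to this kind of energy landscape; the deterministic version above is just the simplest way to see the idea.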

Dr. Hinton is a professor emeritus at the University of Toronto. Until recently he was also a VP Engineering and Fellow at Google, a position he left so that he could speak more freely about his concerns regarding how we develop and apply increasingly powerful AI systems.

Selected remarks from the press conference

Here is an edited transcript with a selection of questions and answers that may be of interest to doctors who wonder (or worry) about current and future uses of AI in health care:

Dr. Hinton, what do you believe your legacy will be when it comes to AI?

I'm hoping AI will lead to tremendous benefits, to tremendous increases in productivity, and to a better life for everybody. I'm convinced that it will do that in health care. My worry is that it may also lead to bad things, and in particular, when we get things more intelligent than ourselves, no one really knows whether we're going to be able to control them.

Both you and Dr. Hopfield have warned of the dangers of unchecked AI and not understanding enough about how it now works. How do we avoid catastrophic scenarios?

We don't know how to avoid them all at present; that's why we urgently need more research. So I'm advocating that our best young researchers, or many of them, should work on AI safety, and governments should force the large companies to provide the computational facilities that they need to do that.

Can you elaborate on your concerns around AI? Do you believe it might become more intelligent than humans? Why, and how quickly, do you believe that could take place?

Okay, so most of the top researchers I know believe that AI will become more intelligent than people. They vary on the time scales: a lot of them believe that it will happen sometime in the next 20 years. Some of them believe it will happen sooner; some of them believe it will take much longer. But quite a few good researchers believe that sometime in the next 20 years AI will become more intelligent than us, and we need to think hard about what happens then.

How do you reconcile receiving this recognition with your outspokenness about the need to slow AI advancement and the risks that technology poses?

I've never recommended slowing the advancement of AI because I don't think that's feasible. AI has so many good effects, in health care but also in pretty much all other industries, that I think there's no chance of us slowing its development… I think we need a serious effort to make sure it's safe, because if we can keep it safe it'll be wonderful.

Do you think students and even professionals over-relying on LLMs is going to have a dumbing down effect, or will we operate on a higher order?

I don't think it will have a significant dumbing down effect. I think it'll be like what happened when they first had pocket calculators and people said, oh, kids won't learn math anymore, they won't be able to do multiplication well. You don't need to be able to do multiplication if you've got a pocket calculator, and I think it'll be the same with LLMs. People maybe won't remember as many facts, because you can just ask an LLM and it will know them, but I think it'll make people smarter, not dumber.

Can you please elaborate on your comment earlier on the call about Sam Altman?

So OpenAI was set up with a big emphasis on safety. Its primary objective was to develop artificial general intelligence and ensure that it was safe. One of my former students, Ilya [Sutskever], was the chief scientist, and over time it turned out that Sam Altman was much less concerned with safety than with profits, and I think that's unfortunate.

Do you have any recommendations for how to prevent serious consequences in the future? By that I mean: how should people be careful with AI and its use, given that you have warned it can be dangerous?

I don't think individual people being careful in how they use it is going to solve the problems. I think the people developing AI need to be careful about how they develop it, and I think research needs to be done in the big companies, which have the resources. I'm not convinced that the way individual people use it is going to make much difference.

When will AI surpass human capabilities? What will happen as a result?

Nobody knows when, but most of the good researchers I know think it will happen. My guess is it will probably happen sometime between 5 and 20 years from now. It might be longer; there's a very small chance it'll be sooner, and we don't know what's going to happen then. So if you look around, there are very few examples of more intelligent things being controlled by less intelligent things, which makes you wonder whether, when AI gets smarter than us, it's going to take over control.

What are the exciting next frontiers for you in AI?

I'm 76 and I'm not going to do much more frontier research. I believe I'm going to spend my time advocating for people to work on safety. I think there are very exciting frontiers in robotics, in getting AI to be skilled at manipulating things. At present we're much better at that than computers or artificial neural nets, but there will be a lot of progress there; it may take a bit longer in that area, though. I also think these large language models are going to get much, much better at reasoning. The latest model from OpenAI and models from Google, like the latest versions of Gemini, are getting better at reasoning all the time, and I think that's going to be very exciting to watch.

Can you share some more specific examples of how you think it can play a positive role?

If you think about an area like health care, a large part of the Ontario budget goes on health care, and AI can make a tremendous difference there. I actually made a prediction in 2016 that by now AI would be reading all the scans that radiologists normally read. That prediction was wrong; I was a bit over-enthusiastic. It may be another five years before that happens, but we're clearly getting there. AI is going to be much better at diagnosis. Already, if you take difficult cases to diagnose, a doctor gets 40% correct, an AI system gets 50% correct, and the combination of the doctor with the AI system gets 60% correct, which is a big improvement. In North America, several hundred thousand people a year die of bad diagnoses. With AI, diagnosis is going to get much better. But the thing that's really going to happen is you'll be able to have a family doctor who's an AI, who has seen 100 million patients and knows huge amounts, and will be much, much better at dealing with whatever ailment you have, because your AI family doctor will have seen many, many similar cases.

Anything that we've kind of missed here?

…One thing we've only touched on briefly is the role of curiosity-driven basic research. The groundwork for artificial neural nets was almost all done by university researchers just following their curiosity. And funding that kind of research is very important. It's not as expensive as other kinds of research, but it lays the foundation for things that later are very expensive and involve a lot of technology.

Why do you think we haven't yet reached the point you predicted, where AI is playing a bigger role in health care? Are there any barriers left to this happening?

One barrier is that the medical profession is very conservative. There are good reasons for that: if people die when you make a mistake, it's a good policy to be conservative. But they're relatively slow to adopt new technology. Another reason is that I was just wrong about the speed at which AI systems would become better than radiologists at reading scans. They're now comparable with radiologists on lots of different kinds of scans, and better on a few. I think in another few years they'll definitely be better than radiologists, and what we'll see is collaborations between radiologists and AI systems, where the AI system reads the scan and the radiologist checks that it didn't make a mistake. And after a while the AI systems will be doing nearly all the work.

Dr. Hinton did not have much to say about AI in health care except for the sobering truism that "one barrier is the medical profession is very conservative." With all due respect, where he lacked insight is in the current value proposition of neural nets for diagnosis, that is, for the everyday diagnostic needs of a clinician (MD/NP/PA/RN) seeing 20-plus patients a day. Before AI, numerous real-visit studies were conducted to document the chance of a patient getting the correct diagnosis on a visit to a doctor. If a doctor conducts a history and physical exam as they were taught in medical school, that is, they listen to the patient and ask pertinent questions, then continue on to do a proper exam, they will have gathered the data required to reach an accurate diagnosis 89% of the time. In the AI world it is all about data: accurate, tagged data that you can plug into a neural net tuned on hundreds of millions of similar data collections. The limiting factor is that the clinician still has to gather the data and somehow enter it into the AI-blessed computer to benefit from the massive neural net. So the clinician still has to do a proper H&P, until the day comes when a Star Trek 'tricorder' is available to do the job.

Kate Merzlova

I couldn’t agree more with the need for ongoing research, especially in AI safety.
