Muse Newsletter - January 2023 - AI & Health


Artificial intelligence has the potential to revolutionize healthcare by improving patient prognosis, increasing the efficiency of therapy, and reducing the cost of treatment.

As we learn in the articles reviewed below, AI can be used in a variety of applications, including diagnostics, treatment planning, and drug discovery.

AI can be used to analyze medical images and assist with diagnoses, such as identifying cancerous tumors. It can also be used to analyze patient data and predict outcomes, such as the likelihood of a patient developing a certain condition.

In addition, AI can be used to develop personalized treatment plans for patients and identify potential drug interactions.


However, the authors of the articles reviewed raise a number of concerns about the impact of AI on the medical profession. One concern is that AI systems may perpetuate biases present in the data used to train them.

Additionally, there is concern that AI may replace human jobs in the healthcare industry, such as those of radiologists, and that it may lead to a loss of human expertise and judgment.


These concerns will only grow as AI's footprint in healthcare expands in the coming years, with models continuing to improve and more healthcare providers adopting these technologies.


But before we present the articles we have selected for this month, Lee and I (Alexander) would like to wish you a happy new year.


And to kick off the new year, we would like to remind you that our website is available at the following address: Muse: Listen to your muse

Don't hesitate to leave us a comment or to share our newsletter with others.

Enjoy reading


In healthcare, the use of AI- or machine learning (ML)-based decision support tools is becoming increasingly common. However, the datasets used to train these tools can be biased. As a result, the recommendations made to practitioners or non-practitioners using these tools can be flawed, reducing the quality of treatment decisions.
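As an illustration of how such bias can be surfaced before a model is ever trained, here is a minimal sketch of a training-data audit. The file name, group column ("ethnicity") and label column ("sent_police") are hypothetical placeholders, not details taken from the study discussed below.

    # Minimal training-data audit sketch (illustrative; column names are hypothetical).
    import pandas as pd

    def audit_training_data(df: pd.DataFrame, group_col: str, label_col: str) -> pd.DataFrame:
        """Summarize how well each demographic group is represented and how often
        it receives the positive label. Large gaps are a warning sign that a model
        trained on this data may make systematically different recommendations
        for different groups."""
        summary = df.groupby(group_col).agg(
            n_examples=(label_col, "size"),
            positive_rate=(label_col, "mean"),
        )
        summary["share_of_data"] = summary["n_examples"] / len(df)
        return summary

    # Hypothetical usage with a history of past triage decisions:
    # df = pd.read_csv("triage_history.csv")
    # print(audit_training_data(df, group_col="ethnicity", label_col="sent_police"))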


The researchers conducted a study on mitigating the harm caused by discriminatory algorithms, recruiting 438 clinicians and 516 lay people. Participants were given a series of eight summaries of calls to a fictitious crisis hotline about people in mental-health emergencies.


The participants' task was to decide who should respond to the patients, either a medical team or the police. Some of the participants used algorithmic decision support recommendations to choose who should intervene. The AI recommendations came from either a biased or an unbiased language model.


The powerful data-processing capabilities of AI and machine learning (big data, data lakes) promise great advances in the medical field. In this preprint (shared for information only), a team of researchers examines the expectations and perspectives surrounding the implementation of AI in pathology.


To do this, the researchers recruited 24 participants, all experts in pathology, selected according to well-defined criteria. The study was carried out using the Delphi consensus method.


Three points emerged from the study, which was conducted in early 2021. First, AI would improve key performance indicators (KPIs), such as the complexity of reports or the amount of data collected. Second, AI will affect specific tasks and staffing in different areas of pathology. Third, the main applications of AI in pathology will be analysis and detection, image interpretation, and classification.


If the use of AI in medicine can improve care and treatments, and make them more affordable, then so much the better. However, there are projects that, despite their benefits for humanity, will never reach the stage of clinical trials for ethical reasons.


To overcome this obstacle, a team of scientists and developers is trying to implement "built-in ethics" in the design of algorithms.


To achieve this, the research team proposes methods and solutions to overcome the problems encountered when integrating ethics into medical AI projects, while respecting the relevant medical regulations.



The use of algorithms and artificial intelligence in healthcare can improve the efficiency of diagnosing and treating patients, as well as the processing of large amounts of data. This technological progress also brings with it a number of questions and uncertainties.


There are, in fact, several methods for training algorithms, ranging from simple, comprehensible approaches to complex self-learning methods that are opaque to health professionals. In this sector, doctors must be able to justify the results and reports produced by self-learning algorithms.
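As one concrete (and deliberately simple) way to support such justification, the sketch below applies permutation feature importance from scikit-learn to a hypothetical tabular model. It illustrates a generic post-hoc explainability technique, not a method proposed by the authors discussed below.

    # Permutation feature importance as a simple post-hoc explanation (illustrative).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Hypothetical stand-in for clinical tabular data.
    X, y = make_classification(n_samples=500, n_features=6, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

    # How much does shuffling each feature degrade test performance?
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for i, score in enumerate(result.importances_mean):
        print(f"feature_{i}: {score:.3f}")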

?

The analysis by researchers J.M. Durán and K.R. addresses this concern about the opacity of self-learning algorithms. It explains the notions of methodological and epistemological opacity, as well as the ethical concerns raised by the use of these algorithms.


In this contribution to Forbes, Rootstack’s Alejandro Oses discusses the various ways in which artificial intelligence (AI) is being used to improve medical processes and patient outcomes. He highlights how AI can be used in areas such as diagnosis, treatment planning, and drug discovery.


The author notes that AI can help to analyze medical images, predict patient outcomes, and develop personalized treatment plans.


The article also mentions that AI can help to identify and prevent medical errors, which can reduce costs and improve patient outcomes.


In this contribution to Nature, María Agustina Ricci Lara and her co-authors discuss the potential biases in AI systems used in medical imaging and how to address them.


The article highlights the importance of ensuring that AI systems used in medical imaging are fair and unbiased in their results, as biased results can lead to poor patient outcomes.


The authors review several approaches that can be taken to address fairness in AI for medical imaging, such as creating diverse training sets, using explainable AI methods, and incorporating human oversight. They mention that it's important to consider ethical and societal implications when developing AI systems for medical imaging.
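One simple, commonly used fairness check is to compare error rates across patient subgroups. The sketch below, with hypothetical inputs (y_true, y_pred, groups), computes per-group sensitivity and false-positive rates; it is only meant to illustrate the general idea, not the specific methods reviewed by the authors.

    # Per-group error-rate check for a binary imaging classifier (illustrative sketch).
    import numpy as np

    def per_group_rates(y_true, y_pred, groups):
        """Return sensitivity (TPR) and false-positive rate for each subgroup."""
        y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
        rates = {}
        for g in np.unique(groups):
            m = groups == g
            tp = np.sum((y_pred[m] == 1) & (y_true[m] == 1))
            fn = np.sum((y_pred[m] == 0) & (y_true[m] == 1))
            fp = np.sum((y_pred[m] == 1) & (y_true[m] == 0))
            tn = np.sum((y_pred[m] == 0) & (y_true[m] == 0))
            rates[g] = {
                "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
                "false_positive_rate": fp / (fp + tn) if (fp + tn) else float("nan"),
            }
        return rates

    # Hypothetical usage: per_group_rates(labels, model_scores > 0.5, patient_sex)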


This Northwestern University study, originally published in Scientific Reports, explores the use of a deep learning method to predict cognitive function in individuals.


The article explains that the method is based on a convolutional neural network model trained to predict cognitive function from magnetic resonance imaging (MRI) scans.
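For readers unfamiliar with this kind of model, here is a toy 3D convolutional network that maps an MRI volume to a single continuous score. It is a generic PyTorch sketch, not the architecture used in the Northwestern study.

    # Toy 3D CNN regressor for volumetric scans (generic sketch, not the study's model).
    import torch
    import torch.nn as nn

    class TinyMRIRegressor(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.AdaptiveAvgPool3d(1),
            )
            self.head = nn.Linear(16, 1)  # one continuous cognitive score

        def forward(self, x):  # x: (batch, 1, depth, height, width)
            z = self.features(x).flatten(1)
            return self.head(z).squeeze(-1)

    model = TinyMRIRegressor()
    dummy_scan = torch.randn(2, 1, 32, 32, 32)  # stand-in for preprocessed MRI volumes
    print(model(dummy_scan).shape)              # torch.Size([2])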


The study highlights that the method has been shown to be effective in predicting cognitive function in a sample of individuals, and that it has the potential to be used in a clinical setting to help identify individuals at risk of cognitive decline.


  • Website:

AI Muse Grenoble: Listen to your muse






Registration for the #BAI / #ZHAW Winter School opens October 10th. This ten-day, 4-ECTS session focuses on the application of AI for Good in business and society, next February 1-10 in Mysore, India:

  • We will explore the organizational impact of applications of #AI using case studies, workshops, seminars, and expert testimony from the #finance, service, and IT sectors. Course syllabi are available on request.
  • The session is open to upper-class and graduate management and engineering students, as well as working professionals aiming to develop their practical application of #DataScience and #artificialintelligence in #business and society.


  • An exclusive roundtable with participants from the UN’s International Research Center in #artificialintelligence will highlight the session.


  • Company speakers and/or visits are planned with the data science teams from leading international companies in Bangalore’s Electronics City.


#research #researcher #health #publichealth #mentalhealth #deeplearning #dl #machinelearning #ml #artificialintelligence #ai #algorithms #makingdecision #bias #minorities #discrimination #santépublic #ia #medicine #pathology #etiology #physiopathology #nosology #ethicai #ethicalai #aiethics #ethics #genre #religion #nlp #nlu #datalake #bigdata #blackbox


Jean-Luc STANISLAS Grégoire Hinzelin Paulyne Renard David Gruson Julien LEVAVASSEUR François Cazals Chaire de Philosophie à l'Hôpital


Artificial intelligence in health: ethical and legal issues - Observatoire international sur les impacts sociétaux de l'IA et du numérique (OBVIA) https://youtu.be/DDL2_94gRNQ #artificialintelligence #health #éthique #ethics #ethicai #ethicalai #juridique #law #justice

Artificial intelligence in health: definitions and promises - Observatoire international sur les impacts sociétaux de l'IA et du numérique (OBVIA) https://lnkd.in/g9SHt2NR #artificialintelligence #health #intelligenceartificielle #santé
