Robust Interpretability Methods For Large Language Models

On December 1, 2023, the AI Precision Health Institute at the University of Hawai'i Cancer Center hosted the 12th seminar in a series of talks on AI in cancer research and clinical practice. In this session, William Rudman, a rising star in AI from Brown University and the joint Health NLP Lab at Brown & Tuebingen University, presented his research on robust interpretability methods for large language models and their use in diagnostic decision making.

Interpretability Illusions

The black-box nature of deep learning techniques has limited their application in clinical settings. William Rudman's talk addressed this problem and described how to build interpretable models. Traditional interpretability methods, such as gradient-based saliency maps or model probing, are subject to interpretability illusions, in which networks spuriously appear to encode interpretable concepts. A useful analogy comes from medical imaging: medical images are predominantly greyscale because color maps can suggest meaning at color boundaries where none exists. Interpretability methods can mislead in the same way, which makes the problem especially acute for black-box deep learning models.

William Rudman's work focuses on developing more robust techniques for understanding deep learning models by investigating the vector space of model representations. For example, his team found that a single basis dimension in fine-tuned large language models drives model decisions while preserving more than 99% of the full model's classification performance. His ongoing research investigates how interpretability methods developed for large language models can be applied to understand how multimodal clinical models, built to detect child abuse from free-text clinical narratives and patient demographic information, make diagnostic decisions.
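To make the single-basis-dimension finding concrete, here is a minimal sketch on synthetic data (not the models or datasets from the talk): we construct hidden-state vectors whose class signal is concentrated in one coordinate, then compare a classifier that uses only that coordinate against a linear probe over all dimensions. The data, dimension index, and probe are all hypothetical illustrations of the idea.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 16

# Synthetic "hidden states": the class signal lives almost entirely in one
# basis dimension, mimicking the reported behavior of fine-tuned LLM
# representations. (Hypothetical data, not the paper's models.)
y = rng.integers(0, 2, n)                 # binary labels
X = rng.normal(0.0, 1.0, (n, d))          # background noise in all dimensions
X[:, 3] += 4.0 * (2 * y - 1)              # dimension 3 carries the label

# Classifier 1: threshold a single basis dimension.
pred_single = (X[:, 3] > 0).astype(int)
acc_single = (pred_single == y).mean()

# Classifier 2: least-squares linear probe over all d dimensions.
w, *_ = np.linalg.lstsq(X, 2 * y - 1, rcond=None)
pred_full = (X @ w > 0).astype(int)
acc_full = (pred_full == y).mean()

print(f"single-dimension accuracy: {acc_single:.3f}")
print(f"full linear probe accuracy: {acc_full:.3f}")
```

When the signal is concentrated this way, the single-dimension classifier recovers essentially all of the full probe's accuracy, which is the flavor of result the talk described for fine-tuned language models.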

Presentation Highlights & Slides

Speaker Bio

William Rudman is a 4th-year PhD student in the computer science department at Brown University and a member of the joint Health NLP Lab at Brown & Tuebingen University. His primary research direction focuses on understanding the structure of large language model representations and how models make downstream decisions. In addition to his interpretability research, he works on developing NLP methods for detecting child abuse from free-text clinical narratives.

AI Precision Health Institute at the University of Hawai'i Cancer Center. Image: Margaretta Colangelo

AI Precision Health Institute

The AI Precision Health Institute Affinity Group was formed to discuss current trends and applications of AI in cancer research and clinical practice. The group brings together AI researchers in a variety of fields including computer science, engineering, nutrition, epidemiology, and radiology with clinicians and advocates. The goal is to foster collaborative interactions to solve problems in cancer that were thought to be unsolvable a decade ago before the broad use of deep learning and AI in medicine.

Recaps of Past Affinity Group Presentations

New Applications of AI in Cancer Research & Clinical Practice (2023 Recap)

Robust Interpretability Methods For Large Language Models, December 2023

Machine Learning Captures Insights Into Brain Tumor Biology, November 2023

Comparing AI Algorithms To Predict 5 Year Breast Cancer Risk, October 2023

Disrupting the Indigenous DNA Supply Chain, September 2023

AI Based Lab Test Approved To Phenotype, Grade Breast Cancer, July 2023

Trustworthy AI and Clinical Validation In Breast Cancer Imaging, June 2023

AI For Ultrasound For Real-Time Breast Cancer Decision Support, May 2023

Deep Learning To Diagnose Breast Cancer With High Accuracy, April 2023

Precision Oncology: Empowering Radiologists With AI, January 2023

Machine Learning For Personalized Cancer Screening, December 2022

AI Driven Surgical Robots To Diagnose/Treat Prostate Cancer, November 2022

Subscribe, Comment, Join Group

I'm interested in your feedback - please leave your comments.

To subscribe to the AI in Healthcare Milestones newsletter please click here.

To join the AI in Healthcare Milestones Group please click here.

Copyright © 2024 Margaretta Colangelo. All Rights Reserved.

This article was written by Margaretta Colangelo. Margaretta is a leading AI analyst who tracks significant milestones in AI in healthcare. She consults with AI healthcare companies and writes about some of the companies she consults with. Margaretta serves on the advisory board of the AI Precision Health Institute at the University of Hawai'i Cancer Center. @realmargaretta
