Artificial Intelligence and Health Communication: #ARM2024 Retrospective
AI at AcademyHealth's 2024 Annual Research Meeting: Takeaways from a Health Communication PhD Student


I just attended AcademyHealth's Annual Research Meeting (ARM). I'm hard at work on a dissertation looking at artificial intelligence, social media, and organizational health literacy. So it was a great chance to learn how my research meshes with broader healthcare and policy discussions.

I'm interested in communities', local public health workers', and patients' use of technology to advance their goals. And there was a lot to learn at ARM 2024. Here is a sampling of some of the sessions I attended:

  • AI Governance by Patients For Patients
  • Teaching Responsible AI: Ethics of AI in Data Science and Health Care Education
  • AI, Health Equity, and Patient Empowerment

There were other sessions that touched on AI in areas like quality control or data equity. Admittedly, they started to blend together in my memory around the seventh AI-focused session. So I wanted to organize my thoughts into three takeaways as I head into my final year of graduate school.


The popular image of AI has changed

When I saw “artificial intelligence” in health settings a few years ago, it referred to machine learning. Usually: topic modeling, machine vision, or other tools to help find patterns in large datasets. (Am I the only one who hasn't heard about “big data” in a while?) Now it tends to refer to generative AI. Usually: ChatGPT.

That's a huge shift in the professional culture of health fields. “AI” was once treated as an abstract, mathy approach to research. Now it's treated as a concrete part of the media landscape surrounding patients and health professionals alike.

My optimistic view: this is an exciting time. There are powerful new information technology tools, with rules and norms that are still in flux. Maybe there's still time to orient them toward health equity. Maybe there's still time to bake in community-driven norms.

[Image: Clipart illustration of a cart pulling a horse, edited so the horse faces the cart instead of away from it.]

My pessimistic view: tech companies already set the terms of engagement. And they put the cart before the horse, looking for problems to match their solutions. Now I feel like I'm doing research to say that not every “solution” has a problem. And please hire more people for health communication.

Communication Problems are Relationship Problems

I have James Carey to thank for the original quote I love to remix.

...problems of communication are linked to problems of community, to problems surrounding the kinds of communities we create...

And I think that's a helpful lens for one area where generative AI has taken off in healthcare settings: repetitive writing. Examples: nursing hand-off notes, instant message replies, and health record summaries.

Looking just at the transfer of information, this sounds really helpful. A machine might very well be better at documentation than a tired person at the end of a shift. Or a busy person rushing to the next patient.

But communication isn't just about information. It's about relationships. And what does it say about our health system that there isn't enough time for communicating with patients and peers? What kind of relationships are those?

Our health systems are biased. We shouldn't expect the data to be any different.

The word “bias” came up a lot over the past few days. I like that we have a short word to describe how algorithms produce unjust outcomes. But "bias" points to the wrong issue.

For example, as part of my research, I've been training models to label public health Tweets according to health literacy guidelines. One model I trained was more likely to label Tweets containing the word "China" as discussions of health risks. That model is problematic, since its outputs would unfairly impact one group of people if we applied it to new Tweets. But I wouldn't call the model "biased," because it seems to faithfully reflect the way public health communication actually played out online in 2020. So the problem isn't the data. Or the model itself. The problem is health communication.
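To make that concrete, here's a toy sketch of how an association like that might surface in a simple bag-of-words classifier. Everything here is a hypothetical stand-in, not my actual pipeline: the file name, the column names, and the choice of logistic regression are all for illustration.

```python
# A toy sketch: train a bag-of-words classifier on labeled tweets,
# then inspect what the model learned about a single token.
# The file name, column names, and model choice are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical dataset: tweet text plus a binary "discusses health risk" label.
df = pd.read_csv("labeled_public_health_tweets.csv")  # columns: text, is_health_risk

vectorizer = TfidfVectorizer(lowercase=True, min_df=5)
X = vectorizer.fit_transform(df["text"])
y = df["is_health_risk"]

model = LogisticRegression(max_iter=1000).fit(X, y)

# A large positive weight means the mere presence of this word pushes
# a tweet toward the "health risk" label, whatever the tweet says.
token = "china"
if token in vectorizer.vocabulary_:
    weight = model.coef_[0][vectorizer.vocabulary_[token]]
    print(f"learned weight for '{token}': {weight:.3f}")
```

The point of a check like this isn't to "debug" the weight away. The weight is an honest summary of the communication patterns in the training data.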

[Image: A graphic with the cover of "Race After Technology" and a quote by its author, Ruha Benjamin: "Remember to imagine and craft the worlds you cannot live without, just as you dismantle the ones you cannot live within."]
I cannot recommend Race After Technology by Ruha Benjamin enough.

That's the same reason I have an issue with the phrase "garbage in, garbage out." It refers to the idea that your model is only as good as the data you train it on. But Ruha Benjamin has given me a more helpful way to think about this problem. All the social data we use to train AI models are the outputs of Jim Crow and a longer history of racism and discrimination embedded in policy. Education. Housing. Health. They are all shaped by an ongoing history of legal discrimination. The "garbage data" is not a bug; it is part of the design.


Conclusion

I'll close with a framing of AI that I hadn't heard before. I had a conversation with a patient advocate. In a context of discrimination and power imbalances, they used AI tools to help write emails to doctors, summarize months of doctors' notes, and prepare talking points for clinic visits. All to try to gain an upper hand while waiting for clinicians to prove themselves trustworthy. (Again, I would say these are relationship issues surfacing as communication issues.)

I love seeing people use technology to empower themselves and flip the script on concepts like trust. I know such uses happen in spite of these tools' design for commercial purposes. At the same time, it gives me hope that I could contribute to tools that match the intentions of people underserved by our health systems.


Resources

Here are a few resources I learned about at ARM 2024:


About the Author

I am a PhD candidate in Population Health Sciences at the Harvard T.H. Chan School of Public Health. My research combines media studies and computational methods to improve organizational health literacy online. Check out my article on online harassment of public health workers as an example. Outside of research, my work focuses on supporting social media communication that advances health equity. Check out the Digital Safety Kit for Public Health as an example.
