Ethical AI in Professional Fields: Guiding Students to Spot Bias
Course Hero
We envision a world where every student graduates, confident and prepared.
Editor's Note: This article is part of an advice column series, authored by Course Hero's Vice President of Academics, Sean Michael Morris .
In this series, Sean addresses the questions and concerns of real faculty members using his 20+ years of experience working in critical digital pedagogy. His work has been featured by National Public Radio, The Chronicle of Higher Education, Inside Higher Ed, Times Higher Ed, The Guardian, Forbes, Fortune, and by numerous podcasts across the education space.
Reader Question
I want to teach my social work students how to responsibly and ethically use AI. One of the ways that I want to do this is to help them identify bias and prejudice that might be possible in AI responses. How could I do this effectively?
Submitted by: Kayla B. , Clinical Assistant Professor and BSW/MSW Practicum Education Coordinator at University of Michigan-Flint
Dear Kayla,
This is an excellent and timely question. As AI finds its way into fields like healthcare and social work, understanding its limitations—particularly around bias and prejudice—is essential. By helping your students critically analyze AI outputs, you’re setting them up to use this technology responsibly while upholding the core values of their profession.
Unpacking AI Bias
AI models learn from vast datasets, which often reflect the biases embedded in society. This means that AI-generated responses might carry forward historical prejudices or structural inequalities. For social work students trained to advocate for marginalized populations, recognizing these biases is critical to sustaining ethical practice.
It’s important to remind your students that AI is neither neutral nor infallible. Like the social structures they study, AI operates within systems built by human developers and shaped by existing data, both of which can carry implicit or explicit biases.?
Worth Noting: In addition to data biases, AI chatbots assume an accommodating stance and can reinforce the thoughts of the user. It’s equally important to help students recognize their own biases, because if not considered critically, AI use could result in confirmation bias.
Teaching Critical AI Evaluation
To teach students how to recognize bias in AI, encourage them to engage directly with AI systems. Here’s a step-by-step approach:
1. Introduce the concept of data provenance.
Guide your students to explore the origins of the data used to train AI models. Ask them to think critically about who collected the data, the context in which it was gathered, and whether it represents diverse populations.
2. Examine AI outputs for harmful stereotypes.
Have students run various scenarios through AI models and analyze the results. For instance, they can prompt an AI tool to generate responses related to different social identities—race, gender, socioeconomic status—and evaluate how the AI handles these topics. Are certain groups disproportionately represented in negative contexts? Does the AI make assumptions based on identity?
3. Compare AI-generated content to social work standards.
A cornerstone of social work is adhering to ethical guidelines that prioritize fairness and justice. Ask your students to assess whether AI-generated content aligns with these values. If an AI suggests a solution that reinforces harmful stereotypes, it’s important for students to recognize it and think critically about how they would respond as professionals.
Educator Tip: Use this free GenAI checklist developed by Jeremy Caplan , Director of Teaching and Learning at Craig Newmark Graduate School of Journalism at CUNY and author of Wonder Tools, to guide students in responsible AI use.
Role-Playing as AI Investigators
You might try using role-playing exercises where students act as “AI investigators.” Assign them the task of identifying biases in AI responses and brainstorming solutions for minimizing those biases. This could involve analyzing real-world case studies where AI has failed, such as in criminal justice, hiring, or healthcare disparities.
By taking on the role of problem solvers, students can both critique AI and explore how to contribute to its ethical use.
Group Discussions on Ethical AI Use
Ethical decision-making is a key component of social work, and this should extend into conversations about AI. Facilitate group discussions where students reflect on the implications of using AI in their future practice.
Some guiding questions could include:
These conversations will help students grapple with the moral complexities that AI introduces to the field.
Preparing Students to Use AI as a Tool for Good
Our role as educators is to guide students in using AI as a tool for good while helping them understand ways to apply it thoughtfully in their future careers. By teaching them to identify and address AI bias, we equip them to use technology in ways that prioritize human impact. These skills help them become better social workers who can use AI to uplift, rather than marginalize, the communities they serve.
Warm regards,
Sean Michael Morris
Vice President of Academics
Course Hero
Want your questions answered?
If you work in academia and have a pressing question for Sean, we encourage you to submit it here for a chance to be featured in an upcoming column.
Please note that due to the high volume of submissions, we may not be able to respond to every question. We can assure you, though, that each submission is read and valued, and your voice helps shape the ongoing conversation about the future of education. We look forward to hearing from you!
Passionate Social Work Educator
3 天前Thank you for this response to my question! Greatly appreciated.