Transparency in the Age of AI in Online Education
Shenita Ray, PhD
Leader in Developing Online Learning Infrastructures, Innovating Digital Education Solutions, and Integrating AI into the Process of Designing Online Programs.
A recent conversation with a colleague gave me reason to pause and reflect. We were discussing faculty perspectives on using AI in online program development, specifically whether there is an obligation to inform faculty when AI tools are used to draft or create course content. I had always assumed this was a given—of course, we should disclose when generative AI (GenAI) tools contribute to course assets. But that discussion led me to a broader question: Are we obligated to disclose our use of AI not just in course creation, but across all areas—marketing, recruitment, student support, and instruction?
As AI tools become increasingly prevalent in education, the need for transparency feels more urgent. These tools have the potential to streamline processes, spark creativity, and generate innovative solutions. However, with this potential comes an ethical responsibility to consider how and when we disclose their use. Transparency lies at the heart of this discussion, ensuring trust and accountability in how we leverage AI.
Why Transparency Matters
Transparency fosters trust. Whether between instructional designers and faculty, faculty and students, or institutions and their broader communities, honesty about AI usage can prevent misunderstandings and ethical dilemmas. It also sets a precedent for responsible AI practices in education, aligning with the values of integrity and accountability that we aim to model for students.
But transparency also has its complexities. Not every use of AI may feel significant enough to disclose. Do we risk overwhelming stakeholders with information, or is the commitment to full disclosure a non-negotiable in this new era of education?
Opening the Conversation
The rapid integration of AI tools in online education is transforming how we design, deliver, and experience learning. This shift demands that we confront complex ethical and practical questions, ensuring we develop policies and practices that are both equitable and forward-thinking. Transparency, in this context, goes beyond merely disclosing the use of AI; it’s about cultivating a shared understanding of how these tools function, their benefits, and their limitations. Maintaining trust in an AI-driven educational landscape requires us to critically examine the extent to which we communicate the role of AI in decision-making processes, data usage, and content personalization. How transparent should we be to balance innovation with accountability and equity?
Comments

Owner & CEO, Shiftwork Consulting; Former Program Director, Georgetown Executive Certificate in DEI (2011-2024); NTL Emeritus Member; OD Scholar-Practitioner and Leadership Coach
2 months ago: Thanks so much for sharing these insights on this important development, Shenita. I enjoyed reading the article.
Director of HR Business Partners and Main Campus Human Resources
2 months ago: Shenita, it's a very good read. These are some of the questions I am addressing in my dissertation. Thank you for sharing.
Instructional Design and Technology Specialist at Georgetown University
2 months ago: These are very relevant questions! I also encourage our faculty to include the rationale behind their AI use policies and their use of AI.
Instructional Designer | Lifelong Learner, Scientist, Educator, Organizer, Creator, and Traveler
3 months ago: Thanks for your thoughts, Dr. Ray. We've been frequently discussing this topic within my LD&D team. In my opinion, not everyone is as transparent as they should be.