Time on the (paid) tools
Mark A. Bassett
Associate Professor | Director, Academic Quality and Standards | Academic Lead (Artificial Intelligence)
Many HE providers already have an AI equity problem, but it’s a staff AI equity problem and it has the potential to significantly impact assessment redesign
The AI lightbulb moment for many educators occurs the first time they successfully complete a student assessment using GenAI. Having witnessed ChatGPT, Claude, CoPilot, etc. output a detailed and comprehensive response that, in an ever-increasing number of cases, would meet the learning outcomes of the assessment, it’s not uncommon for educators to realise that something has to change.
But as Ethan Mollick recently highlighted, a combination of ‘naive prompting’ and Large Language Model (LLM) accessibility issues is preventing some educators from even approaching this realisation.
For more than simple tasks, it’s rare for someone new to ChatGPT, for example, to prompt it in such a way that it outputs what they want on their first attempt. Getting ‘Chatty-G’ to respond in an exhaustive, detailed, and coherent way on complex tasks takes hours and hours of practice, testing the prompts of others, patience, and, increasingly, luck (you’re at the whim of the model’s ‘mood’ the moment you submit your prompt). Now consider that models change over time and are regularly (and surreptitiously) updated, so even the most experienced users must continue to engage with the models to ensure they’re across current idiosyncrasies.
I think a lot of people would be surprised about what the true capabilities of even existing AI systems are, and, as a result, will be less prepared for what future models can do. (Mollick, 2024)
For educators who are inexperienced with prompting LLMs, initial experimentation with ChatGPT may yield largely disappointing results that cast doubt on the tool’s potential to complete an assessment. It’s at this point that educators may walk away from these tools, reluctant to invest any additional time to learn good prompting techniques and further their understanding of how to ‘interface’ with an LLM.
When this is combined with the fact that many educators are understandably using the free version of ChatGPT—which, for complex tasks, is strikingly inferior to the paid version—we end up with staff who just don’t see what the fuss is about and simply don’t buy the claims that these models have harpooned the security of many of their assessments. This combination prevents educators not only from understanding but from actually experiencing the capabilities of GenAI.
The result is educators who don’t know how students are currently using GenAI to complete their assessments, and more importantly, don’t understand how to guide students to use these tools to support their learning and prepare for a new AI-enabled world.
Possessing a robust understanding of GenAI capabilities is critical to rethinking assessment design
There are no experts in prompting LLMs, only those who have spent literally hundreds of hours engaging with these tools. It’s your ‘time on the tools’ that matters the most.
Assistant Director @ Loyola Chicago | AI Enthusiast
8 months ago: This article is spot on! It’s hard to convey the urgency of the situation to educators when many are convinced it cannot help them and/or are afraid to confront the technology. What’s incredible is that most are not concerned because they have no idea what they are missing.
University Librarian, Charles Sturt University
11 months ago: Absolutely, through patient dialogue and introspection, academia can responsibly leverage genAI to help transform research and education.