Time on the (paid) tools

Many higher education (HE) providers already have an AI equity problem, but it's a staff AI equity problem, and it has the potential to significantly impact assessment redesign.

The AI lightbulb moment for many educators occurs the first time they successfully complete a student assessment using GenAI. Having watched ChatGPT, Claude, Copilot and the like produce a detailed and comprehensive response that, in an ever-increasing number of cases, would meet the learning outcomes of the assessment, it's not uncommon for educators to realise that something has to change.

But as Ethan Mollick recently highlighted, a combination of ‘naive prompting’ and Large Language Model (LLM) accessibility issues is preventing some educators from even approaching this realisation.

For anything beyond simple tasks, it's rare for someone new to ChatGPT, for example, to prompt it in such a way that it outputs what they want on the first attempt. Getting 'Chatty-G' to respond in an exhaustive, detailed, and coherent way on complex tasks takes hours and hours of practice, testing the prompts of others, patience and, increasingly, luck (you're at the whim of the model's 'mood' the moment you submit your prompt). Now consider that models change over time and are regularly (and surreptitiously) updated, so even the most experienced users must continue to engage with them to stay across their current idiosyncrasies.

I think a lot of people would be surprised about what the true capabilities of even existing AI systems are, and, as a result, will be less prepared for what future models can do. (Mollick, 2024)

For educators who are inexperienced with prompting LLMs, initial experimentation with ChatGPT may yield largely disappointing results that cast doubt on the tool's potential to complete an assessment. It's at this point that educators may walk away from these tools, reluctant to invest any additional time in learning good prompting techniques and furthering their understanding of how to 'interface' with an LLM.

When this is combined with the fact that many educators are, understandably, using the free version of ChatGPT (which, for complex tasks, is strikingly inferior to the paid version), we end up with staff who just don't see what the fuss is about and simply don't buy the claims that these models have harpooned the security of many of their assessments. This combination prevents educators not only from understanding but from actually experiencing the capabilities of GenAI.

The result is educators who don't know how students are currently using GenAI to complete their assessments and, more importantly, don't understand how to guide students to use these tools to support their learning and prepare for a new AI-enabled world.

Possessing a robust understanding of GenAI capabilities is critical to rethinking assessment design and to supporting students to use these tools appropriately. Without this foundational understanding and experience, educators are being left out of the conversation. However, as Helen Beetham has noted, the response from institutions shouldn't be so much to accelerate the personal productivity of staff to match the hype cycle by 'getting them up to speed', but rather to 'offer them space to slow down, share, and reflect'.

There are no experts in prompting LLMs; there are only those who have spent literally hundreds of hours engaging with these tools. It's your 'time on the tools' that matters most.

Emily Pacheco

Assistant Director @ Loyola Chicago | AI Enthusiast

8 months ago

This article is spot on! It's hard to convey the urgency of the situation to educators when many are convinced it cannot help them and/or are afraid to confront the technology. What's incredible is that most are not concerned, as they have no idea what they are missing.

Carlo Iacono

University Librarian, Charles Sturt University

11 months ago

Absolutely, through patient dialogue and introspection, academia can responsibly leverage genAI to help transform research and education.

