Navigating the AI Detection Debate in Education: Insights from Academic Discussions
In recent times, the use of AI detection software for student submissions has become quite the hot topic in the education sector. After numerous discussions with academics around the globe, I've gathered a wealth of their valuable insights on this matter and have brought their views together in this article.
Recently I reached out to the Jisc AI mailing group for their thoughts and experiences regarding the use of AI detection software for student work (Artificial intelligence community - Jisc). Specifically, I was interested in understanding the general consensus on whether to use AI detection tools, the importance of assessment design, AI literacy and training, and the ethical considerations surrounding these tools.
Here, I share their collective comments and informed considerations that educational organisations are currently discussing and should keep in mind.
The Debate on AI Detection Tools
One of the key themes that emerged from these discussions is the use of AI detection tools. Many academics advised against using them due to their unreliability and potential for false positives. While I believe there's a larger debate to be had around AI bias, many emphasised the importance of good assessment design and AI literacy.
It was very interesting to see how academia's focus is shifting towards creating assessments that integrate AI use ethically and effectively, rather than just relying on the end product.
Another key point is the importance of AI literacy and training for both staff and students. Investing in training can help staff interpret AI detection reports and create better assessment processes. It's clear that there's a strong emphasis on human interpretation and ethical use, with many cautioning against using free online AI checkers without students' knowledge or consent.
Interestingly, while some cautioned against the use of AI detection tools, others see value in them when used appropriately. These tools can enhance detection capabilities and maintain academic integrity when integrated thoughtfully into existing workflows. For instance, Turnitin is widely used and trusted for plagiarism detection and has recently been enhanced to identify AI-generated content. However, some tools can produce false positives, and reliance on detection percentages can lead to inappropriate academic misconduct investigations.
Ethical Concerns and Considerations
The use of AI detection tools in education raises several ethical concerns. Privacy and data security are significant issues, as these tools often collect detailed data on student writing patterns and other personal information. False positives and negatives can lead to inappropriate academic misconduct investigations and unfair treatment of students. Additionally, AI algorithms can perpetuate existing biases, resulting in discriminatory outcomes.
Transparency and ethical use are crucial, especially when these tools are used without students' knowledge or consent. Over-reliance on AI detection tools can also detract from the importance of good assessment design and AI literacy. To address these ethical concerns, organisations should ensure privacy and data security by handling student data responsibly and with proper consent. Combining AI detection with human oversight can mitigate the risk of false positives and negatives. Training AI algorithms on diverse datasets can help prevent biased outcomes. Transparency about the use of AI detection tools and focusing on good assessment design and AI literacy are also essential steps.
AI Detection vs. Plagiarism Detection
AI detection is a bit like plagiarism detection. Tools like Turnitin don't actually detect plagiarism; they find material that's similar to other sources. This approach could prompt a repeat of the same student response we saw before: when students couldn't get away with copying from the internet, they turned to copying human efforts instead.
Embracing GenAI
One great reply suggested substituting the word 'internet' for 'AI' or 'GenAI' in any recent dialogue to see if it still makes sense.
For example, “How do we know if the student used the internet in this assignment?” Thirty years ago, this might have been a legitimate question.
Now, not so much. Can you imagine doing anything without the internet? Could we even define an assessment, other than closed book exams, that would be possible without the internet?
Do we need to embrace GenAI just like we've accepted the internet's ubiquity? If we require (and teach!) the use of AI tools, then a student who chooses not to use them will either produce work that is entirely their own (which is fine if it's their choice) or will self-penalise. Imagine writing an essay with only 80s-style library loans, photocopies, or printed textbooks.
Teaching AI Literacy
So, there is a need to teach AI literacy (the Three Vs: verification, validation, and veracity), and set assessments that focus on the process, supported by AI tools. Jisc has been a tremendous support for education in navigating these challenges. They offer numerous engaging articles and support sessions to help educational organisations adopt AI responsibly.
Jisc's Support and Resources
Jisc's AI maturity toolkit has been really useful for the #FE college I work at, South Staffordshire College, and provides resources and guidance for effective AI adoption in tertiary education. They also run AI literacy training sessions to support educators in enhancing productivity and developing AI literacy. You can explore more about their initiatives on their website.
Acknowledging Contributions
Thank you to all the academics across the globe who have contributed to this hot topic. Your insights and thoughts have been amazing. With all this information coming in, Copilot helped me pull together and group your responses to generate this succinct (and more eloquent than I would be) article from all your emails, which we can now share with a wider community on LinkedIn.
Conclusion
Whilst the debate and responses continue, this sharing of best practices (thanks, Jisc, for making this possible) and the AI-generated summary demonstrate that academics worldwide are seriously discussing the changing face of academic assessment.
The genie is out of the bottle, and while AI detection tools have their benefits, they also come with drawbacks. As we navigate this evolving landscape, it's crucial for educational organisations and awarding bodies to consider these insights and focus on creating robust, ethical, and effective assessment strategies.
Sign up to the Jisc Artificial intelligence community here:
#AI #AIDetection #Education #AIinEducation #AcademicIntegrity #AIethics #AssessmentDesign #AILiteracy #EdTech #AItools #Jisc #GenAI #AcademicAssessment #AITraining #PrivacyInEducation #AIandEthics #FutureOfEducation #DigitalLearning #AIinAcademia #EdTechInnovation #AIandPrivacy
Comments

Customer Experience @ Jisc · 1 month ago
Particularly liked the phrasing of this pragmatic 'question': "One great reply suggested substituting the word 'internet' for 'AI' or 'GenAI' in any recent dialogue to see if it still makes sense."

Innovation and Learning Manager, Queen Mary Academy · 1 month ago
It was a really insightful discussion - thank you for sharing the summary.

Director of IT and Facilities | MCMI CMgr | NPPV3 + SC · 1 month ago
Excellent read, Steve. I'm all at sea with AI and machine learning, but I am not really sure where they will end up in education. For example, I have used spell checkers or Grammarly for years. Were these tools not created to make life easier? Maybe we should adopt these technologies or change the curriculum to prepare learners for employment. Like you, I feel we should teach learners to work smart so they can compete in the industry.