Learning and/with AI

Thought of the day #3

(This is in response to a recent article on RTÉ Brainstorm: https://www.rte.ie/brainstorm/2025/0217/1497164-university-exams-students-chatgpt-deepseek-generative-ai/)

The rapid advancement of generative AI tools such as ChatGPT and DeepSeek undeniably raises important questions about university education and assessment. However, the proposed solution of reverting to traditional, invigilated, pen-and-paper exams and placing the responsibility on students to use AI "judiciously and responsibly" fails to address the deeper, structural challenges at play. Instead of trying to preserve outdated modes of assessment or expecting students to single-handedly navigate the complexities of AI integration, we should be rethinking the broader function of university education in the knowledge economy.

The argument that supervised, in-person exams are the only way to ensure academic integrity is shortsighted. While invigilated exams may reduce AI-related academic misconduct, they do not necessarily assess the skills and knowledge that graduates will need in a world where AI is omnipresent. Traditional exams often emphasise memorisation and rapid problem-solving under pressure—skills that are increasingly being automated. The focus should not be on policing students’ use of AI, but rather on designing assessments that foster critical thinking, creativity, and problem-solving in AI-enhanced environments.

If AI can already pass undergraduate exams with high scores, then the real issue is not students using AI, but rather whether the knowledge and skills we are testing remain relevant in an AI-driven world. Instead of doubling down on restrictive assessment methods, universities should explore open-ended, applied, and collaborative forms of evaluation that reflect the realities of professional work environments.

Rather than viewing AI as a threat to academic integrity, we should be using its capabilities as an impetus to rethink the role of higher education. The ability to recall facts or solve standard problems is no longer a key differentiator in the workforce. Instead, universities should focus on developing students’ ability to:

1. Critically evaluate AI-generated content for accuracy and reliability,

2. Work alongside AI to enhance problem-solving and decision-making,

3. Apply theoretical knowledge to real-world, complex, and interdisciplinary challenges, and

4. Develop skills that AI cannot easily replicate, such as ethical reasoning, communication, and leadership.

This shift requires a departure from rigid, outdated assessments toward more dynamic, authentic learning experiences. Project-based learning, case studies, group work, and research-driven assignments—where students must apply their knowledge in unpredictable contexts—offer more meaningful ways to evaluate student competency.

Placing the onus on students to “use AI responsibly” is an unfair burden. Universities should be equipping students with the literacy and skills to effectively and ethically engage with AI. This means:

1. Designing curricula that explicitly incorporate AI as a tool, rather than treating it as a threat to be managed,

2. Teaching students how to critically assess AI-generated outputs,

3. Providing clear guidelines on ethical AI use in academic work, and

4. Encouraging transparency in how students integrate AI into their learning processes.

AI is not going away, and attempting to maintain traditional educational structures without adaptation is a losing battle. The challenge for universities is not to determine how best to restrict AI use, but how to harness it to create a more relevant, future-focused educational experience.

(Image credit: rawpixel.com)
