The Looming Threat of AI Authors: A Challenge to Creativity and Authenticity
Dr. Tassos Anastasiades
Transforming Global Education: Leading with Innovation, Mindfulness, and Cultural Insight
Digital distraction is no longer just a minor annoyance; it's a pervasive challenge threatening effective learning in today's digitally saturated world.
UNESCO's AI in Education guidance recognises this, emphasising that navigating the complexities of AI, including Generative AI (GenAI), requires more than technical understanding. It also demands a strong foundation in ethical considerations (such as bias and copyright), attention to pedagogical implications (such as critical thinking and prompt engineering), capacity building for teachers, and a robust regulatory framework.
It stresses moving beyond fearing AI-driven cheating to proactively teaching students how to use GenAI responsibly, fostering crucial skills like critical thinking, media literacy, and the learning agility necessary to thrive in an AI-driven future.
This aligns with the IB's ATL skills framework, which provides a structure for developing these essential AI competencies and promoting human agency—the ability to adapt, learn, and act ethically and effectively in a rapidly changing environment.
Just as we must combat digital distraction to cultivate focused learners, we must also equip them with the skills to harness the power of AI responsibly, ensuring it empowers rather than diminishes their potential.
Challenges
As an educational leader with extensive experience across 11 countries, I've witnessed firsthand the transformative power of artificial intelligence.
While AI offers thrilling possibilities across various fields, its encroachment into authorship raises serious concerns about the future of creativity, originality, and the essence of human expression.
The prospect of AI not merely as a tool but as a substitute for human authors presents a multifaceted challenge.
Consider the novel 1 the Road by AI programmer Ross Goodwin. Composed during a road trip with an AI hooked up to sensors and cameras, its text was generated from environmental inputs as the journey unfolded.
Readers are left questioning whether they are engaging with a human narrative or a synthesised string of data.
This ambiguity undermines the crucial connection between author and reader—a bond forged through shared human experiences and emotions.
Education
In my work within the educational sector, I have leveraged AI to enhance learning outcomes.
From personalised learning pathways to predictive analytics, AI has proven to be a valuable ally in reforming school systems.
However, I remain cautious about its role in creative domains.
For instance, AI-generated journalism, such as the articles produced by the Associated Press, offers efficiency but lacks the nuanced analysis and investigative depth that human journalists provide.
The danger lies in normalising such content, leading audiences to accept surface-level information devoid of critical insights.
This concern is amplified by the increasing sophistication of AI.
Deep learning models like GPT-4 can analyse an author's entire body of work and produce eerily similar texts.
In 2019, a university student used AI to generate a new chapter of Harry Potter, resulting in a narrative that, while grammatically correct, lacked the imaginative spark of J.K. Rowling's storytelling.
Such instances blur the line between homage and intellectual property theft, raising complex questions about authorship and originality.
If a machine can generate text that imitates human writing, does it qualify as an author? In the legal arena, this question is uncharted territory.
The case of Naruto v. Slater, where a monkey took a selfie and questions arose about animal copyright, pales in comparison to the complexities AI introduces.
With AI-generated content, who holds the rights—the programmer, the user inputting prompts, or the AI itself?
Artistically, AI struggles with the subtleties of human emotion and lived experience.
Take, for instance, the profound impact of Elie Wiesel's Night, a personal account of the Holocaust.
An AI could assemble facts and simulate empathy, but it cannot replicate the visceral authenticity born from Wiesel's harrowing experiences.
Literature serves as a mirror to the human condition—a reflection that requires a soul behind the words.
Moreover, reliance on AI could lead to the homogenisation of literary styles.
Algorithms optimise for patterns and may favour popular tropes to appeal to a broader audience.
This could marginalise unique voices and innovative narratives that challenge the status quo.
Plagiarism
Plagiarism is another lurking issue. Since AI models are trained on vast amounts of existing literature, they may inadvertently reproduce passages or ideas from other works.
In academia, this raises concerns about academic integrity.
A student submitting an AI-generated essay might unknowingly include uncredited material, leading to accusations of misconduct.
This necessitates a deeper understanding of AI ethics, which educational institutions should incorporate into their curricula to prepare future writers and readers for the evolving landscape. This includes discussions about authorship, plagiarism, bias in algorithms, and the broader societal impact of AI.
The International Baccalaureate
In classrooms, the International Baccalaureate (IB) suggests several approaches to enhance teaching and learning that are particularly relevant in this context.
These include inquiry-based learning, which encourages students to ask questions and explore topics deeply, fostering critical thinking skills essential for evaluating AI-generated content.
An interdisciplinary approach, which integrates subjects to help students make connections between different areas of knowledge, is crucial for understanding the complex interplay between technology, creativity, and ethics.
Collaborative learning promotes group work and develops communication skills, enabling students to discuss and debate the implications of AI authorship.
Finally, the use of technology, while potentially contributing to the problem, can also be part of the solution, allowing students to explore and experiment with AI tools while developing a critical awareness of their limitations.
These practices are pivotal in preparing students to navigate the complexities of AI's influence on authorship and creativity.
UNESCO's guidance on AI in education emphasises understanding AI, ethical considerations, pedagogical implications, capacity building, and a regulatory framework. It stresses moving beyond fearing cheating to teaching students how to use Generative AI (GenAI) effectively, ethically, and creatively.
Ethically, data privacy, bias (as highlighted by research showing GenAI exaggerates societal inequalities), misinformation, misuse, intellectual property, and copyright are crucial.
Pedagogically, critical thinking is key to navigating AI's limitations, including biases and inaccuracies. Prompt engineering—crafting effective prompts—is also essential, requiring critical literacy and thinking.
Capacity building necessitates teacher training and access to appropriate GenAI tools. AI use shouldn't hinder intellectual development; human agency, including critical thinking, empathy, creativity, and ethical judgment, remains paramount.
A regulatory framework is needed to ensure data privacy, intellectual property rights, and ethical AI use, potentially including age limits.
The IB's ATL skills framework aligns with this guidance, promoting learning agility—the ability to learn, relearn, and unlearn quickly. This agility, akin to human agency, enables individuals to adapt and act responsibly.
Specific ATL skills, like critical thinking, information and media literacy, and communication, are vital for developing AI competencies.
We need to apply these skills in the context of GenAI, focusing on evaluating AI responses, crafting effective prompts, understanding ethical implications, and utilising AI for collaboration and reflection.
It is important to equip students with the skills to become informed and responsible GenAI users in an AI-driven future.
Looking forward
So how do we navigate this new terrain?
One approach is to establish clear guidelines and transparency.
Publishers could disclose the use of AI in creating content, allowing readers to make informed choices.
Drawing from my extensive work in educational leadership and consultancy, I believe we can also look to industries that have faced similar challenges.
The music industry, for instance, adapted to digital synthesisers and sampling by creating new genres and embracing fusion.
Musicians like Imogen Heap use technology to enhance their art without losing the personal touch.
Similarly, writers can harness AI for brainstorming or overcoming writer's block while ensuring the final work remains a product of human creativity.
We can also delve deeper into the issue of plagiarism in the age of AI, exploring how AI-generated text might blur the lines of plagiarism, making it harder to detect, and considering the development of tools or techniques to address this.
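As a rough illustration of the kind of technique such tools might build on (a minimal sketch, not any specific product's method), verbatim or near-verbatim reuse can be flagged by measuring how many word n-grams a submission shares with a known source. Real detectors are far more sophisticated, handling paraphrase and scale, but the core idea can be shown in a few lines; the function names here are invented for this example:

```python
def ngrams(text, n=5):
    """Return the set of n-word sequences in a text (case-insensitive)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(submission, source, n=5):
    """Fraction of the submission's n-grams that also appear in the source.

    A ratio near 1.0 suggests heavy verbatim reuse; near 0.0 suggests
    little word-for-word copying (paraphrase would evade this check).
    """
    sub = ngrams(submission, n)
    src = ngrams(source, n)
    if not sub:
        return 0.0
    return len(sub & src) / len(sub)
```

Such a check only catches word-for-word copying; the harder problem the article raises, AI output that paraphrases uncredited ideas, is precisely what simple overlap measures miss.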
Ultimately, preserving the human element in literature isn't just about safeguarding jobs—it's about maintaining the depth, diversity, and richness of human stories.
As AI continues to advance, our vigilance and commitment to authenticity will determine the future of literary arts.
Further Reflection
This issue extends beyond literature into how we value and define human creation in an increasingly automated world.
What measures can we take to protect the sanctity of human expression?
How can we balance technological progress with ethical considerations?
Engaging with these questions now is essential to preserving the soul of our cultural and creative endeavours.
I have used AI myself to gather some of the research for this article. My views are truly authentic; I value the use of AI, but I worry about how it is being used and about authenticity.