Is AI Harmful or Helpful for Students? The Tale of Two Studies

In a Time of Rising AI Pessimism, Why Do We Increasingly Want a Simple Answer to This Question?

As online enthusiasm for AI’s potential in education gives way to a more measured, and at times skeptical, stance, scholars and researchers are now publishing studies that provide essential insights for school administrators, faculty, and staff grappling with a pivotal question: Does AI benefit or undermine students?


The popular media often sensationalizes these findings with headlines like “The Stupidity of AI” or “AI Use Linked to Memory Loss.” Such black-and-white framing, if accurate, would simplify decision-making, enabling educators to react swiftly and decisively to protect our students' cognitive development. However, a closer examination of the research reveals a far more nuanced picture than these dramatic proclamations suggest.

In this article, I will explore two significant yet divergent studies concerning AI’s impact on student motivation, memory, and academic performance. In a nod to Charles Dickens, I have subtitled this article “The Tale of Two Studies,” reflecting the contrasting narratives these investigations present.


Study 1: ChatGPT Is Harmful for Students

In the more recent of the two studies, entitled “Is it Harmful or Helpful? Examining the Causes and Consequences of Generative AI Usage Among University Students,” researchers Muhammad Abbas, Farooq Ahmed Jam, and Tariq Iqbal Khan explore the effects of ChatGPT usage on undergraduate students in 2023. Contrary to the balanced inquiry suggested by its title, the study primarily probes for potential detrimental impacts of AI interaction and concludes:

“Our findings suggested that excessive use of ChatGPT can have harmful effects on students’ personal and academic outcomes. Specifically, students who frequently used ChatGPT were more likely to engage in procrastination than those who rarely used ChatGPT. Similarly, students who frequently used ChatGPT also reported memory loss.”

Notably, the structure of the entire study—from its experimental design to the phrasing of its survey questions—suggests a focused search for negative outcomes.


The investigation unfolds through three time-lagged surveys designed to chronicle the adverse aspects of students' interactions with generative AI:

Survey 1: Captures baseline data on academic workload, academic time pressure, and students' sensitivity to the rewards and quality of input from ChatGPT.

At each phase, students self-report their experiences using scales, responding to statements such as: “I use ChatGPT for my course assignments” or “Lately, I often forget things I need to do.” The study makes significant inferential leaps from these self-reported data.

For example, in the concluding section, the researchers assert a direct link between perceived academic pressure and the extent of AI usage, and between AI usage and reported memory loss, without adequately accounting for other potential explanatory factors or the possibility of bidirectional causation—whereby increasing memory loss could lead to greater reliance on AI as a coping strategy.
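To make that causal-direction worry concrete, here is a minimal sketch using simulated, entirely hypothetical data. The variable names, sample size, and effect size below are my own illustrative assumptions, not values from the study; the point is simply that a single correlation between self-reported ChatGPT use and self-reported memory lapses cannot tell us which way the influence runs.

```python
# Minimal sketch with simulated, hypothetical data (not the study's data).
# Two opposite data-generating stories produce essentially the same correlation,
# so the correlation alone cannot distinguish "AI use harms memory" from
# "memory troubles drive AI use."
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical number of survey respondents

# Story A: heavier ChatGPT use leads to more reported memory lapses (the study's reading).
use_a = rng.normal(size=n)
memory_a = 0.5 * use_a + rng.normal(size=n)

# Story B: memory lapses lead to heavier ChatGPT use as a coping strategy (reverse causation).
memory_b = rng.normal(size=n)
use_b = 0.5 * memory_b + rng.normal(size=n)

print(round(np.corrcoef(use_a, memory_a)[0, 1], 2))  # roughly 0.45
print(round(np.corrcoef(use_b, memory_b)[0, 1], 2))  # roughly 0.45 as well
```

Disentangling the direction would require leaning on the time-lagged structure itself, for instance through cross-lagged comparisons across the three waves, rather than reading a single association causally.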

One of the most intriguing yet underreported elements of this study pertains to the concept of reward sensitivity, defined within the project as the extent to which a student is concerned about academic rewards such as grades. The researchers initially hypothesized that students with high reward sensitivity would use ChatGPT more frequently than those without such sensitivity. Contrary to expectations, the study found that such students self-reported less frequent use of ChatGPT. I believe this trend reflects a multifaceted issue: the current limited utility of AI tools, a lack of proper instruction for students on how to effectively use these tools, and their limited integration into university writing programs.


Study 2: ChatGPT Is Helpful for Students

Our narrative of contrasting studies deepens as we trace the references behind the first study’s claims about ChatGPT’s impact on memory. The authors of “Is it Harmful or Helpful?” cite a work by Ramazan Yilmaz and Fatma Karaoglan Yilmaz in their section on “memory impact,” suggesting that “continuous use of ChatGPT for academic tasks may develop laziness among students and weaken their cognitive skills.”

This claim intrigued me, especially after reviewing an earlier study by Yilmaz & Yilmaz, titled “The effect of generative artificial intelligence (AI)-based tool use on students’ computational thinking skills, programming self-efficacy, and motivation.” Surprisingly, this study found that within a computer science curriculum, students who engaged with AI tools actually exhibited higher computational thinking skills, enhanced programming self-efficacy, and increased motivation compared to their peers who did not use these tools. Notably, this study followed a similar three-wave time-lagged design, with similar inferential gaps and outcome expectations built into its experimental design.



Additional Evidence: ChatGPT Is Helpful for Professionals

Despite my own title, I will throw one more study into the bargain. The findings of Yilmaz & Yilmaz’s work brought to mind one of my favorite explorations of AI’s potential, “The role of generative design and additive manufacturing capabilities in developing human–AI symbiosis: Evidence from multiple case studies.” I have delved into this analysis in much greater depth on my Substack, Educating AI. For more insights, refer to this specific post. What draws me to this study is its push beyond mere documentation of trends, as it seeks to theorize and elucidate the learning processes inherent in AI-assisted work cycles.


Conducted by Elliot Bendoly et al., this research examines the impact of generative design applications across industries during the pandemic. Through a three-wave time-lagged study, Bendoly documents how firms relying on these applications showed greater resilience, efficiency, and profitability. He posits the work process at the core of this efficiency as a double-loop learning cycle, with AI serving as "a symbiotic learning mechanism" that enhances users' meta-cognitive reflections.

Clearly, the contexts and subjects in Bendoly’s and Yilmaz & Yilmaz’s studies differ significantly from the undergraduates in our first discussed study, leading to distinct interactions among tool, user, and outcomes. In both of these studies, the domains of use are much more focused, and the users have much more training with the tools in question.

This variance underscores a broader caution: we need to be on guard against any sweeping statements about AI's impact on cognitive functions, motivation, or academic performance. If we hear someone say, “AI causes cognitive impairment,” “AI causes dementia,” or “AI causes rapid skills acquisition,” our immediate response needs to be, “Who says?” “In what context?” or “Who is this question serving?”


Why Do We Want Simple Answers?

In conclusion, it's intriguing to consider why there is such a strong desire for definitive answers to questions like "Is AI harmful or helpful to students?" at this particular moment in time, and at this particular point in the school calendar. As a commentator, I find myself wanting to wear multiple hats—empathetic reconciler, snarky cultural critic, cerebral academic, and optimistic pragmatist—to address the diverse perspectives within my audience.

Firstly, there's a palpable concern for the safety of children as they interact with technology that possesses increasingly agentive properties.

Secondly, there's a prevalent distrust of the companies developing these tools, despite their numerous assurances about safety standards and protections.

Thirdly, we recognize the potential benefits these tools can offer to our students, having witnessed their remarkable capabilities in specific situations.

Fourthly, we understand that these tools are not yet fully ready for classroom integration. Even if they were, a significant gap remains in transitioning from traditional to AI-responsive pedagogies, necessitating extensive training.

But more importantly, we teachers are flat-out tired. This exhaustion fuels our strong desire for definitive answers—we yearn for clarity and simplicity amidst the chaos of rapid technological change and educational demands. After a year of grappling with uncertainties and future challenges, we long for solutions that can bring stability and ease to our professional lives.

More reflections soon!

Nick Potkalitsky, Ph.D.



Check out some of my favorite Substacks:

Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!

Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection among computer science, neuroscience, and philosophy.

Alejandro Piad Morffis’s Mostly Harmless Ideas: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications.

Amrita Roy’s The Pragmatic Optimist: My favorite Substack focused on economics and market trends.

Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee.

Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.

Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts.

Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.

Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques.

Riccardo Vocca’s The Intelligent Friend: An intriguing examination of the diverse ways AI is transforming our lives and the world around us.

Jason Gulya’s The AI Edventure: An important exploration of cutting-edge innovations in AI-responsive curriculum and pedagogy.

Joel Kowalewski

AI/Machine Learning Scientist, Writer, Instructor, PhD Computational Neuroscience

2 months ago

Thanks for the brief literature review. I think the major roadblock is the education system itself. When faced with disruptive technologies, institutions, in general, can either reject these influences--in an effort to preserve the status quo--or embrace the technologies and evolve. Currently, the decision among major universities has been to stigmatize the technology by imposing rules, policies, and penalties. This simply motivates students to view the technology not as an effective learning tool but as an act of rebellion against what they perceive as an outdated and inflexible institution. The high-achieving student, for instance, who was less likely to use the technology in the one study, likely fears reprisals. Amid feelings of imposter syndrome, they are forced to do double the work and experience double the stress. An immovable yet broken institution serves no one.
