"Well, I'm going to use A.I even if my professor bans it": A response to research that indicates kicking A.I out of our classrooms isn't an option

"Well, I'm going to use A.I even if my professor bans it": A response to research that indicates kicking A.I out of our classrooms isn't an option


Students are outpacing professors

I was recently hired to deliver a keynote address and workshop series on generative A.I for educators and students at a local California community college. During the research and development phase, I discovered a fascinating study by Tyton Partners, in which more than 3,000 students were surveyed about their A.I usage over a year-long period. Below, you can see a visualization of the results:

Note: “Has tried” refers to using Gen A.I once or twice a month, whereas “Regular use” refers to using the tool several times per month, and in some cases several times per week.

As the graph above shows, students polled from Spring 2023 to Spring 2024 are far outpacing the faculty who facilitate their classrooms in regular use of generative A.I. What is more interesting, though, is that when asked whether they would continue to use generative A.I even if it were banned by their college's academic plagiarism policy or by individual classroom policies, 75% of respondents said they would continue to use it.

The natural knee-jerk reaction to a stat like that is, “Well, I’ll catch them red-handed and assign them a big fat F.” I hate to be the bearer of bad news, but in a blog post released by OpenAI (the company responsible for the now-infamous ChatGPT), it was determined that A.I detectors are unreliable at best and dangerous at worst. Here is what some of the most brilliant data scientists in the world discovered when they tried to create their own version of an A.I detector:

Additionally, the Massachusetts Institute of Technology (MIT), the University of Kansas, Inside Higher Ed, and Vanderbilt University have all released statements cautioning educators about the use of A.I detection software. The accuracy rate for Turnitin is about 77%, which leaves a margin of error of more than 15%. In layman's terms, we run the risk of accusing honest students of falsifying assignments, which could lead to dire consequences for both the student and the institution.

What can we do?

Based on the study I previously referenced, educators are not keeping up with this technology, to their own detriment. Few are finding ways to clearly relay realistic AI-usage policies, and fewer still are incorporating assignments that require students to begin using the tool in a manner that is ethical and promotes critical thinking.

For those of you who are in search of a jumping off point, here are some suggestions:

  1. Learn how to use it– Some of you would be astonished to know how many faculty have never asked ChatGPT a question (this has come up in conversations with many faculty I have trained). It’s very difficult to create policies that work for the unique needs of your classroom if you yourself don’t know what the tools have to offer. Though you may not find yourself using it regularly, see what all the fuss is about so you can talk about it with your students intelligently.
  2. Don’t just include a syllabus policy and expect students to read and understand it– have a conversation with them about it. I wrote another blog post that walks you through how to discuss the ethical implications of blindly copying and pasting work straight from large language models.
  3. Research other professors who are experimenting with AI in their own classrooms– This has been SO eye-opening for me. One sample assignment I discovered comes from Associate Professor Angela Seaworth at Texas A&M University, who has students in her fundraising class write their own letters and thank-you notes to donors and then asks ChatGPT to write one for comparison. Students typically prefer their own, which helps point out how artificial intelligence lacks the humanistic connection that is necessary in many of the fields we work in.

"even if it [A.I] was banned via their college academic plagiarism policy or individual classroom policies, 75% of respondents said they would continue to use it"

Remember that if we teach students how to use AI the right way, they are far less likely to abuse it. We are also doing them the service of preparing them for careers that will absolutely require them to understand the foundational principles of prompt engineering (a blog for another day).

How have you incorporated AI into your classroom? Leave a comment below.


Ashley Berry is the CEO and Founder of The Higher Ed. Institute, an educational consulting firm that specializes in collaborating with faculty on best pedagogical and andragogical classroom practices.

If you are interested in learning more about best practices for artificial intelligence, please feel free to contact me at [email protected].


Carman Wimsatt MA

Assistant Professor Los Angeles Pierce College

4 months ago

Great Article! Thank you for sharing.

Citlalli Anahuac

Ethnic Studies Professor, community educator, poet, advocate

7 months ago

This is a great read! I struggle with this. I challenge myself to create more creative prompts that allow students a more hands on approach to showcasing their critical thinking skills and creativity!

Katie Datko

Educator, Creator, Innovator, Multiple Award-Winning Curriculum Designer

7 months ago

Thanks for sharing!
