AI, Academia, and Evaluation
"I guess it makes sense for a robot to read an e-book [401]" by brianjmatis is licensed under CC BY-NC-SA 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-sa/2.0/?ref=openverse.

The launch of ChatGPT has raised some interesting questions. The rapid improvement of AI capabilities over the past few years is surely cause for both excitement and concern. Social media has been buzzing, and the discussion most interesting to me is the impact that ChatGPT and tools like it will have on the traditional education system.

A tool like ChatGPT makes it easy for a student to produce a body of work that convincingly checks the completion boxes for a variety of assignments and assessments. As some of us remember, there was a time, not long ago, when writing a research paper required flipping through the dusty pages of an actual book and building stacks of index cards. Today I can generate an informative, convincing research paper on almost any topic within seconds.

Articles, posts, and comments on social media are debating the good and evil of ChatGPT. I've noticed the discussion is generally positive when the tool is used to solve a problem or to create marketing content or educational materials. The conversation becomes more divided over how, or even whether, students and learners should be using ChatGPT. While forming my own opinion on the topic, I came back to a familiar pair of questions: what is the expected outcome, and how do we evaluate whether that outcome was achieved?

I challenge everyone in academia and workplace learning to stay grounded in an outcomes-based approach to designing learning and performance improvement programs. The expected outcome or result will vary greatly depending on your industry or role in learning and education. Nonetheless, the outcome must be the focal point and the reason for all the material, assignments, and assessments that make up the program.

For instance, let’s take a learning program in an academic setting where a research paper is required to pass the course. According to an article from Purdue University, “The goal of a research paper is not to inform the reader what others have to say about a topic, but to draw on what others have to say about a topic and engage the sources in order to thoughtfully offer a unique perspective on the issue at hand.”

Let's say, for the sake of this example, that we agree on that statement as the goal (outcome) of a research paper. The real challenge we have as educators and workplace learning professionals becomes how we evaluate for the successful achievement of that outcome. It is now possible, using ChatGPT, to generate a research paper of any length, citing various reputable sources. If the completion criterion for this activity is simply to write a research paper, then the box will be checked. If we haven't designed a way to evaluate whether the student has "thoughtfully offered a unique perspective on the issue at hand," then we've missed an opportunity to ensure our expected outcome has been achieved.

The business world has been obsessed with automation for over a century. It shouldn't be surprising that a bright student with limited perspective would be tempted to automate a task or assignment, especially if they don't see value in doing it manually. A student's automation of their homework only succeeds, though, if it still gets them through the course. If the criteria for passing the course don't properly evaluate the intended outcome, then the program has failed the student, not the other way around. This is the worse failure: student failures can be observed and carry consequences, but failed programs are not always so obvious and can do more damage to society.

Educators and workplace learning professionals will need to be more creative than ever to deliver on the promise of each program. We can embrace new technologies with a healthy skepticism and accept that they are part of the landscape, but we need to adjust our expectations and our evaluation methods. The concerns about ChatGPT are certainly well founded, but this is a great opportunity for us to align each assignment, task, course, and program with meaningful outcomes and to implement evaluation methods that cannot be cheated. Equally important, we can clearly communicate the value of achieving those outcomes so students and learners gain more perspective on why some assignments shouldn't be automated.

I would love to talk specifics!

If you have a learning program or a solution in place that is in danger of being devalued by AI, let's talk! I would love to hear about it and talk through different strategies for keeping it effective. You can send me a message on LinkedIn or email me at [email protected].

Good luck out there!

Lauren Schneider

Chief Evangelist - Learner Engagement

2y

I admit that! The rising skepticism over the use of ChatGPT is understandable, but when you step back and consider how much learning methods and outcomes have evolved, it's surprising that evaluation methods have remained more or less static. The launch of AI models like this could also have a significant impact in revolutionizing older processes, especially in academia.

Brandon Strickland

Talent Acquisition Professional

2y

ChatGPT...I just heard about it recently. I am still trying to understand it. Admittedly I have some catching up to do on my knowledge.
