Frankenstein AI: Can ChatGPT really promote learning?
Image created by Jordan Hopkins.


Like me, you’re probably groaning at another article or post chiming in about ChatGPT and how it will forever change your business! or disrupt X, Y, or Z!

However, please sit with me. This won’t be another hype post about ChatGPT.

Rather, I want to offer a critical take on ChatGPT. Since instructional design is all about the learner, my concerns are twofold:

  • Does ChatGPT support learning?
  • How ethical and humane is ChatGPT?

We’ll tackle the first question in today’s newsletter. But first – here are some key terms you’ll encounter, so brush up on all things Machine Learning and ChatGPT!

Key Terms

  • Artificial Intelligence (AI): When machines simulate human intelligence.
  • Machine Learning (ML): A type of AI “that can ‘learn’ from data patterns without human direction” (“What Is ChatGPT?,” 2023).
  • Generative AI: This type of ML can create, or generate, new images, music, or text based on the input it has received.
  • Natural Language Processing (NLP) Platforms: According to IBM, these platforms can understand human language, from meaning to tone, and then communicate ideas back using natural language patterns.
  • GPT: Generative Pre-Trained Transformer. An NLP model capable of “reading and writing text” based on its input (Pavlik, 2023, p. 2).
  • ChatGPT: The most popular NLP platform right now. It pairs a natural-feeling chat interface with GPT (see above) to generate responses.
  • OpenAI: The company that created ChatGPT and DALL-E, a generative AI model that creates visuals from text.

TL;DR

ChatGPT is an exciting advance in Generative AI that is upending how schools, higher-ed, marketing, and others are approaching content creation. However, ChatGPT has some major flaws. In regard to supporting learning, the question remains: How can it support critical thinking, germane cognitive load, and synthesis? So far, it’s a mixed bag. Read on to learn more.


Does ChatGPT Support Learning?

And the beneficiary is…

So far, most articles and research I’ve read about ChatGPT have focused on how it can help content creators, whether instructors or marketers, pump out their content more efficiently. ChatGPT makes it ridiculously easy to create a slew of content types, from email marketing copy to clear learning objectives for a course. (For an impressive look at how a thoughtful prompt can create a nifty lesson, check out Dr. Philippa Hardman’s recent post on ChatGPT.)

So, in one sense, ChatGPT sorta helps with learning by helping the creators of said courses, learning content, etc.

But what I really want to know is if it can directly support germane cognitive load, reflection, synthesis, and schema-building for the learners themselves, not the creators.

Wrestling with Frankenstein

Many higher-ed and K-12 professionals are dealing with how to handle this new technology. Students are using it to mass-produce essays and answers to homework. Educators are grappling with how to fold ChatGPT into their classroom repertoire. Teachers and professors are smart. They know ChatGPT is not going anywhere, so they’re discovering ways to critically discuss and use it with their students. This has to be the way, right? I believe so. We have to keep the conversation going.

So, I went to ‘the source’ and asked ChatGPT how effective it was at supporting learning. Its response sounded solid at first. However, after a few reads I noticed some glaring issues:

  1. It regurgitated a lot of my prompt without any meaningful elaborations or additions.
  2. It offered fake research.

The second issue surprised me. It had a nice in-text citation and some paraphrasing! However, after searching online, I could not identify the source. So, I checked in with ChatGPT to help me identify its own sources...

I ask ChatGPT to provide a real source, and it offers an APA7-formatted reference with authors, a title, and more.
ChatGPT generates a reference for its response.

Ahh–relief. It was able to give me the full citation.

I was about to move on, but I had to dig deeper (ChatGPT’s street cred with me is rather thin).

So, I searched the interwebs. Nothing.

Okay, time to professional-up, so I scoured research databases. Nothing.

Hmm. It cited a real journal, so I visited their website and searched via their database. Nada.

The research was fake. But synthetic-diamond fake. It looked real, but had a different origin. I noticed how ChatGPT assembled bits and pieces of credible-sounding elements: some of the authors were real people, and some of the words in the title matched common keywords in real research on the topic.


So, I addressed this with ChatGPT:

ChatGPT admits its prior response and source were incorrect and that its model is incomplete. It encourages the user to check facts.
ChatGPT admits to falsification.

In other words, ChatGPT recognized it cannot offer verifiable, factual information. Its synthesis of internet data is a patchwork of words assembled to sound credible.
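Incidentally, the gut check I did by hand can be scripted. Here’s a minimal sketch that searches Crossref’s public REST API (api.crossref.org) for a suspect title; the title in the example is made up for illustration, and real-world use would also want to compare authors and journal names.

```python
import json
import urllib.parse
import urllib.request

CROSSREF_API = "https://api.crossref.org/works"

def build_query_url(title, rows=5):
    """Build a Crossref bibliographic-search URL for a suspect citation."""
    params = urllib.parse.urlencode({"query.bibliographic": title, "rows": rows})
    return f"{CROSSREF_API}?{params}"

def lookup_titles(title):
    """Return the top candidate titles Crossref finds for the query."""
    with urllib.request.urlopen(build_query_url(title), timeout=10) as resp:
        data = json.load(resp)
    return [item.get("title", [""])[0] for item in data["message"]["items"]]

if __name__ == "__main__":
    # Made-up title, standing in for a citation ChatGPT invented.
    suspect = "The Impact of AI Chatbots on Student Learning Outcomes"
    print(build_query_url(suspect))
    # Uncomment to hit the live API:
    # for candidate in lookup_titles(suspect):
    #     print(candidate)
```

If none of the returned candidates resembles the citation, that’s a strong hint the reference is synthetic.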

We’re wrestling with an AI Frankenstein that’s trying to walk and talk as humanly as possible. But what did we expect? It’s a Natural Language Processing platform, not a research database.


Okay, so if ChatGPT does not generate reliable information, which is kinda important for building schema, how can learners work with their existing knowledge to create new knowledge?

Get critical

First, the mere existence of ChatGPT has sparked critical debate and thinking over its use cases. In the right hands and classroom, ChatGPT can be used as a tool to mull over and debate:

  • The role of generative AI in creating content
  • Ethical issues surrounding it
  • Verifying data and research in a post-AI world
  • When machines will take over the world (;

To this end, ChatGPT acts as a starter for learning by generating conversation around a controversial and interesting topic. However, it must be used in the hands of a savvy facilitator with learners who have some domain expertise or prior knowledge in the topic. Otherwise, we risk cognitive overload or unproductive learning if the students are complete novices (Colvin Clark, 2008, p. 89).

The good news is professors and educators have the pedagogical wisdom to build in prior knowledge before tackling higher order thinking with learners.

For example, Jason Gulya, an English professor at Berkeley College, created a series of fun and learner-centered activities to promote critical thinking over his students’ writing. One of the activities challenges students to change the target audience for an essay prompt by having ChatGPT rework its output through a variety of voices and styles. His directions offer a few templates to spark productive conversation with ChatGPT, and guidance on how to get the best response from the generative AI.

This approach is brilliant for a few reasons:

  • He’s engaging students with new technology and teaching them how to critically think through using it.
  • He’s scaffolding the students’ use of this new tool, much like how you’d help students write their first expository essay or a salesperson prep for their first pitch.
  • He asks students to print and, I assume, bring copies to class to further the discussion.

This approach focuses on the learner’s critical thinking and building schema linked to working with generative AI.

Kudos, Dr. Gulya!

Get feedback

Another use case for ChatGPT is to make it act as a pedagogical agent that can offer you specific guidance and feedback. Colvin Clark (2008) stated these agents should “sound conversational” to help promote learning, and ChatGPT excels in this area (p. 181).

But again, we’re dealing with Frankenstein here. So, we must proceed with our critical-thinking foil hats fully engaged. (:

Dr. Gulya offers another activity for students called “Poke my Thesis.” He offers a templated request for ChatGPT to review the student’s thesis, challenging its weaknesses and potential problem areas for a full essay. Again – kudos, professor! – this method puts the learner in the driver’s seat. And since his students have the domain knowledge to sift ChatGPT’s responses, this is an appropriate activity for a fully-automated pedagogical agent…but with a huge caveat: The learners can’t be complete novices to generative AI or the topic at hand.

Get creative

I think this is where ChatGPT can shine! From crafting poems to reworking email subject lines, this tool can support creative thinking.

Ruth Colvin Clark (2008) explained how instructional analogies help us “map critical information from familiar knowledge to knowledge that is new” (p. 187). As you explore a new topic, idea, or theme, you could ask ChatGPT to “Provide 5 unique analogies for [enter your topic here], please.” Though you’ll have to confirm its accuracy, analogous thinking is meant to be less literal and more creative.

Check out one of ChatGPT’s analogies for the art and science of learning:

Learning is like gardening. It requires patience and effort, but as you cultivate and care for it, it blooms into something magnificent.

Not bad, right?

But I’ve noticed its writing starts to become repetitive and bland. Our responses to ChatGPT help train the model, so you have to challenge it to offer new angles and perspectives. You can ask it to revise the response in a new voice or with a new focus.

I wrote a detailed prompt to see what ChatGPT had to say about its own limitations – with a twist.

My request to ChatGPT to share its limitations in the voice of Wednesday Addams.

Here’s its creative – and rather candid – response in full:

Sure, here's an example of how the limitations of ChatGPT might sound if expressed by a Wednesday Addams with a dry and gothic sense of humor:

"ChatGPT, my dear, is a peculiar creature. It lacks the ability to truly grasp the context of a conversation, much like a lost ghost wandering through the foggy moors.

"It can be biased, much like a haunted mirror that only reflects the most unsavory aspects of oneself.

"Common sense? Puh-lease, that's about as likely as a ghost successfully navigating a seance.

"Abstract concepts are as foreign to it as the concept of mortality is to a vampire.

"Emotions? Ha! As if the undead could feel such frivolous things.

"The external world? It's about as aware of current events as a zombie are of a brain shortage.

"It's only as good as the data it's trained on, much like a Frankenstein's monster is only as good as his stitching.

"But, much like every horror story has a twist ending, so does this one. Despite its limitations, ChatGPT is still quite an admirable creature, much like Wednesday Addams herself."

Does ChatGPT really promote learning?

Like a stack of flashcards or any other learning tool, ChatGPT only benefits us when we use it wisely. With ChatGPT, though, it’s harder to discern whether the output is accurate, and therefore truly valuable.

Verdict

ChatGPT is a decent starter for learning. It can offer guidance, feedback, and creative output. However, use it prudently. It’s quick. It’s novel. But it still needs verification and your own research to check facts. If you’re an educator or facilitator, provide the scaffolding to support learners leveraging generative AI. Above all, engage your learners in the ongoing conversation around this new tech, training them to think creatively and critically.

