What ChatGPT means for education

Between Unthinkable and EVR, we’ve been asked a lot recently about what ChatGPT means for education - from assessment to ethics, to the essence of what education is even for. Here are some of the things we’ve been thinking about.

What does ChatGPT represent?

ChatGPT is an example of generative AI, and one which went viral at the beginning of 2023. This rapid adoption is fascinating in itself - the combination of a business model (free, direct-to-consumer) which makes it available at scale, and a user experience (UX) which is accessible to a layperson (much like the first graphical user interfaces (GUIs) for early computers). Initially available for free on the web, ChatGPT now also offers premium subscription plans, as well as an API - an interface which allows it to communicate directly with other software applications. GPT-4, a new release so far available only to paying users or via the API, additionally recognises image inputs; this improved version is already being integrated into other products, such as Microsoft’s Bing, Duolingo, and a Khan Academy automated tutor.
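The API is what lets products like Bing, Duolingo, and Khan Academy’s tutor embed ChatGPT behind their own interfaces. As a minimal illustrative sketch (the model name, system message, and prompt below are our own placeholders, not anything from a specific product), an application assembles a JSON request in the shape OpenAI’s chat completions endpoint expects, then POSTs it with an API key:

```python
# Sketch of the request payload an application sends to a chat-style
# generative AI API (shape follows OpenAI's chat completions endpoint).
import json

def build_chat_request(prompt: str, model: str = "gpt-3.5-turbo") -> dict:
    """Assemble the JSON payload for a chat completion request."""
    return {
        "model": model,
        "messages": [
            # A system message sets the assistant's overall behaviour...
            {"role": "system", "content": "You are a helpful study tutor."},
            # ...and the user message carries the actual prompt.
            {"role": "user", "content": prompt},
        ],
    }

payload = build_chat_request("Explain photosynthesis in one sentence.")
print(json.dumps(payload, indent=2))
```

Sending this payload (an HTTP POST with an `Authorization: Bearer <API key>` header) returns JSON containing the generated message, which the host application can display or process however it likes - this is how the same underlying model surfaces inside so many different products.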


But to fully appreciate the implications of ChatGPT, it makes sense to think about this new tool in the context of other generative AIs, which are at different stages of readiness and adoption.

A few examples:

  • Dall-E 2, also from OpenAI, creates original, realistic images and art from natural language text descriptions, combining concepts, attributes, and styles. Available via web and API.
  • Midjourney is a proprietary artificial intelligence program from an independent research lab that creates images from textual descriptions, available via a freemium pricing model.
  • Stable Diffusion is another text-to-image model, produced by a consortium of partners; it is available open source, enabling other organisations to train an image model on their own datasets.
  • Make-A-Video is Meta’s text/image-to-video generative AI, which produces video from a text or image prompt.
  • Phenaki and Imagen Video are Google projects that generate video from text.
  • Bard is Google’s AI-powered chatbot, currently available only via a waitlist.
  • Synthesia uses generative AI to create avatars and talking heads, then uses these to generate presenter videos from text scripts.

Not all of these tools are as easily available as ChatGPT yet, but realistically it’s just a matter of time. Many generative AI tools will come to be integrated into a range of products that are not only part of education, but will also be part of many white-collar jobs in the not-too-distant future.

So what does all this mean for wider society?

Education exists in a wider context, so we think it’s critical to first think about the wider societal changes which are likely to come about in association with widespread adoption of generative AI. Some examples include:

  • Far more sophisticated chatbots for customer service, which could be more effective and give a better user experience than is currently possible.
  • Drafts of written content, whether a report, a work plan, an article, or a magazine - ChatGPT can certainly help at least with a first draft, which can increase initial understanding (or creativity) and be built upon and improved.
  • Generative AI will also become part of search engines (for example, Microsoft’s new Bing), adding to the sophistication of the responses that we receive in response to queries - although this may come at the cost of understanding where responses are sourced from and whether they are accurate.
  • Creation of visual media, e.g. designs, images, video, in response to prompts, leading to implications for intellectual property ownership and the creative arts.

What should educators and education institutions be aware of?

Right now: debates around assessment, plagiarism and academic misconduct

There has been considerable attention from educators on how to prevent the use of generative AI in education, seeing these technologies primarily as tools for cheating. Several universities - one third of the Russell Group - have already banned the use of AI software in writing assessments, backing this up with procedural changes such as returning to in-person exams or upgrading detection software (e.g. Turnitin). Note that detection policies and tools, in particular, may struggle to keep up, as AIs develop fastest when acting in competition - pitting ChatGPT against Turnitin is a surefire way to ensure its users quickly learn how to evade such policing.


But not all institutions or individuals share this restrictive view, and some are embracing the technology and acknowledging that students must understand how to use these AI tools, because they will likely be part of their future lives.

These institutions have come up with policies and guidance to enable the usage of such tools. This more measured stance involves considering how misuse of AI in assessment can be mitigated - for example, by promoting an appreciation of academic integrity, preventing misuse through assessment design, and detecting the use of AI through familiarity with students’ work. The International Baccalaureate, for example, have said that ChatGPT should be embraced, adding that ‘As with any quote or material adapted from another source, it must be credited in the body of the text and appropriately referenced in the bibliography’.

Regardless, there are definitely immediate considerations for policy development and managing academic misconduct - and there will be costs associated with resourcing this.

The bigger picture: a game-changing moment

At Unthinkable and EVR, we think that the biggest pitfall for education would be failing to recognise that generative AI - and AI more broadly - is a game-changing technology, and that this has ramifications across education and society.

ChatGPT shows us how an AI technology, which has been around for several years, can come rapidly into the mainstream. In time, it will probably be integrated into all common communications technologies, from word processing to social media and search. Education is about preparing children and students for the world - and, as such, it must equip them for these changes.

But to do this, education institutions and educators must be well prepared themselves.

The backdrop isn’t helpful: in the UK, we are seeing post-pandemic burnout, the cost of living crisis, teaching union strikes, and the impact of Brexit on university research and hiring. It would be easy for ChatGPT to simply become an additional burden placed on already maxed-out staff - in terms of the policy, legal and administrative response, but also in terms of pedagogic development (adapting learning outcomes, exploring new teaching approaches, redesigning assessment methodologies) - rather than being explored as a potential solution to some of the challenges in education today.

We can see early evidence of this in the reflex reaction to assessment characterised by education providers reverting to in-person unseen exams; we know exams are stressful and have poor relevance to real-world situations - yet whilst there had been a trend of replacing exams with more meaningful forms of assessment, defensive reactions to ChatGPT threaten to reverse this.

Without adequate time and energy, the opportunities for capitalising on and growing through the adaptation to generative AI in education will be missed - not to mention adding to the stresses and strains of the workforce, the implications for staff retention and motivation, and the knock-on impact in terms of education quality.

So what next? Rethinking what education is and is for

Most education systems are not doing students justice when it comes to the technologies of the future. The UK national curriculum, for example, is grounded in a knowledge-based pedagogy - but as the fears around plagiarism and assessment show, this is way too easy for AI to replicate, and can all be automated in time. As such, we have to ask ourselves whether the curriculum is fit for purpose - why teach children something that an AI may soon be able to do faster, cheaper and more accurately?

In this context, it’s vitally important to remember that artificial and human intelligence are different. Generative AIs like ChatGPT do not understand what they are generating, nor do they know about the context to which their outputs apply. Humans, on the other hand, benefit from meta-cognition, self-understanding, self-regulation, emotional intelligence, and contextual intelligence. These differences suggest we should refocus our education system on increasing the sophistication of “human epistemic cognition” - in other words, understanding what knowledge is, where it comes from, and what good evidence looks like.

We also need more support for developing critical digital literacies so that students can make well-informed choices about whether and how to use new technologies in light of their broader social and ethical implications. Academics and researchers have highlighted how the technologies underpinning ChatGPT can exacerbate bias and discrimination, deceive people, and impact the environment. They could also be used to cause significant harm - for example, by generating malware, phishing emails, disinformation, harassing social media content, hate speech, and racist code. Students need to learn how to leverage AI to increase their human capacities - their knowledge, intelligence, and ability to impact the world - but they also need to learn to consider the ethical implications.

This means critically embracing AI in teaching and learning - rather than banning it. If we want to change education systems to embrace AI, we need to start by using it, and understanding it.

Experimentation, informed choice and ethics

AI in policy?

There are already some great examples of permissive uses of AI within education, like this one from Monash University in Australia:

“Monash University supports the responsible and ethical use of generative AI. To equip our graduates with the skills that they need to develop with emerging technologies, we have a duty to explore and educate students on the benefits in the judicious use of technologies such as ChatGPT while also ensuring they understand the risks and ethical considerations of such tools.

Implications for assessment will be addressed by clear definition of expectations as part of the assessment conditions within units. Amendments will be made to the Assessment Regime Procedure and Academic Integrity Policy and Student Academic Integrity Procedure to include:

  1. Specify the conditions for the use of generative AI at the start of the teaching period.
  2. Place the conditions in the assignment description on Moodle (template text is available below).
  3. Explain to students how they should acknowledge the use of generative AI in their assessments.
  4. Point students to further guidance on using AI.
  5. Manage suspected breaches of academic integrity, including inappropriate use of AI, using the Academic Integrity Portal”

This shows how some institutions are creating the conditions for students and teachers to experiment with and use AI, including supporting them through this transition by exploring its appropriate role and implications of use.

Experimenting with AI as a tool for teaching and learning


As AI tools develop, some educators are embracing ways in which they can be used and integrated into their practice. This includes developing an understanding not just of how AI can be used to achieve the same learning outcomes, but also of its broader social and ethical implications. This proactive engagement is vital to getting ahead and deriving the potential value which generative AIs have to offer for education and society.

A prompt battle, for example, is one format which could be used with students: a live event where people compete against each other using text-to-image software. As well as demonstrating and teaching students how to become a ‘prompt engineer’ - maybe a typical ‘job of the future’ given the advent of generative AI - students can be set challenges which enable them to explore the ethical ramifications of the technology. Example challenges shared by creators Florian Schmidt and Sebastian Schmieg at a recent Somerset House event bring this to life:

  • “Recreate this image as closely as possible”
  • "Steal the style of your favourite artist and make a portrait of my opponent"
  • "Place yourself into a famous Hollywood movie of your choice"
  • "Commit as many copyright infringements as possible in one image"

Through these sorts of challenges, and wraparound discussion and exploration with teachers, students can start to experience for themselves the relationships between generative AI, the creative arts, and complex issues like plagiarism and intellectual property rights.

More simply, educators might accept and embed the use of AI technologies in the way their students approach work. For example, educators could ask students to use ChatGPT to generate an essay response… then submit both the initial AI version, and a human-edited version. This approach enables students to familiarise themselves with the functionality of ChatGPT - but also to develop an understanding of its limitations, and to complement it with their own human intelligence.

Informed and ethical choices

ChatGPT’s ability to produce human-style text responses is a function of the way the underpinning language model has been trained. The current iteration, GPT-3.5, represents a huge leap forwards (its ability to produce human-style text is at the root of its widespread adoption, as well as of the issues of plagiarism described above), but this high level of performance owes a debt to the way it was trained, using huge quantities of text from across the internet. And as OpenAI have acknowledged: “Internet-trained models have internet-scale biases”.

This has led to ChatGPT reproducing not just incorrect information, but also disinformation, racial bias and offensive language which is (unfortunately) characteristic of its training set. Similarly, the first release of an AI-enabled version of Microsoft’s Bing, which uses ChatGPT, has been reported as giving insulting or manipulative answers. To mitigate this, ChatGPT has been trained using the preferences of human testers as well as text datasets, which has improved the social acceptability of its outputs somewhat; this is an interesting example of how artificial and human intelligence can be paired up to create a better overall outcome - yet also one which creates an ethical dilemma about exposing human testers to potentially traumatising content in the name of improving AI outputs.

It is therefore important to consider the genesis of AI tools being integrated into our education systems, and to make intentional, informed choices about which to use. Owing to the expense of developing AI systems, they’re likely to be developed primarily as private projects by closed corporate teams - which makes it hard to explore the ways in which they have been developed, and understand their potential limitations. But there are examples of collaborative projects to launch large language models, and one such is BLOOM.

BLOOM has been designed by a volunteer research project called BigScience, coordinated by AI start-up Hugging Face, using funding from the French government. It is designed to be as transparent as possible, with researchers sharing details about the data it was trained on, the challenges in its development, and the way they evaluated its performance. (OpenAI and Google, for example, have not shared their code, or made their models available to the public, leaving researchers with little understanding of how these models have been trained.) The idea behind BLOOM is to develop an open-access large language model, changing the culture of AI development and helping to democratise access to AI technology around the world.

As well as developing the technology, BigScience have taken ethical steps such as:

  • Putting ethical guidelines in place at the outset
  • Recruiting volunteers from diverse backgrounds and locations
  • Developing specific data governance structures to clarify data use and ownership
  • Sourcing data sets from around the world which aren’t readily available online
  • Launching a new Responsible AI Licence to act as a deterrent against using BLOOM in high-risk sectors, or to harm, deceive, exploit or impersonate
  • Including 46 different human languages (and 13 programming languages), enabling access for researchers in poorer countries where English is less dominant.

ChatGPT represents a game-changing moment in the history of AI, and a game-changing moment for society and education.

We can use this as a catalyst for rethinking the purpose of education, focusing on nurturing the elements of human intelligence which cannot be simply replicated by an AI. We can also use it to improve the process of changing education, prioritising experimentation and teacher agency in the way we go about embedding these new technologies. Finally, we can use this moment to think deeply about the ethics of the technologies we use in our day-to-day lives, as well as in education, and explore how we might make more informed choices which work to address inequalities rather than perpetuating them.



Photographs of article contributors: Kathryn Skelton, Justin Spooner, Prof Rose Luckin and Dr Charlotte Webb
Simon Nelson

CEO of QA Higher Education

1 year

Superb analysis folks - from a great team! Great to see you collaborating

Alf Lizzio

Consulting for learning and innovation

1 year

Well-balanced analysis of the range of current institutional ‘coping responses’, the barriers to ‘exhausted educators’ engaging with the step-change….and ‘safe experiments’ to test possibilities. Nice!
