New intelligences: exploring AI and humanity in education
The Dark Valley of Innovation - as imagined by DALL-E.


Philosophy in the age of AI

It is an interesting quirk of Wikipedia’s structure that if you click the first non-pronunciation link on any given page, and keep clicking in this way, you will, in all likelihood, eventually land on the Philosophy page. Whether you start with Harry Styles, crochet, or nuclear fission, and whether it takes six clicks or thirty-six, you eventually arrive at Philosophy. This seems allegorical: if we dig deep enough, the love and study of wisdom, and the most fundamental questions about existence, reason, knowledge, values, mind and language, make as good a foundation for learning as any.
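This first-link chain can be sketched as a simple iterative process. The miniature link graph below is entirely hypothetical, a stand-in for Wikipedia's real pages rather than live data, but it shows why such chains tend to funnel into a single page:

```python
# A toy illustration of the "first link" phenomenon: repeatedly following
# the first link from any starting page converges on "Philosophy".
# FIRST_LINK is a small, invented link graph, not real Wikipedia data.

FIRST_LINK = {
    "Harry Styles": "Singer",
    "Singer": "Music",
    "Music": "Art",
    "Art": "Culture",
    "Crochet": "Craft",
    "Craft": "Skill",
    "Skill": "Knowledge",
    "Nuclear fission": "Physics",
    "Physics": "Science",
    "Science": "Knowledge",
    "Culture": "Knowledge",
    "Knowledge": "Philosophy",
}

def first_link_chain(start, max_clicks=100):
    """Follow first links from `start`; return the full path taken."""
    path = [start]
    while path[-1] in FIRST_LINK and len(path) <= max_clicks:
        path.append(FIRST_LINK[path[-1]])
    return path

for topic in ("Harry Styles", "Crochet", "Nuclear fission"):
    print(topic, "->", first_link_chain(topic)[-1])
```

Because many pages open by defining their subject in terms of something more general, the chains climb towards ever broader concepts until they meet at the most general one of all.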

The moral panic we witnessed at Wikipedia’s launch in the early 2000s echoes down the decades, and serves to remind us that we have been here before: innovation ebbs and flows with predictable tides. On the latest wave: Artificial intelligence (AI). AI is not a new phenomenon, but its latest iterations, including Large Language Models, are very rapidly changing the world, and education is no exception. This paper aims to take an exploratory approach to understanding the ways in which human and artificial intelligence connect and conflict in the education space, with a view to helping us to engage proactively with the nuances and build intentionality into how we engage with it.

Calculation, Automation, Intelligence, Wisdom…?

AI is not a singular technology, but rather an umbrella term for a collection of technologies that enable machines to perform tasks that historically required human intelligence. IBM’s Deep Blue may now seem like a relic, but even narrow AI such as Siri and Alexa, Netflix and Spotify’s recommendation engines, facial recognition, and spam email filtering has been part of our lives for the past ten years. Adam, the research robot developed by researchers at the Universities of Cambridge and Aberystwyth, became the first machine to independently discover scientific knowledge back in 2009.[1] Generative AI and Large Language Models such as GPT-3 and GPT-4 have exploded into public awareness in the past year.

Since the industrial revolution, there has been a tension between the utility of machines in reducing physical labour and automating repetitive tasks, and the displacement of human work. Machines could free up human bandwidth for the ‘higher’ purposes of creativity, leadership, relationships, community, and learning. Yet as the capacity of machines increases, a destabilising uncertainty emerges: if machines supersede not just our computational processes but our reasoning, will they really redirect our attention towards these higher-order aspects of experience? We should expect AI systems to greatly surpass human reasoning capabilities – possibly very rapidly.[2]

What happens to learning outcomes and progress when we outsource intellectual effort? Will intelligent machines reduce cognitive engagement to such an extent that learning and growth are disincentivised? What is the point of school, if it is only to teach children how to be outperformed by machines?[3]

Enchantment, Excitement and Disappointment: the Hype Cycle

Understanding that generative AI is approaching ‘the peak of inflated expectations’ helps us to anticipate what is coming next: a period of challenge, disillusionment, overwhelm, and even disappointment. This is known as ‘the dark valley’: a time when the new technology will have to demonstrate its value proposition, and prove itself to be safe, reliable, and scalable.

Graphic created by the author, referencing Gartner Hype Cycle

So why can this feel so unsettling? Do we as humans feel that, for the first time, this technology has the potential to be truly out of our control? A lack of transparency about the models, how they work, and the data they are trained on certainly adds to the challenge. Further opacity comes from the speed of development: barely a day goes by without new tools, developments, and capabilities being announced. There is currently a power-hungry landgrab in play, with only two options for companies and individuals looking to benefit from the wave of hype: go fast, or lose out. As a result, governance and regulation are far behind where they need to be, and those of us in schools will inevitably find ourselves picking up the pieces in our pastoral and safeguarding work. Publicly voiced fears from respected figures add to the malaise: “My worst fears are that we the field, the technology, cause significant harm to the world”, said OpenAI CEO, Sam Altman.

Ideas, creativity, consciousness: what does it mean to think like a human?

"AI will not necessarily come up with our best ideas for us. But it will greatly reduce the cost—in time, money, and effort—of generating new ideas by instantaneously revealing untold options."[4]

LLMs and image generators like DALL-E and Midjourney have already had a significant impact on creativity and have demonstrated their capacity to act as a freeing, democratising way of removing obstacles, improving performance, and facilitating collaboration. With creative work in particular, we are confronted with complex questions about intellectual property and ownership of ideas and outputs. Navigating these complexities ethically is a key aspect of ingenuity and creation in the AI era.

"Invention consists in avoiding the constructing of useless contraptions and in constructing the useful combinations which are in infinite minority. To invent is to discern, to choose."[4]

However, behind the articulate language is not understanding, despite how tempting it may be to believe the contrary: “We humans, however, are prone to anthropomorphism—projecting intelligence and understanding on systems that provide even a hint of linguistic competence.”[5] Shallow heuristics may be all that bolster the appearance of high performance, and bringing good judgement to our use of AI is key to maintaining the integrity of communication and content.

Hybrid Intelligences

The AI era will spark the development of new literacies and skills. However, there are three core aspects to human intelligence which will support thriving:


Aspects of intelligence to support future thriving


Curiosity: The ability to ask questions, explore new possibilities, and seek novel solutions is essential for generating original and valuable ideas. Curiosity can also help humans learn from AI rather than offloading cognitive responsibility, improving their own skills and knowledge.

Empathy: The ability to understand and relate to the emotions, needs, and perspectives of others is crucial for creating products and services that can benefit society. Empathy can also help humans collaborate with AI and other humans, as well as avoid ethical issues and biases that may arise from AI.

Critical Thinking: The ability to analyse, evaluate, and synthesise information from various sources and perspectives is important for making informed decisions and solving complex problems. Critical thinking can also help humans assess and discern the reliability, validity, and limitations of AI, as well as identify potential risks and opportunities.

We must remember: AI does not understand right and wrong, or the grey areas in between. It does not understand truth; in reality, it does not know anything. It is a machine that cannot feel, or judge, or intuit. Our expectations of it to be somehow magical, mythical, or something out of a sci-fi film are misguided. Lazy metaphors do not help teachers or pupils understand the complexities and nuance involved in working with and alongside AI, and its lack of physical form makes giving it a visual language difficult. Even so, we must see past disingenuous imagery depicting empty-eyed sci-fi cyborgs, suspiciously cute robot pets, glossy VR headsets with children looking skyward, or chains of Matrix-style binary scrolling in front of people's eyes.


Just... no.

Instead, let us consider examples of AI grounded in reality: intelligent tutoring systems, adaptive learning software, automated essay scoring, and revision chatbots are just a few ways AI is used for learning. As time goes on, AI should be an extension rather than a replacement of our human and educational culture, creativity, learning, and character development. In a space where fearmongering makes for much more interesting headlines, educators should push proactively for the positive use cases alongside robust regulation and appropriate safeguards. David De Cremer and Garry Kasparov, drawing on their experiences with AI in management and chess, illustrate how humans and machines can work together to produce better outcomes than either alone:

“Leadership in the AI era is about creating the conditions in which humans can thrive in a symbiotic relationship with machines.” [6]
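The adaptive learning software mentioned above rests on a simple feedback loop: estimate what a pupil can currently do, then adjust the next task accordingly. A minimal sketch follows, assuming a hypothetical one-to-five difficulty scale and a step-up/step-down rule; real products use far more sophisticated models, so this is an illustration of the principle only:

```python
# A minimal sketch of the feedback loop behind adaptive learning software:
# step difficulty up after a correct answer, down after an incorrect one.
# The 1-5 scale and the one-step rule are hypothetical illustrations.

def next_difficulty(current, correct, minimum=1, maximum=5):
    """Return the next question difficulty, clamped to the allowed range."""
    step = 1 if correct else -1
    return max(minimum, min(maximum, current + step))

# Simulate a pupil who answers well early on, then struggles at level 4.
difficulty = 1
history = []
for correct in (True, True, True, False, False, True):
    difficulty = next_difficulty(difficulty, correct)
    history.append(difficulty)

print(history)  # difficulty rises, falls back, then recovers
```

Even this toy version shows the pedagogical point: the machine handles the mechanical calibration, keeping tasks near the edge of a pupil's ability, while the teacher's judgement remains essential for everything the rule cannot see.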


Conclusion

This article has explored the complex interplay between human and artificial intelligence in the realm of education. As AI capabilities rapidly advance, we must consider how to leverage these tools to augment human skills and knowledge rather than replace, devalue, or undermine them. While AI brings risks like over-reliance and ethical issues, it also presents opportunities to free up human mental bandwidth for higher-order thinking. Critical thinking, empathy, and curiosity will remain core aspects of human intelligence that support thriving in education. Ultimately, the AI era calls on educators to intentionally guide the development of new hybrid human-machine intelligences. With proactive governance and a focus on human strengths like creativity and character, AI can become an empowering extension of our culture and learning rather than a detached or dystopian replacement. Moving forward, striking a purposeful and ethical balance between human and artificial intelligence will be key to realising the full potential of this technology to benefit schools and wider society.


Laura Knight

August 2023



This article will feature in the upcoming edition of Research at Berkhamsted. You can read the previous issue here: https://issuu.com/bsg1541/docs/research_publication_issue_4





[1] University of Cambridge, 2015, ‘Artificially-intelligent Robot Scientist could boost search for new drugs’, University of Cambridge

[2] Hamilton, Wiliam and Hattie, 2023, ‘The Future of AI in Education: 13 things we can do to minimize the damage’

[3] Comment given in person by Lord Jim Knight, House of Commons, July 2023

[4] Iyengar, S., 2023, ‘AI Could Help Free Human Creativity’, Time

[5] Mitchell, M., 2023, ‘How do we know how smart AI systems are?’, Science

[6] De Cremer, D. and Kasparov, G., 2021, ‘AI Should Augment Human Intelligence, Not Replace It’, Harvard Business Review
