AI and learning - what are we afraid of?
One of the standout themes at this week's excellent OEB Conference came from the many sessions that explored the intersection of AI and learning. Over the two days, a number of world-class speakers explored the future of learning and the anticipated impact of AI (including Roger Schank, Donald Clark, Nell Watson, Philipp Schmidt, Andrew Keen and Tarek Besold).
In the video above, the ever-so-slightly tongue-in-cheek plenary debate was: "This house believes AI could, should and will replace teachers." It was a thoroughly entertaining and informative sideshow that also served to illustrate (no doubt deliberately) how polarising the AI debate has become.
At the very heart of the discourse on AI is the following dichotomy. On the one hand, we value the utility of AI in delivering improvements to our quality of life, convenience and new experiences. On the other, we fear that AI will, by imitation, displace the traits we value as uniquely human - and that somehow we will lose control. How can we achieve the utility without the threat?
In consumer markets, AI is already delivering positive benefits. Driverless cars are already with us - whatever our fears over a 'computer' being in control of a two-tonne vehicle, who would not take the utility of AI over a poorly trained human driver with flawed judgement and a short attention span?
But does the analogy have any relevance to learning? What does - and will - it mean for the experience of teaching and learning?
Because we live in an age of mass information and seemingly infinite computational power, the fear that AI might somehow replace - by imitation - what we value most dearly as human beings (independent thought, the capacity for choice, opinion) appears, at least on the face of it, to be a real possibility.
Tarek Besold very effectively demonstrated how AI is already encroaching on areas of thinking, reasoning and understanding (and if you ever get the chance to hear Tarek speak on the subject, grab it with both hands). This could have profound implications for the relationship between the learner and the learning professional.
So what is the appropriate response? Should we be afraid?
In learning, the utility of AI has already been demonstrated through virtual tutors, bots and machine learning techniques that will revolutionise everything from virtual reality through to speech technology. Our capacity to generate more engaging experiences - to replace the mundane with the extraordinary, to automate previously manual tasks - presents us with new possibilities for learning.
Our challenge is to take ownership of the ethical questions that this poses. In the debate on Thursday evening, Nell Watson and Andrew Keen both placed an emphasis on the uniqueness of agency as a human condition, when compared to a computer. Whilst AI may enable a revolution in education, it cannot and will not replace ‘educators’.
Perhaps we just need to stop being afraid - to embrace our own agency in order to direct the utility of AI for our own benefit. To step up and begin to set the ethical boundaries and parameters that allow us to innovate and move forwards.
At the very least, let's start to have a more nuanced and open conversation.
Catalyst for Innovation, Learning & Growth · 7 years ago

What we are afraid of is the blind technological determinism that is being exhibited across the tech and media sectors. The suggestion that we, as a human society, have no equity in the decisions about how technology is developed and deployed is a canard debunked many years ago, and yet here it is reborn in the media, trade shows and corporations with eye-watering stock valuations.

"a number of world class speakers explored the future of learning and the anticipated impact of AI" - with the deepest respect, I don't see the "world class" here. I see a lot of techno gadget boys painting another picture of the world so bright with abundance that we're going to need shades. How many times are we going to ride this roundabout?

Tongue in cheek or not, the title of your debate is loaded from the start: "This house believes AI could, should and will replace teachers." It entirely misunderstands the role of the human teacher, framed by a group of technological fetishists who really ought to get out more.

What we have to fear is quite simple, because it's already happening. Computer software and the Internet are not neutral. They contain all the bias of any other media. The "abundance" that people who hail from Singularity University or TED conference after-parties bleat on about after their second bottle of Pinot Noir is a myth, or at least the preserve of the elite. Who owns the AI, who owns the robots that create a life of leisure and the decline of employment?

Artificial Intelligence, a term bandied around most often by people desperately looking for the door marked entry, is poorly understood and often confused with machine learning. It will be mentioned on the brochure of every crappy LMS on sale at every box-shifting trade show for the next few years.

The problem as I see it, and what we have to fear, is AI as a centralised technology owned by a handful of EdTech corporations, designed to reinforce their 20th-century business models of content sales and assessment. It's not about replacing teachers; for AI to be useful it must be the impartial agent of the learner, not the state or a corporation.

At the risk of accusations of thread-jacking, I offer my own pontifications on the subject of what AI means for education: https://medium.com/learning-re-imagined/what-does-ai-mean-for-education-3aeb9dbd7b35#.os5piph40
fixing some bikes · 7 years ago

I think maybe our educators need help to get to another level of understanding; this would surely help the debate. The pace of change just in the last few years has been astounding and won't stop.