My notes on the Global Education Forum's roundtable on the Ethical Implications of AI (4 of 4): Future outlook
Image generated with MidJourney


This is my last post on Education and AI following the Global Education Forum's roundtable on Ethics and AI, which I had the pleasure of participating in alongside Dr. Gregory C. Unruh, Ahmed Elgammal, and Mark Kabban, Ed.L.D. The first post was about bias and fairness, the second about privacy, and the third about transparency and accountability. I wanted to finish with some reflections on the near future, taking advantage of some of the announcements made by OpenAI and Google over the last couple of days.


I have lived through various iterations of personalized-learning promises. While there have been some obvious advances (even standard learning management systems now offer rule-based or insights-based automation in some cases), the current potential for personalized learning, enhanced accessibility, and automated administrative tasks could finally reshape the educational landscape.
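To make that parenthetical concrete, here is a minimal sketch of the kind of rule-based automation a learning management system might run. All names, fields, and thresholds below are hypothetical, not taken from any specific LMS:

```python
# A minimal sketch of rule-based LMS automation, as mentioned above.
# All names and thresholds are hypothetical, not from any real platform.
from dataclasses import dataclass

@dataclass
class Student:
    name: str
    days_inactive: int
    avg_quiz_score: float  # normalized to 0.0-1.0

def automation_rules(student: Student) -> list[str]:
    """Return the automated actions a rule-based LMS might trigger."""
    actions = []
    if student.days_inactive >= 7:
        actions.append("send re-engagement email")
    if student.avg_quiz_score < 0.5:
        actions.append("recommend remedial module")
    elif student.avg_quiz_score > 0.9:
        actions.append("unlock advanced content")
    return actions

print(automation_rules(Student("Alice", days_inactive=10, avg_quiz_score=0.42)))
# ['send re-engagement email', 'recommend remedial module']
```

Rules like these are transparent and auditable, which is part of why they have survived so long; the newer AI promise is to go well beyond them.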

The near future of AI in education hinges on developing systems that improve educational outcomes in a responsible way. Here, responsibility goes beyond "doing good": it also means teaching new generations properly, with an answer that is not simply "here, have an AI bot to help you" but a deeper reflection on what that means in the mid term.

"Here, have an AI bot to help you" is not the answer to education.

As we know, governments and policymakers play a crucial role in this process, shaping regulations that address ethical concerns while fostering innovation. The EU AI Act is a prime example: it classifies certain AI applications in education and employability as high-risk, thus requiring oversight and transparency. I was not the only one initially surprised that education and employability are considered high-risk under the AI Act... until you start thinking about the broader definition of "high risk". This legislation underscores the need for shared responsibility across the AI value chain, from cloud and GPU providers to educational institutions and end users. Such comprehensive governance helps ensure that AI systems align with human values and maintain accountability.


Looking forward, and as mentioned above, the educational sector might witness the rise of AI systems capable of offering truly (or almost) personalized learning experiences. Imagine AI-powered mentors that adapt to individual learning styles, pace, and interests, providing tailored educational pathways for each student. We saw a demo of that just a few days ago.

This vision extends to intelligent tutoring systems that offer real-time feedback, support, and guidance, acting as both tutor and assistant when human interaction is unavailable.
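As a toy illustration of what "real-time feedback" could mean at the smallest possible scale, here is a sketch of a tutoring loop that grades an answer and escalates hints instead of revealing the solution. The exercise, hints, and logic are all made up; a real system would use a far richer student model (and possibly a large language model):

```python
# A toy sketch of an intelligent tutoring loop: check an answer and
# respond with graded hints rather than the solution. Purely illustrative.
HINTS = [
    "Think about the order of operations.",
    "Evaluate the multiplication before the addition.",
    "Compute 3 * 4 first, then add 2.",
]

def tutor_feedback(answer: int, attempt: int) -> str:
    correct = 2 + 3 * 4  # the exercise: evaluate 2 + 3 * 4
    if answer == correct:
        return "Correct! Nice work."
    hint = HINTS[min(attempt, len(HINTS) - 1)]
    return f"Not quite. Hint: {hint}"

print(tutor_feedback(20, attempt=0))  # common mistake: (2 + 3) * 4
print(tutor_feedback(14, attempt=1))
```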


Moreover, AI's role in creating global classrooms cannot be overstated. These advanced systems could connect students across geographies and cultures, facilitating collaborative learning and cultural exchange on a scale previously unimaginable. However, such innovations must navigate a complex web of ethical considerations. We also saw something a few days ago that, if you think about it, was a really advanced learning machine.

The question comes back, once again, to the fundamentals. AI systems, often trained on existing datasets, can unintentionally perpetuate historical biases unless explicitly designed to address fairness. Developers must strive to create unbiased systems that offer equitable learning opportunities to all students.
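To ground that point, here is a minimal sketch of one common fairness check, demographic parity, applied to a binary educational decision such as "recommend student for the advanced track". The data and group labels are synthetic, and real audits use more metrics than this one:

```python
# A minimal fairness audit: demographic parity on a binary decision
# (e.g. "recommend student for advanced track"). Data is synthetic.
from collections import defaultdict

def demographic_parity(decisions: list[tuple[str, int]]) -> dict[str, float]:
    """Rate of positive decisions per group; large gaps suggest bias."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in decisions:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

decisions = [("group_a", 1), ("group_a", 1), ("group_a", 0),
             ("group_b", 0), ("group_b", 0), ("group_b", 1)]
print(demographic_parity(decisions))
# {'group_a': ~0.67, 'group_b': ~0.33} -> a 2x gap worth investigating
```

Even a gap this crude is enough to prompt the question raised above: was the training data itself skewed?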


Beyond that, maintaining what is known as teacher and student autonomy is crucial. While AI can significantly enhance the educational experience, it should not replace human educators or infringe upon students' control over their own learning journeys. AI should be viewed as a tool to augment, not replace, the human elements of education. It is an assistant or a tutor, not a teacher, and we all must learn how to interact with something like this. If we make this mistake while believing we are helping underprivileged students, imagine a future where privileged students are the only ones with access to a hybrid approach of humans and AIs helping them learn, while underprivileged students only get the "lower versions" of the AIs. Not what we want, really.


Globally, ethical considerations in AI vary significantly, influenced by cultural and regional norms. For instance, discussions around copyright in AI training models have seen different approaches, such as Japan's evolving stance on copyright applicability in AI (which is now changing again, it seems). These variations highlight the importance of cultural context in shaping AI applications and the need for international dialogue and learning.


Lots of people have been saying for years, even decades, that "education must change". Beyond MOOCs and the like, a good teacher and engaged students remain the baseline of it all. Will AI tutors finally become part of the core equation? We will see!


Marta Dominguez

Founder i-Thread Consulting, Digital transformation and tech trends strategy, Business School Professor, PhD researcher in innovation

6 months ago

I listened to a recent debate with Mitch Resnick and another of his colleagues from MIT. I wrote down two important ideas. One: good learning is social, with students collaborating to achieve something together. This is so often forgotten when we push the idea of personalized MOOCs, AI videos, and tutor bots. Two: I must read "Mindstorms" by Seymour Papert. It was written in 1980 and is a classic on how to bring technology to learning in a constructivist mode of thought.
