The Elephant in the Room: A Call for More Systems Thinking when Addressing AI in Ed
Anastasia Betts, Ph.D.
Author of "Start Right: The Science of Good Beginnings" (coming 2025) | Executive Director, Learnology Labs | Principal Consultant, Choice-Filled Lives (CLN) | Learning Scientist | EdTech Innovator | Executive Leader
The Blind Men and the Elephant
Most of us have, at some point, heard the parable of the blind men and the elephant. If you haven't, here's a little recounting of it:
In a small village there lived six blind men. One day, they heard that a new wonder—an elephant—had been brought to the village. Having never encountered an elephant before, they decided to go and "see" it for themselves.
The first man approached the elephant and touched its side, concluding that an elephant was like a wall. The second man felt the elephant's tusk and believed an elephant was like a spear. The third man grabbed the elephant's trunk and thought an elephant was like a snake. The fourth man (grabbing a leg) believed it to be more like a tree, the fifth (grabbing the ear) thought it like a fan, and the sixth (holding the tail) declared it to be more like a rope. All these different opinions led to an animated debate between the men.
What the men failed to realize was that they were all experiencing just a part of the whole truth. Each man's understanding was limited by his own perspective, and only by combining their insights could they have formed a complete picture of the majestic creature before them. In this case, they lacked systems thinking - understanding how different parts relate to one another and to the whole.
Seeing the Whole "Elephant"
This parable came to mind recently as I considered the current state of efforts to tackle problems related to AI in education. Like the blind men, various groups are approaching the AI challenge from their own perspectives, focusing on specific aspects such as privacy, bias, data sharing, ethics, personalization, and beyond. Each group believes they have a grasp on the true nature of the problem and the best solution.
I recently encountered an example of this during a Q&A with a government stakeholder—many of us attempted to steer the discussion toward the many interrelated issues of concern around AI in education. The response was essentially, "all we care about right now is solving for privacy concerns." While privacy is a worthy problem to tackle, it's only one piece of the proverbial "elephant." A systems thinking approach would recognize that privacy is interconnected with many other aspects of AI in education, and cannot be effectively addressed in isolation.
There are significant unknowns about the future cost and trajectory of AI development; a recent article discusses this here. These unknowns make it difficult to comprehend the full scope of the challenge, or even how to break it down into its component parts. In other words, how can we see the whole when pieces of the AI "elephant" don't exist or aren't yet perceivable? Just as the blind men failed to perceive the entire elephant, we may be missing the bigger picture by focusing too narrowly on specific issues without considering the broader context, uncertainties, and unknowns.
Everyone is concerned about the ways, both known and unknown, that AI will revolutionize education. But we should be seriously considering how fragmented approaches and a lack of coordination between the different groups tackling AI challenges could be leading to inefficiencies, duplicated efforts, and potentially conflicting solutions. Like the blind men arguing over their individual perceptions, these siloed efforts may hinder progress toward a comprehensive understanding and effective governance of AI. They may also dampen innovation in AI for education. A systems thinking lens would emphasize the need for greater coordination and collaboration to address the AI challenge and opportunity holistically.
The Need for a Unified Problem Definition
To effectively address the challenges posed by AI in education, we may want to take a step back and clearly define the broader problem space. We might ask ourselves: what is the fundamental problem we are trying to solve related to AI in education? What are the "jobs to be done" by the solution(s), and for whom should these "jobs" be done? How are these "jobs" connected? Is our primary goal to protect student privacy? Reduce bias in AI models? Or is it to enhance student learning outcomes, or solve specifically for the learning opportunity gap? What about increasing personalized instruction or engagement? Supporting teachers in their roles? Ensuring equitable access to educational resources? Or something else entirely—or all of the above?
Answering these questions requires input from a wide range of perspectives, including educators, students, parents, policymakers, researchers, and technology experts. Taking a systems thinking approach would mean considering the perspectives of multiple stakeholders, and understanding the interrelationships between those perspectives within the context of the bigger challenge/opportunity of AI in Education.
Building Connections and Sharing Knowledge
I recognize that many organizations and individuals are already working tirelessly to address the challenges of AI in education. From academic institutions to non-profits, from startups to government agencies, to caring, passionate people leading the charge and trying to make a difference—there are numerous efforts underway to develop solutions and shape policy.
However, the sheer number and diversity of these initiatives can make it challenging to stay informed and connected. It's easy to feel overwhelmed by the volume of information and the pace of change in this field. This is where a systems thinking approach can be particularly valuable, helping us to see the connections and interdependencies between these various efforts.
One way to achieve this could be through the creation of an open-source shared repository or platform where different groups can share their research, case studies, best practices, and lessons learned. This would provide a centralized resource for anyone seeking to understand the current state of AI in education and connect with others working on similar challenges. Perhaps this already exists. Or, perhaps efforts are already underway to create something like this. If so, I would love to learn more about the work—I'm sure many of you would like to know more about it too. A systems thinking approach would suggest that such a platform could be a powerful tool for facilitating cohesive, coordinated action.
Another approach could be to convene regular cross-sector dialogues or workshops, bringing together representatives from diverse organizations to share updates, discuss challenges, and identify opportunities for collaboration. I know there are many diverse workshops, conferences, and symposiums happening across the country attempting to accomplish this very thing. There are many virtual convenings as well. The sheer number of these, however, can be quite overwhelming—which leads me back again to the need for some kind of central repository where information can be curated and accessed. From a systems thinking perspective, these convenings could be valuable opportunities to map out the complex system of AI in education and identify leverage points for change, and a central repository of information and resources would support that work in a coordinated manner.
A Call to Action
As we work to navigate the complexities of AI in education, it's clear that no single organization or individual group has all the answers. Like the blind men in the parable, we each bring our own unique perspectives and expertise to the table. But it's only by coming together and combining our insights that we can hope to fully understand and effectively address the challenges before us. This is the essence of systems thinking - recognizing that the whole is greater than the sum of its parts.
Whether you're an educator, researcher, policymaker, or technologist, we all have a perspective to contribute. Let's seek out opportunities to connect with each other and with others working on these issues. Share your knowledge and experiences. Be open to learning from diverse perspectives. And if creating some kind of overarching consortium that works to define the problem space through a systems-thinking lens excites you, please reach out to me. Or, if you know of an effort like that already underway, please let me know that too!
Together, we can work cohesively and systematically towards a future in which AI is a powerful tool for enhancing learning, equity, and opportunity for all. With communication, collaboration, and a commitment to the greater good, I believe we can make meaningful progress.
Designing SaaS: Easy to use, guaranteed | Sr. SaaS Designer | Founder of SaasFactor | Google-certified
5 months ago
Anastasia, thanks for sharing!
Co-Founder of EduSpark | Transformative Trainer & Coach | AI in Education Expert | Professional Development Guru | Metacognition & Neuroscience Tragic | Author
5 months ago
So true on many levels; a great article. My worry is that this very noble call to action may be tough in an already disaggregated world.
Education Technology and Social Impact Founder, CEO, CPO
5 months ago
You are making such a great point, Anastasia. Convenings are the easiest option, but I've been to few that were set up to really facilitate the type of cross-topic and cross-industry sharing that you are calling out. I love your idea of creating an "open-source shared repository," and that could easily be built. The struggle there would be incentivizing people to share their research and lessons learned, as we'd benefit most from sharing stories of both positive outcomes and failures. It is a very worthy problem to solve, and I'd be happy to be a sounding board or thought partner to try to figure it out.
Trailblazing Human and Entity Identity & Learning Visionary - Created a new legal identity architecture for humans/ AI systems/bots and leveraged this to create a new learning architecture
5 months ago
Hi Anastasia, what if, in the village, all they had to compare new things to was a few ways of doing things established over time? Then imagine not only the elephant showing up, but gazelles, flying birds, and other types of organisms and animals they'd never seen before. All the people in the village look backward to what they know, and then try to squeeze all the new types of animals, birds, and organisms into their existing frameworks. That's my assessment of what's happening today. As new "tech" appears, people react to it, as you pointed out in your article, from their own perspectives. It could be assessments, ethics, privacy, individualized learning, teaching, data, lesson plans, administration, security, etc. They try to address it using their existing frames of reference. It's neither right nor wrong, neither good nor bad. It's simply people making choices based on what they know and their frames of reference. I'll continue in the next message...