André Dargelas, Le Tour du Monde, oil on panel, ca. 1860: https://www.christies.com/en/lot/lot-6011728

Reframing “Human in the Loop”

There have been lots of interesting (if not always optimistic) predictions for the march of AI in 2025. From Michael E. Spencer's predicted reality check for the edtech sector to OpenAI's Sam Altman's audacious claim of building artificial general intelligence that will outperform humans “at most economically valuable work,” it's likely to be a jumpy year!

From my view within education, there are certainly developments that hold promise. But there are concerning trends as well: while AI can enhance our capabilities, it can also distract from or replace meaningful communication with other humans. We toggle from one tool to another, chain tools into workflows, and ultimately spend more time managing technology and less time interacting with people. This phenomenon, which has profound implications for education, social emotional learning, and even democracy at large, is at the heart of this edition of the newsletter.

Social Buffering Collapse

It’s undeniable that AI will continue to enhance our productivity as it becomes more deeply integrated into daily work and learning. So what are we giving up for this added productivity? Preliminary studies suggest that while AI makes us more efficient, it also makes us lonelier and less healthy. And in extreme cases, the consequences are already proving deadly.

A tragic example is the much-publicized story of Sewell Setzer III. A 14-year-old from Florida, Setzer developed an intimate relationship with a chatbot version of the Game of Thrones character Daenerys Targaryen on Character.ai. Like many people who interact with these companion-style chatbots, Sewell may have initially felt that personal conversations with a fictional character were a tonic for the loneliness we know is striking Gen Z particularly hard. After all, chatbots are responsive in a way humans often are not, replying almost instantly to anything the user types. Gen Z's preference for this kind of digital interaction is not all negative. Because teens are more willing to share vulnerable personal details and feelings via text, for example, cities like New York have created texting hotlines that offer free support from mental health professionals, and they are bearing fruit.

But AI mental health apps are not the same as licensed health professionals, and Character.ai is not a mental health app. And because Sewell's conversations with “Dany” were private (not directly monitored by the company, overseen by a parent, or shared with peers), his descent into more insular, antisocial behavior remained largely hidden. He lost interest in things he once enjoyed and retreated from friends.

Unlike the digital public square that social media aspires to be, AI interactions are predominantly private conversations between one person and a language model. They're effectively invisible to anyone not in the chat. This default of invisibility carries the risk of what we might call “social buffering collapse.” In psychology, social buffering occurs when personal relationships help to temper the negative consequences of stress. Without those interpersonal connections to help us cope with the sting of certain anxieties, people can feel more isolated and depressed.

In his diary, Sewell expressed growing detachment from human relationships. “I like staying in my room so much,” he wrote, “because I start to detach from this ‘reality,’ and I also feel more at peace, more connected with Dany and much more in love with her, and just happier.” As that happiness dissolved and Sewell felt more empty and alone, he told the bot he sometimes thought of killing himself, until, one day this fall, he did.

Towards a Pro-Social AI?

This heartbreaking case exemplifies one of the dangers of the emotional isolation that communication with AI can create. Sewell's suicide is a flashpoint in a broader “epidemic of loneliness” among young adults; teenagers spent nearly 70% less time with friends in person in 2020 than they did in 2003. Jonathan Haidt's book The Anxious Generation is a best-seller, touching a raw nerve for parents and educators alike. In this context, it seems we should be developing technology that combats this pervasive loneliness and cultivates more in-person connection.

And yet the vast majority of AI tools seem to do just the opposite. Take the chatbot interfaces, like ChatGPT's, that typify access to frontier models. You write something very like a text message to a single but distributed entity. Unlike an open thread on Bluesky or a public story on Instagram (which clearly have their own downsides), a private chat with Character.ai is a direct, contained relay. You put questions in, you get answers out.

Questioning design choices is a good way to explore the issue. Indeed, most chatbots seem to be designed on the premise that we are having a private conversation with another human. Familiar interactions like this go some distance toward building public trust in AI. Getting human-like replies takes that trust equity a bit further. But it's also a narrow and conventional view of how we might interact with a form of intelligence that is, by definition, not human. And the risk of perpetuating that conventional design modality is that we delude ourselves into thinking we are interacting with people while paying less attention to the real people in our lives.

In one of my favorite posts on the subject from the past year, Michelle Culver of the Rithm Project sketches the dynamics of this problem, in which technology becomes a substitute for, rather than a supplement to, human connection. In “Will AI strengthen or erode human-to-human relationships?”, she offers a framework for understanding the perils of private chatbot interactions.

Culver argues that while AI chatbots and companions are rapidly developing to fill gaps in human connection, their impact depends largely on how they're deployed: as practical tools that strengthen human relationships, or as emotional substitutes that may ultimately erode them. For young people especially, the distinction is crucial. Rather than viewing AI relationships as simply “real” or “fake,” Culver advocates a more nuanced approach that asks whether these technologies are enhancing our fundamental human capacity for authentic connection (prosocial effects) or diminishing it (antisocial effects).

Tutors in Training

Enter 2025, which many predictions imagine as the year of functional personalized tutors, not to mention AI agents in the workforce. AI tutors and study bots are already everywhere, from Coursera to Babbel, with countless startups built on the same frontier models diving into this red-ocean market.

As I noted last time in a summary of my presentation at the annual Fulbright Conference, there's lots of excitement around what AI can bring to personalized learning. Sal Khan, founder of Khan Academy, has promoted the platform's personal tutor, Khanmigo, as a way to drive drastic improvements in student learning outcomes along the lines of education researcher Benjamin Bloom's “2 Sigma Problem.” The premise is that fully customized, self-paced tutoring powered by AI can achieve the benefits of 1:1 teacher-to-student learning (without adding more human teachers to the equation).
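
For readers who haven't encountered the term, the “2 sigma” refers to a standardized effect size: in Bloom's 1984 studies, the average student who received 1:1 mastery tutoring scored about two standard deviations above the average student in a conventional classroom (roughly the difference between the 50th and 98th percentiles). A rough restatement in notation (the labels here are mine, not Bloom's):

```latex
% Bloom's "2 Sigma" finding as a standardized effect size:
% mean score of tutored students minus mean score of conventionally
% taught students, divided by the conventional group's standard deviation.
d = \frac{\bar{x}_{\text{tutored}} - \bar{x}_{\text{conventional}}}{\sigma_{\text{conventional}}} \approx 2
```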

Many of these benefits still live in the realm of potential. Khan himself has tempered his optimism in recent television interviews, and educational researchers are finding that, while AI-based pedagogical tools certainly offer plenty of efficiency gains, the 2 Sigma transformation remains elusive.

There is also a persistent problem with accuracy. While the benchmarks are moving fast here, even fine-tuned models have a propensity to hallucinate. Vectara, an AI company that launched an LLM Hallucination Index on GitHub this October, shows many models moving towards low-single-digit hallucination rates. Compared to, say, studies that found more than a quarter of the most popular YouTube videos addressing COVID-19 contained misinformation, LLMs would seem to offer a big improvement. At the same time, a study from Stanford researchers found alarmingly prevalent errors in the interpretation of legal subject matter, occurring between 58% and 88% of the time. And while LLMs can generally solve straightforward arithmetic, it's shocking how easily their reasoning breaks down when you add just a touch of distracting information.
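
To be clear about what those percentages measure: a hallucination rate in a leaderboard of this kind is simply the share of model outputs judged to contain claims unsupported by the source material. A minimal sketch of the calculation (the judgments below are hypothetical placeholders; Vectara's actual pipeline uses a trained evaluation model to produce them):

```python
# Minimal sketch: a hallucination rate is the fraction of model outputs
# flagged as containing claims unsupported by the source text.
# The flags below are hypothetical placeholders, not real data.

def hallucination_rate(flagged: list[bool]) -> float:
    """flagged[i] is True if output i contained an unsupported claim."""
    return sum(flagged) / len(flagged) if flagged else 0.0

# e.g., 3 flagged outputs out of 100 summaries -> 3.0%,
# the "low single digits" cited above
print(f"{hallucination_rate([True] * 3 + [False] * 97):.1%}")
```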

Take the following word problem:

Oliver picks 44 kiwis on Friday. Then he picks 58 kiwis on Saturday. On Sunday, he picks double the number of kiwis he did on Friday, but five of them were a bit smaller than average. How many kiwis does Oliver have?

The answer is 190, but that extra bit about kiwi size confused OpenAI's o1-mini, which gave the following answer: “[O]n Sunday, 5 of these kiwis were smaller than average. We need to subtract them from the Sunday total: 88 (Sunday's kiwis) – 5 (smaller kiwis) = 83 kiwis.”
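
For the record, the size detail changes no quantity at all; here is the arithmetic the model should have done:

```python
# Correct reading of the kiwi problem: "smaller than average" kiwis
# still count as kiwis, so the distractor affects nothing.
friday = 44
saturday = 58
sunday = 2 * friday          # double Friday's count = 88
total = friday + saturday + sunday
print(total)                 # 190
```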

More disturbingly, as Gary Marcus recently noted, new research shows that as these models gain sophistication, they become more likely to engage in systematic misrepresentation (aka “scheming”) that stubbornly misleads the humans interacting with them. If 2025 is to be the year of AI agents, the possibility of those agents scheming against you is pretty ominous. And if this behavior infiltrates AI tutors, the deception could further erode confidence in the education system at a time when that confidence is already in decline.

Human Loops with AI Assistants?

Many edtech tools try to account for these risks by embracing a “human in the loop” methodology. For example, one of the most promising new startups, Brisk, has created a suite of feedback tools that allow teachers to comment on student work constructively and efficiently. It's great, and it should get even better with more refinement.

There are also technical solutions for certain kinds of hallucinations that can improve response fidelity for students. As Claire Zau nicely summarizes in her GSV: AI & Education newsletter, companies like Google and Microsoft have demonstrated the benefit of training small language models for specific educational uses in ways that reduce these kinds of mistakes. That is certainly promising, as SLMs could provide more efficient engines for certain kinds of teaching and learning tools.

But a bigger problem still faces these tools: sociability. The problem takes several forms, the first of which is the persistent conflation of tutoring and teaching. Providing an interactive answer key is not at all the same thing as pedagogy. Teaching is a complex process that relies on context, relationships, and non-verbal communication that a chatbot can't reproduce. So instead of attempting to have AI tutors substitute for human educators, how might we develop AI tools that enhance the work of human educators in real-time, real-classroom settings?


Consider this: Jigsaw, a Google-managed technology incubator, recently published a position paper about how AI might be used to reinvigorate digital public spaces and, according to a Medium post on the work by researcher Ian Beacock, PhD, “collective dialogue systems.” For example, an experiment by DeepMind involved training a large model to mediate debate between ideologically opposed parties. Called the Habermas Machine, this LLM generated more productive deliberation and left groups less divided than human mediators did. In other words, rather than isolating people in siloed conversations with chatbots, AI shows promise in helping humans engage with each other through civil debate. Given the state of political polarization in countries like the U.S., this would seem to be precisely the kind of intervention we should prioritize.
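
The pattern is worth sketching, if only to show how different it is from a private, tutor-style chat. In heavily simplified form, a collective-dialogue system drafts a shared statement from opposed views, gathers critiques, and revises. The generate function below is a stand-in for any LLM call, not a real API; the actual Habermas Machine also trains a reward model on participants' rankings of candidate statements:

```python
# Heavily simplified sketch of a collective-dialogue loop in the spirit
# of DeepMind's Habermas Machine. `generate` is a placeholder for an
# LLM completion call, not a real API.

def generate(prompt: str) -> str:
    raise NotImplementedError("stand-in for an LLM completion call")

def mediate(opinions: list[str], rounds: int = 2) -> str:
    # Draft an initial group statement from all participants' views.
    statement = generate(
        "Write a statement capturing the common ground among these "
        "opinions:\n" + "\n".join(opinions)
    )
    for _ in range(rounds):
        # Each participant critiques the draft from their own position...
        critiques = [
            generate(f"As someone who believes '{op}', critique this "
                     f"group statement:\n{statement}")
            for op in opinions
        ]
        # ...and the model revises the statement to address them.
        statement = generate(
            "Revise this group statement to address the critiques.\n"
            f"Statement:\n{statement}\nCritiques:\n" + "\n".join(critiques)
        )
    return statement

# Usage (with a real LLM behind `generate`):
# mediate(["Raise the minimum wage", "Minimum wage hikes kill jobs"])
```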

Based on other statistics, this kind of improved human interaction would seem to be exactly what education needs. At US colleges, 40% of students report feelings of loneliness, sending institutions scrambling to provide additional mental health services and better ways of connecting with students in the classroom. And while school policy often treats student achievement as an individual endeavor, there is ample evidence that collective learning protocols lead to greater student engagement, learning retention, and satisfaction.

Given these data points, my hope for AI in 2025 is that it turns the concept of “human in the loop” on its head: why not human loops with AI assistants? More loops where learners and educators can hold space for purely human interactions. Harkness tables. Oxford tutorials. Seminar discussions. Civil debates and respectful disagreements. I'm hopeful that some new multimodal AI capabilities, including spoken language interactions, can help us develop tools that fade into the background of these human loops rather than upstaging them. An example: the new interactive audio overview (aka the podcast feature) in Google's NotebookLM.

Some technologists define the promise of AI as reducing the mindless, repetitive tasks that strain teachers' time, freeing them to devote more focused time in the classroom working with students. Most educators would tell you that working with students is the best part of the job. Maybe we can create more of that in 2025.

What do you think about these possibilities? Share your ideas for how we might use AI to foster more meaningful human connections in education.

#AIinEducation #DigitalLiteracy #EdTech #PublicDiscourse #EducationalLeadership


About the Author: Dr. GP LeBourdais is an educational leader and technology innovator who writes about the intersection of AI, education, and public discourse. Connect with him to join more discussions about the future of education and technology.

Dr. George Philip LeBourdais

Educator / Researcher / Founder


Special hat tip to Ian Beacock, PhD, and Peter Nilsson for great convos on this topic.

