A bug in the ear - a use case for smart wearables, as long as…
Beatriz Zanforlin
AI | software | education | product | chief of staff | human augmentation
In this text: smart wearables in education | the role of humans and the role of AI | computers as constant companions that must adapt to us (instead of us adapting to them)
I recently began a research project to study the interaction between users and human augmentation tools. More specifically, I am studying how to design the timing of notifications so that they serve the application’s purpose without causing unnecessary disruption to the user. During this project, I was introduced to the concept of the “bug-in-ear.”
Since the 1950s, novices in high-stakes social interactions have been guided by experts through the use of a “bug-in-ear” device - a communicator that allows an experienced individual to provide real-time feedback during live interactions. This type of assistance is used in sports, military operations, and event coordination, but the term “bug-in-ear” is most frequently associated with applications in psychology and education (a quick search yields both intriguing results and some less-than-welcoming images).
This educational use case interested me. However much theory one can read about teaching, managing a classroom and interacting with students can only be learned through practice. For this reason, it’s common for teachers to receive coaching from experienced educators. The argument for the “bug-in-ear” comes from the value of real-time feedback, not just post-practice feedback, in developing new habits or preventing counterproductive ones from forming.
The introduction of Bluetooth in the 2000s made the devices more accessible, affordable, and easier to use. Still, the main limiting factor remained: the costly time of an experienced professional observing the class. Could this be automated with AI?
What is worth a human doing, and what is worth AI doing?
Prof. Chris Dede of the Harvard Graduate School of Education makes a useful distinction between judgment and reckoning, offering this as a lens through which we can evaluate the roles of humans and AI in collaboration. Reckoning involves tasks like calculation, prediction, and formulaic decision-making. Judgment encompasses ethical deliberation, contextual awareness, and nuanced decision-making that rely on human experience and values.
Rather than proposing extremes - either keeping the “bug-in-ear” as an exclusively human activity or fully replacing it with AI - I began breaking down its components:
1. Define the expected learning outcomes for the class, e.g. understand ecological relationships
2. Identify effective teaching practices for achieving those outcomes, e.g. assess students' comprehension
3. Specify desired observable behaviors associated with those practices, e.g. ask students to repeat their understanding of an explanation
4. Monitor the execution of those behaviors
5. Report the observation of the behaviors
Defining learning outcomes (1) and identifying the most effective teaching practices (2) seem best suited for human experts, given their reliance on educational values and expertise. However, monitoring behaviors (4) and reporting observations (5) are repetitive and time-intensive tasks that AI could handle, enabling every class to become a practice opportunity for teachers.
Defining which behaviors to observe (3) could be a collaborative effort between humans and AI. AI could assist by identifying patterns between teacher behaviors and student outcomes, helping refine what should be tracked.
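To make this division of labor concrete, here is a minimal sketch in Python of how steps 1-3 might be captured as configuration for an AI monitor to consume in steps 4 and 5. Every name in it is hypothetical; it is not based on any existing tool.

```python
# Hypothetical sketch: human judgment (steps 1-3) captured as configuration
# that an AI monitor (steps 4-5) could consume. Not from any existing tool.
from dataclasses import dataclass

@dataclass
class ObservableBehavior:
    """A behavior the expert and teacher agree is worth tracking (step 3)."""
    name: str              # short identifier, e.g. "comprehension_check"
    description: str       # what the monitor should listen or watch for
    target_per_class: int  # how often the behavior should ideally occur

lesson_config = {
    "learning_outcome": "understand ecological relationships",  # step 1: human
    "teaching_practice": "assess students' comprehension",      # step 2: human
    "behaviors": [                                               # step 3: human + AI
        ObservableBehavior(
            name="comprehension_check",
            description="ask students to repeat their understanding of an explanation",
            target_per_class=3,
        ),
    ],
}
```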
Traditional “bug-in-ear”
An expert observes the class in real time, either through video conferencing or from the back of the classroom. The expert and teacher have a communicator, and the expert sends signals when a particular behavior is observed.
Future "bug-in-ear"
While in-person observations would still have their place in teacher training, teachers could receive feedback for every class, even when the expert is unavailable, accelerating habit acquisition. The expert and teacher would collaboratively define key behaviors worth tracking, given the desired learning goals. These would then be configured in the tool, and whenever teachers exhibit them, they would receive in-the-moment feedback through sound or haptic signals.
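As a minimal sketch of that loop: the toy keyword detector and send_haptic_signal() below are hypothetical stand-ins for the speech recognition and wearable APIs a real tool would need.

```python
# Minimal sketch of the future "bug-in-ear" loop. The keyword detector and
# send_haptic_signal() are hypothetical stand-ins, not real APIs.

def detect_behavior(transcript_chunk: str, keywords: list[str]) -> bool:
    """Toy detector: flag the behavior if any trigger phrase appears.
    A real system would use speech recognition plus a trained classifier."""
    chunk = transcript_chunk.lower()
    return any(kw in chunk for kw in keywords)

def send_haptic_signal(pattern: str) -> None:
    """Placeholder for the earpiece's haptic or audio channel."""
    print(f"[earpiece] {pattern}")

def monitor_class(transcript_stream, behaviors):
    """Steps 4-5: watch the live class, confirm target behaviors in the
    moment, and report the totals after class."""
    counts = {name: 0 for name in behaviors}
    for chunk in transcript_stream:
        for name, keywords in behaviors.items():
            if detect_behavior(chunk, keywords):
                counts[name] += 1
                send_haptic_signal("single buzz")  # in-the-moment confirmation
    return counts

# Toy run with a fake transcript:
behaviors = {"comprehension_check": ["explain it back", "in your own words"]}
stream = ["Today we cover food webs.", "Can you explain it back to me, Ana?"]
print(monitor_class(stream, behaviors))  # {'comprehension_check': 1}
```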
From human support to human companion
This research project started from my interest in two intersecting technological trends that are changing human-computer interfaces: AI systems that can perceive context and respond in real time, and smart wearables that keep computing on our bodies.
These trends accelerate a broader movement of computers becoming increasingly closer to us. PCs were confined to separate rooms, then became laptops in our backpacks, smartphones in our pockets, smartwatches on our wrists, and soon chips in our brains. At some point in this progression, tools cease to be external items and become part of us, always present.
You once had to make an intentional choice to go use a computer in a lab (you initiated the interaction with the machine); now you receive notifications at moments you didn’t choose (the machine initiates the interaction with you). If we’re already overwhelmed by notifications in our pockets, imagine Meta’s new glasses sending alerts directly in front of your eyes at the wrong time. The closer these tools get to us, the more they will need to integrate seamlessly with our cognition and natural biological rhythms, understanding both us and our context. These tools must learn who we are, how we prefer to be engaged, and when proactive interaction will be helpful rather than distracting. They must effectively become companion technologies - competent, adaptable, cooperative, and trustworthy - or risk becoming an annoyance.
For the bug-in-ear, a tool designed for learning and habit acquisition, this means devices must determine whether providing instantaneous feedback is appropriate. They will need to weigh the benefits of immediate feedback against the risks of distracting the user, disrupting the interaction, or negatively impacting the user’s emotional state.
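One way to frame that trade-off, as a minimal sketch: estimate the benefit of signaling now, estimate the cost of interrupting, and only notify when the balance is positive. The input signals and weights below are illustrative assumptions, not a validated model.

```python
# Illustrative sketch of the timing decision. Signals and weights are
# assumptions for the sake of the example, not a validated model.

def should_notify(habit_value: float,
                  task_load: float,
                  emotional_strain: float,
                  threshold: float = 0.0) -> bool:
    """All inputs in [0, 1]. habit_value: how much immediate feedback would
    help habit formation right now; task_load: how demanding the teacher's
    current activity is; emotional_strain: estimated stress level."""
    benefit = habit_value
    cost = 0.6 * task_load + 0.4 * emotional_strain  # arbitrary weights
    return benefit - cost > threshold

# During a quiet transition between activities, the signal goes through:
print(should_notify(habit_value=0.8, task_load=0.2, emotional_strain=0.1))  # True
# Mid-explanation with a stressed teacher, the device stays silent:
print(should_notify(habit_value=0.8, task_load=0.9, emotional_strain=0.7))  # False
```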
Determining the right moment for feedback is a complex challenge with many moving parts, and I’ll continue sharing updates on my progress here.
Biundo, S., & Wendemuth, A. (Eds.). (2017). Companion Technology: A Paradigm Shift in Human-Technology Interaction. Springer. https://doi.org/10.1007/978-3-319-43665-4
Dede, C., Etemadi, A., & Forshaw, T. (2021). Intelligence augmentation: Upskilling humans to complement AI. The Next Level Lab at the Harvard Graduate School of Education. Cambridge, MA: President and Fellows of Harvard College.