AI-in-the-Loop: Opening the Black Box of Human Emotions
When people ask me about my hobbies, I’m often tempted to say ‘me, I’m my hobby’. Whilst aware that makes me sound like a narcissist (and maybe I am), it’s also true. I’ve been in therapy for 30 years, the same amount of time I’ve been in the tech industry. (Is there a connection? Probably.) One of my therapists said that we only really understand about 20% of ourselves, our motivations and why we react to particular people and circumstances in either a positive or a negative way. She also asked: if that is true, how on earth can we say we “know” someone else? I’ve realized that understanding the “black box” that is my emotional and mental mapping is a complex and ongoing journey. And I think AI can be a helpful partner on my travels.
Affective AI
As with most things AI, emotional AI has been around for some time. Affective AI has its roots in the work of Rosalind Picard at MIT, who introduced the concept in her 1997 book “Affective Computing”. This branch of AI focuses on recognizing, interpreting, and responding to human emotions. Over the years, affective AI has found applications in various fields, including customer service, where it enhances user interactions by detecting and responding to customer emotions; mental health, where AI tools provide therapeutic support and emotional monitoring; and education, where it personalizes learning experiences based on students' emotional states. Affective AI aims to create more empathetic and supportive interactions between humans and machines.
So, that’s a great starting point. Given the supercharge of generative AI over the past two years, using AI to understand human emotions is developing fast, moving increasingly towards codifying and responding to our human state.
Inflection AI and Emotional Intelligence
Inflection AI is on a mission to integrate emotional intelligence (EQ) into AI systems, creating solutions that understand and respond to human emotions in a meaningful way. Their approach is grounded in the belief that emotionally intelligent AI can bridge the gap between human needs and machine capabilities with more empathetic responses. Rather than maximising utility and IQ alone (recent advances mean Pi competes just as well with GPT-4 on utility), their vision is a future where AI understands and responds to human needs with emotional intelligence, kindness, and authenticity. It’s a big vision, and I find it heartening that so many ex-Inflection AI folk have moved to Microsoft AI – surely this must mean that their values will be carried with them. I very much hope so. They are also very nice people – thanks to Ted Shelton, the new COO, for a kind and encouraging response to a recent cold LinkedIn message – much appreciated x
Interpretability in AI
Interpretability in AI, as outlined by Anthropic, aims to make the inner workings of AI models more transparent and understandable. Anthropic’s recent work on mapping the mind of large language models involves visualizing how these models interpret and generate responses. While not focused solely on human emotions, I found this image from their recent research, “Mapping the Mind of a Large Language Model”, very compelling.
There’s something about this mapping and interconnectedness of emotions that reminds me of art therapy, where defining the edges and overlaps of my own emotional states helps in understanding my reactions and thoughts. It’s a great visual for understanding how the ‘black box’ of AI works but also offers a parallel to the ‘black box’ of my own psychological landscape.
Human-in-the-Loop for AI Ethics and Accuracy
We’re increasingly talking about human-in-the-loop (HITL) systems, which build human oversight into the AI decision-making process to ensure ethical and accurate outcomes. By incorporating human judgment, these systems can better handle complex, nuanced situations that require contextual understanding and moral considerations. HITL approaches are crucial for maintaining ethical standards in AI applications, as they provide a check against biases and errors that may arise from fully automated systems.
In practice, human-in-the-loop frameworks can involve activities such as reviewing and correcting model outputs before they reach a user, labelling and curating the data a model learns from, escalating low-confidence or sensitive cases to a human reviewer, and feeding human corrections back into the system so it improves over time.
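To make the escalation idea concrete, here is a minimal, hypothetical sketch in Python using only the standard library; the names, confidence threshold and console-based reviewer are mine for illustration, not a reference implementation of any particular HITL product.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class ModelOutput:
    """A single AI suggestion plus however the model expresses its certainty."""
    text: str
    confidence: float  # assumed to be in the range 0.0-1.0


def human_in_the_loop(output: ModelOutput,
                      review: Callable[[ModelOutput], str],
                      threshold: float = 0.8) -> str:
    """Return the model's answer directly only when confidence is high;
    otherwise hand it to a human reviewer who can accept or rewrite it."""
    if output.confidence < threshold:
        return review(output)
    return output.text


def console_review(output: ModelOutput) -> str:
    """A stand-in reviewer: in practice this would be a queue, a ticket,
    or a moderation dashboard rather than a console prompt."""
    print(f"Model suggested (confidence {output.confidence:.2f}): {output.text}")
    revised = input("Press Enter to accept, or type a correction: ").strip()
    return revised or output.text


if __name__ == "__main__":
    draft = ModelOutput(text="This request looks safe to approve.", confidence=0.55)
    print("Final decision:", human_in_the_loop(draft, console_review))
```

The design choice that matters here is simply that the human sits on the decision path for the cases the machine is least sure about, rather than reviewing everything or nothing.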
AI-in-the-Loop: Enhancing the Human State
"AI-in-the-loop” (in my definition) suggests that AI can actively participate in our own emotional and mental processes. By leveraging AI to map our emotional states and provide reflective prompts, we could gain deeper insights into our own minds.
For example, AI-in-the-loop could help us with:
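As a sketch of what a “reflective prompt” could look like in practice, here is a short Python example. It assumes the OpenAI Python SDK (openai >= 1.0), an OPENAI_API_KEY in the environment, and a placeholder model name; the system prompt wording is mine, and this is a toy, not a clinical tool.

```python
# A minimal sketch of an "AI-in-the-loop" reflective companion.
# Assumptions: openai>=1.0, OPENAI_API_KEY set in the environment,
# and "gpt-4o-mini" as a placeholder model name.
from openai import OpenAI

client = OpenAI()

ACTIVE_LISTENER = (
    "You are a quiet, active listener. Never advise, diagnose or solve. "
    "Reflect back what you hear in one sentence, then offer a single gentle "
    "prompt, such as 'do say more', that helps the writer stay with what "
    "they were starting to say."
)


def reflect(journal_entry: str) -> str:
    """Return one short reflective prompt for a journal entry."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model you have access to
        messages=[
            {"role": "system", "content": ACTIVE_LISTENER},
            {"role": "user", "content": journal_entry},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(reflect("I keep putting off a conversation with a friend and I don't know why."))
```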
But back to me
Obviously….
I’m an ardent creator of all kinds of custom GPTs: some for fun, some for business, some for education. I wanted to create an Inflection AI-like GPT that I could use on my own ‘black box’ mapping journey. I took inspiration from Alain de Botton’s book “The School of Life: An Emotional Education”. I’m including this quote in full because, even if AI is not your bag, just being there for someone and making the right noises with genuine care means you can offer some active listening to someone you know and be “alive to how difficult it is for anyone to piece together what they really have on their minds”:
Occasionally a friend might be unusually attentive and ready to hear us out. But it isn’t enough merely for them to be quiet. The highest possibilities of listening extend beyond the privilege of not being interrupted. To be really heard means being the recipient of a strategy of ‘active listening’. From the start, the therapist will use a succession of very quiet but significant prompts to help us develop and stick at the points we are circling. These suggest that there is no hurry but that someone is there, following every utterance and willing us on. At strategic points, the therapist will drop in a mission-critical and hugely benign ‘do say more’ or an equally powerful ‘go on’. Therapists are expert at the low-key positive sound: the benevolent, nuanced ‘ahh’ and the potent ‘mmm’, two of the most significant noises in the aural repertoire of psychotherapy which together invite us to remain faithful to what we were starting to say, however peculiar or useless it might at first have seemed. As beneficiaries of active listening, our memories and concerns don’t have to fall into neat, well-formed sentences. The active listener contains and nurtures the emerging confusion. They gently take us back over ground we’ve covered too fast and prompt us to address a salient point we might have sidestepped; they will help us chip away at an agitating issue while continually reassuring us that what we are saying is valuable. They’re not treating us like strangely ineffective communicators; they’re just immensely alive to how difficult it is for anyone to piece together what they really have on their minds." — Alain de Botton, "The School of Life: An Emotional Education"
So, here is Pickles the Fish (https://chatgpt.com/g/g-DUy5JmKb3-pickles-the-fish), designed to be an eco-art therapy assistant. It’s not there to give me answers or direction; it just takes how I’m feeling and expresses it visually using images from the natural world. I’ve loaded in the 51 Buddhist mental states as a primer and asked it to give me a visual of how I’m feeling, and then a visual of what that creature would do to help itself. Sometimes I don’t even know how to explain my feelings, and this gives me a bit of a starter for ten and some images to use when words fail me. I hope it’s useful to you.
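If you want to experiment with the same pattern outside ChatGPT, here is a hypothetical Python sketch using the OpenAI API. The instruction text below is my paraphrase of the setup described above, not the actual Pickles the Fish configuration, and the model names are assumptions.

```python
# A sketch of the eco-art-therapy pattern via the OpenAI API.
# Assumptions: openai>=1.0, OPENAI_API_KEY set, access to "gpt-4o" and
# "dall-e-3"; the system prompt paraphrases the article, it is not the real GPT.
from openai import OpenAI

client = OpenAI()

ECO_ART_THERAPIST = (
    "You are an eco-art therapy assistant. Do not give answers or direction. "
    "Take how the user says they are feeling, relate it to one of the 51 "
    "Buddhist mental states, and describe two images from the natural world: "
    "a creature embodying that feeling, and what that creature would do to "
    "help itself."
)


def visualise_feeling(feeling: str) -> str:
    """Turn a stated feeling into the URL of a natural-world image."""
    chat = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": ECO_ART_THERAPIST},
            {"role": "user", "content": feeling},
        ],
    )
    description = chat.choices[0].message.content
    image = client.images.generate(
        model="dall-e-3",
        prompt=description[:4000],  # DALL-E 3 prompts are capped at 4000 characters
        size="1024x1024",
    )
    return image.data[0].url


if __name__ == "__main__":
    print(visualise_feeling("I feel scattered and a bit raw today."))
```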
I’d love to hear more examples of this kind of AI-in-the-loop approach. As we design the next Third Sector hackathon at 42 London, there’s no way we can develop useful, respectful solutions without taking our human state into account, and without a deep understanding of how to use AI respectfully with underrepresented and marginalised groups to co-create the kinds of AI interactions that can manage complex human circumstances.