Teaching AI to be Inclusive
Sudha Jamthe
Technology Futurist, Educator, GenAI Author, Researcher, LinkedIn Learning Instructor, Global South in AI, Stanford CSP & Business school of AI: IoT, Autonomous Vehicles, Generative AI
I was on the phone for an hour with a kind customer service person to get some information on a health line, and every step of the way she made me feel comfortable, asking for my consent before each piece of information and checking that I was ok through several more 10-minute waits. I had dealt with an angel. I left feeling safe. Her job cannot be replaced by an AI automating customer support.
No AI can learn kindness or understand the real emotions of a human being as we move through fear, frustration, entitled anger at being asked to wait on a phone call for this long, and thankfulness for the kindness of a stranger.
And I had this epiphany on the heels of an NLP course, where I supported Sam Wigglesworth in teaching Natural Language Processing to a hungry cohort of technology business professionals online at the Business School of AI. Sam taught us to build an AI to do text mining, language, voice and document analysis, and to build a voice-assistant chatbot for customer support.
Can we teach an AI kindness? Or can we teach an AI to think ethically?
To get ready for the next class with AI Ethicist and Cognitive Science Researcher Susanna Raj, who teaches AI Ethics: Inclusive AI, I taught a session about Emotions AI at last week's WeeklyWed, where my community of students, past and present, came together to learn a futuristic topic and find their role in the industry.
What is Emotions AI?
Emotions AI is not about teaching AI to have emotions. It is about AI trying to learn and predict human emotions. The market is called emotion detection and recognition (EDR). It started out as AI for good, to help autistic children. I first wrote about this in 2016 in my IoT Disruptions book, describing how school children in China were watched to see if they were attentive, in order to rate their teachers. Next, in 2017 in my book "2030 The Driverless World", I shared a futuristic insight from my friend Prof. Ahmed Banafa. He spoke about a car being able to track the alertness of a driver, and I raised questions about what happens if the car tracks the road rage of a driver and alerts other cars to stay away from the unsafe one.
I found it unsettling that the agency of humans could be taken away by the AI. I never questioned whether we could teach ethics to the AI.
Fast forward to 2020, and we have AI in HR filtering candidates by measuring their intellect, or recommending whether someone is a cultural fit for an organization. We have emotions AI claims from startups that have quickly switched from serving autistic children to tracking our video views to predict our political party affiliation, or helping marketers sell us more shoes.
The emotion detection and recognition (EDR) market was valued at USD 19.87 billion in 2020, and it is expected to reach USD 52.86 billion by 2026. (ref: Mordor Intelligence)
The biggest market for Emotions AI is tracking our real interest behind buying a product or service to create a customer purchase funnel in retail and ecommerce. And we have zero agency to withhold our true feelings, and no transparency to know how far these interpretations stray from our real emotions.
What makes up Emotions AI?
Emotions AI looks for patterns, as all AI machine learning models do, in our voice, our gait, our facial expressions, our eye movements, the tonal modulations of our speech, and the sentiments in our words and texts. It adds sensor data to this, checking our pulse to predict a set of basic emotions (happy, sad, fear, disgust, surprise or anger) and to make inferences about abstract topics such as intellect or our personality's compatibility with a company culture. It tops it off by trying to understand not just our emotions but their intensity. Are we in pain? How much pain?
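To make the pattern above concrete, here is a deliberately toy sketch in Python of what such a pipeline does in shape: fuse a text signal (cue words) with a sensor signal (pulse rate) to emit a basic-emotion label plus an intensity. The lexicon, thresholds, and labels are invented for illustration; real EDR products train machine-learning models on large (and often questionably sourced) datasets rather than using word lists.

```python
# Toy illustration of the Emotions AI pattern: fuse a text signal
# (keyword cues) with a sensor signal (pulse) into a label + intensity.
# The cue words and the 100-bpm threshold are made up for this sketch.

EMOTION_LEXICON = {
    "happy": {"great", "thanks", "love", "wonderful"},
    "angry": {"furious", "unacceptable", "waiting"},
    "fear":  {"worried", "scared", "unsafe", "afraid"},
}

def infer_emotion(text: str, pulse_bpm: int) -> tuple[str, str]:
    """Return (emotion_label, intensity) from words and a pulse reading."""
    words = set(text.lower().split())
    # Score each emotion by how many of its cue words appear in the text.
    scores = {emo: len(words & cues) for emo, cues in EMOTION_LEXICON.items()}
    label = max(scores, key=scores.get)
    if scores[label] == 0:
        label = "neutral"
    # "Sensor fusion" step: an elevated pulse bumps the inferred intensity.
    intensity = "high" if pulse_bpm > 100 else "low"
    return label, intensity

print(infer_emotion("I am furious this is unacceptable", pulse_bpm=110))
print(infer_emotion("thanks so much you were wonderful", pulse_bpm=72))
```

Even this toy exposes the ethical problems discussed below: the lexicon encodes one culture's expression of emotion, and the person being scored never consented to the inference.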
Where is AI Ethics in this?
This is alarming in its invasion of privacy, prying into our feelings, and it is unreliable: there is no common construct across technology companies for declaring that I am surprised or angry at something.
And it lacks the social fabric of our nuanced communication, in which we do not display one emotion at a time but switch between many thoughts and feelings, with a cultural wrapper on how we behave at work, in public, in front of elders, in shared social spaces and more.
And it does not stop at adults; it tracks the emotions of children using toys. And there is no regulation or transparency around this, not even from the EU, which gave us the GDPR data privacy regulation. Dr Gilad L. Rosner, founder of the Internet of Things Privacy Forum, has written a research paper about it here, with an academic paper here.
What can and should we do now? Is this an opportunity or a threat?
Today I read an article about employers monitoring employees through their keystrokes, facial recognition, or listening to their voices as they worked remotely because of the pandemic. All this brings in more AI that is unethical, with no sense of right or wrong, no sense of inclusiveness to represent us all as humans, making predictions that are baseless and likely incorrect, and hurting people, and us as a society that shares some common values and decency at the workplace.
So my call to you is to get educated first. I tell myself this daily. Do not get excited by the possibilities of technologies without understanding your power in unleashing them to do harm to the world. Do not stay on the sidelines saying, "I do not know coding; that is out of my reach." Get educated about AI, about the datasets that power AI, about the use cases being sold to companies. Make the AI building process representative of you, in all your identities, and bring your lived experience to create checks and balances so that the AI we build is inclusive and responsible.
Come audit Susanna Raj's Emotions AI: Inclusive AI pre-class on Sep 27th at noon PT for 60 min (recording available). Sign up for free.
How can we help you?
I have created an Emotions AI webinar, a free 90-min online course you can learn from (and earn a certificate too). Here.
We have also done an AI Ethics webinar, where we brought in the amazing Diya Wynn of AWS to talk to us about what it means to be an AI Ethicist helping customers build ethical AI, and Ankita Joshi shared her experience as an engineer who guided self-driving cars to be built ethically at General Motors. Here.
Susanna Raj wants us to go deeper (using NoCode) to distinguish the right datasets from the wrong datasets for emotions AI. She has an AI Ethics: Inclusive AI online course coming up on Sep 28th and Sep 30th. Here.
Learn about 56 types of bias and how bias creeps into building AI right at defining the problem or labeling data, and learn to do it right with a LiveLab with Susanna Raj (no coding, or even looking at code on a screen, is required).
Come audit Susanna's pre-class on Sep 27th at noon PT for 60 min (recording available). Click the image below to sign up for free.
Add a comment below or talk to me if you work in the Emotions AI space or in an industry impacted by it. I am open to hearing about your experience of the reality in industry, and to offering you guidance on your AI learning journey.
— — — — — — — — — — — — — — — — — — — — — — — — — — — — —
About Sudha Jamthe
Sudha Jamthe is a globally recognized Technology Futurist on IoT, AI and Autonomous Vehicles Business with a 20+ year mix of entrepreneurial, academic and operational experience from eBay, PayPal, GTE and Harcourt. Sudha Jamthe enjoys mentoring business leaders with her books, keynotes and courses at Stanford Continuing Studies and live online courses at BusinessSchoolofAI.com. She chairs the strategic advisory council of Barcelona Technology School, where she teaches AI Ethics in three masters programs, and is an Ambassador for the FundingBox Impact Connected Cars (Europe H2020) Community. She has an MBA from Boston University.
Read some of her past articles:
Connect with Sudha Jamthe:
Check out Sudha Jamthe's books on Amazon. Sudha's books serve as textbooks for her Stanford courses.
All: I updated the title from "kind AI" because I do not believe we can teach AI to be kind and do not want to imply that. Join the #free class audit of Cognitive Science Researcher and #AIEthicist Susanna R. if you want to learn #AIEthics #InclusiveAI. We won't be reviewing any biased datasets, but rather the resources and methods you can learn to build #InclusiveAI. https://lu.ma/livelabpreclass (tell your friends) In the course on the 28th and 29th we'll do a data labeling and QC check exercise using the Playment #NoCode #dataannotation platform. Feel free to add a comment if you work with #AI or have questions or success with making it #Inclusive, or simply join us at the pre-class audit tomorrow.
I wrote this piece in TechCrunch in Oct 2016 about devices, conversations and emotions, and about getting ready for a future where we give up our free will and dull our senses for conveniences served by machines. I still feel the chill I felt as I wrote that sentence, and I have included this in my books on #IoT, #AutonomousVehicles and #AI. The reality is that when we give up control to a device, another human or company is essentially capitalizing on it. My hope is that my students learn about AI with an ethical lens and build AI ethically, right from defining the problems to be solved, through data collection and annotation, AIX design experience, and how to use the data collected to create business value. We are heading in the trajectory I predicted 7 years back, but it is still early days for AI as a business, AI as a product and AI as a market. https://techcrunch.com/2015/10/16/breaking-the-barrier-of-humans-and-machines/
Founder | AI Ethicist
AI cannot be kind. To imply that it has that capacity is unethical. Math is neither kind nor cruel. It's math. AI is math. Just. Math.