Emotive AI
Photo by Alexander Sinn on Unsplash


There is life beyond #ChatGPT – and this post is definitely about something else. Namely, I want to draw your attention to something much bigger that is developing in the world of AI – even bigger than Large Language Models (the technology behind ChatGPT)! Yet it explains why ChatGPT feels different from, say, a predictive credit score. I call this new skill that AI is developing Emotive AI.

With #EmotiveAI I denote the growing ability of Artificial Intelligence to steer humans by provoking strong emotional reactions. Just as the definition of AI itself is constantly evolving – the more sophisticated the most advanced tools become, the more simple automation tasks are excluded – the border between basic predictive models and Emotive AI is fuzzy. This doesn't matter: the difference between a "strong personality" and a merely annoying or genuinely charismatic person is equally fluid, and yet there are undoubtedly charismatic leaders as well as psychopaths with an uncanny skill for influencing (or manipulating) our behaviour.

Emotive AI combines two important elements: first, the identification of personality as a set of psychological attributes used to tailor messages (and other actions) so as to maximize their ability to influence what we think and do – this is also called affective computing or Emotion AI; and second, an iterative set-up in which a system learns about us and takes us on an influencing journey over multiple interactions.
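To make these two elements concrete, here is a deliberately minimal sketch of such a loop in Python. All names (`estimate_profile`, `choose_action`, and so on) are hypothetical placeholders I use for illustration, not references to any real system or API.

```python
# Minimal sketch of the Emotive AI loop described above.
# All methods are hypothetical placeholders, not a real API.

def emotive_loop(user, model, max_turns=10):
    profile = model.estimate_profile(user.initial_data)    # element 1: psychometric profile
    for _ in range(max_turns):                             # element 2: iterative journey
        action = model.choose_action(profile)              # tailor the message to the profile
        reaction = user.respond(action)                    # observe the user's reaction
        profile = model.update_profile(profile, reaction)  # learn from each interaction
    return profile
```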

When Cambridge Analytica used Facebook likes to classify users by personality and then matched them with ads optimized for different personality types, it provided an early demonstration of the first element of Emotive AI. It is also an example of a malicious application of psychology, as it resulted in a manipulation of users powerful enough to allegedly influence at least two public votes (one presidential election and one far-reaching plebiscite). A positive, benevolent example of statistical modelling of psychological attributes is credit scoring, where psychometric scorecards offer innovative ways to responsibly give the billions of still-unbanked consumers access to credit.

These predecessors of Emotive AI are, however, limited to a one-shot decision problem – there is a single decision (e.g., which ad to show or whether to approve a loan) that takes the individual's behavioural patterns (e.g., a propensity to overspend) as a static attribute.

Emotive AI, by contrast, takes turns. ChatGPT is only an early demonstration of this: while it engages in a conversation (it reacts to an initial question, provides a first answer, and then builds on it based on further inputs from the user), it optimizes "only" for the plausibility of its output. When ChatGPT is observed handing out sensible medical advice or harassing a user, it does not choose its words to optimize a behavioural outcome (e.g., the user changing their nutrition and exercising more in order to measurably overcome a certain ailment); it merely creates responses that appear highly plausible given the ocean of documented conversations between real people on similar subjects that went into the calibration of the underlying Large Language Model.
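The distinction is easiest to see as two different selection criteria. The sketch below is purely illustrative – `plausibility` and `expected_outcome` stand in for scoring functions that a real system would have to learn; neither is an actual ChatGPT mechanism.

```python
# Illustrative contrast (hypothetical scoring functions):
# a language model ranks candidate replies by plausibility,
# whereas an Emotive AI ranks them by the behavioural outcome
# it expects them to produce.

def llm_reply(context, candidates, plausibility):
    # "What would a real conversation most likely say next?"
    return max(candidates, key=lambda reply: plausibility(context, reply))

def emotive_reply(context, candidates, expected_outcome):
    # "Which reply best moves the user toward the target behaviour?"
    return max(candidates, key=lambda reply: expected_outcome(context, reply))
```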

Building an Emotive AI system therefore requires two very explicit steps: First, the system needs to be designed to develop a psychometric profile of the user, with the attributes relevant for optimizing the choices made in the influencing journey. Second, the mechanics of the influencing journey itself need to be designed.

The psychometric profiling usually draws on two sources. There is an initial profile derived from available data (Facebook likes would be an option if Facebook still made them available and younger generations of users had not migrated to other social media platforms), including data created through an onboarding process (such as a more or less explicit psychometric questionnaire). And then there is the continuous creation of psychologically revealing behavioural data. A thoughtful design of the system will build features that are optimized for creating such data. (Behavioural credit scores, which take in new data each month about whether you pay existing debt on time, to what extent you pay off credit card balances, etc., are an excellent example of such behavioural data – but they merely use data that is produced anyhow. The most powerful implementations of Emotive AI take additional steps to create more data, e.g., through app functionality, especially in contexts where behavioural data is otherwise scarce.)
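A minimal sketch of this two-source profiling, assuming the profile is a simple dict of trait scores: the trait names and the 0.2 learning rate are illustrative assumptions, not validated psychometrics.

```python
# Hypothetical sketch: a psychometric profile seeded from an
# onboarding questionnaire and refreshed as behavioural events
# (e.g., monthly repayment data) arrive.

def initial_profile(questionnaire_scores):
    # e.g., {"conscientiousness": 0.7, "impulsivity": 0.4}
    return dict(questionnaire_scores)

def update_profile(profile, trait_signals, learning_rate=0.2):
    # Blend each new behavioural signal into the running estimate
    # (an exponential moving average, so recent behaviour counts more).
    for trait, signal in trait_signals.items():
        old = profile.get(trait, 0.5)
        profile[trait] = (1 - learning_rate) * old + learning_rate * signal
    return profile

profile = initial_profile({"conscientiousness": 0.7, "impulsivity": 0.4})
profile = update_profile(profile, {"impulsivity": 0.9})  # e.g., after a missed payment
```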

The mechanics of the influencing journey involve interaction patterns (i.e., the overall structure of the journey), interventions to influence user behaviour (the "content" to be created by the Emotive AI), and the core of the Emotive AI – the decision algorithms that optimize interventions with respect to a defined outcome. A chat (consisting of user inputs and machine responses) is a basic example of such an interaction pattern; an interactive online course consisting of teaching and exercises to practice the newly taught content (and establish the degree to which users have picked up the skills taught by the course) would be another. If you were to build an app for psychological counselling, you could regularly ask the user to complete standardized assessments of mental health to track progress (e.g., in overcoming bouts of depression) – this feedback on the actual objective of the chat would allow subsequent refinements of the underlying chat algorithm so that it gives not just plausible but actually helpful responses.
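One standard way to frame that decision core is as a contextual bandit. The sketch below is one possible instantiation under that assumption – an epsilon-greedy policy that learns which intervention works best for a given profile segment, using the improvement in a standardized assessment score as its reward. Segment keys, intervention names, and the epsilon value are all illustrative.

```python
import random
from collections import defaultdict

# Hypothetical sketch of the decision core: an epsilon-greedy bandit
# that learns which intervention works best per profile segment,
# rewarded by improvements in a standardized assessment score.

class InterventionPolicy:
    def __init__(self, interventions, epsilon=0.1):
        self.interventions = interventions
        self.epsilon = epsilon
        self.value = defaultdict(float)  # estimated reward per (segment, intervention)
        self.count = defaultdict(int)

    def choose(self, segment):
        if random.random() < self.epsilon:  # explore occasionally
            return random.choice(self.interventions)
        return max(self.interventions,      # otherwise exploit the best so far
                   key=lambda i: self.value[(segment, i)])

    def learn(self, segment, intervention, reward):
        key = (segment, intervention)
        self.count[key] += 1
        # Incremental mean of observed assessment improvements.
        self.value[key] += (reward - self.value[key]) / self.count[key]

policy = InterventionPolicy(["psychoeducation", "exercise", "check_in"])
choice = policy.choose(segment="high_neuroticism")
policy.learn("high_neuroticism", choice, reward=0.3)  # e.g., drop in a depression score
```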

A very exciting application of Emotive AI in my area of expertise (credit) is debt counselling and helping customers achieve financial goals. For a consumer struggling with debt, overcoming the present challenges often takes forming new habits (that conserve financial resources) or achieving a higher self-motivation to repay debt. Where in the past the role of psychological profiling in debt collection was limited to choosing the content of a one-off interaction (taking the customer's behaviour as a static basis for optimizing the decision of what to say or offer), Emotive AI can go far beyond this by actually helping consumers build financial discipline, develop a financial plan, and fundamentally change their behaviour towards a more responsible use of their income and more prudent consumption decisions.
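For instance, a debt-counselling app could track progress against an agreed repayment plan and frame its nudges according to the consumer's profile. This is a hypothetical illustration only: the thresholds and trait names are assumptions for the sketch, not validated cut-offs.

```python
# Hypothetical illustration for the debt-counselling case: compare
# repayments to the agreed plan and pick a nudge framing that suits
# the consumer's psychometric profile.

def repayment_nudge(paid_to_date, plan_to_date, profile):
    progress = paid_to_date / plan_to_date if plan_to_date else 1.0
    if progress >= 1.0:
        return "Celebrate the milestone and reinforce the new habit."
    if profile.get("conscientiousness", 0.5) > 0.6:
        return "Show the concrete gap to plan and propose a catch-up schedule."
    return "Offer an encouraging message plus an automatic-transfer option."

print(repayment_nudge(paid_to_date=300, plan_to_date=400,
                      profile={"conscientiousness": 0.8}))
```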

Undoubtedly, Emotive AI is an extremely powerful technology that can be used for both good and evil. In the best of all worlds, it makes a machine our trustworthy counsellor and advisor, coaching us to be better human beings; in the worst of all worlds, it is a deceitful fraudster that manipulates us to further the interests of others, such as ruthless marketers trying to sell us the most useless things at exorbitant prices. It is yet another example of the need to think hard about how to enforce ethical AI, which is unlikely to be achievable without some form of enlightened regulation.

What are the aspects of your world (e.g., your professional domain or the people around you) where individuals need help to make better decisions for themselves or to change their behaviour to avoid harm? What are the profiles, and the respective markers, of the people most in need of help? How could Emotive AI help them achieve this?

Surya Ramkumar

Executive Director at Contango | Ex-Microsoft | Ex-McKinsey | Board Member | #1 Bestselling Author

1y

Excellent article Tobias, thank you for sharing. The two steps you articulate for building Emotive AI are spot on: 1) a system that develops a psychometric profile, and 2) the mechanics of the influencing journey itself. However, I think this needs to be preceded by an all-important step 0: figuring out the ethics and guidelines. In general, I think it would help, as we develop more powerful tech, to reflect on the ethics (and thus policies) before, or at least in parallel to, the tech innovation – not as an afterthought. With that in mind, I'm curious to hear how you consider this aspect, if you have looked into it? I've worked in the collections unit of a credit card company before, manning the lines, literally calling up delinquent customers one by one. And I can viscerally understand the power of emotive AI in such an area. Even there, where you argue (and I don't disagree) that emotive tech will be largely beneficial, it's a murky field. Fascinating new area, but I'm curious to have more thoughtful minds working on the foundational and ethical topics as much as (if not more than) the tech development itself.
