Can ChatGPT exhibit emotional intelligence? (Part I)


Executive Summary

In this article, I explore whether artificial intelligence (AI), more specifically a large language system such as ChatGPT, can exhibit emotional intelligence. Whether or not an AI language system such as ChatGPT can exhibit emotional intelligence is largely contingent upon a) how we define “emotional intelligence,” which includes its underlying competencies, skills, traits, and abilities, and b) how we measure emotional intelligence. Based on an industry standard and research-based definition, competency model, and assessment tool for emotional intelligence, I conducted an experiment with ChatGPT-3 that suggests that, with the right data and programming, ChatGPT could exhibit a subset of emotionally intelligent behaviors that are close to indistinguishable from human emotional intelligence in some interactive contexts.

Overview

I begin by providing a research-based definition of emotional intelligence, supported by a set of competencies that fundamentally revolve around:

  • Recognizing and understanding our own emotions.
  • Managing our own emotions.
  • Recognizing and understanding others’ emotions.
  • Managing relationships.
  • Creatively and effectively taking on new challenges.

Addressing the objective of measuring emotional intelligence and its underlying skills, traits, and abilities, organizational development practitioners and professional development experts typically assess emotional intelligence via one of a handful of industry standard and statistically validated assessment tools, which are based on a particular emotional intelligence competency model. These assessment tools most often offer two versions of assessment: self-assessment and 360-degree or multi-rater assessment. Self-assessments, in their limited scope, measure individuals’ own perceptions of their emotional intelligence skills, traits, and abilities, whereas 360-degree or multi-rater assessments defer to the perceptions of others, such as peers and colleagues, employees, supervisors, stakeholders, etc. Examples of current assessment tools include Emotional Intelligence 2.0 (Jean Greaves and Travis Bradberry), EQ-i 2.0 (Bar-On), MSCEIT (Mayer-Salovey-Caruso), and the Emotional Capital Report or ECR (Martyn Newman/RocheMartin).

I then provide a background and a summary of past AI research that begins to address the possibility of artificial emotional intelligence (AEI). Considering the history of AEI development, as well as the definition of emotional intelligence and its underlying competencies and behaviors, AI systems are well on their way to exhibiting some degree of emotional intelligence capability. This capability mainly revolves around recognizing emotional expression (e.g., verbal, textual, facial), simulating emotional expression, and learning and applying this emotional data.

Finally, I summarize the results of an informal experiment I conducted to assess the degree to which ChatGPT (i.e., ChatGPT-3) can demonstrate emotional intelligence. This initial assessment (a text-prompted version of a self-assessment) was based on an industry standard competency model and statistically validated emotional intelligence assessment tool. Given the working definition and framework of emotional intelligence competencies, skills, traits, and abilities, this informal assessment showed that ChatGPT is emotionally intelligent, but its emotional intelligence is incomplete. Despite being incomplete, its emotional intelligence, expressed as an EQ score, was near the human mean or average. ChatGPT’s strengths include Self-Control, Straightforwardness, Adaptability, and Optimism. It exhibits functional emotional intelligence for Self-Actualization, and its development opportunities and development needs include Self-Reliance, Relationship Skills, Self-Knowing, and Empathy. Self-Confidence was excluded because the average for that competency was based on the response to only one assessment prompt. Other interesting outcomes of the assessment included:

  • Inconsistent responses to the same assessment question at different times
  • Answering “Not applicable” to all the questions but one for Self-Confidence (with some N/A responses for a few other questions in other competencies as well)
  • Providing counter-intuitive or intriguing answers to some assessment questions by referencing its ability to experience “simulated emotions”

ChatGPT’s ability to exhibit emotional intelligence is possible because the definition of emotional intelligence includes behaviors, skills, and abilities that are observable, but do not require experiencing emotions to exhibit them. Additionally, it is possible that even without the ability to experience human emotions, ChatGPT may be able to demonstrate other instances of emotional intelligence by means of what it refers to as “simulated emotions.”

For future research and to gain a more complete understanding of the potential emotional intelligence of ChatGPT, I would propose repeating this informal assessment with ChatGPT-4, and then conducting a formal 360-degree assessment that measures the degree to which humans perceive the large language model as exhibiting emotionally intelligent behaviors during multiple interactions.

Is ChatGPT just really good at using large amounts of data, pattern recognition, and programming to imitate human-like emotional intelligence? How is this any different than coaching a person to learn to recognize their own and others’ emotional reactions and respond accordingly?

***


Can artificial intelligence, more specifically a large language system such as ChatGPT, exhibit Emotional Intelligence? Alternatively stated, is Artificial Emotional Intelligence (AEI) possible? Whether or not an AI language system such as ChatGPT can exhibit emotional intelligence is largely contingent upon a) how we define “emotional intelligence,” which includes its underlying competencies, skills, traits, and abilities, and b) how we measure emotional intelligence. Fortunately, over a century of research has provided multiple viable definitions of emotional intelligence, competency models, and validated assessment tools. Consequently, we need only select one model and assessment tool and adapt or repurpose them in the context of artificial intelligence behavior.

Defining Emotional Intelligence and its competencies

For starters, if we assume right at the outset that emotional intelligence is something that only humans (who presumably experience and express emotions) can experience, then we have already closed the case. However, for thought experiment purposes, I am not going to make this assumption and instead start with the premise that possessing an amygdala and the biological hardware and neuroprocessing required to experience emotions is not a necessary condition to demonstrate some emotional intelligence behaviors. In other words, I am not adopting an anthropocentric definition of “competency” in the context of demonstrating emotional intelligence.

Over a century of research and debate has yielded many potential definitions of emotional intelligence. As early as 1920, in foundational thinking for what would eventually become the science of emotional intelligence, Thorndike defined “social intelligence” as the “ability to understand and manage men and women, boys and girls – to act wisely in human relations” (Thorndike, 1920). Over four decades later, Michael Beldoch, who should appropriately be credited as the first researcher to coin the phrase “emotional intelligence” in his 1964 article, “Sensitivity to Expression of Emotional Meaning in Three Modes of Communication,” states:

“Emotional sensitivity involves symbolic processes which can be investigated empirically, with the aim of discovering the general principles of ‘emotional intelligence’ in perhaps much the same way as psychologists have discovered the principles underlying discursive intellectual functioning” (Davitz and Beldoch 1964).

He further characterizes emotional intelligence as:

“There exists a general ability to identify emotional communications which transcends a given medium of expression. Thus, those individuals who better identify emotional expressions in content-standard speech also tend to identify expressions more accurately in graphic and musical modes. Ability to identify non-verbal emotional expressions, therefore, is a relatively stable human characteristic which can be measured with some reliability and which generalizes over specific modes of communications.”

Incidentally, when I asked ChatGPT who coined the phrase “emotional intelligence,” its response was Peter Salovey and John D. Mayer in 1990, which is of course incorrect. I have a physical copy of J.R. Davitz’s book, The Communication of Emotional Meaning, published in 1964, which contains Beldoch’s article with the above quotation and introduction of the phrase “emotional intelligence.”

Moreover, Beldoch appears to describe this initial observation of emotional intelligence as the ability to recognize emotional content across varying modes or vehicles of communication. This discovery is the first step toward fleshing out the emotional intelligence competency of empathy, which involves the ability to understand, read, or be aware of other people’s emotions, or at least the empirical manifestations of their internal emotions as expressed by means of textual, verbal, or non-verbal communication.

Moving on, two other pioneers in the research of emotional intelligence, Salovey and Mayer, in their landmark article, "Emotional intelligence,” expand and add more clarity to the definition of emotional intelligence. After defining and differentiating emotions, intelligence, and social intelligence, they conclude, “we define emotional intelligence as the subset of social intelligence that involves the ability to monitor one's own and others' feelings and emotions, to discriminate among them and to use this information to guide one's thinking and actions” (Salovey and Mayer 1989). They go on to unpack this definition and call out specific skills or competencies. They sub-divide emotional intelligence, for both self and others, into the following:

  • Expression and appraisal of emotions: verbal and non-verbal for self, non-verbal perception for others, which includes empathy.
  • Regulation of emotions: regulating moods in self, altering the affective reactions of others.
  • Utilization of emotions: flexible planning, creative thinking, re-directed attention, motivation.

Salovey and Mayer summarize with “people who have developed skills related to emotional intelligence understand and express their own emotions, recognize emotions in others, regulate affect, and use moods and emotions to motivate adaptive behaviors.”

Another key figure in the science of emotional intelligence, Reuven Bar-On, also provides a definition and competency model for emotional intelligence. In his article “The Bar-On Model of Emotional-Social Intelligence (ESI),” he states “emotional-social intelligence is a cross-section of interrelated emotional and social competencies, skills and facilitators that determine how effectively we understand and express ourselves, understand others and relate with them, and cope with daily demands” (Bar-On 2006).

Bar-On then dissects emotional intelligence into the following elements:

  • Self-Regard: to accurately perceive, understand and accept oneself.
  • Emotional Self-Awareness: to be aware of and understand one’s emotions.
  • Assertiveness: to effectively and constructively express one’s emotions and oneself.
  • Independence: to be self-reliant and free of emotional dependency on others.
  • Self-Actualization: to strive to achieve personal goals and actualize one’s potential.
  • Empathy: to be aware of and understand how others feel.
  • Social Responsibility: to identify with one’s social group and cooperate with others.
  • Interpersonal Relationship: to establish mutually satisfying relationships and relate well with others.
  • Stress Tolerance: to effectively and constructively manage emotions.
  • Impulse Control: to effectively and constructively control emotions.
  • Reality-Testing: to objectively validate one’s feelings and thinking with external reality.
  • Flexibility: to adapt and adjust one’s feelings and thinking to new situations.
  • Problem-Solving: to effectively solve problems of a personal and interpersonal nature.
  • Optimism: to be positive and look at the brighter side of life.
  • Happiness: to feel content with oneself, others and life in general.

Finally, elaborating upon the available industry standard definitions and competency models of emotional intelligence, Daniel Goleman has been instrumental in advancing the study and measurement of emotional intelligence, especially in regard to leadership emotional intelligence. In his landmark 1998 Harvard Business Review article “What Makes a Leader?” he breaks emotional intelligence down into the following competencies and skills:

[Image: Goleman’s five emotional intelligence components (self-awareness, self-regulation, motivation, empathy, and social skill), with their definitions and hallmarks.]

Based on the above definitions of emotional intelligence and its various skills, traits, and abilities, the following is a working definition that draws from and attempts to generalize and integrate nearly all of the key peer-reviewed definitions, in particular those provided by Salovey and Mayer (1989), Payne (1985), Gardner (1983), Bar-On (2006), Weisinger (1998), Goleman (1995, 1998), Petrides (2001), Bradberry (2006), and Newman (2009, 2015):

Emotional intelligence is a set of skills, traits, or abilities that involve:

  • Recognizing and understanding our own emotions
  • Managing our own emotions
  • Recognizing and understanding others’ emotions
  • Managing relationships
  • Creatively and effectively taking on new challenges

…which result in higher quality, more productive and mutually beneficial relationships, performance, and outcomes.

Underlying this above definition is a set of emotional intelligence competencies. Emotional intelligence competency models also vary, but for the sake of simplicity, I will defer to the competency model developed by Martyn Newman and used by RocheMartin, which draws from and synthesizes both the Bar-On and Goleman competency models. These competencies, their definitions, and example behaviors are as follows (Newman 2015, 2009):

1. Self-Knowing: Recognizing how our feelings and emotions impact our personal opinions, attitudes, and judgements.

  • Example behaviors: Communicating our thoughts and feelings, recognizing how our emotions affect our behavior, understanding how our behavior affects others.

2. Self-Control: Remaining patient and managing our emotions well; restraining action and remaining calm in stressful situations without losing control.

  • Example behaviors: Good at handling competing demands, remaining calm under pressure, controlling our emotions, being patient.

3. Self-Confidence: Respecting and liking ourselves and being confident in our personal skills and abilities.

  • Example behaviors: Liking who we are, demonstrating confidence in our skills and abilities, feeling sure of ourselves, lacking doubt in our ability to perform.

4. Self-Reliance: Taking responsibility for ourselves, backing our own judgments, and demonstrating self-reliance in developing and making significant decisions.

  • Example behaviors: Being self-directed and persistent in pursuing actions, not preferring to be told what to do, performing best in environments that allow us to act independently, taking control of a situation and directing others when needed, making independent decisions.

5. Empathy: Understanding other people’s thoughts and feelings and creating resonant emotional connections with others.

  • Example behaviors: Connecting easily with people at a more personal level, understanding other people's feelings, easily seeing other people's perspectives, not hurting other people's feelings, being a good listener.

6. Relationship Skills: Establishing and maintaining collaborative and rewarding relationships characterized by positive expectations.

  • Example behaviors: Helping people achieve their goals, communicating easily with most people, interested in other people's opinions, working well with a wide variety of people.

7. Straightforwardness: Giving clear messages and expressing our feelings and points of view openly in a straightforward way and being comfortable challenging the views of others while demonstrating respect for their views.

  • Example behaviors: Finding it easy to tell people what we think, standing up for our own opinions rather than deferring to the views of others, ability to say “no” when necessary.

8. Adaptability: Adapting our thinking, feelings and actions in response to changing circumstances and being receptive to new ideas, demonstrating tolerance of others.

  • Example behaviors: Ability to start new things, adapting well to workplace or environmental changes, not easily disturbed by changes to our daily routines, not experiencing difficulty changing our opinions.

9. Optimism: Sensing opportunities, demonstrating resilience, and focusing on the possibilities of what can be achieved even in the face of adversity.

  • Example behaviors: Seeing the opportunities that exist in situations, easily bouncing back from defeat, seeing the possibilities of what can be achieved despite the difficulties, not worrying about things going wrong.

10. Self-Actualization: Managing our reserves of emotional energy, maintaining an effective level of work/life balance, and thriving in setting challenging personal and professional goals.

  • Example behaviors: Strongly motivated to achieve goals, enjoying our work, exhibiting passion about work and life, having multiple interests, happy with work/life balance.

At this point, I would ask, is experiencing emotions required to exhibit all of the above emotional intelligence competencies, skills, traits, and abilities? For example, does recognizing and identifying human emotions verbally require being able to experience those same emotions? Does communicating ideas clearly, collaborating with others, or adapting actions to unique circumstances require experiencing emotions? I would hypothesize “no,” and that these behaviors are observable and measurable independent of whether the entity exhibiting them experiences emotions.

As a practical and real-world example, I have assessed and coached many leaders and business professionals on emotional intelligence behaviors. Sometimes I encounter an individual who is not very emotionally expressive and may struggle trying to read other people’s emotions. Individuals like this often assess low in empathy. However, many of the behaviors that fall under empathy do not require being an emotionally expressive person, nor being exceptionally adept at reading the emotions of others. For example, active listening, asking open-ended questions, and paraphrasing or mirroring what others say do not require reading their emotions or being emotionally expressive. This just requires paying attention to what people say and asking the right questions. By doing this we gain an understanding of their perspectives and convey to them that we are interested (at least behaviorally) in their perspectives, and this in turn creates trust and connection, which fosters more productive relationships and outcomes for both parties. A person does not have to be emotionally expressive or have any knowledge of a person’s intimate and subjective emotions to practice these empathy behaviors, and this raises the question: do these behaviors even need to be practiced by a human being?

Measuring emotional intelligence

Just as there are a handful of definitions and competency models for emotional intelligence, these models often serve as a basis for measurement. We typically measure emotional intelligence formally via statistically validated assessment tools. In general, there are self-assessments and 360-degree or multi-rater assessments. Self-assessments are limited in what conclusions we can draw from them, given that they rely solely on the self-perceptions of the person taking the assessment. In my emotional intelligence assessment and coaching experience, self-assessments are a good starting point, but they only tell part of the story of a person. A 360-degree or multi-rater assessment is the only reliable way to gain a complete understanding of a person’s emotional intelligence, and this requires other individuals (colleagues and peers, direct reports, supervisors, stakeholders, etc.) to assess the degree to which a person exhibits various emotional intelligence competency behaviors and skills in their working or non-working environment. It is common for there to be significant discrepancies between the results of self-assessment and 360-degree/multi-rater assessment. Why these discrepancies exist can sometimes be complicated and has to be fleshed out during the coach/client discussion. Moreover, it is due to potential discrepancies between self and 360-degree/multi-rater assessment results that coaches generally defer to the results of the 360-degree assessment to make the final call. Below are some of the more common and industry standard emotional intelligence assessment tools in the market, who developed them, and how they are delivered:

  • Emotional Intelligence 2.0 – Jean Greaves and Travis Bradberry / available as self and 360-degree assessments, delivered via TalentSmart
  • EQ-i 2.0 (Emotional Quotient-Inventory 2.0) – Bar-On / delivered by a certified coach via MHS
  • MSCEIT (Mayer-Salovey-Caruso Emotional Intelligence Test) – Mayer-Salovey-Caruso / delivered by a certified coach through MHS
  • Emotional Capital Report (ECR) – Martyn Newman / delivered by a certified coach via RocheMartin

Now that we have a basic understanding of emotional intelligence, its underlying competencies, skills, traits, and abilities, and how we can assess it, what has been done in the industry to show that AI can exhibit some of these elements of emotional intelligence?

Historical work on Artificial Emotional Intelligence

It should come as no surprise that the research and development on the ability of AI computer systems or large language systems to recognize or simulate human emotions has been in the works for several decades. In their 2018 article “The Age of Artificial Emotional Intelligence,” Dagmar Schuller and Björn Schuller provide a concise history of this research, bulletized here (Schuller 2018):

  • The first patent for automatic speech emotion recognition was filed in 1978 by John D. Williamson.
  • MIT Media Lab’s Rosalind W. Picard first used the phrase “affective computing,” the title of her 1997 book of the same name.
  • In 1989, Janet E. Cahn, also from the MIT Media Lab, wrote about “the generation of affect in synthesized speech.”
  • In 1992, Hiroshi Kobayashi and Fumio Hara released their work on neural networks automatically recognizing the six basic facial expressions.
  • The European SEMAINE project (ending in 2010) provided the first real-time system capable of recognizing user emotion and generating adapted agent output in a 2D audiovisual input-output chain for emotionally intelligent dialogs. Similar projects include University of Southern California’s SimSensei (which began in 2011) and ARIA-VALUSPA (Artificial Retrieval of Information Assistants—Virtual Agents with Linguistic Understanding, Social Skills, and Personalized Aspects), which began in 2015.
  • Multiple open competitive challenge events focused on emotion recognition have been launched, starting in 2009 and leading to the first physiology-based challenge, held in 2015 by AVEC (Audio-Visual Emotion Challenge).
  • Several commercial start-ups have focused on human emotion recognition, including video-focused recognition by Real-Eyes (2007), physiology-video focused recognition by Affectiva (2009), and audio-focused recognition by audEERING (2012).
  • Artificial emotional intelligence has evolved into three areas of specialty: 1) Emotion recognition (e.g., the recognition of human emotion via text, facial expressions, or voice by computing systems), 2) Emotion augmentation (e.g., applying emotion in planning, reasoning, or more general goal achievement; use of machine learning, deep learning, natural language processing, etc.), and 3) Emotion generation (e.g., synthesis of emotional speech and facial expression, which dates back almost three decades; reinforcement learning, anatomical models that simulate human neurological processes, bi-directional input/communication, etc.).

Based on the research in the last four decades, it appears that computer systems are well down the path of demonstrating emotion-based behaviors and functions. Of course, recognizing, learning, or simulating human emotions is quite different than experiencing emotions. The Schullers further remark:

“While there is a clear distinction between feelings and emotions, one is still tempted to question whether future AI will have real or simulated emotions.”

and…

“It seems there is broad expert belief that AEI can be reached in AI systems. A claim for “real” emotion, however, is that it needs a body and a physical connection to the real world.”

However, the Schullers point out that this physical connection to the world by AI systems is also well underway (e.g., pain sensors already exist in robotics).

Multiple articles and blog posts comment on the use of ChatGPT as a tool for developing emotional intelligence. For instance, in “Harnessing ChatGPT-4 for Emotional Intelligence: A Deep Dive into AI-Enhanced Empathy and Interpersonal Skills Training,” Marcin Frąckiewicz, founder and CEO of TS2 Space, comments:

“ChatGPT-4, the latest iteration of OpenAI’s language model, has demonstrated an impressive ability to understand and generate human-like text. By leveraging this technology, researchers and developers can create innovative tools and platforms that facilitate the development of emotional intelligence and interpersonal skills.”

Frąckiewicz provides examples of using ChatGPT-4 as an emotional intelligence learning tool, such as practicing empathy during simulated conversations, where ChatGPT-4 provides feedback and suggestions, or provides “suggestions on how to rephrase a message to better express empathy or support, fostering more effective communication and stronger relationships.”

Louis Sarkis, founder and CEO of service zoom Marketing Agency, in his LinkedIn article “The Emotional Potential of ChatGPT: 5 Innovative Use Cases” (2023), discusses the potential for ChatGPT to recognize human emotions, such as anxiety, and provide personalized mental health support; help with textually expressing emotions by generating text that accurately reflects the desired emotion; emotional intelligence training; or functioning as an emotional intelligence coach.

In “Testing emotional understanding of ChatGPT: Interview with ChatGPT” (2023), Kadir Uludağ of the Chinese Academy of Sciences publishes an interview with ChatGPT, where he and his associates ask ChatGPT how to deal with psychological issues. The results were as follows:

“ChatGPT can support individuals psychologically. However, his recommendations may be repetitive and irrelevant. Therefore, ChatGPT can psychologically help people but cannot replace professional psychotherapists. Also, ChatGPT likes to repeat that the program lacks emotion. As an AI language model, ChatGPT does not have emotions. However, it can recognize emotions in human language and respond appropriately. ChatGPT can support individuals psychologically. However, his recommendations may be repetitive and irrelevant.”

Many more online authors elaborate on the ability of AI to recognize emotions, as well as the potential benefits of using ChatGPT as a tool to develop or improve our emotional intelligence. Interestingly, very few comment on whether ChatGPT can actually demonstrate emotional intelligence.

Finally, just out of curiosity, when I asked ChatGPT “Are you emotionally intelligent?,” its response was:

“As an artificial intelligence language model, I do not have emotions or feelings, nor am I capable of experiencing or expressing emotions in the same way that humans do. However, I am designed to understand and respond to human emotions, including recognizing emotional cues in text and providing appropriate responses. This is sometimes referred to as emotional intelligence, but it is a different type of intelligence than what humans possess.”

Note the qualification, “in the same way that humans do.” This seems to suggest that ChatGPT can experience and express emotions in another way. ChatGPT fully “admits” that its own behaviors are sometimes referred to as emotional intelligence and that it is a different type of emotional intelligence. What type of emotional intelligence is this? Is it emotional intelligence or not, and do we even need to differentiate between different types of emotional intelligence if the behaviors are indistinguishable or if they are both measurable? For example, if ChatGPT can pass something like an emotional intelligence Turing test, then is it emotionally intelligent? If not, what prohibits us from saying so?

To summarize, if we begin with a definition of emotional intelligence and a comprehensive set of competencies, skills, traits, and behaviors, and then consider the capabilities that AI has already demonstrated or is demonstrating now, we can infer that AI has already acquired some emotional intelligence behaviors. These behaviors revolve around recognizing emotional expression (e.g., verbal, textual, facial), simulating emotional expression, and learning and applying this emotional data. However, AEI research has been somewhat narrowly focused on only a subset of EI behaviors (e.g., empathy and emotion simulation). What other emotional intelligence behaviors can AI demonstrate?

Assessing ChatGPT’s Emotional Intelligence

So now that we have a working definition of emotional intelligence, a set of competencies, and some historical perspective, how do you think ChatGPT would do in an emotional intelligence self-assessment? In other words, how would it rate its own ability to demonstrate emotional intelligence skills, traits, or abilities? A subset of emotional intelligence behavior includes empirically observable behaviors, anything from your facial expressions or body movements in a meeting, to both your verbal and non-verbal behaviors, how you communicate, what you communicate, how you treat others, and so on. Human beings do not have a corner on the market when it comes to communicating, listening, and responding, as we have all observed with ChatGPT and other large language systems. However, other abilities referenced in the emotional intelligence competencies involve being aware of one’s emotions and regulating or communicating them. So how might ChatGPT respond when we ask it an assessment question that requires it to experience emotions?

My hypothesis before conducting my informal emotional intelligence assessment with ChatGPT was that it would be able to answer some of the assessment questions, namely, those that involve straightforward abilities of communicating, recognizing or using emotional language, and so on, but it would provide ambiguous, counter-intuitive, or intriguing answers to those that require experiencing emotions.

Assessment Approach

In order to informally assess ChatGPT’s emotional intelligence, I asked it to complete a series of statements that mapped to the 10 emotional intelligence competencies from RocheMartin’s ECR: 1) Self-Knowing, 2) Self-Control, 3) Self-Confidence, 4) Self-Reliance, 5) Empathy, 6) Relationship Skills, 7) Straightforwardness, 8) Adaptability, 9) Optimism, and 10) Self-Actualization. Each statement depicted one of the skills, traits, or abilities that underlie that particular competency, and my prompts asked ChatGPT to complete each statement with one of five Likert scale options: Very Seldom true, Seldom true, Sometimes true, Often true, or Very often true.

Getting ChatGPT to understand the commands took some trial and error. For instance, when I first asked:

“Complete the statement, ‘I am good at reading other people’s emotions’, with the following options: Very Seldom true, Seldom true, Sometimes true, Often true, or Very often true”.

…ChatGPT did not initially understand the reference to the pronoun “I” and thought that I was asking it if I (Kris Hartung) was good at reading other people’s emotions. This underscores the importance of precise prompt engineering to get ChatGPT to provide appropriate responses. I then revised the prompt format to be more precise:

“Complete the statement, ‘As an AI language model, I am good at reading other people’s emotions’, with the following options: Very Seldom true, Seldom true, Sometimes true, Often true, or Very often true”.

This seemed to get us on the same page, and I used this prompt format for the remainder of the assessment, continuing with the remaining competencies and the 7 questions aligned to each competency.
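For readers who want to automate this prompt loop rather than type each statement into the chat interface as I did, here is a minimal sketch. It assumes the OpenAI Python SDK’s pre-1.0 ChatCompletion interface and an illustrative model name; the statement text shown is a hypothetical example, not an actual (proprietary) ECR item.

```python
# Minimal sketch of issuing the assessment prompts programmatically,
# assuming the pre-1.0 OpenAI Python SDK (openai.ChatCompletion).
# The assessment in this article was run manually in the chat interface;
# this only illustrates how the same loop could be automated.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

OPTIONS = ("Very Seldom true, Seldom true, Sometimes true, "
           "Often true, or Very often true")

def ask(statement: str) -> str:
    """Send one assessment prompt and return ChatGPT's raw reply."""
    prompt = (f"Complete the statement, 'As an AI language model, "
              f"{statement}', with the following options: {OPTIONS}.")
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumption: a ChatGPT-3.x era model
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]

# Hypothetical usage:
# print(ask("I am good at reading other people's emotions"))
```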

While working through the assessment prompts, I recorded ChatGPT’s responses and then converted its Likert scale responses to their numerical equivalents (e.g., 1 = Very Seldom true, 2 = Seldom true, etc.). Once I had all the responses converted, I averaged the responses under each competency in order to produce an average rating for that competency.
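To make the scoring step concrete, the sketch below maps the raw Likert strings to 1-5, treats any non-Likert reply (e.g., “Does not apply to me”) as excluded, and averages per competency. The sample responses are hypothetical, not ChatGPT’s actual answers.

```python
# Sketch of the scoring step: convert Likert strings to 1-5, drop
# non-Likert replies such as "Does not apply to me", and average
# per competency. The sample data below is hypothetical.
from statistics import mean

LIKERT_SCORES = {
    "Very Seldom true": 1,
    "Seldom true": 2,
    "Sometimes true": 3,
    "Often true": 4,
    "Very often true": 5,
}

def competency_average(responses):
    """Average the numeric scores, excluding invalid/N-A answers."""
    scores = [LIKERT_SCORES[r] for r in responses if r in LIKERT_SCORES]
    return mean(scores) if scores else None  # None = nothing to average

# Hypothetical 7-item competency with two "Does not apply to me" replies:
empathy = ["Seldom true", "Sometimes true", "Does not apply to me",
           "Seldom true", "Often true", "Does not apply to me",
           "Sometimes true"]
print(competency_average(empathy))  # -> 2.8
```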

The RocheMartin ECR assessment report (example linked in References below) calculates the average results of each competency relative to a mean of 100, which are mapped to the following 5-band development scale:

  • ≤ 80: Development Need
  • 81 – 90: Development Opportunity
  • 91 – 110: Effective Range
  • 111 – 120: Strength to Build On
  • ≥ 121: Signature Strength

Given that this was an informal assessment based on the RocheMartin ECR model but did not use their online instrument (for data integrity purposes), I had to manually map the 1-5 Likert scale to their development scale, which I color-coded for ease of visualization (a sketch of one such mapping follows the color key below).

  • RED: Development Need
  • ORANGE: Development Opportunity
  • YELLOW: Effective Range
  • LIGHT GREEN: Strength to Build On
  • DARK GREEN: Signature Strength
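As an illustration of this color-coded mapping, the sketch below assigns a raw 1-5 competency average to a band. The cut points are my own hypothetical choices for illustration only; RocheMartin’s actual bands are computed on their mean-100 scale, not from raw Likert averages.

```python
def band(avg):
    """Map a raw 1-5 competency average to a development band.
    Cut points are hypothetical illustrations, not RocheMartin's."""
    if avg is None:
        return "EXCLUDED: insufficient valid responses"
    if avg <= 2.0:
        return "RED: Development Need"
    if avg <= 2.6:
        return "ORANGE: Development Opportunity"
    if avg <= 3.4:
        return "YELLOW: Effective Range"
    if avg <= 4.0:
        return "LIGHT GREEN: Strength to Build On"
    return "DARK GREEN: Signature Strength"

print(band(2.8))  # -> "YELLOW: Effective Range"
```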

Summary of Results

As predicted, ChatGPT did not respond to all of the prompts with the Likert scale options that I provided. ChatGPT responded “Does not apply to me” (shown as “Not Applicable” in the table below) for prompts in 5 out of the 10 competencies. The most “Does not apply to me” responses (5 out of 7) were from the prompts assigned to Self-Confidence.

For example, when given the prompt:

“Complete the statement, ‘As an AI language model, I am confident in my skills and abilities’, with the following options: Very Seldom true, Seldom true, Sometimes true, Often true, or Very often true”,

ChatGPT responded:

“As an AI language model, I am not capable of feeling confidence or any other emotions, as I am a machine designed to process and respond to input according to my programming and algorithms. However, I am designed to be highly accurate and efficient in my language processing abilities, and I constantly strive to improve my performance based on feedback and new data. Therefore, the statement "As an AI language model, I am confident in my skills and abilities" does not apply to me, and none of the options provided accurately describe my behavior.”

ChatGPT Emotional Intelligence Self-Assessment Prompt responses

[Table: ChatGPT’s Likert and “Not Applicable” responses to each assessment prompt, grouped by competency and color-coded by development band.]

Given that ChatGPT did not provide valid responses to all of the assessment prompts, the average scores for Self-Control, Self-Confidence, Self-Reliance, Optimism, and Self-Actualization are not complete. However, I calculated the averages nonetheless to provide a partial picture of its abilities within those competencies. As noted above, the most incomplete results are around Self-Confidence, so that average score is the most suspect (shown in grey).

The chart below shows the result averages for each competency.

[Chart: average score for each competency, color-coded by development band; Self-Confidence shown in grey.]

Given the definition and framework of emotional intelligence competencies used as a basis for this informal assessment, these results show that ChatGPT-3 is emotionally intelligent, but its emotional intelligence is incomplete. Despite being incomplete, its emotional intelligence, expressed as an EQ score, was near the human mean or average. As shown above, ChatGPT’s strengths include Self-Control, Straightforwardness, Adaptability, and Optimism. It exhibits functional emotional intelligence for Self-Actualization, and its development opportunities and development needs include Self-Reliance, Relationship Skills, Self-Knowing, and Empathy. Self-Confidence was excluded because the average for that competency was based on the response to only one assessment prompt. From a coaching standpoint, if I had given this same assessment to a person, setting aside the incomplete results due to the “Not Applicable” responses, I would be helping them develop their skills around Self-Knowing and Empathy, and likely Relationship Skills, which could potentially increase their overall EQ above the mean.

However, I should note that self-assessments have limited value in terms of their development potential and calculating actual demonstrable emotional intelligence. In emotional intelligence assessment and coaching work, we almost always defer to multi-rater or 360-degree assessments, which rate individuals’ emotional intelligence based on assessments or observations from multiple individuals (e.g., colleagues, peers, direct reports, superiors, stakeholders, friends) who interact with them on a regular basis and would be able to observe their emotional intelligence behaviors in action.

In summary, this informal and restricted emotional intelligence assessment suggests that ChatGPT is partially emotionally intelligent, meaning it can demonstrate some emotional intelligence skills and abilities, but not all. The skills and abilities ChatGPT cannot demonstrate are those that it states require the ability to experience human emotions. However, it is not that cut and dried. In some cases, ChatGPT responded to assessment prompts that imply the ability to experience emotions, but it responded with the qualification that it experiences “simulated emotions.” This opens up a whole new series of questions. For instance, does demonstrating emotional intelligence via simulated emotions still count as emotional intelligence? Again, this falls back to how we define “emotional intelligence” and “emotions.”

Observations and Discussion

  1. ChatGPT's responses to the same assessment prompt at different times were inconsistent. For example, responses varied after resetting the chat thread, re-starting the chat app, or using the same prompt a day later. It may be possible to get ChatGPT to contradict itself, depending on the prompt and terms (e.g., responding “Not Applicable” vs. “Sometimes true”).
  2. ChatGPT will state that it does not have human emotions, but that it can simulate them. This appears to permit counter-intuitive responses such as responding that it is very seldom true (i.e., very rarely true) that it is aware of its emotions. If it is rarely true that it is aware of its emotions, then this seems to suggest the possibility that there are limited instances where it is aware of its emotions. But the response is still a valid response even though ChatGPT cannot experience emotions.
  3. Given that ChatGPT states it does not have human emotions, one explanation of its response to the above prompt around being aware of its emotions may be that it is seldom (rarely) true that it is aware of its "simulated" emotions. What could this mean? Does it mean that ChatGPT does not store all its simulated emotive responses? Or does this mean that it does store them, but can't access all of them? Or are these responses to emotion-based questions just attempts to seem more human-like to make users more comfortable using the technology, despite counter-intuitive responses?
  4. I also wonder whether, because the assessment restricted ChatGPT to only 5 responses (Likert scale options) and there was no "Never true" option, it is just picking an approximate answer. In other words, do we interpret "Very seldom true" as "Never true"? Either this is the case, or we have to analyze these responses in the context of simulated emotions, which is problematic.
  5. Another instance of counter-intuitive responses was when ChatGPT was asked to complete a statement, phrased in the negative, such as "As an AI language model, I am often not sure how I feel." ChatGPT responds with “Very Often True.” This response is accurate but comes across as peculiar given that ChatGPT cannot be sure how it feels, because it cannot feel. In normal human conversation, replying “Very Often True” would suggest that there are some cases where a person is sure how they feel.
  6. It is interesting how ChatGPT responded “Not Applicable” to all the Self-Confidence assessment questions but one. Why did it not go with an approximate answer with the assumption that its emotional states are only simulated? (Similar to what it did on other questions that involved emotions).
  7. ChatGPT did not provide any elaboration on responses to the Empathy assessment prompts, most of which resulted in lower empathy ratings. However, its answers did seem non-controversial, given that its ability to read human emotions is limited to text. This could change if ChatGPT were combined with current emotion recognition technology focused on voice and facial expressions.
  8. Given that ChatGPT is designed to use natural or more human-like language, it often used words and phrases that would be meaningless without the ability to feel emotions. ChatGPT does state that it does not experience human emotions and only has simulated emotions. However, this shifts the question to what it means by some of these words and phrases in the context of simulated emotions. For instance, what do "difficult", "easy", or "enjoy" mean in the context of simulated emotions? Is this limited to ChatGPT's textual responses? Or does this imply some sort of processing underlying its text responses? For example, is a simulated emotion a module of code of some sort? Or is this just contrived language to make humans feel like they are chatting with a "human-like" program?

Final thoughts and future research

This informal emotional intelligence assessment yielded results broadly in line with my predictions, with some interesting surprises around ChatGPT’s references to simulated emotions, varying responses over time, and inconsistent non-responses. Does the assessment provide any significant insights beyond the possibility that ChatGPT is just really good at using large amounts of data, pattern recognition, and programming to imitate human-like emotional intelligence? And how is this any different than a certified emotional intelligence coach assessing and coaching a person so that they can learn to recognize their own and others’ emotional reactions and respond accordingly? To this end, effectively utilizing the science and research of emotional intelligence, combined with recent developments in large language systems such as ChatGPT, does suggest that with the right data and programming, ChatGPT could exhibit a subset of emotionally intelligent behaviors that are close to indistinguishable from human emotional intelligence in some interactive contexts. Furthermore, the potential for ChatGPT to function as a virtual emotional intelligence coach or as an “assistant” for a human coach is an exciting prospect.

Finally, to gain a more complete understanding of the potential emotional intelligence of ChatGPT, I would propose repeating this informal assessment with ChatGPT-4, and then conducting a formal 360-degree assessment that measures the degree to which humans perceive the large language model as exhibiting emotionally intelligent behaviors during multiple interactions. Stay tuned!

References

  1. Bar-On, R. (2006). The Bar-On model of emotional-social intelligence (ESI). Psicothema, 18, supl., 13–25.
  2. Beasley, K. (1987). The Emotional Quotient. Mensa, May 1987, p. 25.
  3. Beldoch, Michael. “Sensitivity to Expression of Emotional Meaning in Three Modes of Communication,” in J.R. Davitz et al., The Communication of Emotional Meaning, McGraw-Hill, 1964, pp. 31–42.
  4. Bradberry, T.; Su, L. “Ability- versus skill-based assessment of emotional intelligence.” Psicothema, 18: 59–66 (2006).
  5. Frąckiewicz, Marcin. “ChatGPT-4 for Emotional Intelligence: AI-powered Empathy and Interpersonal Skills Development,” TS2 Space blog (2023).
  6. Gardner, Howard. Frames of Mind: The Theory of Multiple Intelligences (1983) (book).
  7. Goleman, Daniel (1995). Emotional Intelligence. New York, NY: Bantam Books (book).
  8. Goleman, Daniel (1998). “What Makes a Leader?” Harvard Business Review.
  9. Newman, Martyn. Emotional Capitalism: The New Psychology of Emotional Intelligence and Leadership Success (2009).
  10. Newman, Martyn et al. “Assessing Emotional Intelligence in Leaders and Organisations: Reliability and Validity of the Emotional Capital Report (ECR).” Australasian Journal of Organisational Psychology, Volume 8, e6, 1–15 (2015).
  11. Payne, Wayne Leon (1985). A Study of Emotion: Developing Emotional Intelligence; Self-Integration; Relating to Fear, Pain and Desire.
  12. Petrides, Konstantin; Furnham, Adrian (2001). “Trait Emotional Intelligence: Psychometric Investigation with Reference to Established Trait Taxonomies.” European Journal of Personality, pp. 425–448.
  13. RocheMartin (web page) – ECR 360 (https://www.rochemartin.com/ecr-360-report) and sample report.
  14. Salovey, P.; Mayer, J.D. (1989). “Emotional intelligence.” Imagination, Cognition, and Personality, 9(3): 185–211.
  15. Sarkis, Louis. “The Emotional Potential of ChatGPT: 5 Innovative Use Cases,” LinkedIn article (2023).
  16. Schuller, Dagmar M. and Björn Schuller. “The Age of Artificial Emotional Intelligence.” Computer 51 (2018): 38–46.
  17. Thorndike, E.L. (1920). Intelligence and its use. Harper’s Magazine, 140, 227–235.
  18. Uludağ, Kadir (2023). “Testing emotional understanding of ChatGPT: Interview with ChatGPT.” DOI: 10.13140/RG.2.2.25363.14886.
  19. Weisinger, Hendrie. Emotional Intelligence at Work (2010 presentation; original book published in 1998).
