Is sentiment analysis using an AI system a prohibited AI practice?
Tomasz Zalewski
Partner at Bird & Bird | Tech-Savvy Lawyer | Assists Clients with IT Contracts, AI Projects & Cybersecurity | Public Procurement Expert | Making Legal Easy in the Digital World
The Polish version was sent yesterday.
TL;DR: Unfortunately, there is no clear answer. It may or may not be a prohibited practice, depending on whose sentiment is being analysed and how, but this certainly needs to be reviewed before 2 February 2025.
The AI Act is full of various inaccuracies and inconsistencies in the use of terminology. We are slowly discovering these as we conduct analyses related to specific questions. Here is one of those conundrums.
Is an AI system for sentiment analysis a ‘system to infer emotions of a natural person’?
As early as 2 February 2025, the AI Act's provisions on prohibited AI practices will start to apply. One of these is the practice of using AI systems to infer emotions of a natural person in the areas of workplace and education institutions. The provision reads as follows:
Article 5(1)(f)
The following AI practices shall be prohibited: (...)
the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons;
This raises the question of whether an AI system for sentiment analysis could be such a prohibited system. The question has very practical significance, as sentiment analysis is widely used in business, especially in call centres.
What is sentiment analysis?
Sentiment analysis means examining the emotional overtones of a statement in a communication. It is widely used to analyse statements on specific topics in order to assess the attitude of the author of a given statement towards that topic.
The most common result of sentiment analysis is the classification of an utterance as positive, negative or neutral, but there are also systems that allow a more detailed analysis (e.g. on a scale from zero to 100) and even the detection of specific emotions of the author (e.g. frustration, indifference or anxiety).
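To make this concrete, here is a minimal sketch of what such a classification looks like in practice. It uses the open-source Hugging Face transformers library; the library choice and the example sentences are my own illustrative assumptions, not something taken from the AI Act:

```python
# Minimal sentiment analysis sketch using Hugging Face `transformers`
# (assumes: pip install transformers torch).
from transformers import pipeline

# Loads a default English sentiment model on first use.
classifier = pipeline("sentiment-analysis")

results = classifier([
    "The onboarding process was smooth and the team was helpful.",
    "I have been waiting three weeks for a reply.",
])

for result in results:
    # Typical output: {'label': 'POSITIVE', 'score': 0.9998}
    # or {'label': 'NEGATIVE', 'score': 0.9991}
    print(result)
```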
The object of sentiment analysis can be statements in any form, whether text or audio. Sentiment analysis is typically used in media monitoring, the classification of communications from customers in terms of urgency or in market analysis, but some organisations also use it in HR processes to, for example, measure employee satisfaction or to assess the quality of calls in a call centre. Sentiment analysis can also be used in an educational context, e.g. to analyse students' evaluations of their classes.
Since sentiment analysis is the study of the emotional overtones of a statement, it would seem that there is a simple answer to the title question - yes, an AI system for sentiment analysis can be an AI system to infer emotions of a natural person in the workplace or educational institutions. However, on closer inspection, it becomes apparent that the answer needs to be more nuanced.
First of all, it needs to be verified whether the sentiment analysis concerns the emotions of an individual in a workplace situation. Sentiment analysis of social media statements about a company, for example, will not be covered by this prohibition.
Is an ‘AI system to infer emotions of a natural person’ the same as an ‘emotion recognition system’?
In the AI Act, we do not find a definition of an ‘AI system to infer emotions of a natural person’, but there is instead a definition of an emotion recognition system, which reads as follows:
Article 3(39)
‘emotion recognition system’ means an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data;
The wording of these terms is very similar - so is the system referred to in Article 5(1)(f) (i.e. an AI system to infer emotions) the same as an emotion recognition system or not? And if so, why does Article 5(1)(f) not use the aforementioned defined term, but instead says that the prohibition applies to ‘AI systems to infer emotions‘?
Unfortunately, the AI Act is not consistent in its use of terminology. It contains several more terms for systems that recognise emotions.
Thus, recital 44 of the preamble - which is the rationale for the prohibition in Article 5(1)(f) - speaks of AI systems ‘intended to be used to detect the emotional state of individuals in situations related to the workplace and education.’
In turn, Annex III, point 1, identifies as high-risk systems ‘AI systems intended to be used for emotion recognition’.
So we have four different terms that seem to describe the same thing:
- ‘AI systems to infer emotions of a natural person’ (Article 5(1)(f)),
- ‘emotion recognition system’ (Article 3(39)),
- AI systems ‘intended to be used to detect the emotional state of individuals’ (recital 44),
- ‘AI systems intended to be used for emotion recognition’ (Annex III, point 1).
In my opinion, we should not attribute special significance to the use of different terminology. The use of different terms was unlikely to have been intentional; it is more likely a result of the rapid pace at which the final version of the regulation was prepared. All of the terms given above should therefore be understood as referring to emotion recognition systems as defined in Article 3(39).
If a ‘system to infer emotions of a natural person’ is really an ‘emotion recognition system’, what is the final scope of systems that will be prohibited?
The definition of an emotion recognition system has three elements:
- the system identifies or infers,
- emotions or intentions of natural persons,
- on the basis of their biometric data.
What are emotions under the AI Act?
Recital 18 of the preamble indicates that emotions are
‘happiness, sadness, anger, surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction and amusement.’
However, the same recital indicates that emotions do not include physical states such as pain or fatigue.
Emotions are also not readily apparent expressions, gestures or movements, such as:
- basic types of facial expression such as a frown or a smile,
- gestures such as hand, arm or head movements,
- characteristics of the person's voice such as a raised tone or a whisper
unless such forms of expression or gestures are used to identify or infer emotions.
Note that the identification of sentiment can vary in nature. It can be either a nuanced description of specific predefined emotions detected in given content, or a summary indication of the emotional attitude of the content's author without naming specific emotions (e.g. on a positive/neutral/negative scale). The capabilities of the system used will therefore also help determine its status as a potentially prohibited system.
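To illustrate this distinction, the earlier sketch can be extended with a per-emotion classifier. The model checkpoint named below is merely one example of a publicly available emotion model and is an assumption for illustration, not a recommendation:

```python
# Contrasting the two output granularities (illustrative sketch).
from transformers import pipeline

text = "I have asked for this report three times now."

# 1) Summary polarity only: no specific emotion is named.
polarity = pipeline("sentiment-analysis")
print(polarity(text))  # e.g. [{'label': 'NEGATIVE', 'score': 0.99}]

# 2) Per-emotion classification: the output names discrete emotions
#    (anger, sadness, surprise, ...), which sits closer to 'inferring
#    emotions' in the sense of Article 5(1)(f).
emotions = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",  # example checkpoint
    top_k=None,  # return a score for every emotion label
)
print(emotions(text))
# e.g. [{'label': 'anger', 'score': 0.78}, {'label': 'sadness', ...}, ...]
```

Note that both sketches operate on text alone; as discussed in the next section, whether the input constitutes biometric data is a separate element of the test.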
What is biometric data?
Another element of the definition of an emotion recognition system is that identification or inference of emotions is based on biometric data.
The concept of biometric data is defined in the AI Act:
Article 3(34)
‘biometric data’ means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data;
This means that identification or inference of emotions based on non-biometric data (e.g. a text-only analysis) does not fall within the scope of this concept, whereas an analysis based on, for example, a typing pattern, handwriting or the voice does meet the criterion.
Summary
In this material, I have only hinted at the main topics related to the classification of AI systems for sentiment assessment against the background of the prohibition on using AI systems for emotion recognition. Much more could be written on this subject. If you are interested, I refer you to Bird & Bird colleague Nora Santalu's article ‘What is an Emotion Recognition System under the EU's Artificial Intelligence Act? Part 1 - A machine that "understands" your Monday blues!’, where these issues are addressed in more detail.
It is worth mentioning that we do not yet have guidance on how to understand many of the concepts described above. According to Article 96 of the AI Act, the European Commission is expected to produce guidelines on the practical implementation of the Regulation and, in particular, on the prohibited practices referred to in Article 5. These are expected to be available even before the ban comes into force.
Regardless of the current lack of ‘official’ guidelines, companies that use AI systems for sentiment analysis must be aware that, as of 2 February 2025, such systems may be considered systems to infer emotions of a natural person in the areas of workplace and education institutions, and their use therefore a prohibited practice.
If you believe that your artificial intelligence systems used to analyze sentiment may be deemed to constitute a prohibited practice, I invite you to contact me.
If you enjoyed this newsletter, please subscribe and follow me on LinkedIn.