The AI Act lists among its prohibited artificial intelligence practices the use of systems that infer emotions in the workplace, but when does an employee survey fall into this category?
The prohibited artificial intelligence practice of inferring emotions under the AI Act
The AI Act has now come into force, and the first approaching deadline is 2 February 2025, when the provisions on prohibited AI practices (and the relevant sanctions) become applicable. One of the prohibited AI practices most heavily discussed at the moment relates to
the use of AI systems to infer the emotions of a natural person in the area of the workplace.
This broad provision applies to any AI system that can infer emotions. Indeed, the ability to infer emotions is also mentioned in recital 14 of the AI Act, which provides that “biometric data can allow for the authentication, identification or categorisation of natural persons and for the recognition of emotions of natural persons.”
However, is this provision of the AI Act limited to artificial intelligence practices that use biometric data?
The limits of an employee survey’s compliance with the AI Act
The issues addressed above are particularly relevant to surveys frequently run on employees to gauge their level of satisfaction, their mood, and, potentially, information about their mental condition. Especially after the pandemic, such surveys have become far more common and are run through software that can analyze and aggregate the data.
These surveys trigger significant data protection and employment law issues across the European Union. But when do they also qualify as a prohibited AI practice?
It remains to be seen how AI authorities will address the issue. Running these surveys and allowing employees to give open-ended answers is risky, since employees might communicate information beyond the purpose of the survey. However, this aspect is more of an employment and data protection law issue.
Indeed, the EU legislator’s use of the term “inferring” seems to refer to cases in which the artificial intelligence system detects information that employees are not willing to share but that can be deduced from their answers. Otherwise, the legislator would have used a term such as “communicating,” and no AI system would be necessary to obtain such information.
We have seen surveys of this kind that rely on predetermined keywords to gauge the mood of the interviewed individual. In such cases, the system already goes beyond what the employee intends to communicate, and a case-by-case analysis is likely necessary. Even then, however, it can be argued that individuals’ emotions are not inferred, since predetermined keywords are not tailored to the specific individual, as the simplified sketch below illustrates.
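To make the distinction concrete, here is a minimal, purely hypothetical sketch of a predetermined-keyword mood check. The keyword lists, names, and scoring are illustrative assumptions, not any real survey vendor’s tool; the point is that the word lists are fixed in advance and identical for every respondent.

```python
# Hypothetical sketch of a predetermined-keyword mood check.
# The keyword lists and scoring below are illustrative assumptions,
# not any real survey vendor's tool. The lists are fixed in advance
# and identical for every respondent: nothing is tailored to, or
# learned about, a specific individual.

NEGATIVE_KEYWORDS = {"stressed", "exhausted", "overwhelmed", "burnout"}
POSITIVE_KEYWORDS = {"motivated", "satisfied", "supported", "engaged"}

def mood_score(answer: str) -> int:
    """Count positive minus negative keywords in an open-ended answer."""
    words = {w.strip(".,!?;:").lower() for w in answer.split()}
    return len(words & POSITIVE_KEYWORDS) - len(words & NEGATIVE_KEYWORDS)

answers = [
    "I feel motivated and supported by my team.",
    "Honestly, I am exhausted and a bit overwhelmed lately.",
]
print([mood_score(a) for a in answers])  # -> [2, -2]
```

Because the word lists are static and population-level, the output reflects predetermined categories rather than information deduced about the specific individual, which is precisely the distinction drawn above.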
All in all, it can be argued that only biometric data allows a system to detect information unique to a specific individual. However, it remains to be seen how the EU regulator will interpret this provision. In any case, given the approaching deadline referred to above, companies should start scrutinizing their current practices to check whether any of them qualify as prohibited AI practices. On the topic, the following article may be useful: “AI Act into Force Today: Is Your Company Ready for Compliance?”