Artificial intelligence is increasingly an integral part of our daily lives, but it also raises concerns, including legal ones, about potential cognitive "bias" and the algorithmic discrimination that can result from it.
In this article we analyze the legal issues raised by cognitive biases in artificial intelligence systems:
What are machine learning algorithms?
Artificial intelligence can be defined as the ability of a machine to exhibit human capabilities such as reasoning, learning, planning and creativity; it is built from algorithms that learn through so-called machine learning.
With machine learning it is possible to guide and "teach" an algorithm which outputs to generate: just as a child is taught the letters of the alphabet by being shown picture books, an artificial intelligence system can "learn" from a dataset and produce a predefined output (so-called supervised machine learning).
In other cases, again starting from a dataset, the system learns to identify complex processes and patterns without the close guidance of a human mind (so-called unsupervised machine learning): it is as if the child, after seeing the picture books, begins to reason on its own and to produce words and sentences that were never predefined outputs.
This is the breakthrough of generative artificial intelligence, which not only learns but autonomously generates content in light of what it has learned.
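To make the supervised/unsupervised distinction described above more concrete, here is a minimal sketch in Python, assuming scikit-learn and an invented toy dataset: the supervised model is given the labels it must reproduce, while the unsupervised model has to group the examples on its own.

```python
# Minimal sketch of the supervised vs. unsupervised distinction,
# using scikit-learn on a tiny invented dataset (hypothetical example).
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Each row is an example described by two numeric features.
X = [[1.0, 0.2], [0.9, 0.1], [0.2, 0.9], [0.1, 1.0]]

# Supervised learning: the labels ("the letters shown in the picture book")
# are provided, and the model learns to reproduce them on new data.
y = ["A", "A", "B", "B"]
classifier = LogisticRegression().fit(X, y)
print(classifier.predict([[0.95, 0.15]]))  # expected: ['A']

# Unsupervised learning: no labels are given; the model groups the
# examples by the patterns it finds on its own.
clustering = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(clustering.labels_)  # cluster ids, not predefined labels
```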
What are potential cognitive biases of artificial intelligence systems?
It is in these scenarios that we find generative artificial intelligence models, such as the models behind ChatGPT, where so-called cognitive biases can arise. As mentioned, algorithms are nothing more than mathematical models "trained" on human-provided datasets: returning to our example of the child, if the letter "A" is always shown in red, it is more likely that the child, asked to reproduce that letter on a white sheet of paper, will draw it in red.
In the same way, through the datasets initially provided to the algorithm, the algorithm can reproduce "biases" that are simply embedded in the information it was given. Bias can in fact creep in in a number of ways; here we focus on biases relating to preconceptions, opinions, and ethnic, cultural and social issues.
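As a hedged illustration of the "red letter A" analogy, the sketch below trains a tiny classifier on an invented dataset in which colour happens to separate the letters perfectly; the model latches onto the colour rather than the shape (all names and values are hypothetical).

```python
# Minimal sketch of the "letter A is always red" analogy: a spurious but
# perfectly consistent association in the training data gets learned.
# Dataset and feature names are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Features: [is_red, has_crossbar]. In the training data colour separates
# the two letters perfectly, while the shape feature is noisy.
X_train = [[1, 1], [1, 1], [1, 0], [0, 0], [0, 0], [0, 1]]
y_train = ["A", "A", "A", "B", "B", "B"]

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# A red input without the crossbar is still classified as "A":
# the model has learned the colour, not the shape.
print(model.predict([[1, 0]]))  # ['A']
```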
Personnel selection and insurance risk assessment using AI algorithms
One of the settings in which the use of artificial intelligence can, on the one hand, create great efficiencies and, on the other, raise concerns is the workplace, and more specifically personnel selection through machine learning algorithms: if a personnel-selection algorithm is trained on a historical dataset of the candidates who were most successful in a given role, it may treat the attributes those candidates have in common as the most relevant for that role.
This is what happened to a well-known multinational company looking for new hires for an IT role: the algorithm automatically discarded female candidates because it was trained on a dataset collected over the previous decade, in which most of the people hired in tech were male. The algorithm thus identified and exposed the biases of its own creators, demonstrating that training automated systems on biased data leads to non-neutral decisions.
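A minimal sketch of that mechanism, on invented synthetic data (scikit-learn and NumPy assumed): a screening model trained on historical decisions that favoured male candidates ends up scoring two equally skilled candidates differently.

```python
# Hypothetical illustration of how a screening model trained on biased
# historical hiring data reproduces that bias; all data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
# Feature 0: skill score; feature 1: gender (1 = male, 0 = female).
skill = rng.normal(0, 1, n)
gender = rng.integers(0, 2, n)
# Historical decisions favoured male candidates regardless of skill.
hired = ((skill + 1.5 * gender + rng.normal(0, 0.5, n)) > 1.0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)

# Identical skill, different gender: the learned model scores them differently.
same_skill = 0.5
print(model.predict_proba([[same_skill, 1]])[0, 1])  # male candidate
print(model.predict_proba([[same_skill, 0]])[0, 1])  # female candidate
```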
In the insurance landscape, too, artificial intelligence systems are being used with increasing frequency to provide more personalized and more competitively priced products and services, from health and life protection to underwriting and claims evaluation. If not properly developed, even with these goals in mind, artificial intelligence systems can lead to significant risks for people, including discrimination. For example, an insurance risk-assessment algorithm could use customer data such as age, gender, income, occupation and health status to determine the price of insurance and the level of risk associated with the customer, effectively excluding some users.
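A similar, equally hypothetical sketch for the insurance case: even when the protected attribute is left out of the model, a correlated proxy such as claims history can carry the same effect, so that one group ends up with systematically higher predicted premiums (all variables and numbers are invented).

```python
# Hypothetical sketch of proxy discrimination in insurance pricing:
# the protected attribute is excluded, but a correlated proxy remains.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 1000
age = rng.uniform(20, 70, n)
gender = rng.integers(0, 2, n)                        # protected attribute
claims_history = rng.poisson(1.0 + 0.3 * gender, n)   # correlated proxy

# Historical premiums already reflect the proxy, so the model learns it.
premium = 200 + 2 * age + 50 * claims_history + rng.normal(0, 10, n)
X = np.column_stack([age, claims_history])            # gender itself is excluded
model = LinearRegression().fit(X, premium)

predicted = model.predict(X)
print("avg premium, group 0:", predicted[gender == 0].mean())
print("avg premium, group 1:", predicted[gender == 1].mean())  # noticeably higher
```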
Technical and legal remedies to artificial intelligence cognitive biases under the EU AI Act
The risks generated by the above cognitive biases can be limited through actions on both the technical and the legal front. First of all, by acting on the algorithm itself: algorithms should be trained on as diverse and representative a dataset as possible, and the outputs they produce should be monitored constantly so that biases can be identified and corrected at the source. In selection processes, it may also be advisable to involve not only technicians but a variety of experts in the review of the algorithm, so as to prevent bias from being created unintentionally.
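The "constant monitoring of outputs" step can be as simple as comparing decision rates across groups. The sketch below computes a selection-rate ratio and flags a possible disparate impact below a commonly used "four-fifths" style threshold; the function name, data and threshold are all assumptions for illustration.

```python
# Minimal sketch of output monitoring: compare per-group selection rates
# in a model's decisions (hypothetical data and threshold).
import numpy as np

def selection_rate_ratio(predictions: np.ndarray, group: np.ndarray) -> float:
    """Ratio of per-group positive-decision rates (1.0 means parity)."""
    rate_a = predictions[group == 0].mean()
    rate_b = predictions[group == 1].mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Example: decisions produced by a screening model for two groups.
preds = np.array([1, 0, 1, 1, 0, 0, 0, 1, 0, 0])
groups = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

ratio = selection_rate_ratio(preds, groups)
print(f"selection-rate ratio: {ratio:.2f}")
if ratio < 0.8:  # "four-fifths" style threshold, used here as an assumption
    print("potential disparate impact: review the model and its training data")
```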
On the other hand, the artificial intelligence systems described above already fall under the draft AI Act. The AI Act follows a risk-based approach (as does the GDPR), identifying three levels of risk (unacceptable, high and limited). These systems are currently listed in Annex III of the proposed AI Act, which also covers systems used in the labor and employment context, including the recruitment and selection phase. For these systems, there are a number of obligations (e.g., risk management systems, transparency, human oversight) that providers must address from the design and development stage, and compliance with which will have to be carefully assessed before the system is placed on the market.
However, these rules will only apply in the near future; for the time being, it is necessary to rely on a combination of other regulatory provisions, which are limited to transparency obligations, the right to opt out of the processing carried out by these systems, and the right to require that there always be human intervention behind the processing of such data. These provisions can be found in Article 22 of the GDPR and in the new transparency rules transposed in Italy through the so-called Transparency Decree, which impose significant regulatory obligations whenever "automated decision-making or monitoring systems" are used with respect to workers.
All in all, the use of artificial intelligence in personnel selection, in the creation of personalized offers and in access to certain services can lead to significant improvements in efficiency and accuracy. However, it is important to pay attention to the risks of discrimination and cognitive bias that may accompany the use of these algorithms. Only through a combination of transparency, fairness and clear regulatory provisions imposing specific obligations on those who deploy artificial intelligence systems can we ensure that artificial intelligence is used responsibly and fairly.
To support companies in assessing the compliance of artificial intelligence systems in an efficient, cost-effective and expedited manner, DLA Piper has developed a legal tech tool called Prisca; you can view a video presentation of it HERE.
On a similar topic, the article “What the vote of the EU Parliament on the AI Act means?” may be of interest.