Artificial intelligence and machine learning technologies may face considerable hurdles in complying with the privacy obligations set out by the GDPR.
Below is my view on what is a very hot topic at the moment. You can read a detailed overview in English and also watch a summary (in Italian) as part of my videoblog Diritto al Digitale.
What are artificial intelligence technologies?
Wikipedia defines it as follows:
Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, an ideal “intelligent” machine is a flexible rational agent that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.
Therefore, the main features of AI are:
- the collection of large amounts of information, including from the surrounding environment; and
- the ability to take autonomous decisions/actions aimed at maximizing the chances of success.
The perfect example of artificial intelligence is a self-driving car, which needs to take autonomous decisions based on whatever happens on the street. And a confirmation of the current concerns (and prejudices) around AI is a new study from Germany’s Federal Highway Research Institute, which found that the autopilot feature of the Tesla Model S constitutes a “considerable traffic hazard”.
This finding was, unsurprisingly, strongly criticized by Tesla CEO Elon Musk, who said in a tweet that those reports were “not actually based on science” and repeated that “Autopilot is safer than manually driven cars.”
But it is not necessary to consider self-driving cars to face the issue above. It is sufficient to have a machine learning technology able to collect information about individuals, create a profile of them, place them in, for instance, a “credit score” cluster and, on the basis of such classification, decide whether or not a mortgage or a loan shall be granted. This unveils new privacy-related issues that have become more relevant following the adoption of the EU General Data Protection Regulation (GDPR), especially after the publication of the guidelines on automated individual decision-making and profiling by the Article 29 Working Party.
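To make the scenario concrete, here is a minimal, purely hypothetical Python sketch of such an automated decision: a toy scoring function stands in for a trained model, and the outcome takes effect with no human review. All names, weights and thresholds are invented for illustration.

```python
# A minimal, hypothetical example of an automated decision of the kind the
# GDPR targets: a scoring function stands in for a trained ML model and the
# loan decision takes effect with no human review. All figures are invented.

def credit_score(applicant: dict) -> float:
    """Toy scoring rule standing in for a trained machine learning model."""
    score = 0.0
    score += min(applicant["annual_income"] / 10_000, 10)  # income contribution
    score -= applicant["missed_payments"] * 2              # penalise arrears
    score += 5 if applicant["years_employed"] >= 3 else 0  # stability bonus
    return score

def automated_decision(applicant: dict) -> str:
    # "Based solely on automated processing": no human reviews the outcome
    return "granted" if credit_score(applicant) >= 8 else "refused"

applicant = {"annual_income": 45_000, "missed_payments": 0, "years_employed": 5}
print(automated_decision(applicant))  # -> "granted"
```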
The prohibition of automated decisions
The EU Privacy Regulation provides that individuals
“shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her“.
The prohibition applies to decisions that are based “solely” on automated processing but, as stressed by the Article 29 Working Party, human oversight of the conclusion reached by the machine needs to be meaningful. Otherwise, it would just be a way of bypassing the prohibition.
But what is profiling?
The final version of the guidelines of the Article 29 Working Party on automated decision-making and profiling clarified that a mere classification of an individual on the basis of known characteristics does not per se trigger profiling; it depends on the purpose of the classification. In particular, the European data protection authorities give the following example:
“a business may wish to classify its customers according to their age or gender for statistical purposes and to acquire an aggregated overview of its clients without making any predictions or drawing any conclusion about an individual. In this case, the purpose is not assessing individual characteristics and is therefore not profiling.”
Therefore, profiling is not triggered per se by the classification of individuals, but by the use the controller intends to make of the data. In my view, though, this distinction remains unclear, since companies always process statistical data to make business decisions; otherwise, there would be no reason to collect them!
What exceptions apply to the prohibition of automated decisions?
Exceptions to the prohibition of automated decisions falling within the scope above apply when the automated decision:
- is provided for by law, such as in the case of fraud prevention or money laundering checks;
- is necessary for the performance of, or entering into, a contract; or
- is based on the individual’s prior explicit consent.
The applicability of the three exceptions above is not straightforward. For instance, fraud prevention and money laundering checks run by means of a machine learning technology might be considered to go beyond what is strictly provided for by the law.
Likewise, according to the EU data protection authorities, the “necessity” for entering into a contract has to be interpreted narrowly. In particular,
“the controller must be able to show that this profiling is necessary, taking into account whether a less privacy-intrusive method could be adopted“.
However, the same data protection authorities mention, as an example of when the exception would apply, the case in which the technology enables
“to deliver decisions within a shorter time frame and improves the efficiency of the process“
Therefore, efficiency reasons are deemed sufficient to justify the use of automated decision systems, provided that no less privacy-intrusive method could reach the same result.
Is consent a viable option? What happens in case of health-related data?
Automated decision systems can also be used with the prior consent of individuals. But
who would ever grant his consent to be subject to an automated decision?
My personal view is that this option is viable only where such technologies are used for marketing purposes. In that case, individuals will be required to grant their consent to profiling, which will also be performed by means of automated decision systems.
The problem arises, though, when automated decision systems are used to process special categories of data, such as health-related data. In that case, the GDPR does not provide for the exception linked to the necessity for the performance of, or entering into, a contract. Think of insurance companies that need to automatically process health data to assess insurance risk: leaving individuals free to decide whether or not to consent to the automatic processing of their health data might come at a massive cost for those companies.
This might be addressed by means of a local law limiting the scope of the prohibition provided by the GDPR, but such a circumstance would create the very inconsistency among EU Member States that the European General Data Protection Regulation was meant to avoid!
You need to explain the logic followed by the artificial intelligence under the GDPR
The drafting of a privacy information notice, which was a sort of commodity work before the GDPR, is becoming like playing one of the highest levels of “Tetris”… In relation to machine learning, artificial intelligence, and automated decision systems, the GDPR requires controllers to provide details on:
- the usage of such technologies;
- the significance and envisaged consequences for the individual; and
- “meaningful information about the logic involved”.
According to the Article 29 Working Party, the explanation of the logic involved should include details on the rationale behind the decision, or the criteria relied on in reaching it, without necessarily attempting a complex explanation of the algorithms used or disclosing the full algorithm.
The clarification on the level of detail to be disclosed is important because otherwise individuals might understand the logic followed by the machine and act in a manner that allows them to take unfair advantage of it. However, the above also means that it is not possible to adopt a single GDPR-compliant privacy information notice covering every type of machine learning or artificial intelligence technology. The privacy information notice shall outline the main characteristics considered in reaching the decision, the source of this information and its relevance.
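By way of illustration, here is a sketch of what surfacing the “main characteristics considered in reaching the decision” could look like in practice: ranking the criteria by their impact on one specific decision, without disclosing the full model. The features and weights are assumptions made for the example, not a prescribed method.

```python
# A sketch of "meaningful information about the logic involved": rank the
# criteria by their impact on one specific decision, without disclosing the
# full model. The features and weights below are invented for illustration.

WEIGHTS = {"annual_income": 0.5, "missed_payments": -2.0, "years_employed": 1.0}

def explain_decision(applicant: dict) -> list:
    """Return the criteria behind a decision, ranked by absolute impact."""
    contributions = {f: WEIGHTS[f] * v for f, v in applicant.items()}
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

applicant = {"annual_income": 4.5, "missed_payments": 3, "years_employed": 2}
for feature, impact in explain_decision(applicant):
    print(f"{feature}: contribution {impact:+.1f}")
# missed_payments: contribution -6.0   <- main characteristic to disclose
# annual_income: contribution +2.2
# years_employed: contribution +2.0
```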
Individuals can object to the automated decision
Even when the automated decision is necessary for the performance of a contract or was taken with the consent of the relevant individual, individuals will still have the right to obtain human intervention, to express their point of view and to contest the decision. This is commonly known as the right to receive a justification of the automated decision.
The most frequent example is when a mortgage or a job application is turned down because, according to the system, the applicant does not meet certain parameters. This means that a procedure shall be put in place to manually review the matter. However, the main issue for GDPR compliance arises when artificial intelligence becomes so complex, and its decisions are based on such a large number of data points, that it is not actually possible to give a justification for a specific decision.
The solution might be that, in order to ensure GDPR compliance, an artificial intelligence system whose decisions might impact individuals shall be structured in a way that makes it possible to track the reasoning behind each decision. But this also depends on what level of justification would be sufficient to meet the criteria set out in the EU Privacy Regulation. Is it sufficient to say that the applicant for a mortgage did not meet the creditworthiness parameters? Or will it be required to identify the specific parameter, and whether that parameter became relevant only because it was linked to a number of other parameters?
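One possible way to build such traceability, sketched under the assumption of a simple scoring model like the one above: record, for every automated decision, the inputs, the model version and the ranked criteria, so that a human reviewer can later reconstruct and justify the outcome. The record format is illustrative only, not a prescribed GDPR standard.

```python
# A sketch of decision traceability: store, for every automated decision,
# the inputs, model version and ranked criteria, so a human reviewer can
# later justify (or overturn) it. All field names are illustrative.

import json, datetime
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    timestamp: str
    model_version: str
    inputs: dict            # data actually fed to the model
    outcome: str            # e.g. "refused"
    top_criteria: list      # ranked factors behind the outcome

def log_decision(record: DecisionRecord, path: str = "decisions.log") -> None:
    # Append-only log: each line is one reviewable decision
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    timestamp=datetime.datetime.now(datetime.timezone.utc).isoformat(),
    model_version="credit-model-1.3",
    inputs={"annual_income": 45_000, "missed_payments": 1},
    outcome="refused",
    top_criteria=["missed_payments", "annual_income"],
))
```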
Is all the data collected by AI or ML legally processed?
An additional privacy issue is whether all the information about an individual used by an artificial intelligence system has been obtained with that individual’s GDPR-compliant consent, or on the basis of a different legal ground, and whether it is used for the purposes for which it was initially collected.
Indeed, artificial intelligence is by definition based on the processing of very large amounts of data from different sources, which raises GDPR-related risks. And individuals might object to decisions taken about them also on the ground that they are based on illegally processed data.
What happens in case of wrong decisions?
The complexity of artificial intelligence systems is expected to escalate in the coming years, making GDPR compliance even more complex. Such complexity might also make it more difficult to determine when a cyber-attack has occurred and, therefore, when a data breach notification obligation is triggered. This is a relevant circumstance since the EU General Data Protection Regulation introduces the obligation to notify unauthorised access to personal data to the competent privacy authority and to the individuals whose data was affected.
We recently saw the case of the UK telecom provider TalkTalk, which was sanctioned by the Information Commissioner with a fine of £400,000 for not having prevented a cyber-attack that led to access to the data of over 150,000 customers. But what would have happened if TalkTalk had not been able to determine whether a cyber-attack had occurred, and all of a sudden its systems had started taking “unusual” decisions? Given the potentially massive fines provided for by the EU Privacy Regulation, this is a relevant issue.
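As a purely illustrative sketch of how a controller might notice that its system has started taking “unusual” decisions, one could monitor the recent decision distribution against a historical baseline and alert on significant drift. The baseline, window and tolerance below are invented numbers; a real control would be far more sophisticated.

```python
# A hedged sketch of one way to spot "unusual" decisions: compare the recent
# approval rate against a historical baseline and alert when it drifts beyond
# a tolerance. Baseline, window size and tolerance are invented numbers.

from collections import deque

BASELINE_APPROVAL_RATE = 0.40   # assumed historical approval rate
TOLERANCE = 0.15                # assumed acceptable drift

recent = deque(maxlen=500)      # sliding window of recent outcomes

def record_outcome(granted: bool) -> None:
    recent.append(granted)
    if len(recent) == recent.maxlen:
        rate = sum(recent) / len(recent)
        if abs(rate - BASELINE_APPROVAL_RATE) > TOLERANCE:
            # Trigger investigation: possible malfunction or compromise,
            # which may in turn start the breach-notification assessment
            print(f"ALERT: approval rate {rate:.2f} outside expected range")
```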
A common issue with smart technologies, such as Internet of Things devices but also artificial intelligence systems, is the difficulty of identifying the entity liable for a malfunction or a data breach under GDPR rules.
A data protection impact assessment is an obligation and becomes a protection for your business
The GDPR provides that a data protection impact assessment is necessary when there is
“a systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the natural person or similarly significantly affect the natural person;“
It is important to stress that the provision above does not refer only to evaluations that are “solely” based on automated processing. Therefore, under the GDPR, a DPIA is necessary in case of any automated profiling run by means of artificial intelligence, machine learning or other technologies able to produce effects on individuals, even if there is human intervention in evaluating the findings of the machines.
This represents a quite burdensome obligation, but especially in the light of the principle of accountability, it is quite a relevant protection in case of claims. Indeed, a privacy impact assessment will show that the controller considered all the factors involved and put in place adequate protection of individuals’ privacy rights.