The Italian data protection authority (the Garante) issued a €20 million privacy fine against Clearview AI for the unlawful processing of the personal data of individuals in Italy.
Clearview AI's technology and why it is helpful for police investigations
Clearview AI runs a facial recognition search engine and claims to have a database of over 10 billion images of individuals’ faces worldwide, extracted from public web sources via web scraping (such as news sites, social media and online video). The collected images are processed using biometric techniques to extract each face’s identifying characteristics, which are then transformed into “vector representations”.
The technology behind the service aims to improve public safety, reduce investigation times, and assist law enforcement agencies in identifying criminals. The success of this technology was confirmed by the supply agreements entered into with several police departments in the United States and with the FBI. Cases that had remained unsolved could now be closed with the support of the Clearview AI system. And now – according to the press – Ukraine has started using Clearview AI’s facial recognition during the war.
Given the above, there is no doubt that the system is efficient and can be helpful in supporting police investigations. Facial recognition is not flawless: there have been cases of people wrongfully accused because of mistakes made by an algorithm. But any technology needs to be supported by human assessment, and human error is usually the primary source of mistakes.
The privacy challenges against Clearview AI's facial recognition system
As had already occurred with the challenges raised by the Swedish and French data protection authorities, the Garante challenged Clearview AI over the unlawful processing of the personal data of individuals in Italy.
In particular, it appeared that the personal data held by the company, including biometric and geolocation data, were processed unlawfully, without an adequate legal basis, which could not be legitimate interest, given the special categories of personal data involved. Moreover, according to the Garante, the company violated the obligations of transparency (having failed to adequately inform users), of purpose limitation (having used users’ data for purposes other than those for which they were published online), and of storage limitation (having failed to establish data retention periods). Besides, the Italian data protection authority challenged the failure to appoint a representative in Italy.
Clearview AI argued that the GDPR did not apply to the company. However, the service had been offered during a trial period to EU police authorities, which makes the targeting criterion under Article 3(2)(a) of the GDPR applicable. And the Garante argued that creating a database of images meets the monitoring criterion under Article 3(2)(b) of the GDPR.
On this basis, the Italian data protection authority issued a fine of €20 million against Clearview AI, equal to the maximum non-turnover-based sanction provided by the GDPR, without detailing the criteria used to calculate the amount.
My view on the case
Negative publicity against biometric facial recognition systems is quite widespread, given the risk of monitoring of individuals and the invasiveness of the measure. Italy, like other countries, introduced a moratorium on facial recognition biometric systems in public places, or places open to the public, until December 31, 2023, except for processing carried out by competent authorities to prevent and suppress crimes or enforce criminal penalties. Indeed, the goal is to have the new EU regulation on artificial intelligence in place, which would create a pan-European framework on the matter.
In the meantime, there is no doubt that services like Clearview AI are helpful for police investigations. And I have strong concerns about the arguments raised by the Garante as to the applicability of the GDPR. The company could have implemented safeguards to limit the risk of challenges under EU privacy laws. Apart from the offering of the service to EU authorities during a trial period that is now over, the database could be set up upon request of the relevant authorities, which would trigger the applicability of the legal basis regarding processing “necessary for reasons of substantial public interest”.
The solution cannot be to ban such technologies, since the benefits for police investigations are considerable and the underpinning public interest in supporting them is certain. However, the same technology cannot be used for commercial purposes. Data protection authorities should instead cooperate with these businesses to find technical solutions that protect individuals’ rights while enabling the public interest to be pursued.
On a similar topic, you may find interesting the article “Is the draft EU Artificial Intelligence Regulation on the right track?”.
Photo by Maksim Chernishev on Unsplash