X’s suspension of processing certain personal data for training its AI chatbot tool, Grok, following the order by the Irish Data Protection Commission, mirrors actions taken by the Garante, the CNIL, and the Hamburg privacy authority in the past months. How will developers and deployers of artificial intelligence systems react to this?
As reported by my Irish DLA Piper colleagues in their article, the latest news relates to the dispute against X’s data processing for AI training of its chatbot tool, which originated from a complaint by consumer associations.
X claimed it had relied on the lawful basis of legitimate interest under the GDPR, but the complainants argued that X’s privacy policy, dating back to September 2023, was insufficiently clear about how this applied to processing user data for training AI models like Grok.
Following this challenge, the Irish Data Protection Commission issued an order to suspend data processing for such purposes. A similar scenario occurred a few months earlier following complaints by NOYB against Meta’s reliance on legitimate interest for using data to train AI models. This led to engagement with the DPC and Meta’s eventual decision in June to suspend the relevant data processing.
This chain of events is similar to what impacted OpenAI in March 2023, when the Italian privacy authority, the Garante, ordered the temporary limitation of ChatGPT’s processing of Italian individuals’ data, leading to a month-long suspension of the AI chatbot in Italy. Eventually, the temporary limitation was lifted, but OpenAI had to commit to, among other things, identifying, and where necessary modifying, the legal basis for processing users’ personal data for algorithmic training (Read more on the topic: “Italian case on ChatGPT benchmarks generative AI’s privacy?“).
All of this occurs as the Hamburg privacy authority issued its Discussion Paper on Large Language Models and Personal Data, arguing, among other points, that the information stored by LLMs does not constitute processing of personal data. This position shows an opening by EU privacy authorities towards GDPR-compliant AI training that might be validated in the CNIL’s current consultation on the topic (Read more on the matter: “Is Privacy for Generative AI at a Turning Point?“).
There is no doubt that the lawfulness of AI training and its compliance with the GDPR can be better advocated by adopting a more transparent approach towards individuals regarding:
- how their personal data is processed as part of this process,
- what legitimate interest underlies the data processing,
- why such a legitimate interest is well founded, taking into account the technical measures adopted to minimize data processing.
And the above can be further improved by implementing legal design solutions, which are increasingly endorsed by data protection authorities since they help users actually understand what is done with their data and why the practice is compliant.
We are at a crucial point for the development of artificial intelligence in the European Union. A compromise must be found, but it requires an effort by the multiple parties involved. This exercise is quite new for data protection authorities, which are used to merely enforcing legislation. However, generative AI is likely to be the most prominent technology of the century, and it deserves special treatment.
What is your view on the above? On the topic, you can read the articles on the legal issues of artificial intelligence and how to overcome them that I published HERE.