The recent European Court of Justice (ECJ) Advocate General’s opinion in case C-203/22 is an important development: it addresses how companies that use artificial intelligence (AI) can reconcile the transparency that the GDPR requires for automated decisions with the protection of trade secrets.
The GDPR case on automated decisions and its relevance to AI
In the case at hand, an Austrian citizen was denied a mobile phone contract following an automated credit check conducted by a company. The decision was fully automated, with no human intervention. The individual sought to understand how her personal data was processed and the logic behind the automated decision that affected her. However, the company refused to disclose critical details, citing its algorithm as a protected trade secret under Directive (EU) 2016/943.
The CJEU’s involvement drew attention to two key issues:
- Transparency under GDPR: How much detail about AI-driven decisions must companies disclose to data subjects?
- Protection of trade secrets: Can companies refuse to disclose details of their AI algorithms by invoking trade secret protection?
The opinion of the Advocate General provides important guidance on how these issues intersect and impact the development and deployment of AI technologies.
AI and GDPR: The Right to Transparency
Under Article 22 of the GDPR, individuals have the right not to be subject to decisions based solely on automated processing, including profiling, where those decisions have legal or significant personal implications. This provision is particularly relevant for AI systems, which often make autonomous decisions without human oversight. In addition, Article 15(1)(h) of the GDPR grants individuals the right to “meaningful information” about the logic behind the automated decision (such as an AI decision) that affected them.
For AI developers, this means that transparency is not optional: individuals must be given enough information to understand how their personal data is processed and how AI-driven decisions are made. The opinion clarified that this does not necessarily mean disclosing all the technical details of an algorithm, but rather providing clear and understandable information about:
- The main factors that influenced the decision.
- The weight of those factors.
- The outcome of the decision.
For example, if an AI system evaluates creditworthiness, the company should explain what types of data (such as income or payment history) were used, how those factors were weighted, and how they led to the final decision. This explanation must be accessible and clear enough for the average person to understand.
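The kind of explanation described above can be illustrated with a minimal sketch. This assumes a simple weighted-scoring model; every factor name, weight, and threshold here is a hypothetical placeholder, not a description of any real credit-scoring system. The point is that a company can disclose the main factors, their weights, and the outcome without revealing the full proprietary model.

```python
# Hypothetical sketch of an explainable automated credit decision.
# All factor names, weights, and the threshold are illustrative assumptions.

WEIGHTS = {
    "monthly_income": 0.5,          # higher income raises the score
    "payment_history_score": 0.35,  # good payment history raises the score
    "existing_debt": -0.15,         # existing debt lowers the score
}
APPROVAL_THRESHOLD = 0.6

def explain_decision(applicant: dict) -> dict:
    """Return the outcome plus the information a data subject could be given:
    the main factors, their disclosed weights, and each factor's contribution."""
    contributions = {
        factor: applicant[factor] * weight
        for factor, weight in WEIGHTS.items()
    }
    score = sum(contributions.values())
    return {
        "outcome": "approved" if score >= APPROVAL_THRESHOLD else "denied",
        "score": score,
        # Factors ranked by how strongly they influenced the decision.
        "main_factors": sorted(
            contributions.items(), key=lambda kv: abs(kv[1]), reverse=True
        ),
        "factor_weights": WEIGHTS,  # disclosed weights, not the full model
    }

# Example applicant with values normalized to [0, 1].
explanation = explain_decision({
    "monthly_income": 0.8,
    "payment_history_score": 0.4,
    "existing_debt": 0.5,
})
print(explanation["outcome"])
print(explanation["main_factors"])
```

A response assembled this way gives the data subject the factors, their relative weights, and the result, while the underlying model internals stay undisclosed.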
The role of trade secrets in AI
Many companies using AI view their algorithms as proprietary trade secrets that give them a competitive advantage. The ECJ Advocate General’s opinion recognized the importance of protecting trade secrets, but emphasized that trade secrets cannot be used as an all-encompassing shield to avoid transparency obligations under the GDPR.
Instead, the AG suggested that companies must strike a balance:
- Companies should provide general explanations of how their AI systems work without disclosing detailed, proprietary algorithms.
- Regulators or courts can step in to ensure that companies provide sufficient transparency, while protecting intellectual property.
This sets a precedent for AI developers, signaling that while trade secret protection remains important, it cannot override the rights of individuals to understand how AI-driven decisions are made about them.
Implications for AI development and deployment
The CJEU Advocate General’s opinion has significant implications for businesses and industries that rely on AI for decision-making, particularly in areas such as finance, healthcare, insurance, and recruitment, where AI is often used to make decisions with significant personal impact.
Key takeaways include:
- Explainable AI is non-negotiable: Organizations must ensure that their AI systems are not only accurate, but also explainable. Individuals affected by AI decisions have a right to clear explanations, and companies must be prepared to provide them.
- Balance innovation with compliance: AI developers need to be strategic in protecting their trade secrets while ensuring compliance with transparency obligations under the GDPR. They must aim for a high level of transparency, disclosing enough for individuals to understand decisions without revealing the inner workings of their proprietary systems.
- Building trust in AI: The opinion reinforces the idea that transparency is key to building trust in AI systems. Individuals are more likely to trust AI-driven decisions if they can understand how their data is being used and how decisions are being made.
- Regulatory oversight: The involvement of regulators in the event of disputes is likely to become more common. As AI systems become more complex, courts may increasingly serve as arbiters in balancing transparency and the protection of trade secrets.
The future of AI and privacy
As AI continues to evolve and play a central role in decision making, ensuring compliance with the GDPR will be critical for businesses. The ECJ Advocate General’s opinion in case C-203/22 provides valuable guidance on how companies can achieve this balance. Organizations must prioritize creating AI systems that are not only powerful and efficient, but also transparent, fair, and respectful of individual rights.
These obligations are reinforced by the EU AI Act, which builds on the same principles of transparency and human oversight.