The EU Parliament’s Committees have agreed on the AI Act. However, Google’s decision to postpone the launch of Bard in the European Union prompts questions about the direction we’re headed in.
The Compromise Text of the EU AI Act regulates foundation models (like ChatGPT, GPT-4, PaLM, etc.), which are AI models trained on extensive data and designed for a wide range of tasks.
Here are some insights on the regulations applicable to them:
- Providers of generative AI foundation models, such as Midjourney, DALL-E, and ChatGPT, must comply with transparency obligations, safeguard against breaches of EU law, and publicly document the use of copyright-protected training data 🤔 Notably, the text does not cross-reference the text and data mining exemption of the EU Copyright Directive, which could have helped resolve such disputes.
- Prior to releasing a foundation model, providers must comply with the EU AI Act. This involves risk management in areas such as health, safety, and fundamental rights, with the involvement of independent experts. Providers should also use datasets subject to robust data governance measures 🤔 These broad principles might lead to inconsistent interpretations by authorities and courts. The approach mirrors the GDPR, but the GDPR was built on twenty years of privacy legislation in which its principles had already been interpreted, whereas there has never been comparable legislation on artificial intelligence.
- Providers must design foundation models to meet high standards of performance, safety, and cybersecurity, assessed through independent expert evaluation, documented analysis, and extensive testing. Providers should retain compliance documentation for ten years after release. They also need to provide extensive technical documentation and establish a quality management system to ensure compliance with the Act’s requirements 🤔 ISO technical standards will gain relevance in guiding providers and users through this process.
- AI Impact Assessments should clearly outline the system’s intended purpose, scope of use, and the categories of affected persons 🤔 These assessments touch on multiple legal areas, such as privacy and intellectual property, so finding efficient, cost-effective ways to carry them out will be critical.
This expansive legislation has already drawn reactions from AI developers, such as Google’s decision to delay Bard’s EU launch. One can’t help but wonder whether the AI Act agreed by the Committees of the EU Parliament and the recent actions by the Italian data protection authority against OpenAI’s ChatGPT influenced this decision. If so, it sends a cautionary message to EU legislators and authorities: innovation in the EU may face delays.
On a similar topic, you may find the following article interesting: “The Italian case on ChatGPT benchmarks generative AI’s privacy compliance?”.