The European Parliament has reached an agreement on the AI Act that introduces stricter regulations for foundation models such as ChatGPT and distinguishes them from general-purpose AI.
The AI Act, which aims to regulate AI based on its potential to cause harm, did not initially cover AI solutions capable of handling a wide range of tasks, leading to delays. However, a compromise text agreed by the European Parliament on April 27, 2023, sets out a new approach to artificial intelligence systems like ChatGPT and similar applications.
The proposed regulations would require foundation models to undergo a thorough risk assessment before use in the EU, evaluating the model’s potential to cause harm and its ability to be controlled and monitored. The rules would also require transparency from developers about how the models were developed and how they work.
Foundation models like ChatGPT under the upcoming AI Act
Stanford University defines foundation models as AI models trained on broad data at scale, designed for generality of output, and adaptable to a wide range of tasks. General-purpose AI, by contrast, can be used in and adapted to a broad range of applications beyond its original design. The difference lies in their training data, adaptability, and potential unintended uses.
Generative AI systems like ChatGPT and Stable Diffusion might fall under the category of foundation models, as they are trained on a large amount of data.
The EU lawmakers have proposed a series of requirements for foundation models to ensure the protection of health, safety, fundamental rights, the environment, democracy, and the rule of law. Foundation models that fall into the generative AI category must:
- comply with additional transparency obligations and implement adequate safeguards against generating content in breach of EU law;
- be tested throughout their lifecycle to maintain appropriate levels of performance, interpretability, corrigibility, safety, and cybersecurity;
- implement a quality management system, provide relevant documents up to 10 years after the model is launched, and register the models on the EU database;
- comply with data governance measures, including measures to examine the sustainability of data sources, possible biases, and appropriate mitigation; and
- disclose the computing power required and the training time of the model.
The AI Office is tasked with maintaining a regular dialogue with providers of foundation models about their compliance efforts and with providing guidance on the energy consumption involved in training these models.
How liability will be allocated across the value chain of artificial intelligence
The European Parliament is proposing measures to ensure proportionate sharing of responsibilities along the artificial intelligence value chain to protect fundamental rights, health, and safety.
A downstream operator would become responsible for complying with the AI Act’s stricter regime if it substantially modifies an AI system in a way that qualifies it as high-risk. Even then, the original provider must supply all relevant information and documentation on the AI model’s capabilities to support the compliance process.
MEPs want the EU Commission to develop non-binding standard contractual clauses that regulate rights and obligations consistent with each party’s level of control while considering specific business cases. Unfair contractual obligations imposed on SMEs and start-ups should be banned. Foundation model providers should cooperate with downstream operators throughout the entire service or transfer the trained model along with appropriate information.