The growth of generative artificial intelligence systems has led EU lawmakers to focus on General Purpose AI, like ChatGPT, in drafting the AI Act, which will set the framework governing artificial intelligence in the European Union.
As previously reported, the EU Parliament has already broadened the definition of artificial intelligence for the purposes of the AI Act to cover generative AI systems like ChatGPT and Stable Diffusion. Now, lawmakers are working on specific provisions of the AI Act that will deal with General Purpose AI (GPAI), which includes the above-mentioned large language models that can be adapted for various tasks.
To address this issue, the offices of the European Parliament's co-rapporteurs have proposed a set of obligations for providers of GPAI and responsibilities for the different economic actors involved. The proposed legislation requires GPAI providers to comply with specific requirements initially intended for AI solutions that are more likely to cause significant harm.
These obligations for GPAI providers include:
- ensuring that the design, testing, and analysis of GPAI solutions align with the risk management requirements of the regulation to protect people's safety, fundamental rights, and EU values;
- following appropriate data governance measures when dealing with the datasets that feed these large language models. This includes assessing their relevance, suitability, and potential biases, identifying possible shortcomings, and implementing corresponding mitigation measures;
- conducting external audits throughout the artificial intelligence system's lifecycle to test its performance, predictability, interpretability, corrigibility, safety, and cybersecurity in line with the AI Act's strictest requirements;
- complying with the cost-effective guidance and benchmarking capabilities for measuring compliance of AI systems, including GPAI, which European authorities and the AI Office will develop with international partners;
- in the case of AI models that generate text from human prompts that could be mistaken for authentic human-made content, complying with the same data governance and transparency obligations as high-risk systems, unless someone is legally responsible for the text; and
- registering the artificial intelligence model in the EU database, complying with the same quality management and technical documentation requirements as high-risk AI providers, and following the same conformity assessment procedure.
Also, any AI distributor, importer, or deployer that substantially modifies an AI system, including a GPAI one, will qualify as a provider of a high-risk system and assume the related obligations.
EU lawmakers have also proposed introducing a new article that prevents all providers from unilaterally imposing unfair contractual terms on small and medium-sized enterprises (SMEs) for the use or integration of tools in a high-risk AI system. They have also extended the list of tasks of the AI Office to include issuing guidance on how the AI regulation would apply to fast-changing AI value chains and the related implications for accountability.
While the proposed obligations for GPAI providers and responsibilities for the different economic actors involved are part of the EU's effort to regulate the development and use of AI in a responsible and ethical manner, there are concerns that the resulting regime is excessively burdensome for technologies like ChatGPT, which will be at the core of innovation in the coming years. If the regulatory obligations become unaffordable, the launch of AI systems in the EU might be either prevented or considerably delayed. Additionally, given the pace of the evolution of artificial intelligence, EU lawmakers risk approving an AI Act that will already be out of date.
On a similar topic, you may find the following article interesting: "US Copyright Office allows limited protection of artificial intelligence generated works."