The European Union made a momentous decision when it approved the AI Act, the first legislation regulating the much-discussed artificial intelligence (AI) that promises to revolutionize our lives.
On December 8, 2023, after a marathon three-day trilogue negotiation among the European Commission, the Council, and the Parliament, the EU institutions approved the AI Act to regulate the use of artificial intelligence systems in the European Union.
This outcome was by no means a foregone conclusion: only days earlier, the French, German, and even Italian governments had called for replacing the AI Act with a mere code of conduct, in order to reduce regulatory obligations on European companies and let them compete better in the international arena. European legislators disagreed with this approach, believing (in my opinion, rightly) that a balanced regulation would instead force foreign companies to comply with the AI Act as well, creating a more level playing field that would allow for fairer competition.
The definition of artificial intelligence under the AI Act
In line with this objective, the definition of artificial intelligence systems in the AI Act aligns with internationally recognized criteria, following the OECD guidelines, which define an AI system as follows:
a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
This gives the AI Act a broad scope, since artificial intelligence could impact every sector; the Act excludes from its application only the use of AI in sectors requiring special regulation, such as military and defense, as well as research, innovation, and non-professional use.
A further, highly discussed exception concerns AI systems released as free and open source software. The scope of this exception is so narrow, however, that it will essentially apply only to personal use of artificial intelligence. Indeed, the exemption will not apply:
1. if the AI system (i) is intended for a high-risk use case, (ii) falls under the prohibited uses, or (iii) is a use case subject to transparency requirements; and
2. if the AI system provided under an open source license makes available information such as model usage, architecture, etc.; in this case, the provider’s obligations are limited to providing a “detailed summary” of the training content and to adhering to copyright law. The exemption also does not apply if the AI system is “made available on the market” or “put into service”, i.e., it is used for professional purposes.
In any case, the free and open source exemption does not apply if the AI system is designated as a GPAI model with systemic risk. Taken together, these conditions can be read as a simple set of cumulative rules, sketched below.
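Purely as an illustrative aid (and not the Act’s legal wording), the narrowness of the exemption can be expressed as a small boolean rule set. All field and function names below are hypothetical assumptions made for the sketch:

```python
# A hedged, illustrative sketch only: field names and the rule structure are
# assumptions drawn from the summary above, not the Act's legal text.
from dataclasses import dataclass

@dataclass
class AISystem:
    open_source: bool              # released under a free/open source license
    high_risk: bool                # high-risk use case
    prohibited_use: bool           # falls under the prohibited practices
    transparency_required: bool    # use case subject to transparency duties
    on_market_or_in_service: bool  # "made available on the market" / "put into service"
    systemic_risk_gpai: bool       # designated as GPAI with systemic risk

def open_source_exemption_applies(s: AISystem) -> bool:
    """True only in the narrow residual case: essentially personal, low-risk use."""
    if not s.open_source or s.systemic_risk_gpai:
        return False
    if s.high_risk or s.prohibited_use or s.transparency_required:
        return False
    if s.on_market_or_in_service:  # i.e., professional use
        return False
    return True
```

As the sketch suggests, once every excluded scenario is filtered out, little more than private, non-professional use remains within the exemption.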
The classification of AI systems
The legal framework for AI is characterized by a dual regime distinguishing between AI systems of limited risk and those posing high risk:
- All AI systems are subject to basic transparency obligations to ensure a minimal level of clarity and understanding across the board, including informing individuals that they are interacting with an AI system;
- AI systems identified as carrying systemic risk, i.e., those with the potential to significantly impact health, safety, fundamental rights, the environment, democracy, or the rule of law, are held to comprehensive regulatory standards. These standards require a well-defined apportionment of liability between developers and users, mandating that developers assist users in meeting the assessment criteria for high-risk AI systems;
- The metrics used to determine these classifications, including FLOPs (i.e., the computational power used to train the model), with a relevance threshold of 10^25 FLOPs (which many consider extremely high), are adjustable and can be revised over time; the threshold logic is illustrated in the sketch after this list.
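For illustration only, the threshold works as a simple comparison against training compute; the function name below is a hypothetical convenience, and the threshold itself is revisable over time:

```python
# Illustrative only: the 10^25 FLOPs threshold is the figure reported above;
# the classification metric and its level can be revised by regulators.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def presumed_systemic_risk(training_compute_flops: float) -> bool:
    """Return True if training compute meets the systemic-risk threshold."""
    return training_compute_flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS

print(presumed_systemic_risk(2.1e25))  # True: above the threshold
print(presumed_systemic_risk(5e24))    # False: below the threshold
```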
Additionally, the regulation enacts prohibitions on practices considered detrimental, such as:
- Techniques that manipulate individual cognition and behavior.
- Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases.
- Use of emotion recognition systems in workplaces and educational settings.
- Deployment of social credit scores.
- Biometric processing for the inference of sensitive personal data like sexual orientation or religious beliefs.
- Certain applications of predictive policing targeting individuals.
Stricter obligations for foundation models and general purpose AI
A special regime is introduced for general-purpose AI systems and foundation models. A general-purpose AI is an artificial intelligence system that can be used for many different purposes; it is regulated also when integrated into another high-risk system. Foundation models are large systems capable of performing a wide range of distinctive tasks, such as generating video, text, and images, conversing in natural language, computing, or generating computer code.
The AI Act places a strong emphasis on rights and transparency for foundation models, also making fundamental rights impact assessments mandatory for high-risk AI systems. In particular, foundation models and general-purpose AI systems that may pose a systemic risk are subject to the following obligations:
- Risk Management: Organizations are required to conduct model evaluations using cutting-edge protocols and tools.
- Red Teaming: Adversarial testing shall be carried out and thoroughly documented in order to identify and address systemic risks.
- Cybersecurity: A robust level of cybersecurity shall be ensured for both the AI model and its physical infrastructure, along with an incident reporting obligation.
- Energy Consumption: Entities are obligated to monitor, record, and disclose the actual or estimated energy consumption of the model.
In addition, providers are obligated to:
- adopt a policy that adheres to Union copyright law, employing advanced technologies as needed; and
- prepare a comprehensive summary of the materials used to train the AI model. I assume that the recitals of the AI Act will clearly state that the summary isn’t expected to cover individual training data points, as this would be excessively costly.
There is also a mandatory registration in a European database. This provision, together with the disclosure of the content used by the artificial intelligence system referred to above, could give rise to significant litigation, because rights holders of materials used by AI systems could challenge their use under copyright and privacy legislation.
AI governance under the approved AI Act
In terms of governance and compliance, the AI Act establishes a European AI Office to monitor the most complex AI models. It provides for the creation of a scientific panel and an advisory forum to integrate the perspectives of the different stakeholders. This ensures that regulation is always informed and up-to-date with respect to developments in the field.
A topic of considerable discussion, however, will be what powers are given to local AI authorities and which entities will be appointed as national authorities. As happened with the GDPR, local authorities will not want to give up their powers. The AI Office should reduce the risk of inconsistent approaches across the EU, but political friction between different local authorities cannot be ruled out.
AI Act sanctions
The AI Act also, of course, establishes a system of penalties that, as has been the case with a number of recent European regulations, is based on a percentage of the company’s global turnover or a predetermined amount, whichever is higher: €35 million or 7% for violations of the banned AI applications, €15 million or 3% for violations of the AI Act’s obligations, and €7.5 million or 1.5% for the supply of incorrect information. We will have to review the final wording, since it is more likely that sanctions will be “up to” those amounts, as provided in other EU legislation.
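The “whichever is higher” mechanism can be illustrated with a minimal sketch; the figures reflect the tiers reported above, and the function name is a hypothetical convenience (the final text may frame fines as “up to” these amounts):

```python
# Illustrative only: the higher of a fixed floor and a percentage of
# global turnover, per the penalty tiers reported above.
def penalty_eur(global_turnover_eur: float, fixed_eur: float, pct: float) -> float:
    """Return the higher of the fixed amount and the turnover-based amount."""
    return max(fixed_eur, global_turnover_eur * pct)

# Example: a company with EUR 2 billion global turnover using a banned AI practice:
# 7% of turnover (EUR 140 million) exceeds the EUR 35 million floor.
print(penalty_eur(2_000_000_000, 35_000_000, 0.07))  # 140000000.0
```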
Exceptions are made for smaller businesses, with limited penalties for SMEs and startups. Thus, even on penalties, a balance has been struck between the need to regulate AI and the goal of not restricting the development of this technology in the EU. For the same reason, so-called regulatory “sandboxes” are provided, in which AI solutions can be tested while benefiting from a special regime.
The timeline of the AI Act
The applicability of the approved AI Act will follow a precise timeline, with a transition period of six months for the introduction of the bans, one year for general-purpose AI systems and foundation models, and two years for all other AI systems, distinguished according to the associated risk.
The text of the agreed AI Act is not yet available, and the information above is derived from what has been made public so far. The final text will be published in the Official Journal of the European Union in January 2024, at which time the above deadlines will begin to run. However, there is no doubt that, regardless of the length of the transition period, no company will be willing to adopt AI solutions that do not comply with the AI Act, since it would then be forced to divest from that technology shortly thereafter.
At DLA Piper we build innovative solutions to ensure artificial intelligence compliance in a cost-effective and efficient manner; please reach out to us to learn more. In the meantime, you can obtain more information in two exclusive webinars that we arranged in Italian (register HERE) and in English (register HERE) with our international colleagues, also comparing the AI Act with the US executive order on AI and the current UK draft AI legislation.