On December 6, 2022, the Council of the EU took a bold step toward regulating artificial intelligence systems by adopting its common position (general approach) on the AI Act.
This ambitious legislation aims to advance the European Union’s digital transformation by creating a safe and ethical digital realm for both citizens and businesses. By implementing this act, the EU commits to upholding fundamental rights and values as it paves the way for a brighter, more technologically advanced future.
The regulatory track and targets of the proposed AI Act on artificial intelligence
As the European Union strives to become a world leader in artificial intelligence, it is crucial that citizens and businesses have confidence in the ethical and responsible use of AI systems. To achieve this, the EU is determined to ensure that these systems comply with the relevant regulations and meet high technical and ethical standards. In April 2021, the European Commission took a major step toward this goal by publishing its proposal for the AI Act. After examination by the Council and the European Parliament, the Council recently adopted its common position with amended provisions intended to facilitate the act’s implementation. With these measures in place, the EU is poised to establish itself as a beacon for responsible and innovative use of artificial intelligence.

The legislation addresses both providers of artificial intelligence systems, defined as “any natural or legal person, public authority or other body that develops or causes to be developed an artificial intelligence system,” and users, identified as “any natural or legal person that uses an artificial intelligence system under its own authority.”
The approach taken in the act distinguishes artificial intelligence systems on the basis of the level of risk they pose, dividing them into three categories (a short illustrative sketch follows the list):
- unacceptable level of risk, and therefore prohibited, such as social scoring;
- high level of risk, targeted by the proposed legislation, such as tools for analyzing and ranking candidates’ CVs as part of a recruitment process; and
- limited or minimal level of risk, such as chatbots, which are simply encouraged to adhere to voluntary codes of conduct.
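To make the three tiers concrete, the following is a minimal, purely illustrative sketch of how a compliance team might label its own AI use cases internally; the act itself does not prescribe any such code. The RiskTier enum, the USE_CASE_TIERS mapping, and the triage function are hypothetical names, and defaulting unknown use cases to the high-risk tier pending legal review is an assumption of this sketch, not a requirement of the proposal.

```python
from enum import Enum


class RiskTier(Enum):
    """Risk tiers described in the Council's AI Act proposal."""
    UNACCEPTABLE = "unacceptable"        # prohibited, e.g. social scoring
    HIGH = "high"                        # regulated, e.g. CV-ranking tools
    LIMITED_OR_MINIMAL = "limited"       # voluntary codes of conduct, e.g. chatbots


# Hypothetical internal inventory mapping use cases to tiers; the categories
# come from the proposal, but this particular mapping is only an example.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_ranking": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED_OR_MINIMAL,
}


def triage(use_case: str) -> RiskTier:
    """Return the tier for a known use case; treat unknown cases as high risk
    until a legal assessment is made (an assumption of this sketch, not a rule)."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)


if __name__ == "__main__":
    for case in ("social_scoring", "cv_ranking", "customer_chatbot", "unknown_tool"):
        print(f"{case} -> {triage(case).value}")
```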
It is now up to the negotiations between the European Parliament and the Council to find common ground on adopting the official text.
What are the main new elements of the AI Act compared to the previous draft?
The changes made by the Council of the EU to the previous draft of the AI Act are aimed at making the legal framework clearer and more practical, with a focus on improving governance and effective enforcement.
- Definition of artificial intelligence system: to make it easier to distinguish AI from simpler software systems, the Council proposal narrows and clarifies the definition, referring to “systems developed through machine learning approaches and logic- and knowledge-based approaches.”
- Prohibited AI practices: the ban on using AI systems for social scoring (which falls under the unacceptable risk level) is extended to private actors. In addition, the prohibition on exploiting the vulnerabilities of specific groups now also covers people who are vulnerable because of their social and/or economic situation.
- Classification of artificial intelligence systems as “high risk” and related requirements: the requirements for high-risk AI systems have been clarified and adjusted to be simpler and less burdensome to implement (for example, regarding the documentation that SMEs will have to prepare to comply with the act). The proposal also provides more guidance on the allocation of roles and responsibilities along the value chain in which AI systems are embedded, and clarifies the relationship between the AI Act and existing regulations (e.g., for the financial services sector).
- General-purpose AI systems: new provisions regulate general-purpose AI systems (i.e., systems that can serve many different purposes) and account for the possibility of a general-purpose AI system being integrated into a high-risk AI system. Through an implementing act based on an ad hoc impact assessment, the Council intends to ensure that such general-purpose systems meet the requirements imposed on high-risk systems.
- Enforcement, compliance, and market surveillance: military, defense, and national security purposes have been explicitly excluded from the scope of the AI Act. Likewise excluded are the use of AI systems for research and development purposes and the obligations of people using such systems for non-professional purposes (with the exception of transparency obligations). In addition, taking into account the specific characteristics of law enforcement agencies, and subject to appropriate safeguards, some changes have been introduced to acknowledge the “need to respect the confidentiality of sensitive operational data in relation to their activities.”
- Conformity assessment procedures and AI committee: in an effort to simplify the compliance and enforcement framework for AI legislation in the context of the European single market, the Council proposal simplifies and clarifies how conformity with the act is to be assessed. More autonomy is also given to the committee responsible for governance of the AI legislation, which is now required to “establish a permanent subgroup to serve as a platform for a wide range of stakeholders.”
- Reduced penalties for SMEs: the Council’s proposal reduces the maximum administrative penalties that can be imposed on SMEs and start-ups for AI Act violations.
- Transparency: increased transparency for high-risk AI systems is among the changes made to the Commission text, including a requirement for users that are public entities to register in the EU database for high-risk AI systems and, with particular regard to emotion recognition systems, a requirement to inform the individuals exposed to such systems. The Council text also clarifies that individuals may lodge complaints regarding AI compliance with the competent authority.
- Sandboxes to support innovation: the proposal introduces measures to support innovation, aimed at building a legal framework that is more innovation-friendly and informed by evidence gathered in real-world settings. In particular, regulatory sandboxes (“regulatory experimentation spaces”) will allow AI systems to be tested in real-world conditions, unsupervised but subject to specific conditions and safeguards.
Critical issues and perspectives of the AI Act
As the European Union continues to release a series of measures on artificial intelligence, it has the potential to set a global standard, much like the GDPR did in 2018. However, there are several critical issues that need to be addressed in the proposed AI Act to ensure that AI systems are used for positive purposes and inspire trust in those who rely on their output.
One such issue is the classification of the risks these systems present: only uses that fall within predefined categories are prohibited or treated as “high risk,” leaving room for oversights and gaps in regulation. To adequately address AI’s current and future risks, it is necessary to consider the overall risk to society and to fundamental rights. It is also crucial to remember that human oversight and intervention play a vital role in upholding fundamental rights and values when reviewing the output of artificial intelligence systems.
On the same topic, you may be interested in Proposals for European legislation on liability for artificial intelligence and the digital age.