The preliminary agreement on the AI Act led to the introduction of the Fundamental Rights Impact Assessment, or FRIA, but what is a FRIA?
After three days of intensive negotiations, the European Commission, the Council of the European Union and the European Parliament reached a political agreement on the text of the European Regulation on Artificial Intelligence (better known as the EU AI Act) on December 8, 2023. Among the amendments to the text initially proposed by the Commission was the introduction of a requirement to conduct a fundamental rights impact assessment under certain circumstances. Below we analyze the available sources to determine which entities are required to conduct this assessment and what a FRIA should include.
The essential elements of a Fundamental Rights Impact Assessment (FRIA)
The agreement reached by the European co-legislators during the December 6-8, 2023 trilogue expands the Commission's initial proposal by introducing an obligation, for certain entities that deploy high-risk AI systems, to conduct a fundamental rights impact assessment. This obligation applies to bodies governed by public law and to private operators providing public services, as well as to operators providing high-risk systems. These entities are required to conduct a fundamental rights impact assessment and report the results to the national authority.
The scope of this assessment includes several key elements:
- Description of the implementation process: The assessment must include a detailed description of the process in which the high-risk AI system will be used.
- Time of use and frequency: Organizations must specify the intended duration and frequency of use of the high-risk AI system.
- Categories of persons or groups affected: The assessment must identify the categories of individuals and groups that could be affected by the use of the AI system in the specific context.
- Specific risks of harm: It is mandatory to describe the specific risks of harm that may impact the identified categories of people or groups.
- Human oversight measures: Implementers must describe the human oversight measures put in place to monitor the AI system.
- Risk materialization remediation measures: Entities must outline the measures to be taken if the identified risks materialize.
Importantly, if the implementer has already fulfilled these obligations through a data protection impact assessment, the fundamental rights impact assessment should be conducted in conjunction with that data protection impact assessment. This integrated approach ensures a thorough assessment of the implications of deploying high-risk AI systems, prioritizing the protection of fundamental rights and data security. A structured sketch of how these elements could be recorded is given below.
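As a practical aid, the elements listed above can be captured in a simple structured record. The Python sketch below is a minimal, hypothetical template of our own; the field names and the example values are illustrative assumptions and are not prescribed by the AI Act.

```python
from dataclasses import dataclass

@dataclass
class FRIARecord:
    """Illustrative template mirroring the FRIA elements described above."""
    implementation_process: str          # how the high-risk AI system will be used
    duration_and_frequency: str          # intended time of use and frequency
    affected_groups: list[str]           # categories of persons or groups affected
    specific_risks_of_harm: list[str]    # risks of harm to the identified groups
    human_oversight_measures: list[str]  # oversight measures monitoring the system
    remediation_measures: list[str]      # actions if identified risks materialize
    conducted_with_dpia: bool = False    # whether carried out alongside a DPIA

# Purely illustrative example for a hypothetical recruitment-screening system.
record = FRIARecord(
    implementation_process="CV pre-screening to rank applications for human review",
    duration_and_frequency="Used continuously during yearly hiring campaigns",
    affected_groups=["job applicants", "applicants with disabilities"],
    specific_risks_of_harm=["indirect discrimination in ranking"],
    human_oversight_measures=["a recruiter reviews every automated rejection"],
    remediation_measures=["suspend automated ranking and re-assess affected applicants"],
    conducted_with_dpia=True,
)
```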
Finally, it should be noted that the requirement to conduct a FRIA has also been introduced for the new category of General Purpose AI (GPAI) models with systemic risk. Also known as high-impact GPAI models, these were identified during the latest negotiations, which set out a stricter regulatory regime for GPAI models trained with particularly large amounts of compute; in fact, the amount of computing power used during training is the yardstick that determines which models fall into this category. This choice reflects the fact that there is often a direct correspondence between the compute used in training, and hence the amount of data learned, and the capabilities of the final model. The threshold, set at 10^25 FLOPs, would today capture only top-tier AI models such as GPT-4 and probably Google's Gemini. For these models, additional requirements have been identified that must be implemented before deployment, covering risk mitigation, incident readiness, cybersecurity, the level of testing during development, documentation of environmental impact and, of course, a fundamental rights impact assessment.
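To give an idea of how the 10^25 FLOPs threshold might be checked in practice, the short Python sketch below applies the widely used "6 × parameters × training tokens" approximation of training compute. The heuristic, the model figures and the function names are illustrative assumptions on our part, not requirements drawn from the AI Act.

```python
# Threshold set in the political agreement for GPAI models with systemic risk.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough estimate of total training compute using the common 6*N*D heuristic."""
    return 6 * n_parameters * n_training_tokens

# Purely illustrative figures for a hypothetical large model.
flops = estimated_training_flops(n_parameters=1e12, n_training_tokens=2e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Presumed systemic risk" if flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS
      else "Below the systemic-risk threshold")
```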
How to prepare a FRIA: A multi-step methodology
In anticipation of the final text, companies can adequately prepare for the entry into force of the AI Act by gathering in advance the information needed to conduct FRIAs, optimizing their efforts and making the most of what has already been mapped during the DPIA process, especially where AI systems involve potentially risky processing of personal data.
A multiple-step methodology for impact assessment is proposed below, including key elements for conducting a FRIA:
Perimeter Identification:
- What: Identification of the geographic scope and ecosystem of implementation of the AI system.
- How: Detailed description of the system, with emphasis on inputs, outputs, data management procedures, accessibility, explainability, and potential impacts on fundamental rights.
- Who: Definition of operational context, identification of stakeholders, categories of individuals involved, with focus on vulnerable groups.
- Why: Defining the goals and objectives of the AI system in alignment with the organization’s goals.
- When: Definition of the expected operational life of the system, considering technological, regulatory and market changes.
Verification of Legal Framework:
- Identification of regulatory requirements applicable to the system, including non-discrimination obligations, environmental protection, health and safety standards.
- Verification of compliance with EU and national legislation related to fundamental rights.
Identification of Impact and Severity:
- Assessment of the seriousness of the potential impact of the system on the fundamental rights of the affected groups, with particular attention to the vulnerable.
Risk Mitigation:
- Description of measures to remove or mitigate identified risks, with strategies to address residual risks.
Monitoring of Risks and Mitigation:
- Setting risk indicators, metrics and safety thresholds.
- Ongoing oversight of controls and mitigation measures, with a schedule for periodic updates.
The FRIA methodology should integrate synergistically with internal risk management processes and with the DPIA methodology in cases of joint application; a minimal sketch of the monitoring step is given below.
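To illustrate the final monitoring step, the sketch below shows one possible way of encoding risk indicators with safety thresholds and flagging breaches for periodic review. The indicator names and threshold values are hypothetical assumptions, not requirements drawn from the AI Act.

```python
# Hypothetical risk indicators with safety thresholds (all values are illustrative).
risk_indicators = {
    "false_positive_rate": {"threshold": 0.05, "observed": 0.08},
    "complaints_per_1000_decisions": {"threshold": 2.0, "observed": 1.2},
    "share_of_decisions_overridden_by_humans": {"threshold": 0.10, "observed": 0.04},
}

def breached_indicators(indicators: dict) -> list[str]:
    """Return the indicators whose observed value exceeds the safety threshold."""
    return [
        name for name, values in indicators.items()
        if values["observed"] > values["threshold"]
    ]

# A periodic review (e.g., quarterly) could trigger mitigation for any breach.
for name in breached_indicators(risk_indicators):
    print(f"Threshold exceeded for '{name}': trigger mitigation and update the FRIA")
```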
Conclusions
The development of AI systems in some cases requires large resource investments spread over long time windows. Carrying out mitigation actions after development or after market release can prove extremely costly or even technically difficult in certain circumstances (e.g., once the training of an AI model has been completed). Anticipating these impact assessments in the design and prototyping phase of AI systems can therefore significantly contribute to implementing or integrating AI solutions that respect the fundamental rights of individuals and comply with legal requirements.
On a similar topic, the article "EU AI Act Approved: Everything You Need to Know on the Artificial Intelligence legislation in Europe" may be of interest. You may also find interesting the legal tech tool we developed to support businesses in ensuring the compliance of their AI solutions; more information is available HERE.
Authors: Tommaso Ricci and Marco Guarna – The image in this article was generated by the generative AI system, DALL-E.