Article 15 of the EU AI Act requires that high-risk artificial intelligence systems meet an adequate level of accuracy, robustness, safety, and cybersecurity.
A detailed report from the European Commission's science and knowledge service, the Joint Research Centre, offers insights by dissecting the obligations of the AI Act in terms of cybersecurity.
Indeed, like any other EU law, the AI Act is principle-based, and companies will have to prove compliance through documentation supporting their practices. Article 15.1 of the version of the AI Act approved by the EU Parliament only provides:
High-risk AI systems shall be designed and developed following the principle of security by design and by default. In the light of their intended purpose, they should achieve an appropriate level of accuracy, robustness, safety, and cybersecurity, and perform consistently in those respects throughout their lifecycle. Compliance with these requirements shall include implementation of state-of-the-art measures, according to the specific market segment or scope of application.
The obligations arising from Article 15 of the AI Act unfold into four guiding principles:
1. The focus of the AI Act is on AI systems
An AI system, as defined by Article 3(1), is software housing one or more AI models together with integral components such as interfaces and databases. It is crucial to understand that the AI models, though essential, do not represent the AI system as a whole. The AI Act's cybersecurity obligations apply to the entirety of the AI system, not solely to its internal components.
This obligation may appear obvious, but it is not always observed in practice. We frequently see clients placing on the market AI systems that combine several AI models, even from different providers, into a single AI system. There is normally a tendency to focus on the individual modules rather than on their interaction, which can leave room for bugs that are eventually exploited during a cyber attack.
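As a purely illustrative sketch (all function names and checks below are hypothetical, not drawn from the AI Act or any standard), the following Python snippet shows why the interaction layer between models, rather than any single model, is often where the exploitable gaps sit:

```python
# Hypothetical sketch: an "AI system" under Article 3(1) is more than its
# models -- it is the models plus the glue (interfaces, databases) that
# connects them. All names and checks below are illustrative only.

def provider_a_classifier(text: str) -> dict:
    """Stub for a first provider's model: returns a label and a confidence."""
    return {"label": "invoice", "confidence": 0.92}

def provider_b_extractor(upstream: dict) -> dict:
    """Stub for a second provider's model that trusts its input structure."""
    return {"extracted_field": upstream["label"]}

def ai_system(text: str) -> dict:
    """The AI system: both models PLUS the interface between them.

    The interface is where unchecked assumptions (types, ranges, schemas)
    accumulate -- precisely the layer that module-by-module testing misses.
    """
    result_a = provider_a_classifier(text)

    # Interface hardening: validate the contract between the two models
    # instead of silently trusting provider A's output.
    if "label" not in result_a:
        raise ValueError("model A output is missing the 'label' field")
    confidence = result_a.get("confidence")
    if not isinstance(confidence, float) or not 0.0 <= confidence <= 1.0:
        raise ValueError("model A returned an out-of-contract confidence")

    return provider_b_extractor(result_a)

print(ai_system("Invoice no. 123 ..."))  # {'extracted_field': 'invoice'}
```

The point of the sketch is that testing each provider's model in isolation would not catch a malformed value crossing the interface; only validation at the integration layer does.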
2. Compliance with the AI Act necessarily requires a cybersecurity risk assessment
Ensuring that a high-risk AI system is secure mandates a meticulous cybersecurity risk assessment linking system-level requirements to individual components. This task involves identifying and addressing specific risks, translating the overarching regulatory cybersecurity requirements into specific mandates for the system's components.
The cybersecurity risk assessment is part of the Risk Management System mentioned in Article 9 of the AI Act. This requirement reminds companies how important the ability to document compliance is, beyond just performing the required activities. We continuously advise clients that perform the required activities but have no procedure regulating them and do not document the outcomes of assessments. Such an approach is not in line with the AI Act, which, like the GDPR, is based on the accountability principle and requires businesses to provide evidence of their compliance activities.
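To make the accountability point concrete, here is a minimal sketch of what a documented risk-register entry could look like in practice. The field names and the sample entry are our own illustration, not a format prescribed by the AI Act or the JRC report:

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class RiskEntry:
    """One documented finding linking a system-level requirement to a component."""
    requirement: str   # overarching requirement (e.g. cybersecurity, Art. 15)
    component: str     # component of the AI system the risk maps to
    threat: str        # specific risk identified
    control: str       # mitigating measure adopted
    assessed_on: str   # date the assessment was performed
    outcome: str       # documented result, per the accountability principle

register = [
    RiskEntry(
        requirement="Cybersecurity (Art. 15)",
        component="model-serving API",
        threat="adversarial inputs submitted through the public endpoint",
        control="input validation, rate limiting, anomaly logging",
        assessed_on=str(date(2024, 1, 15)),
        outcome="residual risk accepted; re-assessment scheduled",
    ),
]

# Performing the assessment is not enough: persist the register so the
# outcome can be produced as evidence of compliance.
print(json.dumps([asdict(entry) for entry in register], indent=2))
```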
3. Securing AI systems requires an integrated and continuous approach using proven practices and AI-specific controls
Establishing robust AI systems requires combining existing cybersecurity practices with AI-specific measures, in a holistic approach grounded in the principles of defence in depth and security by design that must be in place throughout the product's entire lifecycle.
This requirement is pivotal for AI systems that continuously learn from their usage and can therefore develop bugs and weaknesses potentially exploited by cyber threat actors. Indeed, attackers run automated crawlers that continuously scan systems, trying to identify a "door" that can be exploited.
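One way such a lifecycle control might look in practice is a deployment gate for retrained model versions. The sketch below is a simplified assumption of ours, not a measure mandated by the AI Act: each new model artifact must match a fingerprint recorded at review time before it can go live, so a tampered or unreviewed update is caught rather than silently deployed.

```python
import hashlib

# Hypothetical lifecycle control for a continuously learning system: each
# retrained model version must match a reviewed, approved fingerprint
# before deployment. Function and variable names are illustrative only.

approved_versions: dict = {}  # version -> approved SHA-256 fingerprint

def fingerprint(model_bytes: bytes) -> str:
    """Content hash of a serialized model artifact."""
    return hashlib.sha256(model_bytes).hexdigest()

def approve(version: str, model_bytes: bytes) -> None:
    """Record the fingerprint of a model version after human review."""
    approved_versions[version] = fingerprint(model_bytes)

def verify_before_deploy(version: str, model_bytes: bytes) -> bool:
    """Deployment gate: only artifacts matching an approved fingerprint pass."""
    expected = approved_versions.get(version)
    return expected is not None and expected == fingerprint(model_bytes)

approve("v2", b"serialized-model-weights-v2")
assert verify_before_deploy("v2", b"serialized-model-weights-v2")
assert not verify_before_deploy("v2", b"tampered-weights")  # caught
print("deployment gate in place")
```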
4. There are limits to the state of the art for securing AI models
Given the varying maturity of current AI technologies, it is imperative to acknowledge that not all of them are suitable for high-risk scenarios unless their cybersecurity shortcomings are addressed. Particularly for emerging technologies, compliance can only be achieved by adopting the previously mentioned holistic approach, due to their inherent limitations.
The adoption of approved standards can generate a presumption of compliance. However, it is crucial to remember that companies are not required to ensure that no cyberattack can ever take place; they need to prove that they have done whatever is required to ensure compliance. This aspect is also relevant to defending the business against potential liability claims.
To support businesses in ensuring compliance of their artificial intelligence systems, at DLA Piper we have developed PRISCA AI Compliance, a legal tech solution which facilitates the maturity assessment of artificial intelligence systems. Read more HERE
On a similar topic, you may also find the following article interesting: "ENISA Report on Cybersecurity of Artificial Intelligence warns on the lack of standardization on AI".