ENISA has published a report that addresses cybersecurity standards for artificial intelligence and their relevance to ensuring compliance with the upcoming EU AI Act.
The increasing prominence of artificial intelligence (AI) across industries has necessitated the establishment of comprehensive cybersecurity standards. In this context, ENISA published a report that explores existing, drafted, under-consideration, and planned standards related to the cybersecurity of artificial intelligence, assesses their coverage, and identifies gaps in standardisation. The report also examines the specificities of AI, particularly machine learning, and adopts a broad view of cybersecurity that encompasses both the traditional confidentiality-integrity-availability paradigm and the wider concept of AI trustworthiness, including from the perspective of the current draft of the EU AI Act.
The Standardisation Landscape in Cybersecurity of Artificial Intelligence
The standardisation landscape for artificial intelligence cybersecurity is primarily driven by Standards-Developing Organisations (SDOs), which are concerned about insufficient knowledge of how existing techniques can be applied to counter the threats and vulnerabilities arising from AI. This concern has led to the ongoing development of ad hoc reports, guidance, and standards.
In this respect, existing general-purpose technical and organisational standards, such as ISO/IEC 27001 and ISO 9001, can mitigate some of the cyber risks faced by artificial intelligence, provided they are complemented by specific guidance on their application in an AI context. The rationale behind this is that AI is fundamentally software, so software security measures can be transposed to the AI domain.
However, this approach is not exhaustive and has limitations. For instance, artificial intelligence includes technical and organisational elements beyond software, such as hardware and infrastructure. Additionally, determining appropriate security measures relies on system-specific analysis, and some aspects of cybersecurity are still under research and development, meaning they are not yet mature enough for exhaustive standardisation. Moreover, existing standards may not address specific aspects such as traceability, the lineage of data and AI components, or robustness metrics.
According to ENISA, cybersecurity and AI trustworthiness are interdependent. As such, the report emphasises the need to ensure that trustworthiness is not handled separately within AI-specific and cybersecurity-specific standardisation initiatives, particularly in areas such as conformity assessment.
Relevance of Cybersecurity in the Risk Assessment under the AI Act
ENISA stresses the importance of including cybersecurity aspects in risk assessments for high-risk artificial intelligence systems and notes that there is currently a lack of standards covering the competences and tools of the actors performing conformity assessments. In this respect, according to ENISA, the draft AI Act and the Cybersecurity Act (CSA) need to work in harmony to avoid duplication of efforts at the national level.
The European Commission is speeding up the approval of the AI Act, but it is likely to struggle in any case to keep pace with the advancement of artificial intelligence. Standardisation can support the cybersecurity of AI technologies, which is essential to ensuring the safe and trustworthy deployment of AI systems across industries.
On a similar topic, you may find the following article interesting: "EU Parliament broadens the definition of artificial intelligence under the AI Act".