The adoption of the first Code of Conduct on Artificial Intelligence by the G7 is a noteworthy step toward global AI legislation.
The adoption of the Code of Conduct on Artificial Intelligence by the G7
Within the AI Act, Article 69 provides for the voluntary adoption, for AI systems other than high-risk AI systems, of codes of conduct intended to foster compliance with the requirements the Regulation lays down for high-risk AI systems.
In this regard, on October 30, 2023, G7 leaders announced an important agreement on the International Guiding Principles on Artificial Intelligence (AI) and a Code of Conduct for AI developers in the context of the “Hiroshima AI Process.”
This G7 process was established at the May 19, 2023 G7 Summit to promote guardrails for advanced AI systems at the global level. The initiative is part of a wider range of international discussions on the topic, including at the OECD, the Global Partnership on Artificial Intelligence (GPAI), and in the context of the EU-US Trade and Technology Council and the EU Digital Partnerships.
The Hiroshima AI Process aims to promote safe and reliable AI worldwide and will provide voluntary guidance for the actions of developers of advanced AI systems, including foundation models and generative AI systems. The two documents will complement, at the international level, the binding rules that EU lawmakers are still finalizing under the AI Act; it is no coincidence that most of the principles and content of the Code of Conduct echo the content of the AI Act's articles.
The Guiding Principles of the Code of Conduct
With the goal of ensuring the safety and reliability of the technology, 11 guiding principles have been identified that provide guidance to organizations that develop, deploy, and use advanced AI systems. The principles are addressed not only to companies developing AI systems, but also to entities from academia, civil society, the private sector, and the public sector that develop advanced AI systems.
The guiding principles identified for organizations developing advanced AI systems faithfully reproduce the contents of Annex IV of the AI Act, concerning the technical documentation referred to in Article 11(1) of the same Regulation, and can be summarized as follows:
- Take appropriate measures during the development of advanced AI systems, including before and during their deployment and commercialization, to identify, assess, and reduce risks throughout the AI life cycle;
- Identify and reduce vulnerabilities and, where appropriate, incidents and misuse, after deployment and commercialization;
- Publicly report the capabilities, limitations, and domains of appropriate and inappropriate use of advanced AI systems to ensure transparency and accountability;
- Responsibly share information and report incidents among AI developer organizations;
- Develop, implement and disclose risk-based governance and risk management policies, including privacy policies and mitigation measures;
- Invest in and implement robust security controls, including physical security, cybersecurity and insider threat safeguards;
- Implement reliable content authentication and provenance mechanisms to enable users to identify AI-generated content;
- Prioritize research to mitigate social, safety and security risks, and invest in effective mitigation measures;
- Develop advanced AI systems to address global challenges, such as the climate crisis, health, and education;
- Promote the development and adoption of international technical standards, where appropriate; and
- Implement appropriate data input measures and protections for personal data and intellectual property.
The contents of the Code of Conduct
Building on these principles, the Code of Conduct was then written, containing detailed, practical guidance for AI developers related to each of the 11 principles.
Regarding the first principle, the Code of Conduct states that during the development, implementation and commercialization of advanced AI systems, organizations should take appropriate measures to identify, assess and mitigate risks throughout the AI life cycle. This includes taking various internal and external testing measures to evaluate systems, such as red-teaming. Organizations must implement appropriate mitigation measures to address identified risks and vulnerabilities, ensuring the reliability and security of systems. These tests should be performed in secure environments and at various points in the AI lifecycle, especially before implementation and commercialization, to identify risks and vulnerabilities and apply appropriate remedies. In addition, organizations must pay special attention to security-related risks, including chemical, biological, radiological and nuclear security, offensive cyber capabilities, health and safety risks, self-replication of AI systems, and risks to society and democracy. Organizations committed to following this code must also invest in research to improve the security, reliability, transparency and interpretability of advanced AI systems, paying particular attention to avoiding abuse, discrimination and misinformation.
For the second principle, organizations are expected to use, as and when appropriate based on the level of risk, systems and procedures to monitor vulnerabilities, incidents, emerging risks, and misuse after implementation, and to take appropriate measures to address them. Organizations are encouraged to consider, among other measures, facilitating and enabling the investigation and reporting of problems and vulnerabilities by third parties and users as well; for example, through contest or reward systems that incentivize responsible disclosure of problems and vulnerabilities. Organizations are also encouraged to maintain appropriate records of reported incidents and to mitigate identified risks and vulnerabilities.
The third principle requires that organizations publish transparency reports containing information, instructions for use, and technical documentation, and that these be kept up to date. These documents should include:
- Details of assessments conducted for potential security and societal risks, as well as human rights risks;
- Model/system capabilities and significant limitations in performance on appropriate use cases;
- Discussion and evaluation of model/system effects and risks to security and society such as bias, discrimination, privacy threats, and effects on social equity;
- Results of red-teaming conducted to assess whether the model/system is fit to move beyond the development stage.
Organizations should make this information sufficiently clear and understandable to enable deployers and users, as appropriate, to interpret the output of the model/system and use it appropriately.
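Purely as an illustration of how an organization might capture the items listed above in an internal record, the following minimal Python sketch outlines one possible structure for a transparency report. The Code of Conduct does not prescribe any particular format, and every field name and value below is an assumption made for the example.

```python
from dataclasses import dataclass, field


@dataclass
class TransparencyReport:
    """Illustrative structure mirroring the items listed above; not a format prescribed by the Code."""
    model_id: str
    capabilities: list[str]                 # what the model/system can do
    limitations: list[str]                  # significant performance limitations
    appropriate_use_cases: list[str]        # domains of appropriate use
    risk_assessments: dict[str, str] = field(default_factory=dict)   # e.g. security, societal, human rights risks
    red_team_findings: list[str] = field(default_factory=list)       # results of red-teaming exercises
    last_updated: str = ""                  # reports should be kept up to date


# Hypothetical example values for illustration only.
report = TransparencyReport(
    model_id="example-model-v1",
    capabilities=["text summarization"],
    limitations=["unreliable on low-resource languages"],
    appropriate_use_cases=["internal document triage"],
    risk_assessments={"bias": "evaluated on demographic parity benchmarks"},
    red_team_findings=["prompt-injection attempts partially successful"],
    last_updated="2023-10-30",
)
```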
The fourth principle requires the sharing of information, including assessment reports, information on safety risks, anticipated or unanticipated hazards, and attempts by AI actors to circumvent safeguards, throughout the life cycle of the AI. Organizations should establish or adhere to mechanisms to develop, promote and adopt, where appropriate, shared standards, tools, mechanisms and best practices to ensure the safety and reliability of advanced AI systems. Organizations should collaborate by sharing and reporting relevant information to the public with the goal of promoting the safety and reliability of AI systems, involving relevant public authorities where appropriate.
The fifth principle states that organizations should establish organizational mechanisms to develop and implement risk management policies, including risk identification and mitigation. These policies, including privacy policies, should cover a variety of risks and be regularly updated, and staff should receive ongoing training on them.
According to the sixth principle, organizations should ensure the security of model weights, models and data through appropriate operational security measures and cyber/physical access controls. They must also assess cyber risks and implement appropriate security policies and technical solutions; it is essential to store model weights in secure environments with restricted access to prevent unauthorized disclosure and improper access. They must also implement an insider threat detection program and periodically review security measures to keep them effective and adequate.
The seventh principle requires organizations to implement authentication and provenance mechanisms for content created with their advanced AI systems. Provenance data should include an identifier of the service or model that created the content, but need not include information about the users who contributed to its creation. Organizations should also develop tools or APIs to allow users to verify whether a particular piece of content was created with their AI system, for example through watermarks. The implementation of other mechanisms, such as labeling or disclaimers, to let users know when they are interacting with an AI system is also encouraged.
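As an illustration only, and not a mechanism prescribed by the Code, a minimal provenance scheme along these lines could attach a signed record identifying the generating model to each output, which a verification tool can later check. The model identifier, signing key, and helper functions in this Python sketch are all hypothetical assumptions made for the example.

```python
import hashlib
import hmac
import json

# Hypothetical provider-side secret; a real deployment would rely on proper
# key management and, more likely, a standardized provenance manifest.
SIGNING_KEY = b"provider-signing-key"


def attach_provenance(content: str, model_id: str) -> dict:
    """Build a provenance record identifying the service/model that created the content.

    Consistent with the seventh principle, the record carries no information
    about the users who contributed to the creation of the content.
    """
    payload = {
        "model_id": model_id,
        "content_sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
    }
    signature = hmac.new(
        SIGNING_KEY,
        json.dumps(payload, sort_keys=True).encode("utf-8"),
        hashlib.sha256,
    ).hexdigest()
    return {**payload, "signature": signature}


def verify_provenance(content: str, record: dict) -> bool:
    """Check whether a piece of content matches its provenance record."""
    payload = {
        "model_id": record["model_id"],
        "content_sha256": record["content_sha256"],
    }
    expected = hmac.new(
        SIGNING_KEY,
        json.dumps(payload, sort_keys=True).encode("utf-8"),
        hashlib.sha256,
    ).hexdigest()
    content_ok = hashlib.sha256(content.encode("utf-8")).hexdigest() == record["content_sha256"]
    return content_ok and hmac.compare_digest(expected, record["signature"])


if __name__ == "__main__":
    text = "Example output produced by an advanced AI system."
    record = attach_provenance(text, model_id="example-model-v1")
    print(verify_provenance(text, record))          # True: content is unchanged
    print(verify_provenance(text + "!", record))    # False: content was altered
```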
With the eighth principle, organizations are urged to invest in research to improve AI security by addressing key risks and developing mitigation tools. The principle calls for a focus on democratic values, human rights, the protection of vulnerable groups, intellectual property, and countering harmful bias and misinformation.
The ninth principle requires organizations to commit jointly to AI development that benefits the whole world, in line with the United Nations Sustainable Development Goals. The principle mentions three aspects:
- Prioritize the responsible management of AI;
- Support digital literacy; and
- Collaborate with civil society to address the most important global challenges.
The tenth principle requires organizations to contribute to the development and adoption of international technical standards and best practices, as well as work with Standards Development Organizations (SDOs), including when developing data testing methodologies, authentication and content provenance mechanisms, cybersecurity policies, public reporting and other measures. Measures to help users distinguish AI-generated content from non-AI-generated content are also encouraged.
The last principle requires organizations to take appropriate measures to manage data quality, including training data and data collection, in order to mitigate harmful bias. Appropriate measures could include, among others, transparency, privacy-preserving training techniques, and fine-tuning to ensure that systems do not disclose confidential or sensitive data. Finally, organizations are encouraged to implement safeguards to respect privacy and intellectual property.
What impact comes from the Code of Conduct?
Unlike the AI Act, which, even if passed in the next few months, will only take effect 18 to 24 months later, the Artificial Intelligence Code of Conduct is immediately applicable, although compliance with it is voluntary.
However, in the current context, in which there is considerable focus on the compliance of AI systems, non-compliance with the Artificial Intelligence Code of Conduct would be difficult to justify internally. For this reason, DLA Piper is extending its methodology and legal tech tool for evaluating AI systems, called “PRISCA AI Compliance,” to also assess the applicability of the Code of Conduct. You can read more about PRISCA here.