The NIST Artificial Intelligence Risk Management Framework sets out guidelines to assess risks arising from AI and to build trust at a time of uncertainty about the applicable legal regime.
NIST, the National Institute of Standards and Technology at the U.S. Department of Commerce, has developed the Artificial Intelligence Risk Management Framework (AI RMF).
The AI RMF is a voluntary, flexible, comprehensive, and stakeholder-driven framework that provides guidance on incorporating trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. Trustworthiness is an increasingly relevant aspect of artificial intelligence at a time when users and businesses are concerned about the invasiveness of AI and the lack of control over how it operates.
The NIST Artificial Intelligence Risk Management Framework is based on four components that work together to support risk management processes for AI systems:
- Core: A set of trustworthiness objectives (such as accuracy, fairness, privacy, and security) and activities (such as data quality assessment and bias mitigation) that can be applied to different aspects of AI systems (such as data inputs, outputs, and processes). In particular, the Core consists of four functions:
- Govern: establishing policies, guidelines, and principles for AI risk management;
- Map: identifying, analyzing, and scoping relevant AI system elements;
- Measure: assessing, monitoring, and reporting on AI system performance and outcomes; and
- Manage: mitigating, responding to, and resolving AI system issues and risks;
- Profiles: A prioritization of trustworthiness objectives based on context-specific factors, such as an organization's mission, goals, values, and stakeholders;
- Implementation Tiers: A characterization of organizational readiness to manage AI risks, across dimensions such as resources, capabilities, culture, and governance; and
- Roadmap: A plan for improvement and alignment with best practices, with recommendations for addressing gaps, challenges, and opportunities in implementing the AI RMF.
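To illustrate how an organization might begin to operationalize the Core in practice, here is a minimal sketch in Python. The four function names come from the framework itself, but the example activities, the data structure, and the completion score are purely hypothetical and for illustration only:

```python
# Hypothetical sketch: tracking AI RMF Core functions as a simple checklist.
# The function names (Govern, Map, Measure, Manage) come from the framework;
# the activities and scoring logic below are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class CoreFunction:
    name: str
    activities: dict[str, bool] = field(default_factory=dict)  # activity -> done?

    def completion(self) -> float:
        """Fraction of this function's tracked activities marked complete."""
        if not self.activities:
            return 0.0
        return sum(self.activities.values()) / len(self.activities)


# An illustrative profile for a single AI system.
profile = [
    CoreFunction("Govern", {"risk management policy approved": True,
                            "roles and responsibilities assigned": False}),
    CoreFunction("Map", {"system context and scope documented": True}),
    CoreFunction("Measure", {"accuracy and bias metrics reported": False}),
    CoreFunction("Manage", {"incident response plan in place": False}),
]

for fn in profile:
    print(f"{fn.name}: {fn.completion():.0%} of tracked activities complete")
```

In a real deployment, the activities would be drawn from the organization's own prioritized Profile rather than hard-coded as above.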
The NIST AI RMF can be used by various organizations across different domains, sectors, and use cases to identify, assess, and mitigate risks associated with AI systems throughout their life cycle. For example:
- A healthcare provider may use the NIST Artificial Intelligence Risk Management Framework to ensure that an AI system for diagnosing diseases meets high standards of accuracy, fairness, privacy, and security;
- A financial institution may use the AI RMF to monitor and audit an artificial intelligence system for detecting fraud or money laundering; and
- An educational institution may use the AI RMF to evaluate and improve an artificial intelligence system for personalized learning or testing.
Applying the NIST Artificial Intelligence Risk Management Framework may involve some challenges, such as:
- data availability, data quality, and data diversity;
- stakeholder engagement, involvement, and accountability;
- ethical, legal, and regulatory compliance;
- explainability, transparency, and interpretability of the AI system's operations; and
- human oversight, control, and intervention.
However, applying the AI RMF may also offer some opportunities, such as:
- enhancing customer satisfaction, trust, and confidence;
- improving organizational performance, efficiency, and innovation;
- reducing operational costs, risks, and liabilities; and
- increasing social welfare, benefits, and responsibility.
We are in a historical moment of massive hype around artificial intelligence, driven by the growth of generative AI systems. The European Commission has published a draft of the AI Act, which will define artificial intelligence and the different applicable regimes, as well as a proposed directive on liability for artificial intelligence. However, these pieces of legislation have yet to be adopted, as they struggle to keep up with innovation and risk hindering the development of such technologies. At the same time, customers still lack trust in these technologies because of their potential invasiveness, while businesses see their massive potential.
In this context, DLA Piper has developed a legal tech tool for AI compliance assessment that evaluates an AI system against the most relevant legislation, international standards, and draft laws, providing a compliance score and recommended actions. If you want to know more about the topic, please get in touch with me.