Artificial Intelligence (AI) will be a game changer for the insurance sector. But how should AI risks themselves be insured?
AI is rapidly permeating various economic sectors. From financial services and insurance to life sciences and encompassing industries as diverse as retail, industrials, real estate, media and sports, AI is reshaping traditional paradigms and marking a shift in how enterprises operate and innovate.
To name a few examples: in financial services, AI technologies are being deployed to boost customer service capabilities, streamline credit assessments, and strengthen fraud detection mechanisms. Similarly, AI models are transforming insurance operations in claims processing, underwriting, customer service, and risk assessment, enhancing decision-making and resource management. In life sciences, AI enables groundbreaking advances in research and development, facilitating personalized patient care and innovating diagnostic and treatment methods. Industrials are also harnessing the power of AI to optimize operational efficiency and improve organizational resilience against market fluctuations and disruptions.
According to DLA Piper's Global AI Governance Report from September 2023, large and medium-sized businesses are rapidly embracing AI, with 96% of organizations rolling out AI in some way and at least four projects live in each company. The report also reveals that 83% of companies have a defined AI strategy, and 86% of those have incorporated guidelines to steer their AI initiatives. This indicates a strong commitment to responsible AI adoption. However, the report also suggests that while companies are taking steps towards compliant and ethical AI use, these measures alone will not be enough to address the scale of the issues raised by AI.
Benefits and risks of AI in the insurance sector
Despite the transformative potential of AI to change business practices and drive unprecedented value creation, the path to AI integration is full of potential pitfalls. The EU AI Act will grant companies 24 months (with a few exceptions) from its entry into force to achieve compliance. However, existing legislation and sector regulations already pose several challenges to corporate AI implementation, and the risk of incidents, fines, investigations, and legal actions is significant.
One serious challenge is the lack of transparency, especially in complex deep learning models such as LLMs (Large Language Models). This opacity makes it difficult for users to understand how AI systems make decisions, fostering distrust and resistance to adoption. Moreover, it may hinder efforts to identify and rectify biases in AI algorithms, as stakeholders may struggle to scrutinize the underlying decision-making processes. Additionally, the absence of transparency poses challenges to regulatory compliance and ethical oversight, as it becomes harder to assess whether AI systems abide by laws, regulations, and standards.
Furthermore, AI systems have the potential to propagate societal biases, resulting in discriminatory outcomes. This occurs when the data utilized for training AI models mirrors societal prejudices, including those rooted in race, gender, or socioeconomic status. Should the training data exhibit bias, the AI system might internalize and perpetuate these biases in its decision-making processes. For instance, within the insurance sector, if historical data used to train an AI-based risk assessment tool demonstrates a bias against certain demographic groups, the AI system could inadvertently perpetuate this bias by unfairly pricing policies or denying coverage to individuals from those groups, thus exacerbating existing disparities in access to insurance.
Further issues concern data privacy and cybersecurity. AI systems rely on extensive datasets to train their algorithms and enhance performance. These datasets encompass a wide array of information and may include personal data such as names, addresses, and financial details, as well as sensitive information like medical records and social security numbers. The collection and processing of such data raise significant concerns, such as the risk of data breaches and unauthorized access to personal information. In addition to the data privacy risks, implementing AI systems may entail specific vulnerabilities and threats, including potential breaches, data manipulation, adversarial attacks, and the exploitation of AI models for malicious purposes.
AI also brings about some challenges to intellectual property (IP), particularly regarding training AI algorithms on third-party datasets and protecting AI-generated outputs. Training AI algorithms using proprietary datasets owned by third parties carries the risk of infringing their IP rights, potentially leading to legal disputes involving copyright, trade secrets, or other IP rights. Furthermore, questions arise concerning the protectability of AI-generated outputs, such as software code, text, images, or other content. Determining whether the AI output can be protected through IP and defining the boundaries of such protection can be complex, especially when it involves combining and transforming existing works.
All these risks may expose companies adopting AI to severe liability towards their customers, partners, and stakeholders. To mitigate these risks (aside from the requirements and obligations that will apply under the AI Act), companies should implement robust internal policies and guidelines to govern AI systems' development, deployment, and usage. Additionally, they should incorporate contractual safeguards into agreements with third-party providers and stakeholders to outline responsibilities and liabilities related to AI usage. Technical measures, such as encryption, access controls, and anomaly detection, should be employed to protect data and AI systems from breaches and unauthorized access. Regular security audits and vulnerability assessments can help identify and mitigate potential weaknesses in AI systems and infrastructure. Furthermore, implementing organizational measures, such as regular employee training and awareness programs, can create a culture of accountability and compliance with regulatory requirements and ethical standards in AI deployment.
How can AI risks be insured?
On top of these measures, companies across various industries are assessing with their brokers whether the risks stemming from their use of AI systems fall within their existing insurance coverage or must be addressed with new policies.
Although certain existing policies (such as PL/PI insurance, cyber, and third-party liability) may cover some AI risks, significant gaps remain that require a rethinking of existing policies and, depending on the specific situation, the creation of new solutions.
Risk assessment is the basis of the insurance contract. A risk can be insured if:
- it causes a definite loss occurring at a specific time, in a specific place, and arising from specific causes;
- the loss caused is accidental;
- losses are predictable (predictability allows frequency and severity to be evaluated); and
- underwriters can provide affordable premiums.
Carrying out an accurate risk assessment is fundamental for both underwriters and insureds: underwriters can exclude or limit certain specific risks, while insureds can shape the cover that best protects them against specific claims.
In the case of AI liability, risk assessment is a new frontier to be explored. Here, insurance is not a static concept: appraising the aggravation or reduction of risk can be very difficult, since AI risks can change rapidly.
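The insurability criteria above can be illustrated with a minimal actuarial sketch. All figures and loadings below are hypothetical, chosen only to show how predictable frequency and severity translate into a premium:

```python
# Minimal illustration (hypothetical figures): a pure premium is commonly
# estimated as expected claim frequency times expected claim severity,
# with loadings added to arrive at an affordable yet sustainable price.

def pure_premium(frequency: float, severity: float) -> float:
    """Expected annual loss per policy: claim frequency x average claim size."""
    return frequency * severity

def gross_premium(frequency: float, severity: float,
                  expense_loading: float = 0.25,
                  risk_loading: float = 0.10) -> float:
    """Pure premium plus expense and risk loadings (illustrative values)."""
    return pure_premium(frequency, severity) * (1 + expense_loading + risk_loading)

# A risk with a 2% annual claim probability and an average claim of 50,000:
print(pure_premium(0.02, 50_000))            # 1000.0
print(round(gross_premium(0.02, 50_000), 2)) # 1350.0
```

The difficulty with AI liability is precisely that the frequency and severity inputs to such a calculation are unstable, which is why the policies described below fall back on exclusions, deductibles, and sub-limits.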
The first policies now available on the market covering third-party AI risks offer a range of solutions to adapt existing insurance to AI challenges, such as:
- specific exclusions
- consistent deductibles
- co-insurance risk-taking
- specific coverage limits/sub-limits, which can transform unquantifiable underlying risks into known maximum exposures.
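How a deductible, a sub-limit, and a co-insurance share combine to cap an otherwise open-ended exposure can be sketched as follows (all figures hypothetical):

```python
def insurer_payout(loss: float, deductible: float,
                   sub_limit: float, coinsurance_share: float = 1.0) -> float:
    """Insurer pays its share of the loss above the deductible,
    capped at the sub-limit: a known maximum exposure."""
    covered = min(max(loss - deductible, 0.0), sub_limit)
    return covered * coinsurance_share

# An unbounded AI-related loss of 1,000,000 against a 50,000 deductible,
# a 250,000 sub-limit, and an 80% co-insurance share:
print(insurer_payout(1_000_000, 50_000, 250_000, 0.8))  # 200000.0
```

Whatever the underlying loss turns out to be, the insurer's worst case is `sub_limit * coinsurance_share`, which is exactly how an unquantifiable risk becomes a known maximum exposure.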
In the near future, some risks currently covered may no longer be insurable, at least not without a higher premium.
At the same time, some risks currently left without cover may become better understood and affordably insurable: more risks than before could be eligible for coverage on reasonable terms, based on tailor-made evaluations.
More precise indications could come from the EU, which has intervened with (i) a proposal for a Directive (the AI Liability Directive), aimed at harmonizing the liability regime for damages caused by AI systems; and (ii) a proposal for a Regulation (the AI Act), which mainly aims at preventing damages.
Article 6 of the AI Liability Directive states that the EU will consider imposing compulsory insurance cover. However, Article 6 does not clarify who would be subject to this obligation: companies that use AI? Companies that produce AI systems? Companies that sell them? All of the above?
The use of AI by insurance carriers
The use of AI by insurance companies themselves will also contribute to:
- better risk assessment. In this evolution, insurance is likely to shift from its current state of "assess and indemnify" to "predict and prevent," transforming every aspect of the insurance industry;
- more precise differentiation of risks;
- deeper and faster detection of insurance fraud;
- accelerated claims handling and management; and
- more affordable premiums.
As risk assessment improves, according to various market operators, the use of AI by insurance companies will also allow services to expand in the following sectors:
- cybersecurity insurance
- blockchain integration
- climate risk assessment.
In Italy, the legislator has recently introduced mandatory insurance against catastrophic events. An accurate and prompt AI-based risk assessment will be essential to respond efficiently to insureds' demand.
Nowadays, the use of AI in claims handling is a concrete reality in the motor and property insurance world. Insureds can notify a claim by uploading photos of the damage to be indemnified with a few taps on their phone and obtain immediate assistance with repairs.
AI is also already used in claims management: several insurtech companies have recently created software that summarizes judicial pleadings and provides a quick analysis of the claim, speeding up its management for claims handlers.
Key legal challenges generated by artificial intelligence in the insurance sector
They can be summarized as follows:
- AI implies both benefits and risks
- Insurability depends on an understanding of the risk
- The use of AI may create new risks to be insured
- New insurance products are expected to cover risks that may have been considered uninsurable in the past
- The use of AI by insurance companies will allow:
- more precise differentiation of risks
- creation of more tailor-made insurance solutions
- better detection of insurance fraud
- more affordable premiums
- faster claims handling
- expansion into cybersecurity insurance, blockchain integration, and climate risk assessment.
Authors: Giacomo Lusardi, Karin Tayel, Andrea Olivieri