The new EU product liability directive, published in the Official Journal on 18 November 2024, is expected to have a significant impact on companies involved in the production and commercialization of artificial intelligence (AI) systems.
The key change is the extension of the Directive's scope to software, allowing injured parties to seek compensation for damage caused by AI systems.
After a long wait, the EU Council approved the Directive, which was then published in the Official Journal on 18 November 2024. The Directive will apply to products placed on the market after 9 December 2026.
Until the publication of the Directive, it was unclear whether the original 1985 version covered intangible products, including software. As technology, and the related risks, progressively developed, legal scholars argued for an interpretative extension of product liability to software, on the basis that differentiating liability between tangible and intangible products was unjustifiable.
However, the European Court of Justice never definitively clarified the issue, so the matter remained uncertain until the Directive was published. The Directive seeks to fill this gap by explicitly extending the regime to software, ensuring robust protection even where the damage is caused by an intangible product.
The Product Liability Directive applies to AI
The product liability directive establishes joint and several liability among various economic operators involved in the production chain of an AI system. Specifically, Article 8 of the Directive identifies the following liable parties:
- The manufacturer of a defective product, defined as:
  - any party that develops, produces, or manufactures a product. If the product is composed of multiple components, this includes the party responsible for the final assembly. This provision is relevant where an AI system is embedded in a physical product, making the final assembler liable even if it did not directly develop the AI system;
  - any party that markets a product under its own name or brand, even if it did not manufacture the product. In the context of the AI Act, this would be the provider of the AI system; or
  - any party that develops, produces, or manufactures a product for its own use.
- The manufacturer of a defective component, if that component is integrated into a product or interconnected with a product under the control of the manufacturer. This is particularly relevant when an AI system embedded in a physical product causes a malfunction (eg software controlling a robot malfunctions and injures a person).
- Any party that significantly modifies a product already placed on the market. This provision aligns with the AI Act's concept of reclassification as a provider when a deployer significantly alters a system. For instance, liability may arise if a user modifies or integrates AI software, altering its functionality.
- The importer of a defective product or component, or the authorized representative of the manufacturer. If there's no importer or authorized representative established in the EU, liability extends to the fulfilment service provider.
Definition of product
Article 4(1) of the directive broadens the definition of product. According to the directive, products are not only tangible goods but also electricity, files for digital manufacturing, raw materials, and software, including AI.
The only exclusion applies to open-source software, provided that it's developed or supplied as part of a non-commercial activity.
Damages
Article 6 of the Directive limits compensable damages to the following:
- Death or personal injuries, including medically recognized psychological harm. This is particularly significant as it allows for compensation for psychological harm potentially caused by chatbots or algorithms displaying harmful content to users.
- Damage to or destruction of property.
- Destruction or corruption of non-professional data. This covers cases where software causes data breaches impacting data integrity (eg corruption of data) or availability (eg deletion of data). However, this applies only if the affected data isn't used for professional purposes.
Further, under Article 5 of the Directive, the right to compensation is no longer limited to consumers but extends to any individual harmed by a defective product.
Definition of defective product
According to Article 7 of the product liability directive, a product is considered defective when it fails to offer the safety that a consumer can reasonably expect. This concept is particularly challenging to apply to AI systems, as their complexity and advanced learning capabilities (eg through machine learning techniques) create a "black box" effect, making it difficult, even for the developers, to understand why the system produced a certain output.
Nevertheless, under Article 10 of the Directive, a product's defectiveness is presumed when:
- the claimant demonstrates that the product fails to meet mandatory safety requirements set by EU or national law aimed at preventing the specific harm suffered; or
- the claimant demonstrates that the damage was caused by an obvious malfunction.
These presumptions underscore the importance of compliance with the obligations set out in the AI Act in determining a product's defectiveness, since non-compliance can trigger the presumption of defectiveness. As a result, in addition to the penalties imposed under the AI Act, companies face the risk that non-compliance may provide grounds for third-party compensation claims. The Directive's position seems reasonable: adherence to standards set by sectoral regulations, including those on cybersecurity, remains the most effective way to facilitate the assessment of a product's defectiveness.
Burden of proof
The directive confirms a strict liability system, meaning that it doesn't require the injured party to prove fault or negligence. However, under Article 9 of the directive, claimants must still prove:
- the defectiveness of the product;
- the damage suffered; and
- the causal link between the defect and the damage.
Proving defectiveness and causation is particularly complex due to the technical nature of AI systems and the "black box" effect, as injured parties typically lack access to the necessary technical information.
To address this, the directive introduces a disclosure mechanism. Member states must ensure that, where a claimant presents facts and evidence sufficient to support a plausible claim for compensation, the defendant is required to disclose relevant evidence in its possession. A defendant's refusal to disclose triggers the presumption of defectiveness. This measure is designed to help injured parties meet their burden of proof.
The Directive stops short of reversing the burden of proof, a measure that would have had a significant impact on liable companies.
Exemptions from liability
The product liability directive retains and introduces several grounds for exemption from liability. Particularly relevant for AI systems are:
- Proof that the defect didn't exist at the time the product was placed on the market. However, Article 11(2) specifies that this exemption doesn't apply if the damage was caused by software embedded in a physical device under the manufacturer's control (eg failure to update software). In such a case, the manufacturer has to ensure the security of the software throughout the entire period in which the product is under its control.
- Proof that the defectiveness resulted from compliance with legal requirements.
- Proof that the state of scientific and technical knowledge at the time the product was placed on the market or while under the manufacturer's control didn't allow for the discovery of the defect. This is especially relevant in cases where an AI system produces completely unpredictable outputs that cannot be traced back to errors in training or other development stages.
- For a party that modified the product, proof that the defect relates to a part of the product not affected by the modification.
The product liability directive significantly expands the scope for obtaining compensation for damages caused by defective products, now explicitly including AI software. This represents an additional layer of regulation for the AI ecosystem, where coordination with other legislation, particularly the AI Act, will play a crucial role in defining liability cases.
On a similar topic, you can read a collection of articles on the legal challenges of artificial intelligence available HERE.