Alongside the traditional profiles of civil liability for human acts, we must now confront civil liability arising from the use of artificial intelligence (AI) systems.
Artificial intelligence systems are reaching increasingly advanced levels of autonomy, also thanks to the boost given by generative AI systems with the ability to learn and develop solutions in an (almost) entirely autonomous manner.
But what is meant by artificial intelligence? The European Union first provided a definition of artificial intelligence in the “Coordinated Plan on Artificial Intelligence COM (2018) 795 final”: “Artificial intelligence (AI) refers to those systems that exhibit intelligent behavior by analyzing their environment and performing actions, with some degree of autonomy, to achieve specific goals.”
A similar definition is contained in the 2020 “White Paper on Artificial Intelligence” and the subsequent EU Communication COM (2021) 205.
In the proposed AI Act, artificial intelligence is defined as “software developed…, which can, for a given set of human-defined goals, generate outputs such as content, predictions, recommendations or decisions that influence the environments with which they interact.” The proposal also envisions the regulation of so-called “high-risk” AI systems, i.e., those systems whose use may pose risks to “fundamental rights.”
At the Italian level, the legislature has so far limited itself to implementing the principles set at the European level, as reflected in the “Artificial Intelligence Strategic Program 2022–2024.”
The absence of regulation on artificial intelligence liability
In the absence of specific regulation of liability arising from the use of artificial intelligence systems, what provisions can/should we refer to in Italy today in this area?
We are familiar with the dichotomy, in the context of civil liability, between non-contractual and contractual liability.
Leaving aside the issue of defective product liability (in the abstract, also applicable to artificial intelligence systems), the first rules to refer to, in cases of non-contractual liability, could be Articles 2050 and 2051 of the Italian Civil Code, which provide, respectively, for liability arising from a “dangerous activity” and from a “thing in custody.”
However, these provisions may not be entirely adequate for the new scenarios. It is not necessarily the case that the activity of an artificial intelligence system is a “dangerous activity,” that is, one involving a significant probability of causing harm to third parties. On the other hand, the traditional notion of “custody” might also prove inadequate with respect to a system capable of making decisions or expressing opinions independently.
Moreover, these provisions do not exempt the injured party from proving the harm suffered, as well as the causal link between that harm and, respectively, the dangerous activity or the thing in custody.
On the other hand, the general rule of tort liability in Article 2043 of the Italian Civil Code also requires the injured party (hypothetically, the party harmed by the AI system) to prove the fault of the injurer.
Contractual liability could come to the rescue only where an actual relationship exists between the artificial intelligence service provider and the user. If a product or service were supplied by making use of an artificial intelligence system, one could hypothesize the application of Article 1228 of the Civil Code on liability for the acts of auxiliaries, assuming that a relationship can be configured between a third party (the artificial intelligence system) and a debtor (the party making use of it to supply the product or service), such a relationship being precisely what Article 1228 requires.
Some insights from recent case law experience
In some cases brought before the courts, the rules on producer liability have been applied in order to establish liability for damage caused by artificial intelligence, while in other cases liability has been attributed to the party who in any event had control over the use of the machine (see Brouse v. United States).
Interesting, in a different respect, is the decision of the Australian Federal Court in Thaler v. Commissioner of Patents, which denied the possibility of patenting an invention created by an artificial intelligence system because such a system lacks legal personality, i.e., the capacity to hold subjective legal situations.
Still in a different vein, the Italian Supreme Court recently decided a dispute concerning liability for damage caused by a reputational-rating artificial intelligence system through unlawful processing of personal data. In that case, the cause of the damage was found to be the lack of transparency about the algorithm used by the system to determine the rating.
The EU, alongside the AI Act, is working on a Directive intended to regulate the liability regime for artificial intelligence. The upcoming legal framework, however, remains uncertain at a time when businesses increasingly recognize the need to embrace AI in their operations. Some safeguards can already be implemented, and companies are adopting AI policies: no business can safely leverage artificial intelligence without the necessary guardrails.
On a similar topic, you may find the following article interesting: “EU Directive regarding the liability of artificial intelligence and the digital age upcoming”.
Authors: David Marino and Andrea Olivieri