The first case involving an artificial intelligence system that lost a large amount of money trading on the stock market raises the question of which liability regime should apply to the conduct, and misconduct, of an AI.
You can read the article on the topic below, and also watch my video as part of Diritto al Digitale.
The dispute between Samathur Li Kin-kan and Raffaele Costa
This case concerns a dispute, reported by Bloomberg, over a claim brought by Mr. Samathur Li Kin-kan against Mr. Raffaele Costa, who had recommended to Mr. Li the use of an artificial intelligence system named K1, developed by the AI company 42.cx. The machine was a product that Mr. Costa offered to the clients of his investment firm to recommend investments.
The artificial intelligence system would collect information from online sources such as real-time news and social media to gauge investor sentiment and make predictions on U.S. stock futures. The AI would then send instructions to a broker to execute trades, adjusting its strategy over time through machine learning.
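Bloomberg's description suggests a pipeline of roughly this shape: gather text, score sentiment, size a position, route an order. The Python sketch below is purely illustrative; K1's actual architecture, data sources and models are not public, and every name in it (gauge_sentiment, decide_order, the word lists) is a hypothetical stand-in, not 42.cx's code.

```python
# Minimal sketch of a sentiment-driven trading loop, assuming a simple
# bag-of-words sentiment model. All names and logic are hypothetical.

from dataclasses import dataclass

@dataclass
class Signal:
    sentiment: float   # aggregate investor sentiment in [-1, 1]
    confidence: float  # model confidence in [0, 1]

def gauge_sentiment(headlines: list[str]) -> Signal:
    """Toy stand-in for the sentiment model: counts bullish vs bearish words."""
    bullish = {"rally", "beat", "surge", "growth"}
    bearish = {"crash", "miss", "plunge", "recession"}
    score = 0
    for text in headlines:
        words = set(text.lower().split())
        score += len(words & bullish) - len(words & bearish)
    total = max(len(headlines), 1)
    return Signal(sentiment=max(-1.0, min(1.0, score / total)), confidence=0.5)

def decide_order(signal: Signal, max_contracts: int = 10) -> int:
    """Map sentiment to a futures position: positive -> long, negative -> short."""
    return round(signal.sentiment * signal.confidence * max_contracts)

if __name__ == "__main__":
    news = ["Tech stocks rally on earnings beat",
            "Analysts warn of recession risk"]
    sig = gauge_sentiment(news)
    contracts = decide_order(sig)
    print(f"sentiment={sig.sentiment:+.2f} -> send order for {contracts} contracts to broker")
```

Even in this toy version, the liability question is visible: the trading decision emerges from data and learned parameters rather than from an instruction any human gave for that specific trade.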
Apparently, Mr. Li instructed Mr. Costa's company to use K1 to manage over $2.5 billion of his wealth, but the AI system's predictions turned out to be wrong, with losses exceeding $20 million in a single day.
These events led Mr. Li to bring a claim against Mr. Costa for having misrepresented the capabilities of the artificial intelligence system. Mr. Costa's position, according to Bloomberg's article, is that he had never guaranteed the AI strategy would make money.
Who is liable for failures of the artificial intelligence system?
The liability regime applicable to AI is an open question that has been heavily discussed in recent years. I published an article on the topic, "Artificial intelligence shall be unleashed?", some time ago.
There was indeed a report by the Legal Affairs Committee of the European Parliament calling on the EU Commission to introduce a set of rules on robotics. Among the options outlined in the report was the introduction of a strict liability regime for producers of artificial intelligence systems, along the lines of the product liability regime. Under such a scheme, the injured party would only need to prove the damage suffered and a causal link between the harmful behavior of the robot and that damage in order to recover from the producer.
But such a solution raises questions as to:
- Whether the liable producer shall be the manufacturer of the final product incorporating the AI (e.g., the self-driving car) or merely the manufacturer of the artificial intelligence system. In the case of Mr. Li and Mr. Costa, 42.cx had manufactured the machine, but the claim was brought against Mr. Costa, who had "sold" its services to Mr. Li;
- What happens in the case of "autonomous" systems like Google DeepMind, which do not receive instructions from their producer but develop their own knowledge and conclusions. Would the strict liability regime extend to any possible decision the machine might take? and
- If an insurance scheme becomes compulsory for robot producers or owners (e.g., producers of self-driving cars), whether such an obligation would represent an additional cost that would either be passed on to customers or prevent (or at least hinder) the development of such technologies.
The solution is not to postpone the use of AI until there is zero risk!
I recently gave a speech on the liability regime of artificial intelligence at a conference, and a smart guy in the audience made a statement on the topic. He said that AI should not be used for self-driving cars (though I believe he was referring to any possible sector) until we are 100% sure that it cannot fail, be hacked or, in general, misbehave.
This statement is, in my view, symptomatic of a lack of understanding of the issue. If we look at statistics on car accidents, for instance, the vast majority of them are due to human error. Humans are allowed to drive cars, even though they can fail. At the same time, AI systems can considerably reduce the number of car accidents.
This means that if we had to wait until an artificial intelligence system could never fail, we would miss the opportunity to substantially reduce the risk of car accidents, just because we are scared of machines.
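The underlying arithmetic is simple. The figures in the sketch below are invented purely for illustration, not real accident statistics: assume 90% of accidents stem from human error, and that an AI driver still fails in 20% of the situations where a human would have erred.

```python
# Illustrative back-of-the-envelope comparison; all numbers are assumptions.
human_accident_rate = 1.0   # normalized baseline accident rate for human drivers
human_error_share = 0.9     # assumed share of accidents caused by human error
ai_residual_risk = 0.2      # assumed AI failure rate in situations it takes over

ai_accident_rate = (
    human_accident_rate * (1 - human_error_share)               # accidents AI cannot affect
    + human_accident_rate * human_error_share * ai_residual_risk  # AI's own failures
)

print(f"AI-driven accident rate: {ai_accident_rate:.2f} of the human baseline")
# -> 0.28: even a fallible AI cuts expected accidents by roughly 70% under
#    these assumptions, so demanding zero risk before deployment forfeits
#    a large reduction in harm.
```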
The first Asimov law of Robotics states that
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
But drawing the right line between regulating AI and over-regulating it is one of the significant challenges of the coming years. There is no doubt, though, that artificial intelligence is the future: it can improve our lives, and we need to find a way to exploit it properly.
On the same topic, you may read "Should artificial intelligence have freedom of choice?"