Artificial intelligence systems are starting to think “like humans” rather than just calculating potential options, but might their full exploitation trigger some liability risks?
Artificial intelligence can “think” like humans
As anticipated, IoTItaly, the Italian Association on the Internet of Things of which I am one of the founders, ran an event in collaboration with STMicroelectronics named “Creativity and technology at the time of Industry 4.0” on 30 May 2017.
There were a number of panels during the event, but I participated in a very interesting discussion about artificial intelligence, and the topic quickly focused on whether
- either artificial intelligence should just accumulate knowledge and, on the basis of such knowledge, enable assessments that would be impossible for humans;
- or it can go beyond logical reasoning and take decisions that are more “intuitive”.
I found the video below, which tries to explain Google’s DeepMind system, fascinating.
As mentioned in the video, the “symbolic” event considered to mark the moment when machines started to be “intuitive” is the victory of the AlphaGo artificial intelligence system over a master of the ancient Chinese game Go.
DeepMind is the evolution of that approach. Indeed, it is described on its website as follows:
DeepMind is the world leader in artificial intelligence research and its application for positive impact. We’re on a scientific mission to push the boundaries of AI, developing programs that can learn to solve any complex problem without needing to be taught how.
AI no longer receives instructions; it learns by itself how to do things and starts thinking like humans. This is really impressive, since the level of understanding of situations that artificial intelligence systems can reach is potentially limitless and the gap between humans and machines is disappearing.
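To make the idea of “learning without being taught how” more concrete, here is a minimal, hypothetical sketch of reinforcement learning, the family of techniques behind AlphaGo: the agent is only given a reward signal and discovers a good strategy by trial and error. The tiny corridor environment, the reward values and all parameters below are invented for illustration and are not DeepMind’s actual code.

```python
# Minimal tabular Q-learning sketch: the agent is never told the "right" moves;
# it only receives a reward signal and improves its own behaviour by trial and error.
# The corridor environment and every number below are hypothetical, for illustration only.
import random

N_STATES = 6          # positions 0..5 in a corridor; the reward waits at position 5
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q[state][action] starts at zero: no instructions are given up front.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore sometimes (or when it has no preference yet), otherwise exploit.
        if random.random() < EPSILON or Q[state][0] == Q[state][1]:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] > Q[state][1] else 1
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Learn from experience alone: nudge the value estimate toward what was observed.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

# After training, the learned policy prefers moving right, toward the reward.
print(["right" if q[1] > q[0] else "left" for q in Q[:-1]])
```

The point of the sketch is that the behaviour emerges from the reward signal rather than from programmed rules, which is exactly why the liability questions discussed below become harder.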
What legal issues may arise from AI’s free decisions?
If artificial intelligence systems are left free to reach their own decisions, a number of “unexpected” new legal issues come up:
1. Who is liable for artificial intelligence systems?
The Legal Affairs Committee of the European Parliament approved a report calling on the EU Commission to introduce a set of rules on robotics. The Committee is in favour of strict liability rules for damages caused by robots, requiring only proof that damage has occurred and the establishment of a causal link between the harmful behaviour of the robot and the damage suffered by the injured party.
But what happens in the case of systems like Google DeepMind that are not instructed to perform certain activities, but just do them? Compulsory insurance schemes might be the solution, but this would add a further layer of costs, limiting the growth of these technologies.
2. Did artificial intelligence act ethically?
This is another topic touched on during the IoTItaly event. The best choice taken by the machine might not be the most ethical choice. Does this mean that artificial intelligence cannot be left totally free to take its decisions?
As discussed in this blog post, some companies are already establishing ethics committees to define ethical principles to be imposed on machines. This means that artificial intelligence systems might
- learn to also act ethically;
- have their decisions reassessed by a human, as happens in the medical sector; or
- be used in a fully unleashed manner only in contexts where no harm to humans might be caused.
3. Are you able to justify the decision of artificial intelligence?
Under the European General Data Protection Regulation, individuals shall be able to object to decisions taken by automated systems, which, in the case of health-related data, can be used only with their consent.
What will happen if a manual reassessment of the situation cannot achieve a full understanding of the reasoning performed by the machine?
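One approach sometimes put forward for this problem is to probe an opaque system from the outside and reconstruct, at least approximately, which factors drove its decision. The following is a hedged, purely illustrative sketch of such a perturbation check: the stand-in model, the feature names and every number are invented for the example and do not refer to any real system.

```python
# Hypothetical sketch: probing a black-box decision system by perturbing each input
# factor and observing how much its internal score moves. All values are invented.

def opaque_score(features):
    # Stand-in for an opaque AI system's scoring logic (coefficients are made up).
    age, income, prior_claims = features
    return 0.02 * income - 0.5 * prior_claims - 0.01 * age

def perturbation_report(score_fn, features, names, delta=0.1):
    """Report how the score changes when each factor is increased by `delta` (10%)."""
    baseline = score_fn(features)
    changes = {}
    for i, name in enumerate(names):
        perturbed = list(features)
        perturbed[i] *= (1 + delta)
        changes[name] = score_fn(perturbed) - baseline
    return baseline, changes

baseline, changes = perturbation_report(
    opaque_score, features=[45, 30.0, 2], names=["age", "income", "prior_claims"]
)
print(f"baseline score: {baseline:.2f}")
for factor, delta_score in sorted(changes.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"  {factor}: score moves by {delta_score:+.3f} when increased by 10%")
```

Such diagnostics are only approximations of the system’s behaviour, which is exactly why a human reviewer may still be unable to reconstruct the machine’s actual reasoning.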
These are just some initial thoughts on a topic that is definitely fascinating.