AI and the magic formula for regulating it are continually invoked, but can traditional regulations set rules for artificial intelligence? What is missing?
One of my favorite childhood movies was “Back to the Future,” and you may remember the Wild Gunman scene in “Back to the Future Part II.”
The comment from the kids was:
“You mean you have to use your hands? That’s like a baby’s toy!”
The main difference between operating a videogame with a traditional Atari 2600 joystick and operating it through your brain is that with the Atari joystick you can see at every moment what the player is doing, while if the player’s brain operates the game, everything is invisible to our eyes and to any sort of control.
That appeared to be the future, but artificial intelligence systems are going beyond human nature in a manner that cannot be controlled either with tools of the Atari 2600 era or with most modern technologies.
How AI is going beyond what human nature can understand
I had the privilege of running a presentation at the Digital Legal Day of the German-Italian Chamber of Commerce on “AI and Human laws” with Fabio Moioli, Head of Consulting & Services at Microsoft. In a few minutes, Fabio gave a useful snapshot of how our lives are changing due to artificial intelligence and how they will soon change further, given the limitless potential of AI.
One of the ground-breaking events that he outlined was when the AI system AlphaGo, developed by Google’s DeepMind, defeated the Go world champion in 2016.
For those who do not know Go, it is a Chinese board game deemed the most complex game in the world due to the vast number of possible variations in individual games. And the defeat was so significant because Google had not just fed AlphaGo a large amount of literature on Go. It provided a limited amount of “direct” instruction and mostly relied on deep learning, with AlphaGo playing hundreds of millions of games against itself so that it could understand the game intuitively.
The playing strategy followed by AlphaGo was not based on logic. Indeed, during the game it made a move that appeared illogical, and some thought there was a “bug” in the system. But the apparently wrong move turned out to be a winning move that led to AlphaGo’s victory, leaving the world champion so upset that he had to leave the room…
The word “intuition” is the keyword in analyzing the evolution of AI, since it means that machines are going beyond mere reasoning and have added a component that CANNOT be logically explained.
And since then, the evolution has accelerated further: AlphaGo Zero received no instruction at all, relied solely on deep learning, and won all 100 games it played against AlphaGo.
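To give a sense of what “learning by playing against itself” means in practice, below is a minimal, hypothetical sketch of a self-play loop in Python. It is a toy tabular learner for the simple game of Nim, not DeepMind’s actual algorithm; every name and number in it is an assumption made purely for illustration.

```python
# A toy illustration of self-play learning, loosely inspired by the idea
# behind AlphaGo Zero: the program improves purely by playing against
# itself, with no human game records. This is NOT DeepMind's algorithm;
# it is a tabular value learner for the simple game of Nim (take 1-3
# stones per turn; the player taking the last stone wins).
import random
from collections import defaultdict

values = defaultdict(float)  # estimated value of each position for the player to move
EPSILON, ALPHA = 0.1, 0.05   # exploration rate and learning rate

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def choose(stones):
    # Epsilon-greedy: mostly pick the move leading to the position that is
    # worst for the opponent; sometimes explore at random.
    moves = legal_moves(stones)
    if random.random() < EPSILON:
        return random.choice(moves)
    return min(moves, key=lambda m: values[stones - m])

def self_play_game(start=21):
    history, stones = [], start
    while stones > 0:
        history.append(stones)
        stones -= choose(stones)
    # The player who just moved took the last stone and won. Walk back
    # through the positions, alternating win/loss credit, and nudge the
    # value estimates toward the observed outcome.
    outcome = 1.0
    for pos in reversed(history):
        values[pos] += ALPHA * (outcome - values[pos])
        outcome = -outcome

for _ in range(50_000):
    self_play_game()

# After training, positions that are multiples of 4 tend to look bad for
# the player to move (the known losing positions in this version of Nim).
print({n: round(values[n], 2) for n in range(1, 9)})
```

No strategy is ever written down by a human here: the “knowledge” emerges from millions of self-played games, which is precisely why the resulting play can look intuitive rather than logical.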
Do we need to regulate AI urgently?
The fast evolution of artificial intelligence systems led a super genius like Elon Musk to call for urgent regulations on AI in a fascinating and “scary” interview with Joe Rogan.
His view is that “Normally the way regulations are set up is when a bunch of bad things happen, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry,” but
“AI is the rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’ll be too late. […] AI is a fundamental risk to the existence of human civilization.”
This scenario is already materializing: the European Commission for the Efficiency of Justice (CEPEJ) of the Council of Europe adopted the first European charter setting out ethical principles relating to the use of artificial intelligence (AI) in judicial systems. You can read the LawBytes update from Tommaso Ricci explaining the contents of the charter (“EU Electronic Communications Code and AI ethical charter”), but in essence it sets out the five ethical principles below to regulate AI:
- Respect of fundamental rights;
- Non-discrimination;
- Quality and security;
- Transparency, impartiality and fairness; and
- Under user control.
The issue that I see with these principles is that they have been drafted with “traditional” conduct in mind: behaviors that are visible to our eyes so that, if there is a violation, we can challenge it.
How to control unexplainable artificial intelligence?
The title of this paragraph was inspired by a speech given by Andrew Burt at the University of Chicago, titled “Regulating Artificial Intelligence: How to Control the Unexplainable.”
It is possible to react to the “unexplainable” by simply prohibiting, rather than regulating, AI. This is, for instance, what European data protection regulators have been trying to do with the GDPR. You can read my article on the topic “Artificial intelligence – What privacy issues with the GDPR?”. In essence, the European privacy regulation prohibits the use of AI to take automated decisions that produce legal effects concerning individuals, or that similarly significantly affect them, unless the decision is:
- provided for by law, such as in the case of fraud prevention or money laundering checks;
- necessary for the performance of, or entering into, a contract; or
- based on the individual’s prior consent.
Those exceptions are interpreted narrowly, but the major obstacle is that individuals must be granted the right to object to automated decisions (commonly known as the right to receive a justification), and the privacy information notice shall outline the criteria according to which the machine takes automated decisions.
However, as explained above, AI decisions sometimes cannot be justified. This is, for instance, the case with neural networks, a type of artificial intelligence able to mimic the human brain, adapting to changing inputs so that the system generates the best possible result without the output criteria having to be redesigned.
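To make this concrete, consider a minimal sketch in Python (a hypothetical toy, not any real production system): even in a tiny neural network, the “criteria” behind a decision are nothing more than matrices of numbers learned from data, which is why spelling them out in a privacy notice is so difficult.

```python
# A toy two-layer neural network trained on XOR with plain numpy.
# The point is not the task but the result: the decision "criteria"
# end up encoded as matrices of real numbers that no privacy notice
# could meaningfully describe. Purely illustrative, not a real-world
# credit-scoring or fraud model.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20_000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (squared-error loss, learning rate 0.5)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print("predictions:", out.round(2).ravel())  # should approach [0, 1, 1, 0]
print("the 'decision criteria':")
print(W1)  # just numbers -- nothing a human could cite as a reason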
Artificial intelligence is going to disrupt every market
The conclusion cannot be to ban artificial intelligence since, for instance, the same Elon Musk who is urging the regulation of AI founded Neuralink, an American neurotechnology company reported to be developing implantable brain–computer interfaces.
According to a study from GlobalData, almost every industry will be disrupted by AI.
Several new entrants are coming into the market, but also companies like Google, Amazon, Microsoft, and IBM are heavily investing in AI and might become the new competitors of traditional businesses.
How shall we regulate AI properly?
AI cannot be ignored as it will become part of our lives, so how should we regulate it? Traditional regulatory approaches risk being
- either ineffective, since without the support of technology in investigations it will be impossible to identify misconduct;
- or inefficient, since they might limit the growth of AI technologies, penalizing some countries to the benefit of lightly regulated nations.
Some of the issues above were discussed as part of a consultation of the European Commission on how to regulate the Internet of Things (read on the topic “How the IoT will change with new European regulations?”). But my view is that it is necessary to focus on three main aspects to regulate AI properly:
1. Liability rules need to be affordable
No software is without bugs, and artificial intelligence is expected to considerably reduce costs and accidents compared to any manually handled process. If we expect AI to be “perfect” before being used, this will never happen, whereas humans have never been perfect, and human error is, for instance, the primary source of cyber-attacks.
Liability rules shall make those who benefit from AI technologies accountable for them. But such rules cannot provide for sanctions, fines, or penalties that businesses cannot afford, since this would hinder the exploitation of these technologies.
Countries that understand the relevance of artificial intelligence might create funds to support potential victims of AI errors, just as happens with the funds created for victims of car accidents.
2. Ethical rules have to be objective
I previously published an article on the topic (read “What ethics for IoT and artificial intelligence?”). Most of the major companies investing in AI have created an internal ethics committee, but ethical rules need to be coded in detail so that compliance with them can be verified. Also, such committees shall have an actual role within companies, rather than just being an internal consulting body with no control over the companies’ business.
Compliance with ethical rules shall also be audited by and reported to competent authorities, as otherwise compliance with such principles will become merely a sort of “advertising campaign”. On the contrary, the results achieved in ensuring compliance with ethical rules could become a competitive advantage in a business environment that will increasingly rely on trust between companies and their customers (read on the topic “Trust is the backbone of IoT, and there is no shortcut to success”).
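As a hedged illustration of what “coding ethical rules in detail” could look like, the non-discrimination principle can be translated into a measurable, auditable test, for instance a demographic-parity check on a model’s decisions. The 0.8 threshold and the sample data below are assumptions invented for this example; a real audit framework would be far more elaborate.

```python
# A minimal sketch of turning the "non-discrimination" principle into a
# verifiable, auditable check: demographic parity on automated decisions.
# The 0.8 threshold (the classic "four-fifths rule") and the sample data
# are assumptions for illustration only.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def passes_parity_check(decisions, threshold=0.8):
    """Fail if any group's approval rate falls below `threshold` times
    the best-treated group's rate."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return all(rate >= threshold * best for rate in rates.values())

# Invented example: loan decisions tagged with an applicant group.
log = [("A", True), ("A", True), ("A", False), ("A", True),
       ("B", False), ("B", False), ("B", True), ("B", False)]
print(approval_rates(log))       # {'A': 0.75, 'B': 0.25}
print(passes_parity_check(log))  # False -> flag for the ethics committee
```

The point is that a check like this produces a yes/no answer that an auditor or a competent authority can verify, rather than a vague commitment to fairness.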
3. AI can be regulated only with AI
As with cybercrimes on the Internet, which can be investigated only with the support of technical tools, the same holds for artificial intelligence. However, as emphasized by Elon Musk in his comment above, the difference is that we cannot wait for the first misconduct to regulate AI; we shall adopt a proactive approach, setting rules and starting investigations at this stage, while several technologies are still being developed.
Such rules and investigations shall be run with the support of AI, since only AI can understand how to regulate AI and identify potential misconduct.
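As a rough sketch of what “AI supervising AI” could mean in practice, a supervisory authority could run an anomaly detector over the stream of decisions produced by an automated system and flag the outliers for human review. The example below uses scikit-learn’s IsolationForest on invented data; the features and thresholds are assumptions, and a real supervisory system would be far more sophisticated.

```python
# A sketch of AI-assisted supervision: an anomaly detector scans the
# decisions of another automated system and flags outliers for human
# investigators. Data and features are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Pretend each row describes one automated decision, e.g.
# (risk score assigned, amount involved). Most are routine...
routine = rng.normal(loc=[0.5, 1_000], scale=[0.1, 200], size=(500, 2))
# ...but a few look nothing like the rest.
suspect = np.array([[0.95, 9_500], [0.05, 8_800], [0.99, 50]])
decisions = np.vstack([routine, suspect])

detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(decisions)  # -1 marks anomalies

flagged = decisions[labels == -1]
print(f"{len(flagged)} decisions flagged for human review:")
print(flagged.round(2))
```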
What is your view on the above? Some of the topics addressed in this article have been touched on in my previous posts listed below:
- Artificial intelligence – What privacy issues with the GDPR?
- How the IoT will change with new European regulations?
- What ethics for IoT and artificial intelligence?
- Trust is the backbone of IoT, and there is no shortcut to success