Artificial intelligence can hardly be governed by local regulations alone; it requires global rules that may not be as far out of reach as they seem.
Artificial intelligence (AI) is rapidly transforming many areas of modern society, from healthcare to industry, from automation to transportation. However, in the face of this rapid technological evolution, an urgent need has emerged to establish appropriate legislation to regulate the use and development of artificial intelligence.
The European Union (EU), the United States, and China have responded to this rallying cry and are working to adopt AI legislation, and for once it appears that these regulations are moving in the same direction.
The EU has taken a more proactive approach to regulating artificial intelligence. The European Parliament is currently working on the AI Act, a regulation that classifies AI applications according to their level of risk and imposes strict requirements not only on high-risk technologies, but also on the so-called foundation models behind generative AI. Indeed, the launch of generative AI systems such as ChatGPT has accelerated the adoption of the final rules, leading to the introduction into the AI Act of ad hoc provisions governing a type of AI that was unknown to most until a few months ago and is now viewed with considerable suspicion. The regulation includes principles of transparency, lawfulness of the data and documents used to train AI, and risk analysis before a product is placed on the market, creating a fairly comprehensive regulatory framework. The European Parliament is expected to vote on the proposal in mid-June, with the final text to be approved by the end of the year.
Similar discussions about an AI regulation are underway in the United States. At the event organized by the law firm DLA Piper on May 30, 2023, Victoria Espinel, a member of the group of experts tasked with working on a draft US AI regulation, described how the U.S. legislature is focusing on ethical issues and the potential discrimination that could arise from the use of AI. Artificial intelligence systems do not in fact perform any "reasoning" as a human being would; they "infer" a response from the information on which they have been trained, and that information may contain biases that lead to discriminatory decisions.
According to the position expressed by Victoria Espinel, there will be no global legislation on artificial intelligence in the near term, as that remains a futuristic prospect. However, European and American legislators are increasingly engaging with one another to ensure a uniform approach to AI on both sides of the Atlantic, not least to prevent companies from being put at a disadvantage by overly stringent local AI regulations.
But even if a shared approach is reached, the problem would be the timing of the legislation's implementation. If the AI Act were adopted in late 2023, there would be at least one year, if not two, before it becomes binding. Is this timeline compatible with the speed of AI's evolution? Are we not in danger of adopting legislation that is already "old" before it becomes enforceable? Moreover, some activists argue that the world cannot wait two years for an AI regulation to be introduced.
For this reason, the European Union is working, in parallel with the AI Act, on a voluntary code of conduct on artificial intelligence. These would be non-binding but immediately applicable rules that could readily be agreed with the United States and updated as the technology evolves. There would be no sanctions, but the principles behind the AI Act already exist, scattered and disorganized, across other existing legislation. Therefore, if authorities begin interpreting existing regulatory obligations in line with the code of conduct, and major market players commit to complying with it, the code would de facto become binding.
Global rules on artificial intelligence may seem like a futuristic scenario, especially given the limited number of companies that control the artificial intelligence market to date.
In conclusion, the future of artificial intelligence regulation in both the European Union and the United States will depend on many factors, chief among them the need to balance technological innovation with the protection of individuals. Both the EU and the United States are aware of the importance of appropriate regulation to ensure that artificial intelligence is used responsibly, ethically, and safely. The approach ultimately followed, however, will depend on many variables that today appear unpredictable and fast changing.
The hope is that overly restrictive regulations will not be adopted instinctively, simply out of fear of AI: generative artificial intelligence is surely the most disruptive invention of recent years, and our society needs it in order to grow.
To support companies not only in this transitional phase, but also in future uses of artificial intelligence, DLA Piper is developing legal tech solutions that efficiently, quickly, and reliably assess the compliance of AI solutions with applicable regulations and market standards.
On a similar topic, the following article may be of interest: "EU's AI Act agreed as ChatGPT returns to Italy: Accelerating the AI revolution".