IBM's recent paper on the ethical risks of artificial intelligence (AI) raises some important questions for companies exploiting AI.
Many believe that all obligations and duties relating to generative AI lie with the developers of these solutions, such as Microsoft, OpenAI, and Google. As a result, the companies exploiting these technologies often have no AI policy and perform no compliance assessments. Rather, they rely on the assessments performed by the provider, as if a properly drafted contractual clause could protect them from any liability or challenge.
Moreover, companies adopting AI solutions often leave dealings with AI providers to their IT departments, which focus on the technical functioning of the solution, arguing that potential risks are remote and that all the technical safeguards are in place. Consequently, companies do not document the implementation process and the adopted safeguards in legal documents; they simply put them in place.
IBM’s paper highlights the considerable potential risks of unpredictable outputs generated by AI solutions. No software is bias-free, and the same applies to generative artificial intelligence. No one expects a strict liability regime to apply to AI solutions. Strict liability may not even be the issue, though, if a company cannot prove that it adopted all the measures necessary to ensure compliance and to prevent unethical outputs and the risks that follow from them.
Investing in AI compliance may be challenging, especially as many companies are still running pilot programs. Yet AI can be either a company's greatest resource or its most dangerous threat.
We created a legal tech tool to assess the compliance of artificial intelligence solutions, named PRISCA AI Compliance; you can watch a presentation video HERE. Especially at a time when businesses are aware of the potential of AI but do not yet exploit it to its full capabilities, PRISCA offers a convenient way to deal with AI compliance in an efficient and trustworthy manner.
On a similar topic, you can read HERE some of the most relevant articles on AI compliance.