Microsoft Xiaoice is one of the most advanced examples of artificial intelligence, but AI systems that are becoming “almost human” raise major privacy issues, which led the Council of Europe to issue guidelines on the topic.
Xiaoice – the “almost human” AI system
Thanks to an interesting article from Fabio Moioli, I got to know Xiaoice, an AI system originally developed by Microsoft in China in 2014 and based on an emotional computing framework. Combining algorithms, cloud computing and big data, Xiaoice has been upgraded generation after generation to gradually form a complete artificial intelligence system.
Since its creation in 2014, Xiaoice has engaged in over 30 billion conversations and has become one of the leading celebrities on Chinese social media. She is a singer, a poet and a journalist, and she has machine reading and text creation capabilities that she uses, for instance, to synthesize massive amounts of information and generate quarterly earnings report summaries for 90% of China’s financial institutions.
Might artificial intelligence become “too human”?
The example of Xiaoice is quite impressive and shows how rapidly artificial intelligence is evolving. We are no longer talking about the future, but about something that is already part of our everyday life. And the “humanity” of AI has sparked discussions among regulators on how to ensure that we don’t lose control of artificial intelligence.
I have already discussed the ethical issues around AI and IoT (read on the topic “What ethics for IoT and artificial intelligence?“), but there is also a compliance issue. The GDPR introduced relevant (or maybe excessive) limitations on the usage of artificial intelligence (read on the topic “Artificial intelligence – What privacy issues with the GDPR?“). In addition, the Council of Europe, by means of the Consultative Committee of the so-called Convention 108, issued a report on the challenges and possible remedies of artificial intelligence, together with guidelines for developers on how to make AI systems privacy-compliant and for regulators on how to exploit and control such technologies.
What challenges for artificial intelligence?
The report of the Council of Europe, which was written by Professor Alessandro Mantelero, identifies the major threats from AI in
“the disputed sets of values adopted by AI developers and users, the latter including both consumers and decision-makers who use AI to support their choices. There is an emerging tendency towards a technocratic and market-driven society, which pushes for personal data monetisation, forms of social control and ‘cheap & fast’ decision-making solutions on a large (e.g. smart cities) and small (e.g. precision medicine) scale.“
In other words, companies want to monetize data and save time and resources through AI, which makes it possible to process very large amounts of data in a limited time and to automate decisions that become faster and more precise.
On the basis of the above, the report stresses:
The importance of protecting the effective freedom of the human decision-maker
This is a principle already set out by the GDPR, under which individuals have the right to have an automated decision impacting them reviewed by a human. But the matter shall be assessed within a broader scope, to prevent artificial intelligence systems and big data from independently determining the choices of companies. Therefore “communities or groups potentially affected towards a participatory discussion on the adoption of AI solutions” shall be involved in the discussion.
I don’t think that, at the moment, the strategy of companies is entirely decided by machines. But given how rapidly Xiaoice has become a superstar in China, there is no doubt that company boards may be held accountable in the future if their decisions are not in line with the recommendations given by the AI system.
Transparency obligations are not enough against potential bias
A full disclosure of the reasoning behind the decisions of an artificial intelligence system would negatively affect IP rights and lead to competition issues. At the same time, merely disclosing the logic of the algorithms may not be enough to detect potential bias. Moreover, algorithms are dynamic and continuously change.
The balance between opposing needs is difficult to find. The level of disclosure given in a privacy information notice cannot be too detailed, since it needs to protect the assets of the company and also avoid abuses by fraudsters. There is no doubt that issues relating to potential bias in decisions taken by AI systems might lead to major disputes. The issue will be to identify the entity liable for the bias if it was merely the result of the machine’s own reasoning, rather than of the information provided by the developer.
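On the point that disclosing an algorithm’s logic may not be enough, it is worth noting that bias is often detected from the observed outcomes of an AI system rather than from its internal logic. Below is a minimal sketch in Python of such an outcome-based audit; the decision log, the group names and the “four-fifths” threshold are purely illustrative assumptions on my part, not something prescribed by the Council of Europe report.

```python
# A minimal sketch of an outcome-based bias audit, assuming a hypothetical
# log of automated decisions. The "four-fifths" threshold below is a common
# heuristic for disparate impact, used here only as an illustration.

from collections import defaultdict

# Hypothetical decision log: (group, approved) pairs taken from the
# AI system's outputs, not from its internal logic.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def approval_rates(log):
    """Compute the approval rate per group from observed outcomes."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, outcome in log:
        totals[group] += 1
        approved[group] += outcome  # True counts as 1, False as 0
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(log, threshold=0.8):
    """Flag groups whose approval rate falls below `threshold` times
    the best-treated group's rate (the so-called four-fifths rule)."""
    rates = approval_rates(log)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

print(approval_rates(decisions))    # {'group_a': 0.67, 'group_b': 0.33} (approx.)
print(disparate_impact(decisions))  # {'group_a': False, 'group_b': True}
```

An audit of this kind partly sidesteps the tension with IP rights described above, since it only requires access to the system’s decisions, not to its inner workings.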
Risk assessment shall consider ethical issues
Privacy laws already prescribe the need to perform a risk assessment of the impact of data processing activities on personal data and the affected individuals. But such assessments shall also take into account ethical issues, which change over time and might not keep pace with evolving technologies. Such assessments would increase trust in technologies and therefore their adoption by customers.
In order to be objective, assessments could be performed with the support of independent ethical committees, which are already in place in major companies. However, these committees shall have an actual role in the decision-making process; otherwise their involvement will appear to be just a new marketing initiative. The same applies to the possible involvement of stakeholders in the decision-making process, so that the interests of all the parties affected by the technology can be considered.
Liability remains an open issue, but vigilance is also important
There is no ad hoc liability regime for artificial intelligence. Product liability principles could be applied, making producers liable for the decisions of the AI system.
My view is that such a solution might make it impossible for small and medium-sized companies to invest in AI, given the potential risks and liabilities that could arise. It could be possible to set up a special regime of compulsory insurance coverage, together with a fund for the victims of AI, as happens for victims of car accidents. However, the regime might also need to cap the cost of such insurance coverage, to avoid it becoming a barrier to entering the market.
The Council of Europe privacy guidelines on artificial intelligence
Based on the findings of the report, the Council of Europe issued guidelines on how to ensure compliance of artificial intelligence with privacy law obligations, which can be summarized as follows:
- The protection of human dignity, rights and fundamental freedoms, and in particular the right to the protection of personal data, are essential when developing and adopting AI applications;
- AI development relying on the processing of personal data should be based on lawfulness, fairness, purpose specification, proportionality of data processing, privacy-by-design and by default, accountability, transparency, data security and risk management;
- An approach focused on avoiding and mitigating the potential risks of processing personal data is a necessary element of responsible innovation in the field of AI;
- A wider view of the possible outcomes of data processing should be adopted;
- AI applications must at all times fully respect the rights of data subjects;
- AI applications should allow meaningful control by data subjects over the data processing and related effects on individuals and on society.
My concern is that this type of “soft law” might not be effective enough to limit the risks that could derive from artificial intelligence, as I outlined in this recent article “Top 3 predictions on AI and IoT for 2019“.