The Italian Privacy Authority issued a decalogue of rules on artificial intelligence in healthcare that sets out principles applicable to both public and private companies using AI.
Artificial intelligence (AI) systems are assuming an increasingly prominent role in healthcare services. However, the use of AI in this area can be particularly risky. This is why the Italian Privacy Authority, the Garante, recently decided to issue a decalogue of rules for the implementation of healthcare services in the public sector through AI systems, which is useful guidance for any business developing or relying on artificial intelligence.
Below are the most relevant points of the decalogue in our opinion:
1. The legal basis for processing personal data in the public healthcare sector must be the public interest, which requires a specific law allowing the use of AI and defining in detail the requirements to be satisfied. Normally this requirement is fulfilled through a very detailed ministerial decree that also defines the applicable technical measures;
2. Businesses must be able to demonstrate that the solution was created in line with the principles of privacy by design. This requirement is normally met through documents and assessments drafted during product development;
3. The logic and metrics behind the algorithm must be disclosed in the privacy information notice and DPIA. Although this is an obligation for the data controller, the information will probably have to be provided by the vendor on whom the obligation consequently falls;
4. The data used to train the algorithm must be accurate and up-to-date. The problem with generative AI is that, while an algorithm can be trained to disregard specific information when generating output, information it has already learned cannot be removed. The same issue arises when individuals, such as patients, exercise their right to object;
5. The data controller must provide a very detailed DPIA that must include a risk and nondiscrimination analysis that is very reminiscent of the AI Act; and
6. The adequacy of the security measures implemented to protect the data processed must be demonstrated.
There is no doubt that the Italian data protection authority is trying to acquire a leading role within the European Union as the authority of reference on artificial intelligence. This intent was already confirmed by the actions undertaken against OpenAI, which led to the undertakings given by OpenAI to the Garante. You can read about it in this article "The Italian case on ChatGPT benchmarks generative AI's privacy compliance?".
On a similar topic, the article "The Power of Artificial Intelligence in Healthcare: Exploring Opportunities, Legal Obligations, and Risks" may be of interest.