On February 2, 2025, the first provisions of the EU AI Act became applicable, introducing stringent rules and prohibited practices aimed at preventing AI from infringing on fundamental rights.
This article unpacks the key provisions and provides essential guidance for organizations navigating the Act’s new compliance landscape. Watch the latest episode of our podcast on the topic below, or find it on Apple Podcasts, Google Podcasts, Spotify, and Audible, and read the full article below.
The EU AI Act in Context
Just as the GDPR transformed data protection practices worldwide, the EU AI Act represents Europe’s effort to govern the use of Artificial Intelligence. While aimed at fostering innovation, the Act is unequivocal about protecting fundamental rights such as human dignity, privacy, and safety. It does so by categorizing AI use cases by risk levels and, crucially, explicitly banning certain practices that pose an unacceptable threat to individuals or society.
Even before these new prohibitions became enforceable on February 2, 2025, regulatory oversight had already intensified. The DeepSeek case showcases how swiftly authorities can move when concerns surface about mass data scraping, biased profiling, or intrusive predictive analytics. Read more HERE.
Prohibited AI Practices Under the EU AI Act
Under the EU AI Act, certain AI applications are flatly banned, reflecting lawmakers’ view that they are inherently incompatible with fundamental rights. The main categories include:
- Subliminal or Deceptive AI Techniques
- Description: AI that uses subliminal or deceptive techniques, beyond a person’s consciousness, to materially distort their behavior and impair their ability to make informed decisions.
- Example: An online shopping platform using imperceptible “nudges” to push users into buying premium add-ons they do not need.
- Exploiting Vulnerabilities
- Description: AI exploiting vulnerabilities related to age, disability, social, or economic status.
- Example: A financial app specifically targeting elderly customers with risky investment products, knowing they may lack the resources or knowledge to resist or understand the risks.
- Social Scoring
- Description: AI that evaluates or ranks individuals based on personal characteristics or predicted behavior over time, leading to harmful treatment or discrimination.
- Example: Denying a loan solely on the basis of an AI-generated “social score” derived from social media activity.
- AI for Criminal Risk Assessment
- Description: Predicting criminal behavior using profiling or personality traits as the sole basis.
- Exception: AI tools may be used to support a human decision grounded in verifiable evidence, but not to make autonomous determinations of criminal propensity.
- Untargeted Facial Recognition Data Collection
- Description: Untargeted, large-scale scraping of facial images from the internet or CCTV footage to create or expand facial recognition databases.
- Emotion Recognition in Work/Education
- Description: Using AI to detect or infer emotions in workplaces or educational settings, except for strictly necessary health or safety purposes.
- Example: Scanning students’ facial expressions during exams to label them as “stressed” or “distracted,” and using those inferences to penalize or discipline them.
- Biometric Categorization for Sensitive Traits
- Description: Inferring or classifying individuals by race, religion, political views, sexual orientation, etc., through biometric data.
- Allowed Exception: Certain law enforcement activities under a clear legal basis. Purely commercial uses remain off-limits.
- Real-Time Remote Biometric Identification in Public Spaces
- Description: Live facial scanning to identify people in publicly accessible areas.
- Allowed Exception: Extremely limited use for law enforcement, such as finding missing children or preventing imminent terror threats, with prior judicial or independent authority approval.
Practical Implications for Businesses
1. Review Your AI Systems Now
- Conduct an immediate audit of all AI applications, especially those involving biometric data or sensitive inferences. Map out data flows, identify sources of personal data, and document any high-risk AI uses.
2. Ensure Transparent Data Collection
- Scrutinize how you obtain data. Web-scraping or purchasing large datasets may lack a valid legal basis or breach consent requirements if not handled properly. Keep a paper trail of compliance for each dataset.
3. Embed Compliance by Design
- Implement human oversight, explanation mechanisms, and robust privacy controls from the outset. Align your data scientists, legal teams, and product managers so that compliance measures are integral to the system—not an afterthought.
4. Conduct Thorough Risk Assessments
- Train staff to spot and address manipulative or exploitative tendencies in AI outputs. Document decisions, especially where AI influences critical areas like lending, insurance, or hiring.
5. Maintain a Human-in-the-Loop Where Needed
- Autonomous decisions in areas deemed high risk can easily cross into prohibited territory. Incorporating meaningful human review can reduce the risk of infringing the Act.
Conclusion and Key Takeaways
- No Simple Workarounds
- Prohibited AI cannot be legitimized merely by user disclaimers or contracts. If a practice is banned, it stays banned.
- Be Proactive
- If your AI’s functionality is borderline or uncertain, seek legal guidance early. A minor tweak in design or data usage can be the difference between compliance and an investigation.
- Stay Informed
- The EU AI Act is dynamic. Keep tabs on enforcement trends, regulatory guidance, and legislative updates. Consider designated roles or committees to monitor AI risk across your organization.
Ultimately, compliance under the EU AI Act is more than a regulatory checkbox—it is an opportunity to build trust and differentiate your business in a competitive, innovation-driven market.
Also, you can read DLA Piper’s guide to the EU AI Act HERE and watch a presentation video of our AI compliance legal tech tool Prisca HERE.