Publications

Technology and Intellectual Property / January 2025

EU AI Act – Prohibited AI Practices

You should read this if you are developing, deploying or using AI, or if you are considering an investment in, or an acquisition of, a company that develops, deploys or uses AI.

Starting February 2, 2025, certain AI practices will be prohibited by the EU AI Act.

The list of prohibited practices includes the placing on the market, the putting into service or the use of AI systems engaging in the following:

  • Subliminal and Manipulative Techniques: AI systems that deploy subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect, of materially distorting the behavior of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken, in a manner that causes or is reasonably likely to cause that person, another person or a group of persons significant harm.
  • Exploitation of Vulnerabilities: AI systems that exploit any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behavior of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm.
  • Social Scoring: AI systems for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behavior or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following: (i) detrimental or unfavorable treatment of certain natural persons or groups of persons in social contexts that are unrelated to the contexts in which the data was originally generated or collected; (ii) detrimental or unfavorable treatment of certain natural persons or groups of persons that is unjustified or disproportionate to their social behavior or its gravity.
  • Predictive Risk Assessments: AI systems for making risk assessments of natural persons to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics.
  • Facial Recognition: AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.
  • Emotion Inference: AI systems that infer the emotions of a natural person in the workplace or in education institutions.
  • Biometric Categorization: Biometric categorization systems that categorize natural persons individually based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.
  • Real-Time Biometric Identification: The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary for certain objectives defined by the EU AI Act.

Engaging in the abovementioned practices can result in administrative fines of up to EUR 35,000,000 or up to 7% of the infringing company’s total worldwide annual turnover for the preceding financial year, whichever is higher. As with the GDPR, failure to comply with the EU AI Act could also mean losing business in the EU, losing investment or M&A opportunities, exposure to litigation, or reputational damage.

Given that the abovementioned provisions contain nuances and exceptions, it is important to review your AI systems, models and practices to ensure compliance, mitigate potential legal risks, and facilitate seamless interactions with your customers and business partners.

If you have any questions or need assistance navigating these changes, please contact us.