Publications

Technology and Intellectual Property / December 2023

New European AI Act

You should read this client update if: 
1. You are a company developing or using AI;
2. You are an investor or an acquiror considering an investment in, or an acquisition of, a company developing or using AI; or
3. Like us, you simply love AI and technology regulation.
On December 8, 2023, members of the European Parliament, EU member states represented by the Council, and experts from the European Commission reached an agreement on the EU AI Act. The EU AI Act aims to ensure that AI systems placed on the European market and used in the European Union respect fundamental rights and EU values and are designed and used safely. It also aims to promote innovation and investment in artificial intelligence in the European Union. With the EU AI Act, the EU becomes the very first jurisdiction to set clear rules for the use of AI.

What are the main elements of the EU AI Act?
The rules establish obligations based on the potential risks and impact levels of AI systems, and this includes a risk classification. For example, the Act prohibits certain AI systems and practices that pose “unacceptable risks”, including:
–    biometric categorization systems that use sensitive characteristics (such as political, religious, philosophical beliefs, sexual orientation, race),
–    untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases,
–    emotion recognition in the workplace and educational institutions,
–    social scoring based on social behavior or personal characteristics,
–    AI systems that manipulate human behavior to circumvent users’ free will, and
–    AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).
For AI systems not prohibited but classified as “high-risk” (due to their significant potential harm to health, safety, fundamental rights, environment, democracy, and the rule of law), clear obligations have been established. For example, the EU AI Act requires a mandatory fundamental rights impact assessment (FRIA), increased transparency obligations and registration obligations, among other requirements.
AI systems presenting only “limited risks” would be subject to very light transparency obligations, for example disclosing that content was AI-generated so that users can make informed decisions about further use.
Finally, the EU AI Act sets rules concerning AI systems that can be used for many different purposes (referred to as “general-purpose AI”), and specific rules have also been set for “foundation models” (large systems capable of competently performing a wide range of distinctive tasks, such as generating video, text, images, and computer code, conversing in natural language, and computing). For instance, the Act provides that foundation models must comply with specific transparency obligations before they are placed on the market. A stricter regime has been introduced for ‘high impact’ foundation models (foundation models trained with large amounts of data and with advanced complexity, capabilities, and performance well above the average, which can give rise to systemic risks). The exact obligations and requirements for general-purpose AI and foundation models should become known in the next few days, once the final text is published by EU authorities and becomes generally available.

What are the consequences of noncompliance?
Noncompliance with the EU AI Act could lead to fines, determined as a percentage of the infringing company’s global annual turnover in the previous financial year or a predetermined amount, whichever is higher: €35 million or 7% for violations regarding the prohibited AI practices, €15 million or 3% for violations of the AI Act’s obligations, and €7.5 million or 1.5% for the supply of incorrect information in certain contexts. The Act also includes more proportionate caps on fines for start-ups (further details should be available in the next few weeks) and clarifies that individuals and companies will have the right to file complaints directly with the relevant authorities concerning noncompliance.

What about use of AI by law enforcement authorities?
The Act is not intended to apply to areas outside the scope of EU law. Equally, it is not intended to affect member states’ competences in national security or to apply to systems used exclusively for military or defense purposes. Moreover, the Act permits the use of remote biometric identification by law enforcement authorities in public spaces, subject to certain safeguards (previous versions of the Act had prohibited this activity; it is now permitted provided that law enforcement authorities respect additional safeguards).

What are the next steps?
The Act and its details will be published in the upcoming days. Subsequently, it will have to be confirmed, endorsed, and formally adopted by the EU institutions before it enters into force and becomes EU law. The AI Act will then apply two years after its entry into force, with some exceptions for specific provisions. However, the Act is unlikely to change substantially, and you may want to commence compliance, or at least consider its principles, already. For example, if you are a company currently developing or using AI, you may want to avoid falling within any of the “prohibited practices”, and if you are an investor currently considering an investment in an AI company, you may want to verify that the target company’s AI will not be deemed prohibited by the EU AI Act.

Do you want to learn more about AI? More questions on the EU AI Act?
Contact us or read our previous client updates and newsletters on Meitar’s media center (for example, here or here).