Update: The EU AI Act and Its Implementation

On July 12, 2024, the European Union officially published the Artificial Intelligence Act (EU AI Act), Regulation (EU) 2024/1689, in its Official Journal. This landmark legislation establishes the first comprehensive legal framework for regulating AI systems across the EU's 27 Member States. The Act enters into force on August 1, 2024, with most of its provisions becoming enforceable on August 2, 2026.


Overview

The EU AI Act is a product of extensive negotiations, designed to create a harmonized legal framework for the development, market placement, and use of AI systems within the EU. The regulation comprises 180 recitals and 113 articles, employing a risk-based approach to oversee the entire lifecycle of AI systems. Non-compliance with the Act can result in substantial financial penalties of up to EUR 35 million or 7% of worldwide annual turnover, whichever is higher.

Scope of Application

The Act defines AI systems as machine-based systems with varying levels of autonomy, designed to generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments. This broad definition aims to differentiate AI from simpler software systems, aligning with international standards like those set by the OECD.

The EU AI Act applies to a range of actors, including providers, deployers, importers, and distributors of AI systems that are placed on or used within the EU market. Importantly, it also covers providers established in third countries where the output produced by their AI systems is used within the EU. Certain exceptions apply, for example to open-source AI systems that are not classified as high-risk, and to AI systems developed and used solely for scientific research.

Prohibited AI Systems

The Act bans specific AI practices deemed harmful or abusive, such as subliminal techniques that manipulate behavior or deceptive practices that distort human decision-making. Limited exceptions are provided for law enforcement, particularly concerning real-time remote biometric identification in public spaces.

High-Risk AI Systems

The EU AI Act categorizes AI systems based on the risks they present, with "high-risk" systems being subject to the most stringent requirements. These include AI systems used in critical areas like education, employment, and law enforcement. High-risk systems must comply with rigorous standards in data governance, transparency, human oversight, and cybersecurity. The Act also includes a process for updating the list of high-risk use cases as AI technology evolves.

General-Purpose AI Models (GPAI Models)

The Act dedicates a specific chapter to General-Purpose AI Models (GPAI models), which are defined as AI models capable of performing a wide range of tasks across different applications. Providers of GPAI models must meet detailed obligations, including maintaining technical documentation, putting in place a policy to comply with EU copyright law, and cooperating with EU authorities. Models classified as posing systemic risks face additional requirements, such as standardized model evaluations and incident reporting.

Penalties

Non-compliance with the EU AI Act's stringent rules can lead to severe penalties. The maximum fine, reserved for breaches involving prohibited AI practices, is EUR 35 million or 7% of worldwide annual turnover, whichever is higher. Violations of most other obligations can result in fines of up to EUR 15 million or 3% of turnover. Separate penalties apply to providers of GPAI models, with fines of up to EUR 15 million or 3% of turnover for serious infringements.

Implementation Timeline

The EU AI Act enters into force on August 1, 2024, with most provisions becoming enforceable on August 2, 2026. However, some provisions take effect earlier: the rules on prohibited AI practices and the initial governance requirements apply from February 2, 2025, and the obligations for GPAI models apply from August 2, 2025. Member States must establish regulatory structures, including at least one AI regulatory sandbox, by August 2, 2026, to facilitate compliance and innovation in AI development.

This update provides an essential overview of the EU AI Act's scope, requirements, and penalties. As the implementation progresses, stakeholders involved in AI systems within the EU will need to closely monitor the upcoming deadlines and ensure compliance with this comprehensive regulation.
