
EU AI Act: First Rules Take Effect on Prohibited AI Systems and AI Literacy

In Short

The Situation: The European Union's Artificial Intelligence Act ("AI Act"), the world's first comprehensive legal framework on AI, entered into force on August 1, 2024. The AI Act sets out staggered compliance deadlines for the various areas that it regulates.

The Development: The AI Act's first compliance deadline was February 2, 2025. As of that date, the prohibited risk category applies, banning the use of AI systems deemed to pose "unacceptable risks." The AI Act's AI literacy rules became applicable the same day.

Looking Ahead: More compliance deadlines lie ahead in the coming years, and the European Commission will issue further guidelines on complying with the AI Act. The EC has also issued the Second Draft General-Purpose AI Code of Practice to provide clarity and support consistent compliance for general-purpose AI models.

The EU's AI Act, which entered into force on August 1, 2024, aims to guarantee that AI systems placed on the European market and used in the EU are safe and respect fundamental rights and EU values. 

The AI Act includes a set of risk-based rules for developers and deployers applicable to varying AI use cases. By taking a risk-based approach, it seeks to achieve a balance that fosters customer trust, as well as investment and innovation in the field of AI within Europe. A key AI Act feature is its classification of AI systems into different risk categories: prohibited, high-risk, and those subject to transparency obligations. General-purpose AI ("GPAI") models can fall into any of these categories, depending on their application and potential impact. Relatedly, the EC has issued a second draft GPAI Code of Practice to clarify the AI Act's requirements applicable to these models.

First Compliance Deadline

As of February 2, 2025, the AI Act's first compliance deadline, the following provisions took effect: 

  • Prohibited AI Systems: Application of the AI Act's prohibited risk category effectively bans the use of AI systems deemed to pose "unacceptable risks." Prohibited AI systems include tools that perform social scoring, manipulate or exploit individuals, infer individuals' emotions in the workplace or educational institutions, perform real-time biometric identification in public spaces, or conduct untargeted scraping of the internet or CCTV footage for facial images to build or expand facial recognition databases. Alongside that compliance milestone, on February 4, the European Commission ("EC") published Guidelines on prohibited AI systems and practices, aimed at ensuring the AI Act's effective and uniform application across the EU. The Guidelines, in particular, explain the legal context and provide practical examples of prohibited AI systems to support stakeholder compliance.
  • AI Act Literacy Rules: The AI Act's literacy rules require all providers and deployers of AI systems (even AI systems classified as low-risk or no risk) to ensure that their personnel have a sufficient level of understanding of AI, including its opportunities and risks, to use AI systems effectively and responsibly. To comply, companies must develop and implement appropriate AI governance policies and training programs for their personnel. 

Guidance: Draft General-Purpose AI Code of Practice

The EC has issued a Second Draft General-Purpose AI Code of Practice (the "Code") for developers of GPAI models (i.e., AI systems that can perform multiple tasks across different domains and contexts). This draft Code, developed with industry stakeholders, aims to provide clarity on compliance requirements for the AI Act's consistent and effective application across the EU. The draft Code, expected to be finalized by May 2025, will serve as a guideline for developers to adhere to the AI Act's provisions.

Notably, the EC already unveiled a template for summarizing training data used in GPAI models on January 17, 2025. This template is a key component of the forthcoming GPAI Code of Practice (see also our Jones Day Commentary, "European Commission's AI Code of Practice and Training Data Summary Template").

Risks of Non-Compliance / Enforcement

The AI Act's prohibitions and obligations apply to companies offering or using AI systems. Violators face significant penalties depending on the nature of the non-compliance, including fines of up to €35 million or 7% of global annual turnover, whichever is higher. 

In particular, for providers of GPAI models, the EC may impose a fine of up to €15 million or 3% of worldwide annual turnover, whichever is higher. The AI Office, based in Brussels, will enforce the obligations for providers of GPAI models and will support EU Member State national authorities in enforcing the AI Act's requirements for AI systems, among other tasks. 
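The "greater of a fixed amount or a percentage of turnover" structure of these maximum fines can be illustrated with a short calculation (a simplified sketch for illustration only; the function name and defaults are our own, and actual fines depend on the nature of the violation and regulatory discretion):

```python
def fine_cap_eur(annual_turnover_eur: float,
                 fixed_cap_eur: float = 35_000_000,
                 turnover_pct: float = 0.07) -> float:
    """Upper bound of a fine: the greater of a fixed cap or a
    percentage of worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# Prohibited-practice violation, €2 billion turnover:
# 7% of turnover (€140M) exceeds the €35M fixed cap.
print(fine_cap_eur(2_000_000_000))                   # 140000000.0

# GPAI-provider caps from the article: €15M or 3% of turnover.
print(fine_cap_eur(100_000_000, 15_000_000, 0.03))   # 15000000.0
```

For large companies, the percentage prong will usually set the ceiling; the fixed amount serves as a floor on the maximum for smaller ones.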

Next Compliance Deadlines

The next major compliance deadline is August 2, 2025. By that date, EU Member States must designate national authorities responsible for the AI Act's enforcement. On this date, rules also take effect regarding penalties, governance, and confidentiality. 

On August 2, 2026, most other AI Act obligations become effective, including rules applicable to high-risk AI systems (Annex III) used: (i) as safety components in certain critical infrastructures; (ii) in employment and workers management; and (iii) in access to essential private and public services, creditworthiness evaluation, and risk assessment and pricing in relation to life and health insurance. Specific transparency requirements for AI systems also become effective on this date. 

By August 2, 2027, providers of GPAI models placed on the market before August 2, 2025, must comply with the AI Act. 

Immediate Steps to Take

Companies must assess whether and how the AI Act applies to their AI systems or GPAI models by:

  • Identifying and documenting all AI systems or GPAI models that a company develops or deploys, and their intended use cases;
  • Classifying all AI systems or GPAI models according to the respective risk category and compliance requirements;
  • Conducting a compliance gap and risk analysis to identify and address any compliance issues or challenges; and

  • Developing and implementing an AI strategy and a governance program, including an AI literacy training program for personnel.
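The inventory, classification, and gap-analysis steps above can be sketched as a simple internal tracking structure (a hypothetical illustration; the record fields and tier labels are our own shorthand for the AI Act's risk categories, not prescribed terminology):

```python
from dataclasses import dataclass, field

# Shorthand labels for the AI Act risk tiers described above (assumed naming).
RISK_TIERS = ("prohibited", "high-risk", "transparency", "minimal")

@dataclass
class AISystemRecord:
    """One entry in a company's internal AI system inventory."""
    name: str
    intended_use: str
    risk_tier: str
    compliance_gaps: list = field(default_factory=list)

    def __post_init__(self):
        if self.risk_tier not in RISK_TIERS:
            raise ValueError(f"unknown risk tier: {self.risk_tier}")

# Step 1-2: identify systems and classify them by risk tier.
inventory = [
    AISystemRecord("cv-screening", "rank job applicants", "high-risk",
                   ["human oversight procedure missing"]),
    AISystemRecord("chat-assistant", "customer support", "transparency"),
]

# Step 3: gap analysis — flag systems needing remediation or removal.
needs_action = [r.name for r in inventory
                if r.risk_tier == "prohibited" or r.compliance_gaps]
print(needs_action)  # ['cv-screening']
```

However a company implements it, the point is the same: every system gets a documented use case, a risk classification, and a tracked list of open compliance gaps.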

Three Key Takeaways

  1. Following the February 2, 2025 compliance deadline on prohibited AI systems and AI literacy rules, and with more compliance deadlines approaching, companies must act now to assess whether and how the AI Act applies to their AI systems or General-Purpose AI models.
  2. With fast-evolving technology and regulatory frameworks, companies should conduct regular audits to review and update internal governance, risk, and compliance programs for AI systems.
  3. Failure to comply with the AI Act can lead to significant penalties, including fines of up to €35 million or 7% of a company's global annual turnover, whichever is higher.
Insights by Jones Day should not be construed as legal advice on any specific facts or circumstances. The contents are intended for general information purposes only and may not be quoted or referred to in any other publication or proceeding without the prior written consent of the Firm, to be given or withheld at our discretion. To request permission to reprint or reuse any of our Insights, please use our “Contact Us” form, which can be found on our website at www.jonesday.com. This Insight is not intended to create, and neither publication nor receipt of it constitutes, an attorney-client relationship. The views set forth herein are the personal views of the authors and do not necessarily reflect those of the Firm.