The EU AI Act: What does it mean for you?

Updated September 23rd, 2024


After extensive debate and discussion among its 27 member states, the European Union has officially published the EU AI Act. The new Act will regulate the use of Artificial Intelligence in EU countries — the first such comprehensive legislation of its kind in the world. Here’s a look at what the EU AI Act covers, when it goes into effect, and how it might impact you.



Table of contents #

  1. The structure of the EU AI Act
  2. What types of AI systems are regulated under the EU AI Act?
  3. Does the EU AI Act regulate foundational models like ChatGPT?
  4. What are the penalties for non-compliance with the EU AI Act?
  5. When does the EU AI Act go into effect?
  6. What does the EU AI Act mean for you?
  7. Conclusion
  8. Related Reads

The structure of the EU AI Act #

According to the European Parliament, the goal of the EU AI Act “is to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly.” In particular, the Act aims to enshrine human control over the operation and output of AI systems.

The EU AI Act itself consists of 13 chapters and 180 recitals. (There is a helpful tool, the EU Artificial Intelligence Act Explorer, that allows you to explore the text in an intuitive way, including the ability to search for parts that are most relevant to you). The chapters touch on several topics, including prohibited practices, high-risk systems, guidelines around the use of general-purpose AI models, principles of AI governance, and codes of conduct for companies that use AI. The Act also specifies penalties for non-compliance.


What types of AI systems are regulated under the EU AI Act? #

The EU AI Act defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

In other words, the Act covers both what we would think of as “traditional AI” or Machine Learning (prediction & recommendation models) as well as “Generative AI” (creation of texts, images, and other outputs based on prompts).

The Act covers any such system that is made available for sale and use in the European Union.

The Act further defines two categories of risk for AI systems: unacceptable risk and high risk. AI systems that fall into neither category are still required to meet other data transparency requirements.

Unacceptable risk AI #


The Act considers an AI system to pose an unacceptable risk if it can be “considered a threat to people.” Such systems include, among others:

  • Products that engage in “cognitive behavior manipulation” or target vulnerable populations. This includes techniques such as subliminal messaging and manipulation of people through machine-brain interfaces.
  • Social scoring systems based on socio-economic status or personal characteristics.
  • Untargeted facial recognition via scraping of CCTV cameras or public footage (e.g., YouTube videos).
  • The use of biometric identification in law enforcement, with a few specific exceptions (e.g., an imminent threat to human life).

Real-time biometric identification systems used for law enforcement must be pre-approved and registered with the EU.

High-risk AI systems #


The EU AI Act further specifies “high-risk AI systems” as any system that falls under the EU’s existing product safety legislation. This includes aircraft, cars, medical devices, and elevators, among others.

The Act also designates AI systems in a number of other categories as high-risk: critical infrastructure management, educational and vocational training, law enforcement, and migration and border control, among others. These systems are required to be registered with the EU in a centralized database.

All high-risk AI systems will require an initial assessment and ongoing assessments to ensure their safety. Consumers can also lodge complaints against AI systems they believe may pose an unacceptable risk to the public.

Transparency requirements #


AI systems that don’t fall into one of these two buckets still need to conform to the EU AI Act’s transparency requirements. According to the EU, this means they must:

  • Disclose that content was generated by AI so that users are aware it isn’t authentic (e.g., so-called “deep fake” videos using images of real people)
  • Design the model to prevent generation of illegal content
  • Publish summaries of the copyrighted data used to train the model
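The first requirement, disclosing that content is AI-generated, can be sketched as a small labeling step in application code. This is a hypothetical illustration, not an API defined by the Act; the class and field names are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class GeneratedContent:
    """Hypothetical wrapper that carries an AI-disclosure label
    alongside model output, so users are aware it isn't authentic."""
    body: str
    model_name: str
    ai_generated: bool = True

    def disclosure(self) -> str:
        # Human-readable notice to display next to the content.
        if self.ai_generated:
            return f"This content was generated by AI ({self.model_name})."
        return ""

clip = GeneratedContent(body="A synthetic voice-over script.",
                        model_name="example-model")
print(clip.disclosure())
```

The key design point is that the label travels with the content itself, rather than being bolted on at display time, so every downstream surface can render the disclosure.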

Does the EU AI Act regulate foundational models like ChatGPT? #

According to the EU AI Act, foundation models themselves (general-purpose generative AI models such as OpenAI’s GPT, Meta’s Llama, and Anthropic’s Claude) are not automatically regarded as high-risk. However, the Act also specifies that the EU may subject “more advanced” models like GPT-4, those deemed to pose systemic risk, to “thorough evaluations” before allowing them on the market.


What are the penalties for non-compliance with the EU AI Act? #

The maximum penalty for violating the EU AI Act’s stipulations on prohibited uses of AI is a fine of up to EUR 35M or seven percent of a company’s worldwide annual turnover, whichever is higher. Lower tiers of fines apply for violating other provisions or for knowingly providing false information to the relevant authorities.

In addition, providers of the general-purpose large language models that AI systems are built on top of can face fines of three percent of annual worldwide turnover or EUR 15M, whichever is higher, for violating the Act, whether knowingly or through negligence.
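The “whichever is higher” rule is simple arithmetic. As a quick sketch using the general-purpose-model tier above (the function name is illustrative):

```python
def applicable_fine(annual_turnover_eur: float) -> float:
    """Upper bound of the fine for a general-purpose model provider:
    the greater of 3% of worldwide annual turnover and EUR 15M."""
    return max(0.03 * annual_turnover_eur, 15_000_000)

print(applicable_fine(2_000_000_000))  # large provider: the 3% figure applies
print(applicable_fine(100_000_000))    # small provider: the EUR 15M floor applies
```

In other words, the fixed amount acts as a floor for smaller companies, while the percentage scales the penalty up for large ones.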

Large enterprises will be penalized with the maximum fine possible. Small and medium enterprises (SMEs) and startups found to be in violation, though, will pay the lowest amount applicable by law. The EU is moving to further support startups and SMEs by providing public testbeds for these companies to trial their products on real-world data before going to market.


When does the EU AI Act go into effect? #

The EU AI Act was approved in May 2024 and entered into force on August 1st, 2024. The EU will begin enforcing most of its provisions on August 2nd, 2026.

Some of the more critical provisions take effect sooner, however: systems posing unacceptable risks are banned as of February 2nd, 2025. Additionally, all general-purpose AI models must comply with transparency requirements by August 2nd, 2025.


What does the EU AI Act mean for you? #

Given how new the Act is, it is too early to say how exactly it may apply to a specific use case. What’s clear, however, is that any AI project with plans to launch in the EU within the next two years needs to plan now to ensure compliance.

This means engaging your legal team early in the requirements and design process, and basing product decisions on a careful reading of the law.

Although the EU AI Act is the first comprehensive law regulating AI, it won’t be the last. The inevitability of future regulatory actions from government entities all over the world means that now is the time to consider crafting policies that can help future-proof your upcoming AI projects.

Companies that wish to remain ahead of the AI regulations curve should craft and institute firm AI data governance policies before launching new AI endeavors — or adding AI capabilities to existing offerings. These include creating guidelines within important areas such as transparency, accountability, fairness, reliability, privacy and security, and establishing clear rules around responsible use.

Regulations around data transparency require your company to verify the source of all data used in training any AI that is part of your systems or products. That means using techniques such as data lineage to map the data used in AI apps back to its origin.
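As a rough illustration of the idea (a hand-rolled sketch, not a real lineage tool; the dataset names and sources are invented), each derived dataset can keep pointers to its parents so that any training input can be traced back to an original source:

```python
class Dataset:
    """Minimal lineage node: a dataset plus links to the datasets
    it was derived from."""
    def __init__(self, name, source=None, parents=()):
        self.name = name
        self.source = source          # e.g., "first-party web logs"
        self.parents = list(parents)  # upstream Dataset objects

    def origins(self):
        """Walk the lineage graph back to its root datasets."""
        if not self.parents:
            return {(self.name, self.source)}
        found = set()
        for parent in self.parents:
            found |= parent.origins()
        return found

raw = Dataset("raw_clickstream", source="first-party web logs")
licensed = Dataset("licensed_corpus", source="licensed text vendor")
cleaned = Dataset("cleaned_text", parents=[raw, licensed])
training = Dataset("training_set", parents=[cleaned])
print(training.origins())
```

Production lineage tools record the same parent-child graph automatically at the pipeline level; the point here is only that every transformation preserves a path back to a named, documented source.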

Gartner suggests companies seeking to future-proof their AI data governance leverage Generative AI capabilities themselves. This includes creating AI-powered data workflows for classification, cataloging, enriching customer data, and targeting.
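A classification workflow of this kind can be approximated in a few lines. The sketch below uses simple keyword rules as a stand-in for the AI-powered tagging Gartner describes; the rules and labels are illustrative only:

```python
# Keyword rules standing in for an AI classifier; labels are illustrative.
RULES = {
    "pii": ["email", "phone", "address", "ssn"],
    "financial": ["invoice", "payment", "revenue"],
}

def classify_columns(columns):
    """Tag catalog columns with governance labels based on name keywords."""
    tags = {}
    for col in columns:
        tags[col] = sorted(
            label for label, keywords in RULES.items()
            if any(kw in col.lower() for kw in keywords)
        )
    return tags

print(classify_columns(["customer_email", "payment_amount", "signup_date"]))
```

In practice an LLM would replace the keyword table, but the surrounding workflow (scan the catalog, attach governance labels, route tagged assets to the right policy) stays the same.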


Conclusion #

When it was first released, the EU’s General Data Protection Regulation (GDPR) was a comprehensive and unique law granting consumers specific rights to their data. Over time, it quickly became a template for similar legislation across the world.

Time will tell the full impact of the EU AI Act, but its launch holds strong potential to influence AI legislation in the same way that GDPR regulations rippled outward to influence data legislation globally.

Understanding and supporting the EU AI Act today, before it is fully in force, can go a long way toward future-proofing your AI initiatives not just in the EU but every other part of the world as well. And the time to start is now.



