
EU AI Act: How risk is classified

The EU AI Act classifies AI systems into four risk levels: unacceptable, high, limited, and minimal risk. Each class comes with different regulations and requirements for organizations developing or using AI systems. This article explains how AI systems and GPAI models are classified and gives examples of high-risk cases.


We previously outlined the EU AI Act in a short article, where we showed that the EU takes a risk-based approach to regulating AI systems. Different use-cases entail different levels of risk, which the EU co-legislators describe in their agreement. This article drills down into the risk classifications and the corresponding application areas to help you understand whether your AI use-case qualifies as high-risk.

Wondering if you are affected by the AI Act and which risk level your AI application is classified as? Then take this free self-assessment to quickly get an answer.

Current Status

After a long negotiation period, the European Commission, the Council, the Parliament, and the EU Member States reached a final agreement in early 2024. The EU AI Act therefore comes into force in August 2024, and the first provisions of the AI Act will apply six months later, in February 2025.

Risk-Classifications according to the EU AI Act

The EU's Artificial Intelligence Act (AIA) sets out four risk levels for AI systems: unacceptable, high, limited, and minimal (or no) risk. There will be different regulations and requirements for each class.

Unacceptable risk is the highest level of risk. This tier can be divided into eight (initially four) AI application types that are incompatible with EU values and fundamental rights. These are applications related to:

  1. Subliminal manipulation: changing a person's behavior without them being aware of it, which would harm a person in any way. An example could be a system that influences people to vote for a particular political party without their knowledge or consent.
  2. Exploitation of the vulnerabilities of persons resulting in harmful behavior: this includes vulnerabilities due to a person's social or economic situation, age, or physical or mental ability. For instance, a voice-assisted toy that encourages children to do dangerous things.
  3. Biometric categorization of persons based on sensitive characteristics: this includes gender, ethnicity, political orientation, religion, sexual orientation and philosophical beliefs.
  4. General purpose social scoring: using AI systems to rate individuals based on their personal characteristics, social behavior and activities, such as online purchases or social media interactions. The concern is that, for example, someone could be denied a job or a loan simply because of their social score that was derived from their shopping behavior or social media interactions, which might be unjustified or unrelated.
  5. Real-time remote biometric identification (in public spaces): these biometric identification systems are banned, as is ex-post identification. Exceptions can be made for law enforcement with judicial approval and under the Commission's supervision, and only for the pre-defined purposes of the targeted search for crime victims, the prevention of terrorism, and the targeted search for serious criminals or suspects (e.g. trafficking, sexual exploitation, armed robbery, environmental crime).
  6. Assessing the emotional state of a person: this applies to AI systems used at the workplace or in education. Emotion recognition may still be allowed as a high-risk application if it serves a safety purpose (e.g. detecting whether a driver is falling asleep).
  7. Predictive policing: assessing the risk of persons for committing a future crime based on personal traits.
  8. Scraping facial images: creating or expanding databases with untargeted scraping of facial images available on the internet or from video surveillance footage.

AI systems related to these areas will be prohibited in the EU.

High-risk AI systems will be the most regulated systems allowed in the EU market. In essence, this level includes safety components of already regulated products and stand-alone AI systems in specific areas (see below), which could negatively affect the health and safety of people, their fundamental rights or the environment. These AI systems can potentially cause significant harm if they fail or are misused. We will detail what classifies as high-risk in the next section.

The third level of risk is limited risk, which covers AI systems with a risk of manipulation or deception. AI systems falling under this category must be transparent: humans must be informed that they are interacting with an AI (unless this is obvious), and any deep fakes must be labelled as such. Chatbots, for example, are classified as limited risk. This is especially relevant for generative AI systems and their content.

The lowest level of risk described by the EU AI Act is minimal risk. This level includes all other AI systems that do not fall under the above-mentioned categories, such as a spam filter. AI systems under minimal risk do not have any restrictions or mandatory obligations. However, it is suggested to follow general principles such as human oversight, non-discrimination, and fairness.
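To make the decision order tangible, here is a minimal, purely illustrative Python sketch of the four-tier logic described above. The enum, the keys of the use_case dictionary, and the classify function are our own simplified assumptions, not terminology from the AI Act; the actual classification depends on the legal text and your concrete use-case.

```python
# Purely illustrative sketch of the four-tier decision order described above.
# The enum values and dictionary keys are simplified assumptions, not legal terms.
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices (e.g. social scoring)
    HIGH = "high"                  # safety components or Annex III areas
    LIMITED = "limited"            # transparency obligations (e.g. chatbots)
    MINIMAL = "minimal"            # everything else (e.g. spam filters)

def classify(use_case: dict) -> RiskLevel:
    """Check the risk classes in descending order of severity."""
    if use_case.get("prohibited_practice"):
        return RiskLevel.UNACCEPTABLE
    if use_case.get("safety_component") or use_case.get("annex_iii_area"):
        return RiskLevel.HIGH
    if use_case.get("interacts_with_humans") or use_case.get("generates_content"):
        return RiskLevel.LIMITED
    return RiskLevel.MINIMAL

# Example: a recruitment screening tool falls under Annex III (employment).
print(classify({"annex_iii_area": "employment"}))  # RiskLevel.HIGH
```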

As the AI Act is quite complex to understand, we have built a free-to-use self-assessment tool to help you identify which risk level your AI use-case is classified as and which obligations you are likely to face under the AI Act.

[Figure] The four risk classes of the EU AI Act: unacceptable, high, limited, and minimal risk. Each level is subject to different rules.

What counts as high-risk in the EU AI Act?

The high-risk classification of AI systems defined by the EU AI Act was one of the most controversial and most discussed areas, as it imposes a significant burden on organizations. As previously mentioned, it covers all AI applications that could negatively affect the health and safety of people, their fundamental rights, or the environment. To be put on the market and operated in the EU, AI systems in this risk class must meet certain requirements.

One part that falls under this classification is AI systems related to the safety components of regulated products, i.e., products already subject to third-party assessments. These are, for example, AI applications integrated into medical devices, lifts, vehicles, or machinery.

Annex III of the AI Act identifies additional areas in which stand-alone AI systems are classified as high-risk (see the code sketch after this list). These include:

(a) biometric and biometrics-based systems (such as remote biometric identification, categorization of persons and emotion recognition systems),

(b) management and operation of critical infrastructure (such as road traffic, energy supply or digital infrastructure),

(c) education and vocational training (such as assessment of students in educational institutions),

(d) employment and workers management (such as recruitment, performance evaluation, or task-allocation),

(e) access to essential private and public services and benefits (such as credit-scoring, risk assessments in health insurance and dispatching emergency services),

(f) law enforcement (such as evaluating the reliability of evidence or crime analytics),

(g) migration, asylum and border control management (such as assessing the security risk of a person or the examination of applications for asylum, visa, or residence permits),

(h) administration of justice and democratic processes (such as assisting in interpreting & researching facts, law, and the application of the law or for influencing elections).
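For illustration, the Annex III areas above can be encoded as a simple lookup table. This is a minimal sketch; the keys and the helper function are hypothetical names we chose, not official identifiers from the Act.

```python
# Sketch: the Annex III areas from the list above as a lookup table.
# Keys and example strings are illustrative, not official identifiers.
ANNEX_III_AREAS = {
    "biometrics": "remote identification, categorization, emotion recognition",
    "critical_infrastructure": "road traffic, energy supply, digital infrastructure",
    "education": "assessment of students in educational institutions",
    "employment": "recruitment, performance evaluation, task allocation",
    "essential_services": "credit scoring, health insurance risk assessment, emergency dispatch",
    "law_enforcement": "evaluating the reliability of evidence, crime analytics",
    "migration_asylum_border": "security risk assessment, asylum/visa/residence applications",
    "justice_democracy": "assisting courts in applying the law, influencing elections",
}

def is_annex_iii_area(area: str) -> bool:
    """Return True if the (hypothetical) area key is listed in Annex III."""
    return area in ANNEX_III_AREAS

print(is_annex_iii_area("employment"))  # True -> the high-risk requirements likely apply
```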

[Figure] EU AI Act high-risk AI systems: the eight Annex III areas plus AI systems that are safety components of regulated products.

Consult this article to learn how to meet the regulatory requirements if you develop or deploy a high-risk AI system. With judicial authorization, law enforcement authorities may deploy high-risk systems related to public security without a prior conformity assessment.

The EU also plans to make an online register publicly accessible, listing all deployed high-risk AI systems and use-cases as well as foundation models on the market (Article 71 of the EU AIA). Only law enforcement agencies (police and migration control) may register their systems in a non-public part of the database, accessible to an independent supervisory authority.

GPAI

While the original proposal of the EU AI Act did not mention general-purpose AI (GPAI) systems, such as those from OpenAI or Aleph Alpha, the EU updated its proposal in this regard during the negotiations. As shown above, the risk classification depends on the use-case of an AI system, which is difficult to pin down for a GPAI. The EU AI Act therefore differentiates between two risk classes for GPAI: non-systemic and systemic risk, depending on the computing power required to train the model. While all foundation models need to meet transparency requirements, those with a systemic risk face much stricter obligations. GPAI providers must also supply relevant information to downstream providers who use these models in a high-risk application.
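The final text ties the presumption of systemic risk to the cumulative compute used for training, with a threshold of 10^25 floating point operations (Article 51). As a rough illustration, assuming that threshold, the check boils down to a single comparison; the function and variable names below are our own, not from the Act.

```python
# Illustrative check of the compute-based presumption of systemic risk for GPAI.
# Threshold per Article 51 of the AI Act; the names below are our own assumptions.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # cumulative training compute in FLOPs

def is_presumed_systemic(training_flops: float) -> bool:
    """Presumption of systemic risk based on training compute alone."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# A model trained with ~2 x 10^25 FLOPs would be presumed to pose systemic risk,
# triggering the stricter obligations on top of the baseline transparency duties.
print(is_presumed_systemic(2e25))  # True
```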

Publicly available open-source models can avoid the stricter requirements if their license allows access, usage, modification, and distribution of the model and its parameters. This holds as long as the model is not related to high-risk or prohibited applications and poses no risk of manipulation. Learn more about how the AI Act treats GPAI and GenAI systems in this article.

Conclusion

The EU AI Act takes a risk-based approach to regulating AI systems, with four levels of risk: unacceptable, high, limited, and minimal (or no) risk. Each level is subject to different degrees of regulation and different requirements. Additionally, the AIA differentiates between non-systemic and systemic risk when it comes to GPAI.

Unacceptable risk is the highest level of risk and covers eight main types of AI applications incompatible with EU values and fundamental rights. These applications will be prohibited in the EU.

High-risk AI systems are the most regulated systems allowed in the EU market and include safety components of already regulated products and stand-alone AI systems in specific areas. This level imposes significant burdens on organizations and requires AI systems to meet certain requirements before they can be put on the market and operated in the EU.

Limited risk includes AI systems with a risk of manipulation or deceit. These AI systems must be transparent, and humans must be informed about their interaction with the AI.

Minimal risk includes all other AI systems not falling under the above categories. AI systems under minimal risk do not have any restrictions or mandatory obligations, but it is suggested to follow general principles such as human oversight, non-discrimination, and fairness.

GPAI models are subject to transparency obligations, which become stricter when a systemic risk exists, i.e. when the model is sufficiently powerful in terms of training compute.

If your use-case qualifies as high-risk, you should start preparing today for the regulation and its extensive documentation requirements to make sure you stay competitive.

We have built a free-to-use EU AI Act compliance checker to help you identify the risk class of your AI system and the obligations you are likely to face under the AI Act.

At trail, we help you fully understand your AI development process to mitigate possible risks early on and generate automated audit-ready development documentation to minimize manual overhead. Contact us here to get started today or learn here how we can help you cope with the EU AI Act.

[Last updated after the publication of the final text in the Official Journal of the EU - 30.07.2024]