
Regulating GenAI: Implications of the EU AI Act

The EU AI Act extensively regulates general-purpose AI (GPAI), such as GenAI. This article explains how the EU classifies its risks and which obligations providers and integrators must meet.


The EU AI Act is a pivotal piece of legislation: it attempts to regulate a technology whose capabilities, and with them its potential risks and harms, have been advancing rapidly. The European Commission's initial 2021 proposal did not specifically address general purpose AI (GPAI) systems, like GenAI. Recognizing the leaps in generative and foundation models, however, the Act was revised in late 2022 to include stringent rules for GPAI systems. This inclusion, broadly endorsed by the European Parliament, became a central topic of the trilogue negotiations. Yet the stringency of the GPAI rules and their technology-focused approach, a departure from the original application-focused approach, drew mixed reactions, notably from key EU Member States such as France, Germany, and Italy. In early 2024, all negotiating parties agreed upon the AI Act, including extensive regulations for GPAI systems akin to those for high-risk AI systems.

This article breaks down how the EU classifies the risks of GPAI models, as well as the responsibilities of model providers (e.g. OpenAI, Aleph Alpha, Mistral, or those who modify such providers' models) and of integrators (e.g. a finance company leveraging OpenAI's GPT-4 for a customer-facing chatbot).

How the AI Act classifies GPAI models

The EU AI Act differentiates GPAI models based on their risk level: systemic or non-systemic. This distinction is crucial, as it dictates the regulatory scrutiny and obligations imposed on model providers, with providers of systemic-risk models facing heightened requirements.

What counts as systemic risk?

A GPAI model is deemed to pose systemic risk if it demonstrates "high-impact capabilities", i.e. if the cumulative amount of compute used for its training exceeds 10^25 floating-point operations (FLOPs). Note that this threshold measures total training compute, not operations per second.
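For intuition only: a widely used back-of-the-envelope heuristic (not part of the Act) estimates a dense transformer's training compute as roughly 6 × parameters × training tokens. A minimal sketch under that assumption:

```python
# Back-of-the-envelope check against the Act's compute threshold.
# The 6 * N * D heuristic (FLOPs ~ 6 * parameters * training tokens)
# is an assumption for illustration; the AI Act only fixes the
# 10^25 FLOPs threshold, not any estimation method.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training compute

def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough cumulative training FLOPs for a dense transformer."""
    return 6 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if the estimate crosses the Act's presumption threshold."""
    return estimate_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: a 70B-parameter model trained on 15T tokens
flops = estimate_training_flops(70e9, 15e12)
print(f"{flops:.1e} FLOPs -> systemic risk presumed: {presumed_systemic_risk(70e9, 15e12)}")
# 6.3e+24 FLOPs -> systemic risk presumed: False
```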

Additionally, the European Commission may designate a GPAI model as posing systemic risk based on various factors, such as the number of its parameters, its input and output modalities, or its reach among businesses and consumers (see Annex XIII of the AI Act).

Can a GPAI model provider avoid the systemic risk class?

GPAI model providers can contest a systemic risk classification by presenting "sufficiently substantiated arguments" to the Commission, a process mirroring the one for high-risk AI system providers. Providers of systemic-risk GPAI models can also request that the risk classification of their model be reassessed at a later stage.

Obligations for GPAI Providers

All providers are required to disclose when users (i.e. natural persons) are interacting with an AI system, such as a generative AI chatbot, unless this is obvious, and to mark all AI-generated outputs in a machine-readable format. These are referred to as transparency obligations.
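The Act does not prescribe what a machine-readable label must look like; emerging standards such as C2PA content credentials are likely candidates. A minimal sketch using plain JSON metadata, purely as an illustration:

```python
import json
from datetime import datetime, timezone

def label_ai_output(text: str, generator: str) -> str:
    """Attach a machine-readable provenance label to generated text.

    The JSON format and field names here are illustrative assumptions;
    the AI Act mandates machine-readable marking but no specific format.
    """
    return json.dumps({
        "content": text,
        "ai_generated": True,                                   # disclosure flag
        "generator": generator,                                 # producing system
        "generated_at": datetime.now(timezone.utc).isoformat()  # timestamp
    })

print(label_ai_output("Hello! How can I help you today?", "example-gpai-chatbot"))
```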

Every GPAI model, including its training and testing process and the results of its evaluation, must also be well documented (see Annex XI). This documentation must be kept up to date and made available to downstream providers (i.e. those who want to integrate a third-party GPAI model into their products or AI systems) to assist them in adhering to their own obligations under the AI Act (see Annex XII).

GPAI providers also need to publish a detailed summary of the content used to train the model. Providers established outside the EU who place their model on the Union's market must appoint an authorized representative established in the EU, who is responsible, for instance, for verifying the technical documentation and for providing the required information to the authorities.

The documentation requirements for GPAI models (see Annexes XI and XII)
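In practice, a provider might keep this documentation as structured data so it can be versioned and shared with downstream providers. A minimal sketch whose field names loosely paraphrase Annex XI items (illustrative, not the Annex's legal wording):

```python
from dataclasses import dataclass

@dataclass
class GPAIModelDocumentation:
    """Skeleton of the technical documentation a GPAI provider keeps up to
    date and shares with downstream providers. Field names loosely
    paraphrase Annex XI/XII items and are illustrative only."""
    model_name: str
    intended_tasks: str            # tasks the model is meant to perform
    architecture: str              # design choices, number of parameters
    modalities: str                # input/output formats, e.g. text-to-text
    training_process: str          # methods, key design decisions
    testing_process: str
    evaluation_results: str
    training_data_summary: str     # must also be published publicly
    last_updated: str              # documentation must be kept current

doc = GPAIModelDocumentation(
    model_name="example-gpai-model",
    intended_tasks="general-purpose text generation",
    architecture="decoder-only transformer, 7B parameters",
    modalities="text in, text out",
    training_process="pre-training plus instruction tuning",
    testing_process="held-out benchmark suites and red-teaming",
    evaluation_results="benchmark scores and identified limitations",
    training_data_summary="publicly available web text (summary published)",
    last_updated="2024-08-01",
)
```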

Additional obligations for systemic risk GPAI

For models classified as posing systemic risk, the AI Act lays down additional obligations. Providers of such models must inform the European Commission about their model without delay for inclusion in the public database, similar to high-risk AI systems, and must report any serious incidents, including the corrective actions taken. Comprehensive risk assessments, cybersecurity and infrastructure security measures, and adherence to a code of practice to demonstrate AI Act compliance are also mandatory.

The documentation requirements are also extended: they must include detailed descriptions of the model evaluation strategies and results, of the adversarial tests conducted, and of the system architecture, explaining how software components build on or feed into each other and integrate into the overall processing.

The obligations for providers of systemic-risk foundation models (see Articles 53 and 55 and Annexes XI and XII)

Avoiding GPAI obligations

While the obligations cannot be avoided completely, providers can reduce the regulatory burden by offering non-systemic-risk GPAI models under a free and open-source license. This is only possible if the model's parameters, including the weights, the information on the model architecture, and the information on model usage are publicly accessible, and if the license permits the modification and distribution of the model.

Providers of such models are only required to publish a detailed summary of the content used to train the GPAI model; a rough sketch of the eligibility logic is shown below.
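To make the conditions concrete, here is that eligibility check expressed as code. The field names are illustrative assumptions; the actual legal test is in the Act itself, and this is not legal advice:

```python
from dataclasses import dataclass

@dataclass
class GPAIRelease:
    """Illustrative conditions for the open-source exemption;
    not a substitute for the Act's actual legal test."""
    free_open_source_license: bool   # license allows modification and distribution
    weights_public: bool             # parameters incl. weights publicly accessible
    architecture_info_public: bool   # model architecture information published
    usage_info_public: bool          # model usage information published
    systemic_risk: bool              # systemic-risk models never qualify

def qualifies_for_reduced_obligations(r: GPAIRelease) -> bool:
    """All openness conditions must hold, and the model must be non-systemic."""
    return (
        r.free_open_source_license
        and r.weights_public
        and r.architecture_info_public
        and r.usage_info_public
        and not r.systemic_risk
    )

print(qualifies_for_reduced_obligations(GPAIRelease(True, True, True, True, False)))  # True
```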

What does the EU AI Act imply if I am using GenAI for my products?

Numerous organizations integrate foundation models, such as generative AI systems, into their own products or into other AI systems instead of building their own models from scratch. The EU AI Act refers to such organizations as deployers, distributors or downstream providers. But does this mean that you are subject to obligations under the EU AI Act? It depends on how you use these third-party systems.

If you modify the intended purpose of an AI system, including a GPAI system, that is already in operation and not classified as high-risk, such that it becomes a high-risk application, you are considered a provider of a high-risk AI system, i.e. you need to fulfill the obligations for high-risk AI systems.

In case you need to demonstrate your compliance with the AI Act to the authorities, the GPAI system provider must support you with the necessary information and documentation about their system, which makes a written agreement essential. Responsibility for placing regulatory-compliant AI systems on the market is thus shared. However, the AI Act also states that the initial provider of the system does not need to make documentation available if it has expressly excluded the change of its system into a high-risk system (see Article 25).

It is not yet clear how Article 25 will be interpreted, as it also states that a significant change to a third-party AI system changes who is considered the AI system's provider. So it remains open whether, for example, the fine-tuning of a foundation model is considered a significant change and whether this shifts the obligations of operators along the AI value chain.

If you use the GPAI system in a limited-risk setting, only the transparency obligations (i.e. labeling AI-generated content and informing users that they are interacting with AI) apply.

Non-Compliance with the AI Act’s obligations

Violating the rules on general-purpose, high-risk or limited-risk AI systems can result in fines of up to €15 million or 3% of the organization's global annual turnover, whichever is higher. Those who supply incorrect, incomplete or misleading information to the authorities can expect fines of up to €7.5 million or 1.5% of global annual turnover. For SMEs and start-ups, the lower of the two amounts applies.
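As a quick illustration of the arithmetic (a sketch only, not legal advice):

```python
def max_fine_eur(global_annual_turnover_eur: float, sme: bool = False) -> float:
    """Upper bound of the fine for violating the GPAI/high-risk/limited-risk
    rules: EUR 15 million or 3% of worldwide annual turnover. For most
    organizations the higher amount is the cap; for SMEs and start-ups,
    the lower one applies. Arithmetic sketch only, not legal advice."""
    fixed_cap = 15_000_000
    turnover_cap = 0.03 * global_annual_turnover_eur
    return min(fixed_cap, turnover_cap) if sme else max(fixed_cap, turnover_cap)

# A company with EUR 2 billion turnover: 3% = EUR 60M, which exceeds EUR 15M
print(max_fine_eur(2_000_000_000))  # 60000000.0
```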

What’s next

The list of obligations for providers of GPAI or GenAI models is long, and this article is only meant to give you a first overview. Aligning with the AI Act is a significant task for organizations, typically necessitating an extensive revamp of existing governance frameworks and documentation practices to meet the compliance requirements.

You can expect the newly established European AI Board and the AI Office to publish further information, examples and guidance on how certain scenarios will be handled under the AI Act. Who is liable for which responsibilities will be the key question in that process. In any case, operators on the end-user-facing side will carry some form of liability and notification duties.

It is also clear that fulfilling the Act's requirements is urgent: the AI Act enters into force in August 2024, and organizations must comply with the rules on prohibited AI systems six months later (by February 2025) and with the rules on GPAI systems twelve months later (by August 2025).

Are you using an AI system in a high-risk use case, or are you building "traditional" AI models? Then feel free to read about the other requirements outlined in the AI Act here. We at trail want you to fully understand the whole AI development process, regardless of your background. Check out how we can help you with documentation, risk assessments, audits, and understanding your AI systems so you are prepared for the EU AI Act.