trail supports you in meeting the high requirements of the AI Act and in passing technical audits. Ensure compliance of your in-production and in-development ML systems to stay competitive.
Learn below how the EU AI Act will affect you, or take our free self-assessment to identify your role and obligations.
The EU AI Act is the world's first legal framework designed to regulate Artificial Intelligence across the EU. It aims to ensure that AI systems are safe, respect existing laws on fundamental rights, and align with the EU's values.
AI applications that are incompatible with EU values and fundamental rights. These are prohibited.
Highly regulated AI systems that could cause significant harm if they fail or are misused, or that serve as safety components.
Applications that pose a risk of manipulation or deceit. They are less regulated but are subject to transparency obligations.
All remaining AI systems. While they have no mandatory requirements, transparency and ethical use are encouraged.
Schedule a call with us to assess how your organization is affected by the EU AI Act and which risk level your AI system is classified in.
Non-compliance with the requirements of the EU AI Act will result in high penalties:
The AI Act entered into force in August 2024. After 6 months, organizations placing AI systems on the EU market must comply with the rules on prohibited practices. After 12 months, obligations for general-purpose AI models (GPAI) become applicable. All other obligations, including those for high-risk AI systems, become applicable after 24 months.
Organizations must comply with the first obligations of the EU AI Act starting in February 2025.
Providers of high-risk AI systems in particular, such as those in the financial or medical sector, must fulfil strict requirements to demonstrate the trustworthiness of their systems.
This includes risk mitigation and model testing across the lifecycle, data governance, detailed documentation, and logging.
Undergo self-assessments and third-party audits before placing the AI system on the market.
Providers of high-risk systems must make information about their applications publicly accessible.
“[High-risk systems] must also be traceable and auditable, ensuring that appropriate documentation is kept, including of the data used to train the algorithm that would be key in ex post investigations.”
Meeting the strict requirements takes time: traceable AI development and documentation, robust risk management, and technical audits. Gain a head start on the EU AI Act by preparing now.
This is how we help you comply with the EU AI Act:
Adapt the AI Act's documentation templates to your organization's workflows.
Translate the regulatory requirements of the EU AI Act into actionable steps during development.
Assess your risks and governance measures for each AI project before an official audit.