The jungle of recent standards that apply to AI systems can be cumbersome to navigate, with overlaps and subtle differences between them. In the following, we provide an overview of the AI standardization landscape and explain which standard you need in which situation.
Trust is a virtue that can never be taken; it is always given. And while companies and individuals often benefit from an inherent baseline of trust, this has not been the case for Artificial Intelligence. Since its inception, AI has faced significant skepticism and criticism, much of it warranted. Efforts to rebuild and strengthen trust in AI have led to the development of both non-binding ethical guidelines and robust regulatory frameworks, such as the EU AI Act. Standards pursue the same goal of ensuring consistency, quality, compatibility, and interoperability. Their trust advantage stems from the fact that they are developed through consensus among independent experts. And although standards are not legally mandatory, once a company implements and becomes certified against a standard, it assumes binding obligations. Compliance is essential, as failure to adhere can lead to significant consequences, including certification revocation, reputational damage, breach of contractual agreements, and potential legal or financial liabilities. This grants users considerable leverage in addressing any violations.
As an organization, buying and implementing a standard usually comes with the benefit of improved reputation. Having your product or service follow a recognized standard gives you credibility and, hence, makes you and your offer trustworthy. Becoming certified against an official standard provides tangible evidence of adherence verified by a third party (the auditor), which can be valuable in client negotiations as a selling argument. This, in turn, gives your organization a competitive advantage, especially in the space of AI technologies, which is only partially regulated and faces trust issues. Furthermore, many standards provide invaluable guidance for setting up effective structures and processes, ensuring streamlined operations and consistent quality and performance across products. They also serve as a foundation for continuous innovation and improvement, enabling organizations to assess and enhance their performance systematically. Moreover, as standards are internationally recognized, they often align with regulatory frameworks, helping organizations meet compliance requirements more efficiently. This alignment reduces the risk of non-compliance and supports proactive risk mitigation while fostering organizational resilience.
Let’s take a look at the three main players when it comes to AI standards, using Germany as the example on the national level: ISO and IEC at the international level, CEN and CENELEC at the European level, and DIN and DKE in Germany.
All three levels work together and build on each other’s work to create a streamlined standardization landscape. For instance, ISO and CEN signed the so-called Vienna Agreement in 1991 to guarantee a high level of convergence and alignment between European and international standardization projects. The equivalent agreement between IEC and CENELEC was formalized in the Frankfurt Agreement.
There are further industry standards institutes, such as the European Telecommunications Standards Institute (ETSI) or the Institute of Electrical and Electronics Engineers Standards Association (IEEE), which foster knowledge exchange with academia through their various conferences and publications.
The short answer is: there can’t be. The longer answer is that there shouldn’t be just one standard. Standards are not a one-size-fits-all solution; rather, they must balance being broad and generic enough to cope with rapid development while still being tailored to the needs of a specific industry, process, or product. Moreover, most institutes use working groups to allow specialization in different subdomains of AI. For instance, the ISO working groups are visualized below:
So far, in the area of AI, ISO/IEC has published 33 standards, has 35 standards under development, and counts 41 participating and 25 observing members (as of November 2024). An overview of the standards developed by the five main working groups is displayed below.
CEN-CENELEC works similarly to ISO in developing voluntary standards. However, in the case of the EU AI Act, CEN-CENELEC plays a special role: the European Commission issued a formal standardization request, tasking CEN-CENELEC with developing harmonized standards that align with existing EU regulations to support the Act’s implementation. These harmonized standards include:
In the process of creating the harmonized standards for the EU AI Act, CEN-CENELEC builds on existing ISO standards, as is clearly highlighted in the dashboard of European AI standardization published by Sebastian Hallensleben (chair of JTC 21).
For instance, the following three relevant, high-level standards for AI systems, ISO/IEC 42001, ISO/IEC 42005, and ISO/IEC 23894, differ in focus, depth, and role.
While all three standards aim to promote responsible and ethical AI practices and emphasize the importance of integrating AI considerations into organizational processes, they target different levels of AI implementation. ISO/IEC 42001 focuses on the management of AI systems, encompassing various aspects of AI governance. ISO/IEC 42005 and ISO/IEC 23894 focus on evaluating and managing impacts and risks once AI is implemented: the former provides methodologies and guidelines for assessing the impacts of AI systems, while the latter centers on risk management, detailing processes for identifying and mitigating AI-related risks.
Most companies are currently starting to implement ISO/IEC 42001 to set up their AI governance, prepare for the EU AI Act, and send positive signals to their customers. At trail, we help these companies establish an effective AI management system and connect them with competent and renowned auditing partners to facilitate the certification process. Are you interested in implementing ISO/IEC 42001? Then reach out to us here or learn more about the AI management standard in our last article.