
To Standardize Or Not To Standardize - Which International AI Standards You Should Have Heard Of

The jungle of recent standards that apply to AI systems can be cumbersome to navigate, with overlaps and subtle differences between them. In the following, we provide an overview of the AI standardization landscape and explain which standard you need in which situation.

Why Standardization is Important

Trust is a virtue that can never be taken; it is always given. And while companies and individuals often benefit from an inherent baseline of trust, this has not been the case for Artificial Intelligence. Since its inception, AI has faced significant skepticism and criticism, much of it warranted. Efforts to rebuild this trust have led to the development of both non-binding ethical guidelines and strict, binding regulation, such as the EU AI Act. Standards pursue the same goal of ensuring consistency, quality, compatibility, and interoperability. Their trust advantage stems from being developed through consensus among independent experts. And although standards are not legally mandatory, once a company implements a standard and becomes certified against it, it assumes binding obligations. Compliance is essential, as failure to adhere can lead to significant consequences, including certification revocation, reputational damage, breach of contractual agreements, and potential legal or financial liabilities. This grants users considerable leverage in addressing any violations.

As an organization, buying and implementing any standard usually comes with the benefit of improved reputation. Having your product or service follow a certain standard gives you credibility and makes you and your offering trustworthy. Becoming certified against an official standard provides tangible, third-party evidence of adherence (from the auditor), which can be valuable in client negotiations as a selling point. This, in turn, gives your organization a competitive advantage, especially in the AI space, which is only partially regulated and faces trust issues. Furthermore, many standards provide invaluable guidance for setting up effective structures and processes, ensuring streamlined operations and consistent quality and performance across products. They also serve as a foundation for continuous innovation and improvement, enabling organizations to assess and enhance their performance systematically. Moreover, as standards are internationally recognized, they often align with regulatory frameworks, helping organizations meet compliance requirements more efficiently. This alignment reduces the risk of non-compliance and supports proactive risk mitigation while fostering organizational resilience.

Overview of the Standardization Landscape

Let’s take a look at the three main players when it comes to AI standards, using Germany as the example at the national level:

  • International Organization for Standardization: ISO is an independent, non-governmental international organization. It brings global experts together to agree on the best ways of doing things. The technical committee ISO/IEC JTC 1/SC 42 is focused on standardization in the area of AI and is split into multiple working groups (see below).
  • Comité Européen de Normalisation: CEN and CENELEC (the “ELEC” stands for Électrotechnique) act at the European level, guiding AI with norms and standards in their joint technical committee (JTC) 21. They focus on producing standardization deliverables that address European market and societal needs and that underpin EU legislation, policies, principles, and values.
  • Deutsches Institut für Normung e.V.: DIN is the national standardization organization in Germany; most notably, it published the German Standardization Roadmap. As DIN aims to foster transparency and collaboration, participation through working groups is open to various stakeholders, for instance industry representatives, associations, academia, NGOs, and public authorities. Other countries have equivalent bodies; for instance, the Danish counterpart (Dansk Standard) holds the secretariat of the CEN-CENELEC JTC 21 at the time of writing.

All three levels work together and build on each other's work to keep the standardization landscape streamlined. For instance, ISO and CEN signed the Vienna Agreement in 1991 to guarantee a high level of convergence and alignment between European and international standards projects. The equivalent agreement between IEC and CENELEC was formalized in the Frankfurt Agreement.

There are further industry standards institutes, such as the European Telecommunications Standards Institute (ETSI) and the Institute of Electrical and Electronics Engineers Standards Association (IEEE SA), which foster knowledge dissemination in academia through their various conferences and publications.

Overview of international and national institutes for (AI) standardization: ISO as the international organization, CEN-CENELEC as the European regional institute, and DIN as an example at the national level for Germany.

But why isn’t there just one AI standard?

The short answer is: there can’t be. The longer answer is that there shouldn’t be just one standard. Standards are not a one-size-fits-all solution; they must balance staying broad and generic enough to cope with rapid development while still being tailored to the needs of a specific industry, process, or product. Moreover, most institutes use working groups to allow specialization in different subdomains of AI. For instance, the ISO working groups are visualized below:

Diagram of the ISO structure and working groups focused on AI standards, data management, and trustworthiness in developing international guidelines

So far, in the area of AI, ISO/IEC has published 33 standards and has 35 more under development, with 41 participating members and 25 observing members (as of November 2024). An overview of the standards developed by the five main working groups is displayed below.

Overview of different ISO AI Standards that are already published or currently under development.

CEN-CENELEC works similarly to ISO in developing voluntary standards. However, in the case of the EU AI Act, CEN-CENELEC plays a special role: the European Commission issued a formal standardization request, tasking CEN-CENELEC with developing harmonized standards that align with existing EU regulations to support the Act’s implementation. These harmonized standards include:

  • Risk management for AI systems
  • Governance and quality of datasets used to build AI systems
  • Record keeping through built-in logging capabilities in AI systems (see the sketch after this list)
  • Transparency and information to the users of AI systems
  • Human oversight of AI systems
  • Accuracy specifications for AI systems
  • Robustness specifications for AI systems
  • Cybersecurity specifications for AI systems
  • Quality management system for providers of AI systems, including post-market monitoring processes
  • Conformity assessment for AI systems
  • Supporting standards, such as terminology
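
To make the record-keeping requirement above concrete, here is a minimal sketch of what “built-in logging capabilities” could look like in practice. It is illustrative only: the wrapper class, the record fields, and the use of Python’s standard logging module are our assumptions, not something the harmonized standards prescribe.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)


class LoggedModel:
    """Illustrative wrapper that records every prediction of an AI system.

    The fields captured here (timestamp, model version, input hash, output)
    are examples of what automatic record keeping could cover; the actual
    requirements will be defined by the harmonized standards themselves.
    """

    def __init__(self, model, model_version: str):
        self.model = model  # any object exposing a predict() method
        self.model_version = model_version
        self.logger = logging.getLogger("ai_system_records")

    def predict(self, features: dict):
        output = self.model.predict(features)  # delegate to the wrapped model
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": self.model_version,
            # Hash the (JSON-serializable) input instead of storing it raw,
            # to avoid writing personal data into the logs.
            "input_hash": hashlib.sha256(
                json.dumps(features, sort_keys=True).encode()
            ).hexdigest(),
            "output": repr(output),
        }
        self.logger.info(json.dumps(record))
        return output
```

In a real system, such records would typically go to tamper-evident, retention-managed storage rather than a plain log stream; the point here is only that record keeping can be built into the prediction path itself.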

In creating the harmonized standards for the EU AI Act, CEN-CENELEC builds on existing ISO standards, as is clearly highlighted in the dashboard of European AI standardization published by Sebastian Hallensleben (Chair of JTC 21).

Spotlight ISO Standards

Three relevant, high-level standards for AI systems illustrate this: they differ in focus, depth, and role.

1. ISO/IEC 42001: AI Management System

  • Purpose: Establishes requirements and provides guidance for implementing an AI management system within an organization.
  • Scope: Applicable to any organization, regardless of size or type, involved in developing, providing, or using AI systems. It focuses on integrating AI governance into existing management structures to ensure responsible AI practices.
  • Key Components:
    • Integration of AI governance with organizational processes.
    • Risk management specific to AI applications.
    • Continuous improvement mechanisms for AI systems.
  • Relation to Other Standards: Serves as a foundational framework that can incorporate guidelines from other AI-related standards, including ISO/IEC 42005 and ISO/IEC 23894.

2. ISO/IEC 42005: AI System Impact Assessment

  • Purpose: Provides guidance on conducting impact assessments for AI systems to evaluate potential effects on stakeholders and society.
  • Scope: Offers a structured approach for organizations to assess the implications of AI systems throughout their lifecycle, from design to deployment.
  • Key Components:
    • Methodologies for assessing AI system impacts.
    • Considerations for ethical, legal, and societal implications.
    • Documentation and reporting requirements for impact assessments (see the sketch after this list).
  • Relation to Other Standards: Complements ISO/IEC 42001 by providing specific tools for impact assessment, which can be integrated into the broader AI management system.
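
As a rough illustration of the documentation side of an impact assessment, here is a minimal sketch of a structured record. The field names loosely mirror the themes above (stakeholders, lifecycle stage, impacts, mitigations) and are our assumptions; ISO/IEC 42005 itself defines the authoritative content and process.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ImpactAssessment:
    """Illustrative structure for documenting an AI system impact assessment."""

    system_name: str
    lifecycle_stage: str  # e.g. "design", "deployment"
    stakeholders: List[str] = field(default_factory=list)
    potential_benefits: List[str] = field(default_factory=list)
    potential_harms: List[str] = field(default_factory=list)
    mitigations: List[str] = field(default_factory=list)
    reviewed_by: str = ""
    review_date: str = ""


# Hypothetical example entry for a CV-screening assistant.
assessment = ImpactAssessment(
    system_name="CV screening assistant",
    lifecycle_stage="design",
    stakeholders=["applicants", "recruiters"],
    potential_harms=["indirect discrimination via proxy features"],
    mitigations=["bias testing on demographic subgroups"],
)
```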

3. ISO/IEC 23894: AI Risk Management

  • Purpose: Offers guidance on managing risks associated with AI systems to ensure they operate safely and effectively.
  • Scope: Applicable to organizations developing, deploying, or using AI systems, focusing on identifying, assessing, and mitigating AI-related risks.
  • Key Components:
    • Risk identification and analysis specific to AI (see the sketch after this list).
    • Implementation of risk controls and mitigation strategies.
    • Continuous monitoring and review of AI system risks.
  • Relation to Other Standards: Aligns with ISO/IEC 42001 by providing detailed guidance on the risk management processes that should be part of an AI management system.
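
To give a feel for what a risk register behind such a process might look like, here is a minimal sketch. Scoring risks as likelihood times severity is a common generic approach, not something ISO/IEC 23894 mandates; the entries, scale, and threshold below are all illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class AIRisk:
    """One entry in an illustrative AI risk register."""

    description: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    severity: int    # 1 (negligible) .. 5 (critical)
    mitigation: str = ""

    @property
    def score(self) -> int:
        # Simple likelihood-times-severity scoring; other schemes are possible.
        return self.likelihood * self.severity


register = [
    AIRisk("Training data drift degrades accuracy", 4, 3,
           "Scheduled re-evaluation on fresh data"),
    AIRisk("Model leaks personal data in outputs", 2, 5,
           "Output filtering and privacy testing"),
]

# Surface the risks that need controls first (threshold of 12 is arbitrary).
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    if risk.score >= 12:
        print(f"[{risk.score}] {risk.description} -> {risk.mitigation}")
```

Continuous monitoring, the third key component, would then amount to periodically re-scoring these entries as the system and its environment change.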

While all three standards aim to promote responsible and ethical AI practices and emphasize integrating AI considerations into organizational processes, they target different levels of AI implementation. ISO/IEC 42001 focuses on the management of AI systems, encompassing various aspects of AI governance. ISO/IEC 42005 and ISO/IEC 23894 focus on evaluating and managing impacts and risks once AI is implemented: the former provides methodologies and guidelines for evaluating the impacts of AI systems, while the latter centers on risk management, detailing processes for identifying and mitigating AI-related risks.

Where to start?

Most companies currently start by implementing ISO/IEC 42001 to set up their AI governance, prepare for the EU AI Act, and send positive signals to their customers. At trail, we help these companies establish an effective AI management system and connect them with competent and renowned auditing partners to facilitate the certification process. Are you interested in implementing ISO/IEC 42001? Then reach out to us here or learn more about the AI management standard in our last article.