Writing an AI Policy: First Step Towards Effective AI Governance

Previously, we discussed the importance of AI governance, as it is key to building and using trustworthy, responsible, and efficient AI systems. In this article, we focus on one integral building block of AI governance: the AI policy, which guides the use and development of AI systems within your company. In the following, you will learn why you should consider writing an AI policy, what should be part of a proper policy, and how you can create your own organizational policy. You will also learn how to align the ethical requirements of AI systems with concrete action steps that make safe and trustworthy AI tangible.


What is an AI policy and why does your company need one?

AI is transforming businesses by automating tasks, augmenting decision-making, and optimizing processes. According to Forbes, companies without an AI policy risk privacy breaches, data exposure, copyright infringement, bias, and legal issues. Creating an organizational AI policy helps mitigate these risks and empowers innovation, ensuring the responsible use of AI and positioning the company as a forward-thinking leader.

Organizational AI policies can be the solution, as they serve as a middle ground between previously established non-binding ethical principles, such as those published by the OECD, and current regulations, such as the EU AI Act. They offer a binding framework both to establish individual “rules of the house” within a company and to translate these rules into distinct tasks. Moreover, they formalize ethical guidelines into actionable steps, ensuring that employees follow core values and develop trustworthy AI systems. Defining these principles allows the policy to be tailored to the organization’s own values and strategy. Hence, companies can regulate the use, development, and sale of AI systems, and align their strategy with their employees’ actions. Without such policies, organizations face confusion, misalignment between employee behavior and organizational values, and an increased risk of AI incidents, leading to reputational and financial damage. Aligning the ethical and regulatory parts within a company, and ensuring compliance, is the task of a company’s AI governance, which mitigates risks, promotes responsible innovation, and builds public trust. The AI policy is therefore a foundation of proper AI governance.

Understanding AI Governance
AI governance is driven by a strategic vision and involves these three key components:

1. AI Strategy and Vision

This is the overarching plan that outlines the goals and direction for AI implementation within an organization.

2. AI Policy

These are the specific rules and guidelines that dictate how AI should be developed, used, and monitored.

3. AI Governance System

This translates policy commitments into actionable sub-goals and processes, such as identifying all AI use cases within one central registry. The governance system ensures that AI activities align with the organization’s policies and with existing regulations such as the EU AI Act or the GDPR.
The AI strategy and vision, together with the AI policy and an appropriate management system (e.g. for managing AI risks), make up proper and effective AI governance.
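
A central registry of AI use cases, as mentioned above, is one of the most tangible governance artifacts. Purely as a minimal sketch (not trail’s actual data model), a registry entry could be represented like this in Python; all field names and the risk tiers are assumptions for illustration:

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskClass(Enum):
    """Illustrative risk tiers, loosely inspired by the EU AI Act."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"

@dataclass
class AIUseCase:
    """One entry in a central AI use-case registry (hypothetical schema)."""
    name: str                # e.g. "customer support chatbot"
    owner: str               # accountable role, e.g. a tool owner
    purpose: str             # intended use, as required by the policy
    risk_class: RiskClass    # outcome of the risk classification
    systems: list[str] = field(default_factory=list)  # underlying AI systems

# Example registry with a single entry
registry = [
    AIUseCase(
        name="Customer support chatbot",
        owner="Digital Innovation team",
        purpose="Answer routine product questions",
        risk_class=RiskClass.LIMITED,
        systems=["third-party LLM API"],
    ),
]
```

Keeping such a registry in one place makes it possible to check every AI activity against the policy and the applicable regulation.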

Some standards and guidelines on AI use already exist (for instance, those described by the OECD); however, it is unrealistic to expect employees to be familiar with them. The AI policy provides a set of guidelines specific to the organization, offering clear behavioral expectations and consequences for violations. Policies can be used not only to guide employees, but also to communicate responsible AI practices to clients, users, or partners, depending on the level of detail and technical specification that is needed.

What should be part of an organizational AI policy?

While policies can and should look unique and differ in their content and focus areas, some key elements should be addressed in every AI policy. An AI policy typically follows a top-down structure, from high-level definitions and statements down to distinctly outlined processes that ensure those statements can be met. As the examples at the end show, this process can vary in its level of detail. A good policy is detailed enough to avoid confusion while still applying to a broad set of use cases, AI systems, and stakeholders at the same time.

The following parts are essential for an organizational AI policy:

  • Scope, Aim and Goal: The policy’s main stakeholders and affected parties (both internal and external) and their respective roles are defined, and the policy’s aim and intended use are disclosed. Furthermore, a distinction can be made between bought, built, and sold AI systems.
  • Definitions: AI policies should start by clarifying the definition of AI systems and of further terms used in the context of AI, to ensure a coherent understanding of the terminology within the organization. Established definitions, for instance those by NIST or the OECD, can be used.
  • Organizational Context: General organizational and business strategies and values can be defined to align with the principles and processes set up in the AI policy. A general risk appetite can be specified and contextualized in the risk environment of the organization. Main objectives for the use of AI are defined and serve as guidance for the selection of AI implementation or development choices.
  • Governing Roles and Processes: Executives’ roles are defined, and important roles are equipped with dedicated responsibilities, such as tool owners or approvers. Here, a more general AI governance strategy can be elaborated further, defining a high-level board or steering committee with its roles and tasks. Furthermore, feedback channels and communication procedures can be defined for efficient reporting and accessibility, and the AI management system (such as trail) can be set up.
  • Guiding Ethical Principles: Ethical guidelines, often adopted from institutions such as the OECD, UNESCO, or the EU’s High-Level Expert Group (HLEG) on AI, are selected and further defined. Next to the key principles for responsible and trustworthy AI, an individual set of principles should enter the policy to align with the organizational values. These principles will influence the governance tasks. Our excursus below provides further information on ethical principles. Good to know: the AI policy builder of trail helps you translate these guidelines into actionable steps.
  • Specifications of AI Systems: All bought, developed, and sold AI systems that are or will be in use are defined according to their intended use and risk classification. Permitted, restricted, and prohibited systems within the organization are defined and justified (a minimal sketch of such a classification follows after this list).
  • Obligations and Requirements: Based on the content of the policy, specific measures and mechanisms are put forward. Moreover, AI incidents, and the processes to follow in case of an incident, are defined.
  • Further Policies: If applicable, connections to other policies within the organization and legal requirements are established.
  • General Provisions: Non-compliance, exceptions and contact persons are defined.
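
To make the “Specifications of AI Systems” element concrete, here is a minimal sketch of how permitted, restricted, and prohibited tools could be checked programmatically. The tool names, rules, and function are invented for illustration and do not come from any specific policy:

```python
# Hypothetical policy rules: which AI tools employees may use, and under
# which conditions. All tool names and rules are invented for illustration.
POLICY_RULES = {
    "permitted": {"internal coding assistant"},
    "restricted": {"public chatbot"},       # allowed only without personal data
    "prohibited": {"emotion recognition"},  # banned org-wide, with justification
}

def check_tool(tool: str, uses_personal_data: bool) -> str:
    """Return a policy verdict for a proposed AI tool use."""
    if tool in POLICY_RULES["prohibited"]:
        return "prohibited: do not use"
    if tool in POLICY_RULES["restricted"]:
        if uses_personal_data:
            return "restricted: requires approval by the tool owner"
        return "restricted: allowed without personal data"
    if tool in POLICY_RULES["permitted"]:
        return "permitted"
    return "unknown tool: request a review by the AI governance board"

print(check_tool("public chatbot", uses_personal_data=True))
# -> restricted: requires approval by the tool owner
```

Even if no organization enforces its policy this literally, writing the rules down in such an unambiguous form is a good test of whether the policy itself is unambiguous.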

The design process of a good AI policy starts at a high level (e.g. by defining AI and the scope), but can become quite detailed (e.g. when specifying the obligations of employees).

The policy should be as structured as possible and balance high-level, broadly applicable guidance with lower-level behavioral and procedural implications. Responsibilities must be clearly defined to avoid confusion. The AI policy answers questions such as “Which AI systems can I use and which ones are prohibited?”, “What rules do I need to follow when using them?”, and “Who is responsible for a particular AI use case?”. A good policy must therefore juggle being framed in a legally compliant and clear manner while still ensuring that every employee within the company can easily understand it and derive guidelines from it.

This shows that a good AI policy not only defines the what and the why but, in combination with the specified AI governance processes, also provides guidance on the how. However, this can be time- and thought-intensive. As soon as the policy is adopted, managers, together with employees, have to define procedures and action steps to ensure adherence to the policy, and then implement them within their AI management. This implementation step is the most integral one, as it ensures coherence among employees in their daily AI interactions, and it is what differentiates the policy from mere statements of ethical principles. Nevertheless, ethical principles are an integral part of every AI policy as well, allowing organizations to adapt policies to their own company values and to give clear guidance on ethical behavior. This strengthens the internal culture and ensures that AI practices are guided by good intentions and aimed at trustworthy AI systems.

Looking for an AI policy template or a policy review for your organization?

We offer a free review of your current policy, and we help you draft your individual AI policy aligned with industry standards and ISO/IEC 42001. Interested? Contact us here.

Excursus: Ethical and Trustworthy AI

Along with the rapid development of AI, concerns arose about the ethical aspects of the new technology. Due to the disruptive nature of AI and the opacity of most algorithms, public fear and mistrust grew; hence, the concepts of ethical and trustworthy AI emerged, aiming for AI that adheres to ethical principles in the same manner humans do. As in the EU AI Act, such ethical principles should also form the backbone of an AI policy and guide decisions and behavior.

By now, over 200 guidelines, principles, and recommendations have been published, and the concept has gained traction in academia, governmental institutions, and companies. Following recent comparative research (Corrêa et al., 2023), the following core principles can be derived:

  • Transparency/Explainability/Auditability: covers both the transparency of an organization and the transparency of an algorithm, and aims at making information in and about AI systems understandable to non-experts and useful for audits.
  • Safety/Security: requires AI systems to be protected against external attacks, including safety mechanisms throughout their lifecycle, and ensuring they function appropriately even under adverse conditions. This includes regular risk assessments and adherence to data security regulations.
  • Robustness/Reliability: requires AI systems to operate reliably, performing consistently according to their intended purposes while minimizing risks, and to display technological robustness against misuse or external attacks.
  • Justice/Equity/Fairness/Non-discrimination: requires AI systems to be non-discriminatory and to mitigate bias, meaning that individuals should be subject to the same fair algorithmic treatment regardless of their characteristics.
  • Privacy: prioritizes the individual's right to choose if and to what extent they want to expose themselves to the world, and relates to data protection concepts, such as anonymity and informed consent.
  • Accountability/Liability: aims at defining roles and responsibilities for compliance with both the organization’s own policies and the law, and at holding those responsible accountable for the impacts caused by the development or use of the technologies.

The core principles of trustworthy AI systems, which should be part of any AI policy, include transparency, safety & security, robustness & reliability, fairness & non-discrimination, privacy, and accountability.

Additional principles, such as accessibility, beneficence, and sustainability, are equally important, but may not be crucial or equally applicable to every organization or AI use case. Hence, long-listing candidate principles is a key step in forming an AI policy, although not all of them have to be selected, included, and operationalized. Moreover, within the policy, it can be advantageous to group ethical principles in the manner above: principles often overlap semantically, and such grouping allows for distinct sets of guidelines. In turn, the actionable steps that accompany each set of principles can be vast and rich in detail, with different tasks and procedures defined for each set.
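
Purely as an illustration of such a grouping, the following sketch maps grouped principles to example action steps; the groupings are taken from the list above, while the steps themselves are invented examples, not a prescribed catalogue:

```python
# Illustrative mapping from grouped ethical principles to concrete action
# steps. The steps are invented examples, not a prescribed catalogue.
PRINCIPLE_ACTIONS = {
    "transparency/explainability/auditability": [
        "document each system's purpose, data sources, and limitations",
        "provide a plain-language explanation for affected users",
    ],
    "safety/security": [
        "run a risk assessment before deployment and at regular intervals",
    ],
    "justice/fairness/non-discrimination": [
        "test for performance gaps across relevant user groups",
    ],
    "privacy": [
        "check the lawful basis and data minimization with the DPO",
    ],
    "accountability/liability": [
        "assign a named tool owner and approver per AI use case",
    ],
}

for principle, steps in PRINCIPLE_ACTIONS.items():
    print(principle)
    for step in steps:
        print(f"  - {step}")
```

A mapping like this, inside the policy or in the accompanying governance system, is often what turns abstract principles into auditable obligations.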

Implementation is key

Corrêa and colleagues (2023), like other academic papers (Hagendorff, 2020; Jobin et al., 2019), critique the lack of technical implementation of those guidelines, as only 2% of the reviewed research offers practical examples. The challenge of translating principles into practice remains.

Therefore, it is important to include sets of ethical principles as a guiding core of your AI policy, but translating them into tangible action steps is crucial to ensure effectiveness. This should be facilitated by an appropriate AI governance system.

Some Examples

What does a policy look like? As mentioned before, policies can differ in look and format. Typically, a policy is a multi-section document with a structured and intuitive layout (remember: a good policy should display a clear structure to reduce confusion and increase accessibility!), where each chapter maps to one of the elements listed above. However, much variation is still possible.

Here are some exemplary snippets of a good policy:

Definitions from established institutions or legal frameworks can be adopted into one’s own policy:

“This policy follows the definition of AI of the EU AI Act (Art. 3 No. 1 EU AI Act, Recital 12): An AI system is a machine-based system that derives from the input received (e.g. data) how to generate outputs (e.g. predictions, content, recommendations, decisions, or algorithms) that can influence physical or virtual environments. AI systems can operate according to explicit (clearly defined) goals or implicit goals (derived from data). AI systems are characterized by varying degrees of autonomy, which means that they can act to a certain extent independently of human involvement and are able to operate without human intervention. Systems that are based exclusively on rules defined by natural persons for the automatic execution of actions do not fall under the definition of an AI system.”

Clear responsibilities should be embedded within the policy:

“The Digital Innovation team is responsible and accountable for identifying and introducing new AI applications throughout the organization. It also supervises the AI systems during development and their period of use, and makes relevant changes to AI systems if necessary.

Furthermore, the IT Security Officer, the Data Protection Officer, the Works Council, the management team, and the Digital Innovation team form an AI governance board, which is responsible for compliance with the applicable AI regulation and for the implementation of AI governance processes in our organization.”

The AI management system (one of the main pillars of AI governance) can be defined in the policy. Such a system could be trail, which additionally takes over some of the workload that accompanies an AI policy:

“To ensure the proper governance of the AI systems used and developed, and the effective implementation of this AI policy, we use an AI management system (“trail”). The AI management system supports employees in effectively and systematically overseeing the use and development of AI systems within the organization, including assessing risks, ensuring alignment with the guiding principles, and maintaining the appropriate documentation and reporting mechanisms.”

Additional policy samples for your inspiration can be found here or here.

Conclusion

Designing an AI policy is a crucial element of AI governance. The structure and contents can vary, yet some key definitions, principles, and implementations are a requirement in every well-written policy. The ethical principles defined and implemented in the policy are of varying importance and can be integrated or left out depending on your company’s specifics or use case. The policy closes the gap between “good intentions” and missing legislation, laying the groundwork for employees using, buying, developing, or selling AI systems.

However, setting up a policy can be difficult and time-intensive. We have already worked with various companies on setting up their AI policies, aligned with industry best practices and ISO/IEC 42001. If you are looking for help and expertise in drafting your own AI policy, contact us here. We can support you both in a workshop with our in-house policy experts and with our governance platform trail, which offers an integrated AI policy builder.

Sources:

Corrêa, N. K., Galvão, C., Santos, J. W., Del Pino, C., Pinto, E. P., Barbosa, C., Massmann, D., Mambrini, R., Galvão, L., Terem, E., & Oliveira, N. de (2023). Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance. Patterns, 4(10), 100857. https://doi.org/10.1016/j.patter.2023.100857

Hagendorff, T. (2020). The Ethics of AI Ethics: An Evaluation of Guidelines. Minds and Machines, 30(1), 99–120. https://doi.org/10.1007/s11023-020-09517-8

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2