Previously, we discussed the importance of AI governance, as it is key to building and using trustworthy, responsible, and efficient AI systems. In this article, we focus on one integral building block of AI governance: the AI policy, which guides the use and development of AI systems within your company. In the following, you will learn why you should consider writing an AI policy, what belongs in a proper policy, and how you can create your own organizational policy. You will also learn how to align the ethical requirements of AI systems with concrete action steps that make trustworthy, safe AI tangible.
AI is transforming businesses by automating tasks, augmenting decision-making, and optimizing processes. According to Forbes, without an AI policy, companies risk privacy breaches, data exposure, copyright infringement, bias, and legal issues. Creating an organizational AI policy can help mitigate these risks, empower innovation, ensure responsible use of AI, and position the company as a forward-thinking leader.
Organizational AI policies can be the solution, as they serve as a middle ground between previously established non-binding ethical principles, such as those published by the OECD, and current regulations, such as the EU AI Act. They offer a binding framework that both establishes individual “rules of the house” within a company and translates those rules into distinct tasks. Moreover, they formalize ethical guidelines into actionable steps, ensuring that employees follow core values and develop trustworthy AI systems. Defining these principles allows the policy to be tailored to the organization’s own values and strategy. Companies can thereby regulate the use, development, and sale of AI systems, and align their strategy with their employees’ actions. Without such policies, organizations risk confusion, misalignment between employee behavior and organizational values, and an increased likelihood of AI incidents, leading to reputational and financial damage. Aligning ethical and regulatory requirements within a company, and ensuring compliance, is the task of a company’s AI governance, which mitigates risks, promotes responsible innovation, and builds public trust. The AI policy is therefore a foundation of proper AI governance.
Understanding AI Governance
AI governance is driven by a strategic vision and involves these three key components:
1. AI Strategy and Vision
This is the overarching plan that outlines the goals and direction for AI implementation within an organization.
2. AI Policy
These are the specific rules and guidelines that dictate how AI should be developed, used, and monitored.
3. AI Governance System
This translates policy commitments into actionable sub-goals and processes, such as identifying all AI use cases within one central registry (a minimal sketch of such a registry follows below). The governance system ensures that AI activities align with internal policies and with external regulations such as the EU AI Act or the GDPR.
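To make the idea of a central registry concrete, here is a minimal sketch in Python. The class and field names (AIUseCase, RiskLevel, and so on) are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass
from enum import Enum


class RiskLevel(Enum):
    """Risk tiers loosely inspired by the EU AI Act's categories."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"


@dataclass
class AIUseCase:
    """One entry in a central AI use-case registry (illustrative fields)."""
    name: str                      # e.g. "customer support chatbot"
    owner: str                     # accountable team or person
    purpose: str                   # business purpose of the system
    risk_level: RiskLevel          # outcome of an internal risk assessment
    external_vendor: bool = False  # bought in vs. developed in-house
    policy_reviewed: bool = False  # checked against the AI policy yet?


# The registry itself is simply the collection of all identified use cases:
registry: list[AIUseCase] = [
    AIUseCase(
        name="customer support chatbot",
        owner="Digital Innovation team",
        purpose="answer routine customer questions",
        risk_level=RiskLevel.LIMITED,
        external_vendor=True,
        policy_reviewed=True,
    ),
]
```

Governance processes such as risk assessments or periodic reviews can then be attached to each entry in such a registry.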
Some standards and guidelines on AI use already exist (for instance, those described by the OECD); however, it is unrealistic to expect employees to be familiar with them. The AI policy provides a set of guidelines specific to the organization, offering clear behavioral expectations and consequences for violations. Policies can be used not only to guide employees but also to communicate responsible AI practices to clients, users, or partners, depending on the level of detail and technical specification that is needed.
While policies can and should look unique and differ in their content and focus areas, some key elements should be addressed in every AI policy. An AI policy typically follows a top-down structure, from high-level definitions and statements down to a distinctly outlined process that ensures those statements can be met. As the examples at the end show, this process can vary in its level of detail. It should be detailed enough to avoid confusion while still applying to a broad set of use cases, AI systems, and stakeholders at the same time.
These elements are essential to an organizational AI policy:
The policy should be as structured as possible and balance high-level, broadly applicable guidance with lower-level behavioral and procedural implications. Responsibilities must be clearly defined to avoid confusion. The AI policy answers questions such as “Which AI systems can I use, and which ones are prohibited?”, “What rules do I need to follow when using them?”, and “Who is responsible for a particular AI use case?”. A good policy therefore has to juggle being framed in a legally compliant and precise manner while still ensuring that every employee within the company can easily understand it and derive guidelines from it.
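As an illustration, the three questions above could even be captured in a simple machine-readable form alongside the written policy. The sketch below is hypothetical; the tool names, rules, and responsible parties are assumptions for illustration only:

```python
# Hypothetical machine-readable companion to a written AI policy.
# Tool names, rules, and responsible parties are illustrative assumptions.
AI_TOOL_POLICY = {
    "approved": {
        "internal-llm-gateway": {
            "rules": ["no customer data in prompts", "log every session"],
            "responsible": "Digital Innovation team",
        },
        "code-completion-assistant": {
            "rules": ["review generated code before merging"],
            "responsible": "IT Security Officer",
        },
    },
    "prohibited": [
        "public chatbots for confidential documents",
        "unvetted browser AI plug-ins",
    ],
}


def may_use(tool: str) -> bool:
    """Answers the employee question 'Which AI system can I use?'"""
    return tool in AI_TOOL_POLICY["approved"]


print(may_use("internal-llm-gateway"))  # True
print(may_use("some-random-ai-app"))    # False
```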
A good AI policy thus not only defines the whats and the whys but, in combination with the specified AI governance processes, provides guidance on the hows as well. However, this can be time- and thought-intensive. As soon as the policy is adopted, managers, together with employees, have to define procedures and action steps that ensure adherence to the policy, and then implement them within their AI management. This implementation step is the most integral one: it ensures coherence among employees in their daily AI interactions and differentiates the policy from mere statements of ethical principles. Nevertheless, ethical principles remain an integral part of every AI policy, allowing organizations to adapt policies to their own company values and to give clear guidance on ethical behavior. This strengthens the internal culture and ensures that AI practices are guided by good intentions and aimed at trustworthy AI systems.
Looking for an AI policy template or a policy review for your organization?
We offer a free review of your current policy and help you draft your individual AI policy aligned with industry standards and ISO 42001. Interested? Contact us here.
Along with the rapid development of AI, concerns arose about the ethical implications of the new technology. Due to AI’s disruptive nature and the opacity of most algorithms, public fear and mistrust grew; hence, concepts of ethical and trustworthy AI emerged, aiming for AI that adheres to ethical principles in the same manner humans do. As in the EU AI Act, such ethical principles should also form the backbone of an AI policy and guide decisions and behavior.
To date, over 200 guidelines, principles, and recommendations have been published, and the concept has gained traction in academia, governmental institutions, and companies. Following recent comparative research (Corrêa et al., 2023), the following core principles can be derived:
Additional principles, such as accessibility, beneficence, and sustainability, are equally important but may not be crucial or applicable to every organization or AI use case. Hence, long-listing principles is a key step in forming an AI policy, even though not all of them have to be selected, included, and operationalized. Moreover, within the policy it can be advantageous to group ethical principles in the manner above. Principles often overlap semantically, and such grouping allows for distinct sets of guidelines. In turn, the actionable steps that accompany each set of principles can be vast and rich in detail, defining different tasks and procedures for each set respectively.
Corrêa and colleagues (2023) and other academic papers (Hagendorff, 2020; Jobin et al., 2019) critique the lack of technical implementation of those guidelines, noting that only about 2% of the research offers practical examples. The challenge of translating principles into practice remains.
Therefore, it is important to include sets of ethical principles as a guiding core of your AI policy, but translating them into tangible action steps is crucial to ensure effectiveness. This should be facilitated by an appropriate AI governance system.
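As a sketch of what such a translation could look like, the mapping below pairs grouped principles with concrete action steps. Both the groupings and the tasks are illustrative assumptions; a real policy would derive them from the organization’s own long list of principles:

```python
# Illustrative mapping from grouped ethical principles to action steps.
# Neither the groups nor the tasks are a fixed catalogue; adapt them to
# your organization's values, use cases, and risk profile.
PRINCIPLE_ACTIONS = {
    "transparency & explainability": [
        "document purpose, data sources, and limitations per use case",
        "disclose AI-generated content to users",
    ],
    "fairness & non-discrimination": [
        "test training data for bias before deployment",
        "review model outcomes across user groups at a fixed cadence",
    ],
    "privacy & data governance": [
        "run a data-protection impact assessment for new use cases",
        "prohibit personal data in prompts to external AI services",
    ],
}

# A governance process can then track each action step as an auditable task:
for principle_group, actions in PRINCIPLE_ACTIONS.items():
    for action in actions:
        print(f"[{principle_group}] TODO: {action}")
```

The point of such a structure is that every principle in the policy ends up owning at least one verifiable task, rather than remaining a statement of intent.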
What Does an AI Policy Look Like?

As mentioned before, policies can differ in look and format. Typically, a policy is a multi-section document with a structured and intuitive layout (remember: a good policy displays a clear structure to reduce confusion and increase accessibility!), where each chapter maps to one of the points on the roadmap we presented before. Even so, much variation is possible.
Definitions from established institutions or legal frameworks can be adopted into one’s own policy:
“This policy follows the definition of AI of the EU AI Act (Art. 3 No. 1 EU AI Act, Recital 12): An AI system is a machine-based system that derives from the input received (e.g. data) how to generate outputs (e.g. predictions, content, recommendations, or decisions) that can influence physical or virtual environments. AI systems can operate according to explicit (clearly defined) goals or implicit goals (derived from data). AI systems are characterized by varying degrees of autonomy, which means that they can act to a certain extent independently of human involvement and are able to operate without human intervention. Systems that are based exclusively on rules defined by natural persons for the automatic execution of actions do not fall under the definition of an AI system.”
Clear responsibilities should be embedded within the policy:
“The Digital Innovation team is responsible and accountable for identifying and introducing new AI applications throughout the organization. It also supervises AI systems during development and their period of use, and makes relevant changes to AI systems if necessary.
Furthermore, the IT Security Officer, the Data Protection Officer, the Works Council, the management team, and the Digital Innovation team form an AI governance board, which is responsible for compliance with the applicable AI regulation and for the implementation of AI governance processes in our organization.”
The AI management system (one of the main pillars of AI governance) can also be defined in the policy. Such a system could be trail, which additionally takes over some of the workload that accompanies an AI policy:
“To ensure the proper governance of the AI systems used and developed, and the effective implementation of this AI policy, we use an AI management system (“trail”). The AI management system supports employees in effectively and systematically overseeing the use and development of AI systems within the organization, including risk assessment, alignment with the guiding principles, and appropriate documentation and reporting mechanisms.”
Additional policy samples for your inspiration can be found here or here.
Designing an AI policy is a crucial element of AI governance. Structure and contents can vary, yet some key definitions, principles, and implementation steps belong in every well-written policy. The ethical principles defined and operationalized in the policy are of varying importance and can be included or left out depending on your company’s specifics and use cases. The policy closes the gap between “good intentions” and missing legislation, laying the groundwork for employees using, buying, developing, or selling AI systems.
However, setting up a policy can be difficult and time-intensive. We have already worked with different companies on setting up their AI policies aligned with industry best practices or ISO 42001. If you are looking for help and expertise in drafting your own AI policy, contact us here. We can support you both in a workshop with our in-house policy experts and with our governance platform trail, which offers an integrated AI policy builder.
Sources:
Corrêa, N. K., Galvão, C., Santos, J. W., Del Pino, C., Pinto, E. P., Barbosa, C., Massmann, D., Mambrini, R., Galvão, L., Terem, E., & Oliveira, N. de (2023). Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance. Patterns, 4(10), 100857. https://doi.org/10.1016/j.patter.2023.100857
Hagendorff, T. (2020). The Ethics of AI Ethics: An Evaluation of Guidelines. Minds and Machines, 30(1), 99–120. https://doi.org/10.1007/s11023-020-09517-8
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2