Kezia Farnham
Senior Manager

AI governance: What it is, why it matters and how to implement it

January 27, 2026

AI governance is emerging as one of the most pressing strategic challenges facing boards today. According to the Q4 2025 Business Risk Index conducted by Diligent Institute and Corporate Board Member, 60% of legal, compliance and audit leaders now cite technology as their top risk concern — well ahead of economic factors (33%) and tariffs (23%). Yet despite this urgency, only 29% of organizations have comprehensive AI governance plans in place.

"Boards are racing to harness AI's potential, but they must also uphold company values and safeguard the hard-earned trust of their customers, partners and employees," says Dale Waterman, Principal Solution Designer at Diligent.

The challenge is clear: How do organizations accelerate AI adoption to support transformational objectives while managing the risks and opportunities it creates? The answer lies in effective AI governance.

Here, we’ll explain how to develop an AI governance approach that helps you harness innovation without exposing your organization to undue risk, including:

  • What AI governance is and why it’s important
  • AI governance frameworks
  • The value of technical standards for AI
  • Challenges in governing AI today
  • Ethical guidelines for responsible AI governance
  • AI governance policies (with a template)
  • AI governance best practices

What is AI governance?

AI governance encompasses the frameworks, policies and practices that promote the responsible, ethical and safe development and use of AI systems. It establishes the guardrails that enable innovation while protecting stakeholders from potential harm.

Boards collaborate with key technology and risk stakeholders to set guidelines for transparency, accountability and fairness in AI technologies, preventing harm and bias while maximizing AI's operational and strategic benefits. The areas responsible AI governance must address, from ethical standards and regulatory compliance to accountability, security and business alignment, are detailed under the board's role below.

Why is AI governance important?

Corporate governance more broadly arose to balance the interests of all key stakeholders — leadership, employees, customers, investors and more — fairly, transparently and for the company's good. AI governance is similarly important because it prioritizes ethics and safety in developing and deploying AI.

“The corporate governance implications of AI are becoming increasingly understood by boards, but there is still room for improvement,” says Jo McMaster, Regional Vice President of Sales at Diligent.

Without good governance, AI systems could lead to unintended consequences, from discrimination and misinformation to economic and social disruptions. Having a strong AI governance approach:

  • Prevents bias: AI models can inherit biases from training data, leading to unfair hiring, lending, policing and healthcare outcomes. Governance proactively identifies and mitigates these biases.
  • Prioritizes accountability: When AI makes decisions, who is responsible? Governance holds humans accountable for AI-driven actions, preventing harm from automated decision-making. PwC’s Head of AI Public Policy and Ethics Maria Axente says, “We need to be thinking, ‘What AI do we have in the house, who owns it and who’s ultimately accountable?’"
  • Protects privacy and security: AI relies on vast amounts of data, a particular risk for healthcare and financial organizations handling sensitive information. Governance establishes guidelines for data protection, encryption and ethical use of personal information.
  • Prepares for AI’s environmental, social and governance (ESG) impact: Generative AI carries a significant environmental footprint, consuming substantial electricity and water across training and inference. It is also reshaping job markets and corporate operations. Governance helps create policies that balance AI’s opportunities with its ESG risks.
  • Promotes transparency and trust: Many AI systems are considered “black boxes” with little insight into their decision-making. Governance encourages transparency and helps users trust and interpret AI outcomes.
  • Balances innovation and risk: While AI holds immense potential for progress in healthcare, finance and education, governance weighs that innovation against ethical considerations and potential public harm.

The board's role in AI oversight

Boards must balance competing priorities when overseeing AI: enabling innovation that drives competitive advantage while managing risks to data privacy, security and stakeholder trust.

"Have a candid assessment of what your board's capabilities are, what your C-suite's capabilities are. The board needs to apply an appropriate level of governance pressure to someone who's going to oversee the AI landscape, the risk exposure, the disruption and the opportunity," says Keith Enright, VP and Chief Privacy Officer at Google and Board Director at ZoomInfo.

Responsible AI governance requires boards to address five key areas:

  • Ethical standards: AI governance policies should promote human-centric and trustworthy AI while ensuring a high level of protection for health, safety and fundamental human rights.
  • Regulations and policies: Boards must ensure compliance with applicable legal frameworks governing AI usage across all operating jurisdictions, from the EU's AI Act to emerging state-level regulations in the United States.
  • Accountability and oversight: Organizations should assign clear responsibility for AI decisions to ensure human oversight and prevent misuse.
  • Security and privacy: Chief technology officers, risk officers and chief legal officers must develop governance approaches that protect data, prevent unauthorized access and ensure AI systems don't become cybersecurity vulnerabilities.
  • Business alignment: AI governance frameworks must support strategic objectives while establishing appropriate guardrails for acceptable use cases and deployment scenarios.

What does AI mean for the boardroom?

Master the five factors influencing AI governance today to help your board navigate the complex interplay between innovation and risk.

Discover more

AI governance frameworks around the world

Global AI regulations currently lack harmonization, creating complexity for organizations operating across multiple jurisdictions. Some countries emphasize innovation and industry self-regulation, while others implement comprehensive legal frameworks with strict compliance requirements.

"During a time of regulatory uncertainty and ambiguity, where laws will lag behind technology, we need to find a balance between good governance and innovation to anchor our decision-making in ethical principles that will stand the test of time when we look back in the mirror in the years ahead," says Waterman.

Some significant frameworks around the world include:

1. European Union: The EU AI Act

The EU AI Act, which entered into force in 2024, represents the world's most comprehensive AI regulation. The law classifies AI systems into four risk categories:

  • Prohibited AI systems: Applications posing unacceptable risk, including social scoring by governments and certain biometric identification uses
  • High-risk AI systems: Applications in critical areas like employment, education, law enforcement and critical infrastructure requiring strict compliance
  • Limited-risk AI systems: Applications with specific transparency obligations, such as chatbots that must disclose their AI nature
  • Minimal-risk AI systems: Applications with few or no regulatory requirements
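
To make the tiers concrete, here is a minimal Python sketch of how an internal tool might tag systems against these four categories. The tier names come from the Act itself; the example use-case mappings and the default-to-high-risk rule are illustrative assumptions, not legal determinations.

```python
from enum import Enum

class EUAIActRiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"    # transparency obligations, e.g., chatbots
    MINIMAL = "minimal"

# Illustrative mapping only: real classification requires legal review
# of the Act's annexes and the system's actual context of use.
EXAMPLE_USE_CASE_TIERS = {
    "government_social_scoring": EUAIActRiskTier.PROHIBITED,
    "cv_screening_for_hiring": EUAIActRiskTier.HIGH,      # employment use
    "customer_service_chatbot": EUAIActRiskTier.LIMITED,  # must disclose AI nature
    "email_spam_filter": EUAIActRiskTier.MINIMAL,
}

def classify(use_case: str) -> EUAIActRiskTier:
    """Default unknown use cases to HIGH so they get review, not a free pass."""
    return EXAMPLE_USE_CASE_TIERS.get(use_case, EUAIActRiskTier.HIGH)

for case in ("cv_screening_for_hiring", "email_spam_filter", "unreviewed_new_tool"):
    print(f"{case}: {classify(case).value}")
```

Treating unknown use cases as high-risk by default is a conservative design choice: it routes anything unclassified to human review rather than quietly waving it through.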

2. United Kingdom

The UK published an AI regulation white paper in 2023, emphasizing a pro-innovation, sector-based approach. Rather than instituting a single comprehensive law, the UK encourages industry self-regulation of ethical AI practices while focusing on safety, transparency and accountability. Sector-specific regulators provide guidance appropriate to their industries.

Since 2024, the government’s formal response and roadmap have strengthened this model by creating a central AI regulation “function” in government, tasking key regulators with publishing AI strategies and signaling future targeted, binding requirements for highly capable AI systems rather than an EU-style horizontal AI Act.

3. United States

The U.S. approach combines federal executive actions with state-level legislation. The Biden administration's 2023 Executive Order on AI reinforced safety concerns and expanded NIST's role in AI risk management, though the order was rescinded in early 2025 and federal policy direction continues to shift.

However, federal AI legislation remains limited, leaving states to fill regulatory gaps. States including California, Colorado, Illinois and Utah have adopted notable AI or automated-decision laws, with implementation of Colorado’s comprehensive 2024 AI Act now delayed to June 2026 and California advancing detailed anti-discrimination and employment-AI rules.

4. China

China has built one of the most detailed AI regulatory systems globally, anchored by its New Generation Artificial Intelligence Development Plan and encompassing strict AI controls, safety standards and facial recognition regulations. The 2023 Interim Measures for Generative AI Services require AI-generated content to align with Chinese social values and establish provider obligations for safety assessments and user protections.

Subsequent guidance and enforcement practice in 2024–2025 have focused on clarifying providers’ security-assessment duties, content-management responsibilities and the extraterritorial reach of the generative-AI rules for services accessible in China, but the Interim Measures remain the core national framework.

5. Other jurisdictions

Beyond these major markets, many other jurisdictions, including Canada, Japan, Singapore and Brazil, are shaping their own approaches, ranging from voluntary model frameworks to draft comprehensive legislation.

AI regulations around the world

See how the regulatory and governance response to AI’s opportunities and concerns has varied globally.

Discover more

Technical standards for AI governance

Beyond regulations, industry bodies and standards organizations have developed technical AI governance guidelines. While voluntary, complying with relevant technical standards can help your organization deliver safe, high-quality and efficient AI-powered products, services and innovation.

Most guidelines attempt to strike a balance between enabling innovation and containing risk. Widely referenced standards include:

The NIST AI Risk Management Framework

The National Institute of Standards and Technology (NIST) AI Risk Management Framework offers a flexible, voluntary approach to AI risk management. It addresses key governance concerns, including bias, explainability and security, through four core functions:

  • Govern: Establish organizational culture and accountability structures for AI risk management
  • Map: Identify and categorize AI systems, their contexts and potential impacts
  • Measure: Assess and analyze AI risks using quantitative and qualitative methods
  • Manage: Prioritize and act on AI risks through continuous monitoring and improvement
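
One lightweight way to put the four functions to work is a per-system checklist that groups outstanding activities by function. A minimal sketch, assuming invented activity names: the govern/map/measure/manage structure is NIST's, but the activities below are placeholders, not NIST-prescribed tasks.

```python
from dataclasses import dataclass, field

# Example activities per NIST AI RMF core function; names are placeholders.
RMF_ACTIVITIES = {
    "govern":  ["accountability roles assigned", "AI risk policy approved"],
    "map":     ["system context documented", "affected stakeholders identified"],
    "measure": ["bias metrics computed", "robustness tests run"],
    "manage":  ["risk treatment plan in place", "monitoring alerts configured"],
}

@dataclass
class RMFChecklist:
    system_name: str
    completed: set = field(default_factory=set)

    def mark_done(self, activity: str) -> None:
        self.completed.add(activity)

    def gaps(self) -> dict:
        """Outstanding activities grouped by RMF core function."""
        return {fn: [a for a in acts if a not in self.completed]
                for fn, acts in RMF_ACTIVITIES.items()}

checklist = RMFChecklist("resume-screening-model")
checklist.mark_done("accountability roles assigned")
checklist.mark_done("bias metrics computed")
print(checklist.gaps())  # remaining work, grouped by govern/map/measure/manage
```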

ISO/IEC standards

The International Organization for Standardization has released comprehensive AI standards addressing data management, algorithmic transparency and security:

  • ISO/IEC 42001: The first management system standard for AI, providing a certifiable framework for governing AI across its lifecycle. Increasingly appearing in procurement and governance requirements, ISO/IEC 42001 offers an approach analogous to ISO 27001 for information security — enabling organizations to demonstrate AI governance maturity through third-party certification.
  • ISO/IEC JTC 1/SC 42: A comprehensive collection of 34 published standards with 40 more under development, covering AI concepts, trustworthiness, bias mitigation, robustness and governance considerations.

IEEE Standards Association

The Institute of Electrical and Electronics Engineers established an AI committee in 2021, developing technical standards for AI governance within specific sectors. These industry-led standards focus on interoperability, safety testing and ethical AI development practices.

International Telecommunication Union (ITU)

The ITU conducts focus groups assessing AI standards requirements for specific applications, including digital agriculture, natural disaster management, healthcare, environmental efficiency and autonomous driving.

Emerging challenges with AI governance

Despite the value of AI governance, getting it right can be difficult. Standards must evolve as rapidly as technology does and consider the distinct regulatory approaches across jurisdictions — not to mention ethical concerns.

Boards working to govern AI may also need to confront:

  1. Technological advancements outpacing regulations: AI is growing at an unprecedented rate, making it difficult for policymakers and regulators to keep up. With regulations one step behind innovation, organizations can easily expose themselves to the misuse of AI, a lack of accountability or unforeseen ethical dilemmas.
  2. Lack of consensus on AI governance: Different countries have varying perspectives on AI regulation, privacy and data security. The EU, for example, has taken a strict regulatory approach with its AI Act, while the U.S. leans toward industry self-regulation. These variations make it challenging to anchor governance to any universal standard.
  3. Limited explainability: It is often difficult to understand how complex AI systems reach their decisions. This lack of transparency erodes trust in AI and makes it harder to govern. How do you know an AI system for healthcare, for example, is making fair and unbiased decisions based on the data available to it? Those developing AI governance frameworks must consider how to balance rapid AI development with public accountability.
  4. Unclear liability: Determining responsibility when AI causes harm is complex. Is the developer, the user or the organization responsible? Current legal frameworks don’t clearly define AI accountability, particularly in cases where autonomous systems make independent decisions.
  5. Data privacy, security and risk management considerations: AI systems require vast amounts of data, raising concerns about how personal information is collected, stored and used. This data exposure also raises the stakes for cybersecurity. Caroline Cartellieri, Non-Executive Director and Founder of C Squared Consulting, says: “So it’s almost like today boards talk a lot about cybersecurity. Just add that to the power of X because now the risks are becoming so much bigger because nobody quite understands what Gen AI does, its capabilities, and how powerful it can be.”

Ethical guidelines for responsible AI governance

AI governance should encompass more than specific processes for developing and using AI. Frameworks today should also consider five ethical principles to ensure AI is developed and deployed in a way that benefits society while minimizing harm. The principles below are also the foundation for emerging AI ethical guidelines.

  1. Fairness: AI systems should be designed to prevent discrimination and bias. This includes ensuring diverse representative training data, auditing algorithms for bias and implementing fairness-aware machine learning techniques. The OECD AI Principles are an intergovernmental AI standard promoting trustworthy AI that respects human rights.
  2. Transparency: AI models should be explainable and understandable to users. Organizations should disclose how AI systems make decisions, particularly in high-stakes areas like finance, healthcare and law enforcement. The EU AI Act is at the forefront of AI transparency, requiring certain disclosures for high-risk AI systems.
  3. Accountability: Determining who is responsible for AI decisions is challenging. Developers, businesses and policymakers should collaborate to ensure AI systems align with consistent ethical and regulatory standards. This is a pillar of the U.S. Blueprint for an AI Bill of Rights, which was released in October 2022.
  4. Privacy: AI systems must follow strict data protection regulations to safeguard users’ privacy. This includes requiring informed consent and robust security measures. Google’s AI Principles are a compelling example of guidelines for the AI development process that put humans first.
  5. Security: All AI systems should be designed to resist vulnerabilities and cyber threats. Developers must implement safeguards against breaches, attacks and unauthorized access. The UK’s National Cyber Security Centre, for example, publishes guidelines for secure AI system development.

What is an AI governance policy?

An AI governance policy clearly outlines what an organization considers the acceptable development and use of AI systems. These guidelines should be clear, easy for employees to follow and align with compliance and risk management measures.

What these policies mandate can vary by organization. Some may prohibit entering proprietary information into AI systems; others may specify which tasks AI can support and which it can’t. Whatever the requirement, though, AI governance policies are important because they:

  • Promote and help prove compliance with existing and emerging regulations or standards, such as the EU’s AI Act or the NIST Risk Management Framework.
  • Support ethical AI development, keeping in mind the five aforementioned principles.
  • Enhance public trust and confidence in AI-driven services that put responsible use first.
  • Drive business and innovation goals by balancing business interests, ethical considerations, and the need to adapt to the future of AI.

Template for an AI governance policy

Given the quick pace of AI evolution, writing a governance policy can feel daunting. What does it look like to manage AI proactively and ethically? Here’s a template to get you started:

Effective Date: [MM/DD/YYYY]

Last Updated: [MM/DD/YYYY]

Owner: [AI ethics and compliance team]

1. Purpose

This AI Governance Policy outlines the principles, guidelines and responsibilities for the ethical development, deployment and management of AI within [Organization Name]. We aim to promote the responsible, fair and transparent use of AI while aligning with legal and ethical standards.

2. Scope

This policy applies to all AI systems [Organization Name] develops, procures or deploys, including machine learning models, automated decision-making tools and AI-driven analytics in business operations.

3. Governance principles

[Organization Name] commits to the following:

3.1 Fairness and bias mitigation

  • AI systems must be designed to prevent discrimination based on race, gender, age or other protected attributes.
  • Regular audits will be conducted to identify and mitigate bias in AI models.

3.2 Transparency and explainability

  • AI-driven decisions must be understandable and interpretable by users.
  • Clear documentation on AI functionality and decision-making processes will be maintained.

3.3 Accountability and oversight

  • An AI Ethics & Compliance Team must monitor and manage AI-related risks.
  • Human oversight will be required for high-risk AI applications.

3.4 Privacy and data protection

  • AI systems must comply with GDPR and other applicable data protection laws.
  • Personal data collection must be minimized and anonymized where possible.

3.5 Security and risk management

  • AI systems must follow best practices for cybersecurity, including encryption and adversarial testing.
  • Incident response protocols will be in place to address AI-related security threats.

4. Compliance and legal standards

This policy aligns with the following regulatory frameworks:

  • EU AI Act
  • NIST AI Risk Management Framework
  • OECD AI Principles

5. Roles and responsibilities

  • AI Ethics & Compliance Team: Oversees AI governance implementation and compliance.
  • Data Science & Engineering Team: Ensures AI models adhere to ethical and technical standards.
  • Legal & Risk Management: Evaluates AI risks and ensures compliance with laws.
  • End Users & Customers: Report concerns related to AI ethics, bias or transparency.

6. AI risk assessments and audits

  • AI systems will undergo annual risk assessments to ensure ethical compliance.
  • Third-party audits may be conducted for high-risk AI applications.

7. Continuous monitoring and policy updates

  • AI governance policies will be reviewed and updated annually to reflect evolving regulations and best practices.
  • An internal AI ethics training program will be mandatory for all employees working with AI technologies.

8. Reporting and incident response

  • Employees and external stakeholders can report AI-related concerns to [Compliance Email/Portal].
  • An incident response team will investigate and address AI system failures or ethical breaches.

9. Enforcement and consequences

  • Noncompliance with this AI governance policy may result in disciplinary action, including termination or legal consequences.

10. Contact information

  • Please contact [AI Governance Team Email] for questions regarding this policy.

Implementing AI governance

Effective AI governance starts with an implementation strategy that unites stakeholders across the board, the executive team and operational functions. Organizations can follow these steps to embed AI governance into their operations:

1. Establish an AI governance framework

Define specific governing principles for AI aligned with your organization's values and risk tolerance. Consider how these principles connect AI governance with other essential functions like IT, legal and risk management. Determine how AI oversight fits into your overall governance structures at the board level.

2. Create an AI inventory and classification system

Begin by creating a comprehensive AI inventory that identifies all internal and third-party AI systems, including shadow AI that employees may be using without formal approval. Classify systems by risk level, use case and jurisdictional requirements using frameworks like the EU AI Act's risk categories or internal risk ratings, as in the sketch below.
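
As a sketch of what one inventory record might hold; the field names are illustrative assumptions rather than a prescribed schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in an AI inventory. Fields are illustrative, not prescriptive."""
    name: str
    owner: str | None         # accountable human owner; None is itself a red flag
    vendor: str | None        # None for internally built systems
    use_case: str
    risk_tier: str            # e.g., an EU AI Act tier or internal rating
    jurisdictions: list[str]  # where the system operates
    approved: bool            # False marks shadow AI awaiting formal review
    last_reviewed: date

inventory = [
    AISystemRecord("resume-screener", "hr-analytics", "ExampleVendor",
                   "employment screening", "high", ["EU", "US"], True,
                   date(2025, 11, 3)),
    AISystemRecord("gen-ai-chat-plugin", None, None,
                   "email drafting", "limited", ["US"], False,
                   date(2025, 12, 1)),
]

# Surface shadow AI and ownerless systems for governance review
needs_review = [s.name for s in inventory if not s.approved or s.owner is None]
print("Needs review:", needs_review)
```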

3. Define leadership responsibilities

Assign specific roles and responsibilities to avoid duplication and ensure comprehensive coverage:

  • Chief Technology Officer: Leads AI development, deployment and technical governance
  • Chief Information Officer: Implements data governance policies supporting AI systems
  • Chief Risk Officer: Conducts risk assessments and oversees AI risk management
  • Legal Counsel: Advises on AI compliance with local and international regulations
  • Board AI Oversight: Reviews and approves AI governance frameworks, monitors significant implementations

"Put AI in your risk register. No one's going to argue with that. Get an AI policy. Board should be asking management for a policy," says Richard Barber, CEO of MindTech Group.

4. Implement key AI governance policies

Roll out the pillars of your AI governance approach, including regular bias and fairness audits, reporting mechanisms for AI decisions, human oversight requirements for high-risk systems and data protection compliance measures.
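
One concrete check a bias and fairness audit can run is the disparate impact ratio: each group's selection rate divided by the most favored group's rate, with values below 0.8 (the "four-fifths rule" used in US employment contexts) commonly treated as a flag for review. A minimal sketch with hypothetical numbers:

```python
def disparate_impact_ratios(outcomes: dict) -> dict:
    """Each group's selection rate relative to the most favored group.

    outcomes maps group name -> (selected, total applicants).
    """
    rates = {g: selected / total for g, (selected, total) in outcomes.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical audit data for an AI hiring screen
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}

for group, ratio in disparate_impact_ratios(outcomes).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: ratio={ratio:.2f} [{flag}]")
# group_b: ratio=0.62 [REVIEW], i.e., below the four-fifths threshold
```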

5. Create an AI ethics and compliance committee

Though the board maintains ultimate AI governance oversight, creating a dedicated committee with representatives from technology, legal, risk management and leadership makes policies more rigorous. The committee can define review processes for new AI developments and create training programs for employees and stakeholders.

6. Operationalize governance with tooling

Move beyond policy documents to platforms that execute AI governance in day-to-day operations. This includes dashboards for monitoring AI system performance, workflows for approval and risk assessment, control libraries aligned with regulatory frameworks and risk heatmaps that surface emerging concerns.
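
Under the hood, a risk heatmap is just a tally of open risks bucketed by likelihood and impact. A minimal sketch, assuming hypothetical risk items and a 1-3 scoring scale:

```python
from collections import Counter

# Hypothetical open AI risks: (description, likelihood 1-3, impact 1-3)
risks = [
    ("model drift in credit scoring", 3, 3),
    ("chatbot discloses personal data", 2, 3),
    ("prompt data not logged", 2, 2),
    ("vendor model deprecated", 1, 2),
]

# Count risks per (likelihood, impact) cell
heatmap = Counter((likelihood, impact) for _, likelihood, impact in risks)

print("likelihood \\ impact   1  2  3")
for likelihood in (3, 2, 1):  # highest likelihood on top
    row = "  ".join(str(heatmap[(likelihood, impact)]) for impact in (1, 2, 3))
    print(f"        {likelihood}             {row}")
```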

7. Monitor, audit and improve

Lead regular risk assessments using frameworks like NIST AI RMF to identify emerging threats. Establish centralized dashboards for real-time AI monitoring. Review AI ethics and compliance updates quarterly to adapt governance policies as technology and regulations evolve.

8. Foster a culture of responsible AI

Train employees on AI ethics and responsible usage, engaging them in protecting the organization. Establish clear mechanisms for reporting AI concerns, and consider AI ethics advisory boards to provide independent guidance. The most successful AI governance programs make responsible AI everyone's responsibility, not just a compliance function.

AI governance best practices

Effective AI governance goes beyond ethical principles to require structured policies, operational controls and continuous monitoring. Organizations working toward best-in-class AI governance should consider the following practices:

Define success metrics

Align the board and executives on what successful AI governance looks like, then establish metrics to evaluate your program quantitatively and qualitatively. These could include fairness and bias metrics, scoring for AI output explainability, regulatory compliance rates and incident response effectiveness.

Craft lifecycle-specific governance

AI may need different governance at different stages. Consider how your governance framework specifically addresses risks and opportunities during development, testing and validation, deployment, ongoing monitoring and system retirement.

Establish incident response protocols

AI development, deployment and usage won't always go according to plan. Build fast-acting responses to model failures, security breaches, ethical concerns and other high-risk scenarios. User feedback loops help identify harms proactively before they escalate.

Foster cross-functional collaboration

Diverse perspectives strengthen AI governance. Engage regulators, industry experts and internal stakeholders across functions. The chief risk officer may identify considerations the chief technology officer hadn't recognized; this cross-functional dialogue ultimately strengthens governance outcomes.

Connect governance to board KPIs

Integrate AI governance metrics into board reporting, including the number of high-risk AI systems without owners, time to remediate AI incidents and percentage of systems with documented risk classifications. This visibility keeps governance accountable at the highest levels.
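
Each KPI named above can be computed directly from the AI inventory and incident log. A sketch under the assumption that records carry owner, risk tier, classification status and remediation timestamps:

```python
from datetime import datetime

# Hypothetical inventory and incident records
systems = [
    {"name": "resume-screener", "risk_tier": "high", "owner": "hr-analytics",
     "classified": True},
    {"name": "gen-ai-chat-plugin", "risk_tier": "high", "owner": None,
     "classified": False},
    {"name": "spam-filter", "risk_tier": "minimal", "owner": "it-ops",
     "classified": True},
]
incidents = [
    {"opened": datetime(2025, 10, 1), "remediated": datetime(2025, 10, 4)},
    {"opened": datetime(2025, 11, 2), "remediated": datetime(2025, 11, 9)},
]

high_risk_unowned = sum(1 for s in systems
                        if s["risk_tier"] == "high" and not s["owner"])
pct_classified = 100 * sum(s["classified"] for s in systems) / len(systems)
avg_days_to_remediate = (sum((i["remediated"] - i["opened"]).days
                             for i in incidents) / len(incidents))

print(f"High-risk systems without owners: {high_risk_unowned}")               # 1
print(f"Systems with documented risk classification: {pct_classified:.0f}%")  # 67%
print(f"Average days to remediate AI incidents: {avg_days_to_remediate:.1f}") # 5.0
```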

Promote AI literacy across the organization

The most comprehensive AI governance policy fails if employees aren't prepared to uphold it. Conduct AI ethics and governance training for developers, end users and leadership. Create transparency reports communicating the impact of governance efforts. The more you engage employees in responsible AI use, the stronger your governance posture becomes.

Govern AI ethics responsibly

Join Diligent Institute's AI Ethics & Board Oversight Certification to navigate the complex landscape of ethics and compliance surrounding AI with confidence and integrity.

Discover more

How AI-powered platforms transform governance oversight

Manual governance processes struggle to keep pace with AI adoption's velocity and complexity. Spreadsheet-based policy tracking, email-driven risk assessments and document-based compliance reporting leave gaps that compromise oversight — often discovered only during audits or regulatory examinations.

"Technology risk is now the connective tissue across the entire risk register. We know that boards too are experimenting with new tech like AI tools to enhance oversight, yet relatively few organizations are leveraging AI-powered dashboards for risk monitoring. Closing that execution gap will separate leaders from laggards," says Kira Ciccarelli, Senior Manager of Research at the Diligent Institute.

Purpose-built governance platforms like Diligent eliminate this fragmentation, transforming reactive AI compliance into proactive governance excellence.

The Diligent One Platform unifies governance, risk and compliance functions into a single connected infrastructure — reducing the silos that allow AI governance gaps to go undetected. Within the platform, multiple solutions directly address the challenges that undermine AI oversight quality:

Diligent Boards

Diligent Boards streamlines board governance workflows and ensures AI oversight receives the strategic attention it requires:

  • Smart Builder synthesizes raw information into professional board materials with one click, reducing board prep time by 80% while ensuring consistent, high-quality documentation that supports AI governance disclosures and policy reviews.
  • Smart Risk Scanner identifies risky language and legal red flags before documents reach the board, helping organizations catch AI compliance issues during preparation rather than discovering problems during regulatory audits.
  • SmartPrep generates pointed discussion questions by topic with citations, ensuring directors arrive prepared with strategic questions that surface AI governance priorities requiring board attention.

"The AI enhancements will take that further. It's more automation and more insights — what can be drawn out of the information instead of just managing it," notes one customer in Diligent's Sagic case study, describing how AI-powered governance tools transform board operations.

Diligent ERM

Diligent ERM provides comprehensive enterprise risk management that integrates AI governance into your broader risk framework:

  • Risk inventory capabilities enable organizations to track AI systems alongside other enterprise risks, supporting classification by risk level and jurisdictional requirements aligned with frameworks like the EU AI Act and NIST AI RMF.
  • Risk heatmaps and dashboards surface AI-related risks alongside operational, financial and compliance risks — providing boards with unified visibility into enterprise-wide exposure.
Diligent's risk overview dashboard presents this company-wide view of AI governance risk.
  • Moody's benchmarking integration enables organizations to compare their AI risk posture against industry peers, identifying gaps and demonstrating governance maturity to stakeholders.
  • Board-ready reporting translates technical AI risk data into strategic insights that directors can act on, connecting AI governance to board KPIs and oversight responsibilities.

For organizations building AI governance programs with resource constraints, AI Risk Essentials delivers AI-powered peer benchmarking and training tools that accelerate program maturity in as little as seven days. The solution provides a practical pathway to professional AI governance without hiring consultants or building frameworks from scratch.

Diligent IT Compliance

Diligent IT Compliance accelerates the certifications and frameworks that underpin effective AI governance:

  • Pre-built framework toolkits support 75+ compliance frameworks, including ISO/IEC 42001, NIST AI RMF, SOC 2 and ISO 27001 — eliminating the need to build AI governance documentation from scratch.
  • AI control suggestions help teams without dedicated compliance expertise implement AI governance requirements quickly, with the Common Controls Framework enabling reuse across multiple certifications.
  • Automated evidence collection streamlines external audit processes, demonstrating AI governance maturity to regulators, investors and customers.

These integrated capabilities ensure that AI governance moves from policy documents to operational reality — providing the accountability, transparency and oversight that regulators and stakeholders increasingly demand.

Whether you're establishing your first AI governance framework, preparing for EU AI Act compliance or demonstrating AI oversight maturity to investors, integrated governance technology provides the accuracy and efficiency that manual processes cannot match.

Book a demo to see how Diligent helps organizations transform their AI governance processes.

Frequently asked questions about AI governance

Who is responsible for AI governance in an organization?

AI governance is a shared responsibility across multiple functions. Typically, a chief compliance officer, general counsel or dedicated AI governance team provides oversight, while the board retains ultimate accountability.

Chief technology officers lead technical governance, chief risk officers conduct risk assessments and legal counsel ensures regulatory compliance. All employees share responsibility through training and policy adherence.

What is the difference between AI governance frameworks like NIST AI RMF and ISO/IEC standards?

The NIST AI Risk Management Framework provides voluntary guidance for AI risk management through four core functions: govern, map, measure and manage. ISO/IEC standards like ISO/IEC 42001 provide certifiable management system requirements that organizations can use to demonstrate governance maturity through third-party audits.

Many organizations layer both approaches — using NIST for risk management methodology and ISO for certification-ready governance structures.

How does the EU AI Act affect AI governance programs?

The EU AI Act requires organizations to classify AI systems by risk level and implement governance requirements proportionate to that risk. High-risk systems require conformity assessments, technical documentation, human oversight and incident reporting.

Organizations operating in EU markets or serving EU customers must align their AI governance programs with these requirements by August 2026 or face significant penalties.

What should boards ask management about AI governance?

Boards should ask management about the organization's AI inventory, how AI systems are classified by risk level, what controls are in place for high-risk applications, how incidents are detected and reported, the compliance roadmap for applicable regulations and who holds accountability for AI governance outcomes. Regular AI governance updates should be a standing board agenda item.

Schedule a demo to see how Diligent's integrated platform transforms AI oversight and improves company-wide governance.
