
Why mission-driven organizations can’t afford to ignore AI governance

March 20, 2026
4 min read
AI governance
Ellen Glasgow

General Manager, Mission Driven Organizations

AI is transforming how mission-driven organizations operate, but with opportunity comes risk. In a recent episode of the Leading with Purpose podcast, Dominique Shelton Leipzig and Dr. Andrea Bonime-Blanc unpack what responsible AI governance really looks like for nonprofit and public sector boards, and where many organizations are still exposed. As boards increasingly adopt AI tools, the need for future-ready governance has never been more urgent.


Gaps in AI governance

A recent Diligent survey of mission-driven leaders reveals a concerning gap: while many mission-driven organizations are embracing AI, over 60% lack a formal AI policy, 77% have not addressed ethical AI use, and nearly 90% have not provided board training on the topic. This leaves organizations exposed not just to operational and reputational risks, but also to missed opportunities for impact and innovation. As Dominique Shelton Leipzig, CEO of Global Data Innovation, notes, “Trust is a major issue, hallucinations, accuracy, that is why it is so important to pay attention to governance right now.”


Three governance principles every board must get right on AI

The central challenge is clear: how can boards and executives ensure that AI is deployed responsibly, ethically and in alignment with organizational values? Three core ideas emerge:

1. AI governance must be anchored in trust and accountability

The tone from the top is critical. Boards and CEOs must set clear expectations for transparency, fairness and accountability, not just in policy documents, but in daily decision-making. Dr. Andrea Bonime-Blanc, founder and CEO of GEC Risk Advisory, emphasizes, “If you cannot have a CEO or president… who sets the right tone from an accountability, transparency, fairness standpoint… the rest of it falls apart.”

2. Lifecycle oversight and human-in-the-loop safeguards are essential

AI systems are not “set and forget” technologies. Continuous testing, regular audits, and clear human oversight are necessary to catch bias, inaccuracies, and drift in AI models. As Shelton Leipzig explains, “Generative AI does move and change over time and degrade over time. Accuracy rates are anywhere between 29% and 79% in the AI models… you just want to make sure that you have a way to be alerted for when inaccuracy and bias occurs.”

3. Privacy, cybersecurity, and vendor diligence cannot be overlooked

Mission-driven organizations often handle sensitive data, and AI amplifies both the potential and the risks. Boards must ensure compliance with privacy laws, ring-fence sensitive data and rigorously vet third-party vendors. Bonime-Blanc advises designating someone in the organization to stay current on all applicable privacy and AI-related regulations in the jurisdictions where the organization does business.


How boards can operationalize responsible AI oversight

For boards and governance leaders, this advice translates into actionable imperatives:

- Embed AI governance into the organization’s DNA, not just its documentation.
- Prioritize board and staff education on AI fundamentals, ethics and oversight.
- Treat the AI policy as a living document: review and update it regularly as technology and regulations evolve.
- Insist on transparency from vendors and require robust contractual safeguards.
- Maintain a “human in the loop” for all high-impact decisions, especially those affecting health, safety, or vulnerable populations.


AI governance needs to keep pace with use

AI offers mission-driven organizations unprecedented opportunities to advance their goals — but only if governance keeps pace. As Dominique Shelton Leipzig puts it, “Get clear on your point of view on AI and have a framework within which to ingest management reports and really have an opinion on this because ultimately your organization will be responsible and the fiduciary duties that you have as nonprofits are just as critical as they are for for-profits.”

Listen to the full Leading with Purpose podcast episode with Dominique Shelton Leipzig and Dr. Andrea Bonime-Blanc to hear how boards can pressure-test AI decisions, challenge management assumptions, and build governance that keeps pace with real use.
