
AI is transforming how mission-driven organizations operate, but with opportunity comes risk. In a recent episode of the Leading with Purpose podcast, Dominique Shelton Leipzig and Dr. Andrea Bonime-Blanc unpack what responsible AI governance really looks like for nonprofit and public sector boards, and where many organizations are still exposed. As boards increasingly adopt AI tools, the need for future-ready governance has never been more urgent.
A recent Diligent survey of mission-driven leaders reveals a concerning gap: while many mission-driven organizations are embracing AI, over 60% lack a formal AI policy, 77% have not addressed ethical AI use, and nearly 90% have not provided board training on the topic. This leaves organizations exposed, not just to operational and reputational risks, but also to missed opportunities for impact and innovation. As Dominique Shelton Leipzig, CEO of Global Data Innovation, notes, “Trust is a major issue, hallucinations, accuracy, that is why it is so important to pay attention to governance right now.”
The central challenge is clear: how can boards and executives ensure that AI is deployed responsibly, ethically and in alignment with organizational values? Three core ideas emerge:
The tone from the top is critical. Boards and CEOs must set clear expectations for transparency, fairness and accountability, not just in policy documents, but in daily decision-making. Dr. Andrea Bonime-Blanc, founder and CEO of GEC Risk Advisory, emphasizes, “If you cannot have a CEO or president… who sets the right tone from an accountability, transparency, fairness standpoint… the rest of it falls apart.”
AI systems are not “set and forget” technologies. Continuous testing, regular audits, and clear human oversight are necessary to catch bias, inaccuracies, and drift in AI models. As Shelton Leipzig explains, “Generative AI does move and change over time and degrade over time. Accuracy rates are anywhere between 29% and 79% in the AI models… you just want to make sure that you have a way to be alerted for when inaccuracy and bias occurs.”
Mission-driven organizations often handle sensitive data, and AI amplifies both the potential and the risks. Boards must ensure compliance with privacy laws, ring-fence sensitive data and rigorously vet third-party vendors. Bonime-Blanc advises designating someone to keep the organization current on the privacy and other AI regulations that apply in every jurisdiction where it does business.
For boards and governance leaders, this advice translates into actionable imperatives:

- Embed AI governance into the organization’s DNA, not just its documentation.
- Prioritize board and staff education on AI fundamentals, ethics and oversight.
- Treat the AI policy as a living document: review and update it regularly as technology and regulations evolve.
- Insist on transparency from vendors and require robust contractual safeguards.
- Maintain a “human in the loop” for all high-impact decisions, especially those affecting health, safety or vulnerable populations.
AI offers mission-driven organizations unprecedented opportunities to advance their goals, but only if governance keeps pace. As Dominique Shelton Leipzig puts it, “Get clear on your point of view on AI and have a framework within which to ingest management reports and really have an opinion on this because ultimately your organization will be responsible and the fiduciary duties that you have as nonprofits are just as critical as they are for for-profits.” Listen to the full Leading with Purpose podcast episode with Dominique Shelton Leipzig and Dr. Andrea Bonime-Blanc to hear how boards can pressure-test AI decisions, challenge management assumptions, and build governance that keeps pace with real use.