Host
Jill Holtz
Senior Content Strategy Manager

How to create an AI policy for your mission-driven organization

In this episode, Dominique Shelton Leipzig, CEO of Global Data Innovation, and Dr. Andrea Bonime-Blanc, founder and CEO of GEC Risk Advisory, break down what mission‑driven organizations need to know about building an effective AI policy. We explore why AI governance is rising to the top of board agendas, common pitfalls organizations face, and why trust, accuracy and continuous testing must anchor any AI program.

Our discussion covers key risks of deploying AI without a framework, the importance of tone from the top, lifecycle oversight, cybersecurity considerations, and how boards can embed fairness and accountability into everyday decision-making. We also talk about privacy, bias, human‑in‑the‑loop safeguards and third‑party oversight — and why AI policies should evolve as fast as the technology itself.

Tune in for practical, expert guidance to help your board shape responsible, future‑ready AI governance.

If you enjoyed this episode, please rate and review the podcast to help others discover it too.

Guests
Andrea Bonime-Blanc
Founder & CEO, GEC Risk Advisory, Board Director, Strategist, Author
Dominique Shelton Leipzig
Founder & CEO, Global Data Innovation; Former Partner, Cybersecurity & Data Privacy

More about the podcast

Dominique Shelton Leipzig is CEO of Global Data Innovation and Dr. Andrea Bonime‑Blanc is founder and CEO of GEC Risk Advisory.

In this episode, we explore why AI governance has become such a pressing priority for mission‑driven organizations, we look at where organizations tend to go wrong, and why trust, accuracy and continuous testing must sit at the heart of any AI program.

We also talk about the risks of deploying AI without a framework. We discuss the importance of tone from the top, lifecycle oversight, strengthening cybersecurity, and how boards can embed principles like fairness and accountability into daily practice—not just policy documents.

Listen now as we explore privacy considerations, bias in AI systems, human‑in‑the‑loop safeguards, third‑party oversight, and why AI policies should be living, breathing documents.

And stick around to the end for Dominique and Andrea’s excellent advice for mission‑driven boards on creating AI policy.

Resources mentioned during the episode

Further resources on creating AI policy for nonprofit boards

Further resources on creating AI policy for public-facing boards

Transcript for How to create an AI policy for your mission-driven organization

Welcome to the Leading with Purpose podcast, where we share practical advice for purpose-driven work and board leadership in mission-focused organisations. I'm your host, Jill Holtz from Diligent, and this episode is all about how to create an AI policy for your mission-driven organisation. In this episode, I talk to two leading voices at the intersection of AI, governance, and organisational risk.

Dominique Shelton Leipzig, CEO of Global Data Innovation, and Dr. Andrea Bonime-Blanc, founder and CEO of GEC Risk Advisory. We explore why AI governance has become such a pressing priority for mission-driven organisations, where organisations tend to go wrong, and why trust, accuracy, and continuous testing must sit at the heart of any AI program. We also look at the risks of deploying AI without a framework.

We discuss the importance of the tone from the top, lifecycle oversight, strengthening cybersecurity, and how boards can embed principles like fairness and accountability into daily practice, not just policy documents. Listen now as we explore more about AI privacy, bias in AI systems, and ultimately why AI policies should be living, breathing documents. And stick around to the end for Dominique and Andrea's excellent advice for mission-driven boards on creating AI policy.

Welcome, everybody. Today, I am joined by two guests with extensive expertise in the area of AI and governance. Firstly, I'd like to welcome Dominique Shelton Leipzig, who is CEO of Global Data Innovation.

Dominique is an award-winning attorney, and her organisation guides CEOs, boards, and executive teams to lead confidently in the face of AI, privacy, and cybersecurity risk. Her latest book is called Trust: Responsible AI, Innovation, Privacy, and Data Leadership. That's published by Forbes Books, and it has recently won an International Book Award for business impact. So welcome, Dominique.

Dominique Shelton Leipzig: Thank you for having me, Jill. Excited to be here.

Jill Holtz: And then I'm also delighted to be joined by Dr. Andrea Bonime-Blanc, who is founder and CEO of GEC Risk Advisory. Andrea is a former C-suite executive at four global companies. She's a board director and advisor, and a global expert on future-proofing organisations. She's also the author of several books, and her newest one is launching this February. It's entitled Governing Pandora: Leading in the Age of Generative AI and Exponential Technology, published by Georgetown University Press. So welcome, Andrea.

Andrea Bonime-Blanc: Thank you so much.

Jill Holtz: So just to give a bit of context as to why I invited you to join us today on the podcast, last fall we ran a survey of non-profit and public board leaders to ask them about AI and AI governance. And the responses were really interesting.

It turned out that while many organisations are adopting AI, over 60% of organisations do not have an AI policy in place, 77%, so that's over three quarters, haven't really addressed ethical AI use, and most worryingly perhaps, almost 90% have not arranged any training for their board members on the topic. So that's really why I wanted to invite you today to share your advice and expertise to help mission-driven organisations. So you're both very welcome, really thrilled to have two people with such great expertise join us to explore this practical topic of how to create an AI policy for your mission-driven organisation.

Why is AI governance a priority for mission-driven boards?

So to kick us off, Dominique, from your perspective, why is AI governance becoming such a priority for non-profit and public sector boards right now?

Dominique Shelton Leipzig: Thank you, Jill, for putting this together, and thanks everybody for listening. The reason this is a priority is some of the headlines that we saw in the past couple of years: a 95% failure rate in AI deployment, and that was the MIT study, but multiple other studies, from McKinsey, Gartner and PwC, have come out with similar results. So there is that impact.

The other thing, as we're taping, we're in the middle of Davos and there were two reports, the World Economic Forum's risk report that came out last week on January 14th, identifying AI risk as both important for short-term and long-term risk. And then we also have the trust barometer report that came out by Edelman PR just a couple of days ago. And although this was a general trust barometer, apparently all anybody wanted to talk about in that report was AI.

So, trust is a major issue, hallucinations, accuracy, that is why it is so important to pay attention to governance right now.

Jill Holtz: And I think particularly for nonprofit and public-facing boards, trust from their community and their stakeholders is so important. So, Andrea, Dominique mentioned risk there. What are the biggest risks you see when organizations start using AI tools without a formal policy in place?

Andrea Bonime-Blanc: Well, I think the greatest risk is that we as humans are not doing our homework. In other words, we have not prepared ourselves properly to understand what AI is and how the tools intersect with each of us in terms of our daily work, in terms of the larger picture of the footprint of the organization. And nonprofits and public agencies are just like corporations.

Everybody needs to have some form of organizational framework, policy, practices. What I like to think about frequently, given my background, because I held the role of Chief Ethics and Compliance Officer for many years in several companies, is that you need to have something akin to an effective ethics and compliance program turbocharged to include all the new technologies that are affecting us. And so at the end of the day, it is the responsibility of the board and the C-suite, and especially the CEO, to set the tone with how are you going to deploy AI in your organization.

So if you don't have a policy, and you don't have a framework, and you don't have education, and you don't have the resources, both internal and external, that will help you navigate for your organization, you are really risking a lot, not only in terms of downside risk, but the ability to really get things done in a productive, constructive, turbocharged way that might make sense for your organization.

Drafting your AI policy

Jill Holtz: Yeah, and the risk of falling behind, say, you know, in the nonprofit space, you're competing with other nonprofits for donors and strategic opportunities as well. So risk has opportunity side as well. Dominique, let's talk a little bit about if you don't have an AI policy, and you need to start, get one in place ASAP, you know, when they're drafting an AI policy, what should organizations be thinking about including in the purpose and scope section, just to kick us off?

Dominique Shelton Leipzig: There are really kind of two policies here. At the CEO and board level, it's important to understand what should be happening in the organization, and the key areas that have had the biggest impact on outcomes that are favorable and outcomes that are negative. And so when we're counseling clients, we try to provide to them a clear roadmap and clarity as to where there have been consistent problems with organizations in the past, so that in building and deploying the AI within the organization, the CEO and the board can ensure the right questions are being asked and those known risks are being anticipated and dealt with.

In our case, with our proprietary database with over a million unique weighted attributes, we can guide clients in that regard. On the ground, for the management team, it's about being aware of those same five things, right? Being aware of what has caused risk to other companies and organizations, and then how to get ahead of that.

There's not a uniform risk barometer. What's happened to nonprofits and healthcare providers, for example, is different than, say, the national eating disorder nonprofit, where the chatbot was encouraging people with bulimia and anorexia to go on diets. These are the types of things that are really bespoke to every organization, and making sure that that policy covers what is important to minimize risk and maximize opportunity.

Jill Holtz: That's really good. I suppose what you're really talking about is being on top of where AI is being used in the organization, assessing it, understanding the risks, asking the right questions as board members about that for good oversight. Andrea, from your perspective, then, what guiding principles like transparency, fairness, accountability, what do you think is essential here when they're putting this together?

Andrea Bonime-Blanc: I think all of the three that you mentioned. I think it starts with what Dominique talked about, about trust at the beginning of our conversation. The trust factor from leadership is a be-all, end-all, in my opinion.

The CEO, president or managing director needs to set the right tone from an accountability, transparency and fairness standpoint, and really provide not just the tone from the top, but also the deployment, the resources and the budget necessary for the people within the organization to deploy themselves accordingly: to be able to acquire tools, and to get educated in using those tools, software, chatbots, et cetera, that are relevant to their organization.

If you don't have that tone being set by the CEO or president, and the board holding that person accountable on the promises, the resources, the budgets, not just the talk but the walk; if you don't have those things being deployed from the very top, the rest of it falls apart. Again, I go back to the ethics and compliance paradigm, which is maybe a little bit old and needs to be revised, revived and restructured, but there's an essential framework there that can help guide us through a culture of trust, which then helps us figure out how you do the accountability, how you do the fairness, and how you do the transparency.

How can boards ensure AI governance is embedded in technology and decision-making?

Jill Holtz: And also managing reputational risk. I mean, that's so crucial for mission-driven organizations. If you think about non-profits, the donors looking at them are thinking, do I want to donate to this organization? And on the public sector side, you know, citizens and parents, say in K-12, need to be able to trust the reputation of that organization. So as a follow-on question to both of you on that point: how can boards ensure that those principles aren't just words on paper, but that they're embedded in the technology and actually applied in daily decision-making? Andrea, if you could answer that first for me.

Andrea Bonime-Blanc: Sure. So I think stemming from what we've just talked about, some of these lofty principles, there has to be something where the rubber meets the road. And so there, you need to have some form of framework.

I like to call it a lifecycle approach to embedding technology into the intake, into the development, testing, deployment, release, and also, you know, sunsetting of your products and services. So if you're a non-profit, you may have a different kind of set of products and services than a manufacturing company. I serve on the board of a theater company here in New York City.

And there we cater to our biggest stakeholders, students in high schools in New York City, who are our most important stakeholder, and they deploy AI for some of their writing of plays, scenario creation, and so on. But we want to make sure they continue to be the human in the loop and that they are foremost the creators of this creative endeavor and mission that we have in this particular non-profit. So that's just one tiny example, but I think we all have to look at that lifecycle approach of embedding the technology with humans in the loop and with the mission of the organization first and foremost.

Jill Holtz: First and foremost, I love that. Dominique, anything further that you want to add to that point?

Dominique Shelton Leipzig: Yeah, I just love what Andrea just said there. So embedding in the technology is key. At our company, when we're counseling clients, we focus on data-driven results to figure out what that framework should look like.

When you look at some of the frameworks out there: NIST; if you're international, the EU AI Act; sitting in London right now, the UK advisory; or the different states in the U.S., there are different KPIs, some of them hundreds or thousands of KPIs. So the board and the CEO of a non-profit need to focus on the areas that are going to have the highest impact. Our research, like I said, our proprietary database with over a million unique weighted attributes, has pointed to five areas that have been most critically aligned with successful AI deployment.

And then if any one of those five things is missing, we've found that that is sort of ground zero for the source of problems. We've embodied that in what we call a trust framework, but it's driven by data and actual real-life incidents that have happened to multiple companies and organizations across the globe. And just that rigor around this, so that it doesn't turn into loosey-goosey ethics or responsible AI.

Everybody really knows what that means. And just focusing on things like accuracy, following the company's standards. That's what accuracy comes from.

And really just titrating down on the fault lines where the data has gone awry because tracking or testing was not in place. So just making sure it's embedded in the tech is key.

How AI impacts boards when it comes to privacy and data protection

Jill Holtz: Yeah. So let's touch a little bit on privacy considerations, because non-profit and public sector boards handle and discuss sensitive data. They might be talking about school students. They might be talking about service clients, clients who use their non-profit's services. So Dominique, how does AI change the way we think about personal and sensitive data within a mission-driven environment?

Dominique Shelton Leipzig: First of all, AI brings all of this at scale. One thing to keep in mind is that for non-profits, it can be very harmful to the brand if there are privacy issues associated with your deployment. We have so many regulators, and also just public opinion around the world, in terms of protecting privacy and deploying AI.

So you definitely want to pay attention to that. In California and also in multiple jurisdictions around the world, there's not a distinction on this issue for non-profits versus for-profit entities. And so individuals have data subject rights and they can say, I want to remove my data from training, for example, or I want to make sure that my data doesn't come out as output.

And although this is within the non-profit realm, you can imagine how brand tarnishing it can be. We had one major quick-service food restaurant where 64 million employment records were disclosed because of an AI-related data breach. For that public company, the stock dropped by 7%, but just imagine that impact on donors.

Jill Holtz: The impact that that would have had, yeah. So is that about being really, really conscious about where data is being put? Which AI tools, are they ring-fenced? Is it that kind of consideration? Andrea, based on your compliance experience, what should boards understand about complying with privacy laws and regulations when using AI?

Andrea Bonime-Blanc: It's just an extension of what they were supposed to do already. And again, it goes back to what is the footprint of the organization? Is it a small local theater non-profit or is it the American Cancer Society or the Red Cross or some other major multinational global organization?

And so just like with companies, I think there's a spectrum of best practices, but at the end of the day, someone, and hopefully the board is attuned to this, and hopefully the president, MD of the organization is proactive about this, has to keep up with the changing laws, the regulations, the rules, and the norms of best practice that apply to privacy in an age of AI. And so what we need to do is have people on the board who are attuned to these issues, who have the skills, make sure your skills matrix includes people with technology experience and AI savvy, people who are curious and into continuous education, because this is the era of continuous education. No matter how much education we get, we'll never even scratch the tip of the iceberg on this one.

But our curiosity and our responsibility are critically important aspects of who we are when we're serving on a board, when we're a part of management, whether it's for a corporation or a public agency or a nonprofit. And so I would counsel folks to have someone in the organization, a pro bono legal counsel; I mean, this is what we use a lot, a law firm that is willing to do pro bono work, who's going to keep me and the organization posted on all the applicable privacy and other regulations relating to AI for the jurisdictions in which I am doing business as a nonprofit or a public agency.

And so have that resource who's constantly feeding you the most important material, things that you need to know about, and then you can act accordingly and prepare the rest of your organization accordingly.

Addressing AI ethics and bias

Jill Holtz: So that's one thing that we've done recently at Diligent: we've published some new regulation outlook guides that cover AI regulation, so what do people need to be aware of, and I'll put the links in the show notes. And just to go back to the point we mentioned a minute ago about not putting data into open LLMs: within our Diligent software, Diligent Community and BoardEffect, the AI tools that are in there are ring-fenced. So they're not sharing anything with open, public AI systems. I think people want to have that trust in the tools that they use.

I mentioned at the beginning that a huge proportion, over three quarters, of organizations haven't addressed AI bias and ethics. So let's touch on that. Dominique, why does bias occur in AI systems, even when we think of AI as a very positive thing, and even when organizations intend to be fair in their use? Talk to me about your experience of that.

Dominique Shelton Leipzig: So, when we think about generative AI being brought into an organization, often, especially for nonprofits, they're building applications sitting on top of the large language models that are provided by the large tech companies. And what the large tech companies are doing is really just facilitating bringing the whole wide internet to your organization, which is fantastic. But as we all sit here listening, we know that the whole wide internet has inaccuracy in it, has bias, for example, in it.

That's why the governance that we're talking about is so important, to balance that out. A lot of bias is also just plain inaccurate information, right? There are times where you type in things about, say, the number of judges who are women, and it gives inaccurate information, among other things.

So accuracy, I see as the bias issue. So if organizations can think about for a particular AI use case, say it's for hiring, or it's for delivering services, what bias metrics do you have already in the organization? Most organizations have a blogger, but we want to make sure that we get our applications out to everybody in the community, or we want to make sure that we're hiring fairly.

That needs to be coded into the AI itself. And I call those the standards or no-nos or guardrails, but whatever it is that is not acceptable to do in your organization as far as fairness and bias go, that needs to be translated into code and grafted onto the application they're working with, because it won't know on its own. The last thing I'll say to you is that even if you train and get your AI perfect and put in your standards, just know that AI will drift.

Generative AI does move and change over time and degrade over time. Accuracy rates are anywhere between 29% and 79% in the AI models that were provided last year by the large language model experts. So you just want to make sure that you have a way to be alerted for when inaccuracy and bias occurs so that your organization can step in and fix the model.

Jill Holtz: Great. So just to recap, I heard, you know, making sure those guardrails and guidelines are built into the AI, checking for accuracy, and making sure again that you're not, you know, where possible, you're not putting any sensitive data into kind of open training. Andrea, what practical steps can non-profits or public agencies take to, again, ensure the use of tools is ethical?

Andrea Bonime-Blanc: Building on everything that Dominique just shared with us, I think it's very important to extend the mind frame that we had for the pre-AI, pre-gen-AI, pre-agentic-AI world to this world, but really kick it up a few notches. And by that, I mean making sure that your frameworks and testing and auditing and resources, both internal and external, are savvy to these new developments and activities that are very different from what we experienced before gen AI and agentic AI, and that they know how to conduct some of those audits: the algorithm audits we need to have done, the data audits we need, and then some change management audits too, to see that all the people in all the places, whether they're in internal controls or risk management or some other form of testing that takes place in the organization, are savvy too, that they have been educated and engage in continuous education in their areas of expertise. And so this is a continuous learning challenge for all of us, I think, and it also extends to third-party and vendor assessments.

And one other thing I just want to add, which we haven't talked about, which I think is critically important in this space, is turbocharged cybersecurity. Because at the end of the day, you talked about ring fencing and not sharing outside of the organization, but if your cyber walls and tools and protections are not powerful, you are in a lot of trouble because agentic AI and generative AI is really just weaponizing cyberattacks in a way that we've never seen before. It's all very daunting, but we can survive. We need to navigate through this. Just extend the tools you have, test them, and test the people for their knowledge as well.

Jill Holtz: And I think as part of that is kind of documenting that testing, and then where it's suitable to transparently show that, again, to your stakeholder groups, “this is how we manage this process. This is how we build trust in the AI tools.” It's kind of a life cycle approach, isn't it?

What decisions should never be delegated to AI

And we've mentioned always having a human in the loop, and I think we all agree that no matter what AI tool you use, you need to have a human look at the output. Dominique, in your view, when it comes to human oversight, what decisions should never be fully delegated to AI?

Dominique Shelton Leipzig: Anything that really could have a significant harmful impact to physical or emotional well-being of an individual. Think about health, financial, employment, children, education. Those are areas that it's really important to have processes where if you're going to be making a decision about, say, expelling children from school, which is something that happened in South Florida, underlying that the AI was misidentifying audio and video and associating that with violent tendencies when, in fact, the kids were singing happy birthday to each other.

It has actually led to over 18,000 students, A and B students, who are on a pathway to prison right now. It's been going on for the past three years. So these are really serious things.

And the way to address it, and I love what you both talked about in terms of testing, is how does the human get into the loop? Well, when you're talking about AI at scale, humans looking at every decision is going to be tough. That's why the testing, the technology, the continuous testing every second of every minute of every day on the sorts of critical issues I was talking about, that's the testing that needs to run.

Once a quarter, once a month, once a week will not cut it for technology that can move and shift into inaccuracy at any second of any minute of every day. I'm excited for that continuous testing to be in place so that it's testing against your no-nos, like making sure that we're not recommending dieting to people who have eating disorders, those sorts of things. And if it comes out that that is happening, then humans can be alerted, not over-alerted with things that don't matter, but your clear no-nos are built in as guardrails in the technology.

And so the human incident response team, that SWAT team, can snap into action immediately and have the things they need to correct the model, which means technical documentation, logging data and metadata from all the testing, and also all the training that was done before, so that they can quickly correct. We cover all this in our trust framework, and I am really excited that it can be put into the show notes: our AI Trust Leadership Certificate Program. I've talked about it at some other Diligent events; it goes into this to give boards and CEOs the tools they need to know what to ask for specifically, so that they don't end up vulnerable.

Jill Holtz: That's great, Dominique. I love that. I love the idea of a SWAT team as well on the regular testing. Andrea, to that point, how can boards and organizations clarify human roles and responsibilities for reviewing or approving AI outputs?

Andrea Bonime-Blanc: Going to sort of the bigger picture, Dominique really detailed a lot of the tactical things that I think are so important. Going back to the role of the board and the C-suite in a public agency or nonprofit on this: they have to be sufficiently educated, and have a sufficiently open and proactive culture about this, that they can set that tone for the risk management program that includes the AI risks, and for the ethics and compliance program that deploys the culture of being sensitive to all of these tech issues and AI issues.

The most important point here is that we have people talking to people about what the role is in technology. So with the frameworks I've used in the past, when I was an executive in a technology company, I was head of risk, audit and corporate responsibility and a few other things, and we had an enterprise risk management team of 14 people, all the way from the CFO and the GC down to experts in export controls, that kind of thing. Having those people periodically meet and talk about how you test these things for gen AI, for agentic AI, what the better tools are, constantly benchmarking which are the better tools to be used.

As Dominique said, you cannot do what an AI does in terms of testing, but you do want to have an experienced auditor, tester, red teamer working with those tools and understanding the outputs and understanding when things are going awry. It's always that combination of a good framework of culture and people, but then having the right tools that you can interpret back again and say, this doesn't seem right or this is very useful.

The importance of board training on AI

Jill Holtz: Yeah. Human judgment is so important as well, isn't it? Speaking of board member training, I was quite shocked in our survey that 90% of organizations have not organized any training for their board members on AI. Maybe they don't know where to start. But Dominique, why is it important that board members and staff have training on AI and on AI ethics?

Dominique Shelton Leipzig: First of all, Diligent has some great tools and an AI certificate program, multiple videos in your portal, and I just want to salute you and Dottie and everybody in terms of the resources that you've made available. Look, you cannot exercise oversight over something that you don't know and don't understand. So the education is critical, not just on using the AI tools, and I know a lot of our board members are learning about that, but also on AI governance and what steps you need to take as board members, as CEOs, as leaders, to exercise oversight.

This is an incredibly powerful tool, and there are incredible opportunities with it, you know, early detection of lung cancer. There's Dr. Fajgenbaum, who's redeploying existing drugs for different uses and doing some amazing work there. So there are groundbreaking opportunities, but it is not a set-and-forget technology. It is not like the cloud. It is not like your email. Governance and the human part of this, to be able to ensure great results, are key.

Now the good news is we've already figured out it's not boiling the ocean. It's studying the four or five areas, being aware of where 85% of the problems are, so you can focus in and home in on what is important for the board and organization to focus on. And also your teams, right? Because it breaks my heart when I see a team that works really hard, and, like one of the states, the State of Tennessee spent 400 million on their algorithm only to have a judge rule that the algorithm could not be used to serve Tennesseans to deliver Medicaid and Medicare benefits, because it was denying those benefits in error 92% of the time. So that's a situation where the State obviously didn't intend that, and the consultants that were working for nine figures didn't intend that either. That's why the board's role is so important: to ask these questions to make sure that everyone's on the right path.

Jill Holtz: It's not necessary for board members to be experts in AI, but they need to know the fundamentals of those frameworks and governance, and to be able to ask the right questions. Just in terms of, if you've got your AI policy in place or once you have it in place, Andrea, how often do you think that should be reviewed? What triggers should prompt an update as well?

Andrea Bonime-Blanc: I think it needs to be a living breathing document that is constantly looked at and potentially revised, and we do have the tools to do that nowadays. We don't have a piece of paper that needs to be retyped or something like that. So, I think there are no excuses for making a living breathing document that sits on a site where people have access to it, where the owners of the policy can go in and make changes, and then have that flow into education and communication to the extent necessary.

And there always needs to be a resource, a human resource, an expert person that the policy refers to for more help, especially in smaller organizations. It's not so hard. In bigger ones also, there are many people. So having the name, the email, the phone number of the person who can help you interpret the policy, I think, is super important. But the policy owners have to be savvy people who know that they have to constantly keep abreast of what's happening, and then maybe tweak and change the policy, with the protocols that are necessary to change a policy in place. And I think we need that living, breathing approach, and we need a board and a president who favor this and who understand it, not just as a concept. I think one of the most important things for “the brass”, so to speak, to engage in is actual use of the tools. And to me, that is one of the most important things. If you're not using these tools, you should be. So I think we have to encourage “the brass” to get to work.

Jill Holtz: So I think that's a really good point. It is really part of a board member's duty of care to their organization to, you know, familiarize themselves and do that training. Thank you, Dominique, for mentioning our Diligent Education modules. In fact, I think both of you were contributors to the AI and ethics course that we have. The great thing about those is they're online and self-directed, which can be really useful for volunteer board members, or if you're on a school board and busy with your day-to-day work or family life, being able to do those in your own time. And we also have templates for AI policy and toolkits for boards that I'll add to the show notes for this episode. So, I think what I'm really taking from you, Andrea, as kind of a recap, is that you don't want to treat your AI policy as a set-and-forget document. It needs to be living, breathing, constantly reviewed. And when it comes to the public sector, say for K-12 education, the board is responsible for setting the policy and voting in the policy. The staff will create the policy, or it will come down from a regulation, and the board has to adopt that policy and be transparent about it for their community as well.

Asking vendors about AI tools they provide

Just to touch on this quickly before we finish off: you mentioned third parties and your vendors. You know, any points on what people should be asking their vendors about the AI tools they provide? Andrea, will I start with you?

Andrea Bonime-Blanc: Sure. I mean, again, I fall back on the traditional best practices. You want to have a very good due diligence framework before you enter into any relationships. You're doing your due diligence on the vendor. You're sending questionnaires to the vendor about their security, their privacy policies, all the data-related things. And then, how do they construct some of their products and services? Ask those lifecycle questions so you can have a targeted questionnaire during your due diligence process. Then you need to have a really good contract that allows you outs when violations have happened, and, you know, other kinds of testing and auditing that take place, and the ability to terminate if you have to, right? If there are breaches, or a lack of ethics, or, you know, guardrails that they don't bring to their products that have been incorporated into your system and organization. So it's a very typical thing, but with all the bells and whistles of data, algorithm and software review, you know, et cetera.

Advice to mission-driven boards about AI policy and governance

Jill Holtz: Yeah. Yeah, thank you. So I'm really conscious of time and you're both very busy, so I'm going to finish off today by asking you both the same question, which is a very simple one. Dominique, we'll start with you. What is one piece of advice you would give to a mission-driven board about AI policy and governance?

Dominique Shelton Leipzig: My one piece of advice is really get clear on your point of view on AI and have a framework within which to ingest management reports and really have an opinion on this because ultimately, whether it's the vendor or on the management side, your organization will be responsible and the fiduciary duties that you have as nonprofits are just as critical as they are for for-profits. And I know in the show notes we'll put in the AI Trust Leadership Certificate Program where we do go into depth about a proposed framework based on our patent-pending process.

Jill Holtz: Andrea what's your one piece of advice that you would give to a mission-driven board about AI policy and governance?

Andrea Bonime-Blanc: Get your hands dirty with the tools, understand how they work. The statistic that you mentioned, that 90% of boards have not gotten there yet, is scary and dramatic. And that should not happen. And that also means that maybe the boards need to bring in some new talent as well. So, get your hands dirty with understanding the tech. Do it yourself. And bring in some of the people that you need on your board to help guide the future of your organization.

Jill Holtz: Excellent advice. Well, thank you both so much for taking time out of your busy lives to join me today. I found your insights and advice really helpful and enlightening, and I'm sure our listeners will as well. So, thank you to both of you.

Andrea Bonime-Blanc: Thank you so much.

Dominique Shelton Leipzig: Thank you, Jill. Thank you, Andrea.

Jill Holtz: Thank you for tuning in to Leading with Purpose today. I really hope you found today’s discussion useful, interesting and insightful. To learn more, you can download our guide and other resources at www.diligent.com/leadingwithpurpose (that’s www.diligent.com/leadingwithpurpose) and we will put that in the show notes.

For more boardroom intelligence, check out our sister Diligent podcast The Corporate Director Podcast, the voice of modern governance, where directors and experts share practical insights on governance, strategy, risk and digital transformation.

Finally, I wanted to ask you a favor: if you enjoyed this episode, then I’d really appreciate it if you would please take a moment to rate and review our podcast, as it helps other people find it. And please share this episode with any colleagues who have oversight of or input into AI policy for your organization.

I look forward to bringing you more practical advice for purpose-driven work next time.
