Diligent AI

Human-centered AI: Practical insights for nonprofit boards

October 20, 2025
1 min read

Hosted by:

Jill Holtz

Senior Content Strategy Manager

With Guests:

Darian Rodriguez Heyman

Strategist, advisor, author and speaker

In this episode of the Leading with purpose podcast, Darian Rodriguez Heyman, speaker, consultant and best-selling author of AI for Nonprofits, shares how artificial intelligence is transforming nonprofit governance. With deep expertise in fundraising, governance and tech, Darian brings a global perspective from his work with mission-driven organizations.

We explore how AI can streamline board recruitment, boost meeting efficiency and improve impact measurement, while emphasizing the need for intentional adoption, data security and clear policies. Darian also highlights the importance of keeping the human element front and center.

Tune in for practical advice on overcoming skepticism, building trust and using AI as a thought partner. Plus, get actionable tips for board development, orientation and strategic oversight.

If you enjoyed this podcast episode, we would be grateful if you could please rate and review it to help others discover it too!

More about the podcast

In this episode, Darian shares practical insights on how AI is transforming nonprofit governance, from streamlining board recruitment and engagement to enhancing meeting efficiency and impact measurement. He emphasizes the importance of intentional adoption, data security and creating clear policies to guide responsible use of AI.

We explore how leaders can overcome skepticism, build trust and use AI as a thought partner to unlock efficiencies and improve effectiveness. Darian also offers actionable best practices for board development, orientation, and strategic oversight, all while keeping the human element front and center.

Listen now as we discuss how to future-proof governance structures, the role of co-intelligence and why mission-driven leaders must embrace AI with thoughtfulness and purpose.

Stick around to the end for Darian’s one piece of advice every nonprofit board should hear when it comes to using technology and AI for governance.

Resources for nonprofit boards on AI and technology

Transcript for interview with Darian Rodriguez Heyman

Jill Holtz: Hi everybody. I'm thrilled to be joined today by Darian Rodriguez Heyman. Darian is a speaker and a consultant who shares his expertise in fundraising, governance and AI with mission-driven organizations across the globe. He's the former editor at Craigslist Foundation and a best-selling author. And he has just brought out a new book called AI for Nonprofits. So, Darian, welcome to the podcast.

Darian Rodriguez Heyman: Thank you so much for having me.

How is AI shaping the future of nonprofit governance?

Jill Holtz: So, Darian, for this podcast episode, we're looking at the topic of the future of governance and how technology and AI can help boards of nonprofit and public-facing organizations. So when I heard that you had a new book out called AI for Nonprofits, I thought, excellent, I'd love to pick your brains and get your perspective on this. So to kick us off, your new book's subtitle is Putting Artificial Intelligence to Work for Your Cause.

So in your view, what are the most transformational ways that AI is shaping the future of nonprofit governance?

Darian Rodriguez Heyman: Well, I often think of good governance as ball bearings for your mission, similar to funding for that matter. And I think the tools are getting easier and more effective at engaging people in what I think of as a low touch, high value way, where we can remove some of the friction from their service and really enable them to focus on contributing their greatest gifts. And that is a combination of having a clear set of roles and responsibilities, which traditionally has been done in a board member agreement. AI has huge use cases there and can really help with board member recruitment and selection, the whole traditional board matrix tool. And then finally, in terms of running your meetings, so you're not busy taking the notes or trying to translate things or worrying about the tech side of things. But also, from a more strategic perspective, being able to really highlight and showcase: what does our progress look like against our mission and against some of the goals that we've set? How are we doing? Are things on track? Do we need to pay closer attention to certain things, or can we focus on these big strategic matters? That's what I've traditionally used a consent agenda and organizational dashboards to do. And AI can take all of that stuff to a completely different level, which is really, really exciting.

How should nonprofit boards future-proof their governance?

Jill Holtz: Yeah. And what should nonprofit leaders do today to, I suppose, think about future-proofing their governance structures with technology and AI in mind? I know you've mentioned really great use cases there, but when you're trying to think ahead, and I know it's hard, sometimes we don't know what's coming down the tracks, I think there are probably some basic principles that you've seen coming out of doing the book.

Darian Rodriguez Heyman: I would say first and foremost that historically every board felt like they needed a lawyer and an accountant. And that may still very well be true, but I would also add a technology executive to that list, someone who really understands the strategic side of technology and AI. That's an incredibly useful asset right now, because they can help oversee and facilitate the responsible deployment of these tools. So that's a big part of it. But then I also think that, you know, if you look at the state of the industry and nonprofits today, the vast majority of nonprofits have not organizationally committed to AI adoption. And so they basically are letting their employees fly without a net. If they do have a policy, it's don't use AI at all. And what we're seeing is increasingly we've got 5 to 10% of people that are getting more fluent in AI. They're using it for their daily lives, they're using it for their work, and they're gonna use it. It's sort of a BYO AI type environment now. And so it's really, really critical that organizations simply take the time to write on a few pieces of paper what is important to them, what are the acceptable use cases, what is the sensitive data they're working with, and how do we expect our team to navigate those? If we put our heads in the sand and just let people do whatever they want, it's going to cause problems with data security and personally identifiable information and a range of other things. And it means we're just not taking an intentional approach. This is a very powerful and robust tool that needs to be responsibly deployed.

How can leaders convince the board and team to embrace AI-driven governance?

Jill Holtz: Yeah, and I suppose as a follow up to that, so how can leaders convince the board and the team to embrace AI-driven governance and leadership, especially if there's some skepticism or fear of change or well, we've always done things a certain way?

Darian Rodriguez Heyman: Yeah, I mean, I think that that's very valid. And I think the first thing I would say is don't tell them they're crazy. Don't blow them off when they talk about fears of job loss, or the Terminator future, or the environmental effects of AI, or data security. These are all valid concerns, and I think you have to tackle them head on. I do personally believe that AI is here to stay and it's not going to go away, so it is incumbent upon us as leaders and as organizations to really be mindful and strategic about what this looks like. Just like we do with our board members when they enter or leave the board, the goal should be a graceful transition. The same thing needs to be true when it comes to AI, and we have to be intentional about it. And so I think it's a combination of speaking directly to some of those concerns, which I'm certainly happy to do, but it's also about creating some guardrails and some solutions.

And then finally, it's about creating some strategies and policies that are going to guide your efforts forward. And I think when you do all of that, it creates a sort of framework and a safe space for organizations, leaders and boards to experiment.

Human-centred AI for governance

Jill Holtz: And actually experimentation is so important, isn't it, to see what the AI outputs, but with that lens that you're never going to take what AI gives you and trust it entirely. You should always sense-check. You should always have a human over it as well.

Darian Rodriguez Heyman: Well, yeah, I was just gonna respond to that, because frankly what you're touching on is one of the most important things: we don't wanna outsource our work in the community to the robots. And the point you're making, which is really important to make, is that you're absolutely right, we don't wanna do that. We're not here to automate ourselves and outsource the work to the robots. We're here to employ them and leverage them to unlock efficiencies, to increase effectiveness, and to enable our team to do their work more effectively, more efficiently, and serve more people in the community. That's why we're here. We're mission-led organizations. So it really is a critical consideration.

Best practices for AI in nonprofit governance

Jill Holtz: Yeah. So I was going to ask you next about your book, which I've had a look at. It's great by the way, Darian, and really useful, with great tips throughout. You're offering tactics, practical tips and tools in the context of nonprofit governance. So thinking about the board in particular, and you've maybe mentioned some of them already because you've been so great with your answers, what are two or three actionable best practices that you feel leaders can implement right away using AI?

Darian Rodriguez Heyman: Yeah, so I mean, I think even if you're flying without a net, you don't have a policy or any of those other best practices I just mentioned, when it comes to board development and engagement, the point is, you know, those of you in the audience that are technical know what GPT stands for, right? It's a generative pre-trained transformer. It just basically means it's guessing the next word you want to hear, just like when you're Googling something.

But I like to think of it as a general purpose technology, something like electricity or the internet that can help you do anything. And so again, it is really important to recognize it can help in pretty much every single place. As it relates to working with your board, that starts with recruitment. And so that looks like using AI to work on job descriptions and to be really thoughtful about what you are expecting from your board.

Ideally, you would also have a board member agreement that is sort of an abbreviated, very lay person friendly document on two or three pieces of paper that just say in plain English, as a board member, I commit to the following responsibilities annually and they sign it. AI can really help with that. It's a really powerful tool. Then when it comes time to actually going after applicants, AI can help you find new places to find those applicants. It can help you identify specific individuals.

And probably most importantly, it can help you create something that we don't often do as nonprofits, which is an evaluation rubric, right? In terms of promoting equity, what better way than to come up with an objective framework of what it is we're looking for? And certainly leave room in there for a deep passion for the mission and a personal chemistry and connection. But there's probably also some tactical stuff. Maybe we really want people with fundraising and governance experience, or we're a youth services group and want someone who's had experience with that, whatever the case may be. You can really use AI to come up with a rubric that you can evaluate candidates against, and you can even feed it the transcripts of the interviews and ask it to read them and share its thoughts. So now we're equitably looking at unearthing some of the best candidates. Once we've found them, you can absolutely leverage it to create an orientation packet, which is something that most nonprofits, especially smaller groups, fail to do. So give them their onboarding materials, create a mentorship program, all those things, and use AI to identify those ideas and support all of that. And then there's when you're in the actual meetings, like I was talking about, with transcription and real-time translation and tools where you can take polls and things like that. Really, really helpful to facilitate good governance.

Jill Holtz: Yeah, and that's one of the things we love on BoardEffect, having survey and poll features and AI. We now have AI meeting minutes that generate a lot of that, and also board packet summaries. You know, often a board packet is huge, with hundreds of pages and lots of reports. So being able to generate, OK, we still want you to read it all, but here are some key points you need, so you come prepared for the meeting and you're more engaged and more effective as a board member.

Darian Rodriguez Heyman: Yeah again, my mantra with board leadership is low touch, high value. These are busy people, they're important people, and even if they're not, they should be treated like heads of state. And so what does that look like? That looks like, you know, hello, ma'am, hello, sir, here's your dossier that catches you up on everything we've been up to. Here's a summary of the information in the packet you're getting. Here's, you know, the intended outcomes for each of our discussions. And as I mentioned before with the dashboards, here's an update on how we're doing, not just operationally and financially, but relative to our mission. How do we quantify success? And tracking that and having green, yellow, red indicators. Traditionally, I've done dashboards in an Excel spreadsheet, just sort of manually computing some of this stuff. Not only can AI help automate some of that stuff, but it can even create meta dashboards where, if in our strategic plan we talk about community empowerment or economic revitalization, we can start to really quantify what does that look like? And maybe it's not one thing, maybe it's a combination of five or six different metrics, and they can even be weighted, and AI can do that in real time and give you that flashing green indicator that says, okay, this stuff is taken care of, why don't you focus on the things you really need to talk about? Or actually, you know that stuff you really thought you needed to talk about? You've got a blinking red light over here, and maybe you want to take a detour and focus on this because something's coming off the rails.

AI and risk management

Jill Holtz: Yeah, and actually to that point, let's talk for a minute about risk management and compliance, because those are part of governance. To paraphrase our CEO Brian Stafford, governance is an exercise in risk management. Now, you have to be on top of things like data privacy, cybersecurity, all the risks that shape the decisions you make as a board. Again, how do you see AI supporting that? I saw some mentions of this in the book, and I'd love to hear your perspective on it.

Darian Rodriguez Heyman: Well, I mean, the two biggest things I would say are around policies and data security. I mentioned the idea of policies before, but the bottom line is, you know, every organization should spend at least an hour or two meeting to start the conversation about what does AI mean to us? What is comfortable? What is not? Why? What tools are acceptable, and for what use? Putting that on a couple pieces of paper and making sure the whole staff is trained on it is step one to establishing solid guardrails. And that should be a dynamic and iterative process that evolves over time. But, you know, probably about 99% of nonprofits don't have a policy like that yet. So it's really not that complicated to at least start the conversation and put some stuff on paper. And then the other thing that that policy should speak directly to, and probably one of the top things to train your team on, is data security. This is the number one reason why organizations are not adopting AI yet, and it is a very valid concern. What people don't recognize is that the way these large language models work is they just hoover up a huge amount of data and then they make sense of it. That's why they can predict what you want to hear. That's why they can look across a bunch of stuff and identify the threads, which is really helpful.

But the reality is all the projections show that the AI models are going to run out of data to absorb in the next year or two. There's nothing left. That's part of why we're probably seeing a lot of these unscrupulous things where they've unethically absorbed some data from publishers and Reddit and other sources. And so what we need to realize is that every time you enter anything into a ChatGPT or Gemini or Claude prompt, that becomes part of its training data.

Jill Holtz: Yeah.

Darian Rodriguez Heyman: So you cannot, under any circumstances, put personally identifiable information in there, whether that's for your donors or your board, or if you have HIPAA compliant information, if you're doing healthcare type stuff, none of that can go up there unless you've set up the safeguards and created your own custom large language model or custom GPT, where not only will that data be secure, but that system will be trained on you, your needs, your mission, your programs and it'll speak your voice and it'll already know about you as an organization. So the outputs are gonna be much, much higher quality and you have that assurance of data quality and data security and you can set those up for a couple grand with the professional staff to help you do it. I have friends, the folks behind Causewriter at Whole Whale that are setting those up for nonprofits in a turnkey way, really inexpensively.

Jill Holtz: Yeah.

Darian Rodriguez Heyman: And so it's not a big cost barrier. And once you're able to address a couple of those really simple things, then all of a sudden people can work and play in that sandbox and rest assured that they're not compromising any of your clients or board member or staff's information.

Jill Holtz: Yeah, so just to summarize what you said there: having a policy, having awareness and education, and then, ideally, setting up a kind of closed version of a GPT for yourself, which can be done at relatively low cost and helps offset other risks and costs. Let's chat for the finish of the podcast about forward-looking trends. What trends do you see coming next for nonprofit governance and technology?

Darian Rodriguez Heyman: Yeah, that's exactly right. I mean, I think there are two things that will be getting addressed soon, you know, that aren't right now, but they're gonna have manifold implications on all aspects of AI adoption, including with boards. One is hallucinations, right? I liken today's AI to a drunk frat guy at the bar who is equally confident whether what he's saying is accurate or completely made up. That's the way the technology works nowadays: you can't trust it. You have to double check its work, and that's a pain. It also means that there's generally a human in the loop, and that you are being foolish if you're just copying and pasting stuff and sending it out without looking at it. If you're letting AI send your newsletter or whatever it might be, you should be scared. That is irresponsible, right? So that is going to get cleaned up and fixed, and we will be able to trust these tools on a much higher level, which means we can put them to work in governance and in all other aspects of our lives and not have to look over the robots' shoulders every five minutes. It's also going to revisit the conversation about co-intelligence and what it looks like for us to work in partnership with the robots and AI and not just outsource our thinking to them.

Jill Holtz: Using them as a thought partner.

Darian Rodriguez Heyman: Yeah, absolutely. Ethan Mollick wrote a book called Co-Intelligence. That's the whole premise, and it's really where the future is taking us. It's the same conversation we had when calculators came out and all the math teachers said, no, no, no, you can't use those in class. Rightfully so, you need to learn the math. But in the real world, we use calculators. And so over time, we need to learn both the actual skill and how to use the tool to make it easier. So, you know, the piece about hallucinations is about to get resolved. The other one that is just on the horizon is agentic AI, as in the word agent, like a travel agent. Basically, with today's AI, you tell it to do something and it gives you the answer in about 10 seconds, and then you're done. You can make it a conversation and ask it to change its outputs and refine the answers, but it's all one shot back and forth. And we're right around the corner from AI being able to, you know, go on to Ticketmaster. You tell it that the next time Beyonce's in town, you want front row seats as long as they're not over $620, and it's gonna have your credit card information, it's gonna buy the tickets, and it's just gonna show up with some good news every once in a while. And that is also something that can be a little scary, and it's another lack of control that is around the corner. So once again, it's really important for us to think about what are the safeguards that are important to us so that those things don't come off the rails. And those guardrails are going to be absolutely critical, not just for governance, but in all aspects of AI.

Jill Holtz: Maybe inviting experts like yourself to come and do chats with boards and share what you've learned in your book as well. Obviously giving you a plug there, Darian.

Darian Rodriguez Heyman: Yeah, I appreciate that. What I would say is that, yes, I'm talking about some of the tactical solutions because I think those tend to be pretty daunting for nonprofits. They think this is such a big deal, and it would take so much time and money to get it right, that they just can't afford it. So I focused in on a couple of the free and very simple, low cost, easy to implement solutions. But to your point about professional development, that is absolutely critical. This is a completely transformational technology. It is totally new. It doesn't work like what we're used to. And we're sort of all Iron Man meets Gandhi now. So what does that mean, what are the implications, and how do we do that responsibly? And absolutely, read the books. Aside from my book, Beth Kanter and Afua Bruce both wrote books, and Nathan Chappell wrote a book, all of whom are in my book as well.

So there are some other great books out there. There are resources like NTEN and TechSoup that have a bunch of articles. We actually launched a website at AI4NP.org (AI, the number 4, NP dot org) that is really becoming a clearinghouse for all things AI for good. So there's a huge amount of resources out there. And then there's also a whole ecosystem of experts. I'm working with a bunch of different nonprofits and funders creating cohort-based programs. We're launching a national one-day conference series of learning labs under the AI for Nonprofits and AI for Foundations banners, so we'll be in about a dozen cities all over the US in the coming year. That info is also at AI4NP.org. And then there's talking with experts and bringing in trainings. I offer up to every listener of a podcast I'm on, and every member of an audience at a keynote, a free 20 minute pro bono coaching session. So if people want to reach me through helpingpeoplehelp.com, they can sign up for a free coaching session right on my website. And I just think it's really important that those of us that have some expertise are willing to share it with mission-led leaders to help them expand their effectiveness and unlock some efficiencies, so that we can do more with less, because it's a difficult world. There's a lot of instability politically and economically, and we need to build resilience. And this is a huge tool that can really help us do that.

Jill Holtz: So just to finish off in 30 seconds or less, what's one piece of advice that you would give to mission-driven leaders looking to future-proof their governance?

Darian Rodriguez Heyman: I would say get your heads out of the sand. You have access to a shiny new tool called artificial intelligence. It has myriad implications on all aspects of not just nonprofit governance, but operations, fundraising, marketing, and program delivery and evaluation. And you are being irresponsible if you're not giving it a good look and if you're using it without thoughtfulness and intentionality. And so put the tool to work.

It is something that you owe to your community because we are all here on their behalf to serve them as effectively as we can and this is a great new tool that can help you do that if you adopt it responsibly.

Jill Holtz: I love that, Darian. Thank you, Darian, so much for taking time out of your busy schedule and for sharing your expertise today. It's been a real pleasure to talk to you. Darian's book is published by Wiley and can be found at all good bookshops and online.

And if you're interested in getting further advice on using technology and AI to support efficiency, effectiveness and engagement for your board, then check out our newest guide available to download for free. I'm going to put those and links to Darian's book and the resources and conference he mentioned in the show notes. So thank you very much. And until next time, keep leading with purpose.
