
Using AI for good at the board level: Are we ready for this?

Guest blog authored by John Horn, Head of Cybersecurity Practice, Datos Insights
Artificial intelligence (AI) has been around for a long time, of course. But modern capabilities have pushed AI onto the market’s main stage.
In its current forms (generative AI, or GenAI) and manifestations (agentic AI), AI brings many conversations, concerns, issues, and opportunities. Within the financial services industry I serve, cyber risk management around AI capabilities has become a major executive concern.
Criminals are using AI-enabled tooling (often referred to as adversarial AI) with increasing success against users and financial services firms. Adversarial AI has quickly undercut foundational security principles that CISOs and other cybersecurity professionals have depended upon for years.
But implemented correctly, AI also holds tremendous opportunity to improve cyber defenses and risk management frameworks. Yes, some are hyping AI as a means to cure all previous ills. Almost everyone understands we are not there yet.
Rather, in these early days of “using AI for good” in cyber risk management, I hear many common-sense observations and conclusions. Leaders share, “We need to make sure there’s a human in the middle,” or voice the embedded skepticism of “I am waiting for AI to provide more security value before I deploy it.”
These are legitimate statements, for now. But this advisor believes many of us need to spend more time thinking about a more profound underlying concern, which is this: “Are we ready to actually deploy AI for good?” It's a kind of elephant in the room for risk management leaders right now, representing an inflection point between where we have been and where we are headed.
Achieving more operationally mature cyber risk oversight for the board of directors is one of today’s most important capabilities within financial services and other regulated industries. I lead advisory and research for Datos Insights in this domain. Market pressures and continued data breaches are raising expectations for boards to become more transparent, more diligent, and able to operate with greater quality and speed in overseeing the corporation’s cyber risk (and other risk dimensions).
An important question facing GRC executives serving highly regulated industries is this: “Are we ready to begin using AI to improve board oversight activities?” Not whether the broad market is ready, nor whether my industry is ready, but: “Is our firm ready to take this next step with AI for good?”
In May 2025, I participated in a distinguished panel hosted by Diligent in New York City that explored these tensions, risks, and opportunities at the board level. It was a delightful experience and a rich, honest conversation that extended beyond the panelists and moderator to include many in the audience. I came away from this discussion with four conclusions, which I share briefly here.
1) Using AI to improve board oversight involves some risk, but it is manageable
Cyber risk is the “air we breathe” today within financial services, as it is in other highly regulated verticals. Risk is unavoidable, seemingly everywhere. From this advisor’s perspective, if a risk leader today cannot operate and find ways to move forward amid that risk, they might want to consider another profession. Using AI for good at the board certainly involves managing some clear data, security, and privacy risks.
Some view using AI as requiring a major leap of faith, like jumping into the middle of an ocean. But a closer look suggests a series of smaller steps, more like first wading knee-deep into the water. Boundaries can be established regarding data and outcomes. Steps can be taken to strongly govern early-stage AI use and reduce risk (and business benefit) accordingly. We recently held two quarterly council meetings with financial institution and insurance carrier CISOs to discuss the results of a Q2 survey about their concerns regarding AI cyber risk, comparing these to results from a similar survey in Q3 2024.
The comparison was remarkable. Concerns actually decreased significantly from Q3 2024 to today. When asked what was driving the lower concern, leaders in both meetings shared that it was because they had begun working with AI: their teams were gaining practical knowledge through hands-on work with AI capabilities.
This advisor believes stepping into using AI for improvements at the board follows the same kind of path. Yes, there are risks, but they are often amplified by what you do not know. Taking a measured step to actually use AI for the board’s benefit, while strongly governing the risk, will lead to maturation, a more practical understanding of risk, and real benefits for the board.
2) For many, using AI to improve board oversight requires a skilled partner
Some large organizations with significant talent and budget are capable of both owning AI benefit for the board and strongly managing the risks. But this advisor’s view is that most executives who own board GRC for their organization will require a robust vendor partner to achieve transformational outcomes for their boards. Diligent is certainly one of the top solution providers in this market, and like other firms it has built fit-for-purpose AI solutions with the board as the customer. Vendors have inherent advantages over self-building when it comes to using AI for good.
There is a large “keep pace” challenge here. Market expectations for boards within highly regulated industries are advancing rapidly, and this will not end anytime soon. Technological advances such as AI are well suited for market providers to operationalize quickly and deliver as value to customers. And while some organizations may be able to self-build LLM capabilities here, even these firms will likely require vendor partners.
For most, the choice of vendor partner may be the most critical decision in delivering improved capability for their board. And in this context, a key criterion in selecting a vendor partner for board-level GRC should be its operational rigor in deeply understanding and effectively managing AI risk. While it is a partnership between customer and vendor, the right GRC vendor should be expert in AI risks, assert proactive, principled risk management defaults, and carry most of the water.
3) Using AI for the board is driven by the need to speed up and improve quality at the same time
Datos Insights research published in March 2025 highlights that heavy resource burden is the primary pain point for board-level cyber GRC practices. Self-built tools and manual integration tasks were cited by 60% of leaders as their top challenge. As market forces tell the board to speed up, automating existing manual processes becomes even more important. The head of security at one firm recently shared the enormous amount of time it takes his team to prepare for their annual cyber risk oversight meeting with the board; as we discussed moving to a quarterly cadence, they could not imagine satisfying that frequency.
Boards also hold pain points around the quality of their oversight, citing narrow views of risk and immature risk quantification. AI as an enabler to speed up and sharpen the risk views available to the board has become a critical imperative to meet the needs of the market. Waiting to take initial steps with AI inevitably means waiting longer to solve key pain points. As expectations continue to rise, delaying those first steps means falling further behind the market and competitors.
4) AI is showing returns in improving board oversight
Especially for the cyber risk dimension, board oversight can be seen as a macro translation exercise for the business. At its roots, cybersecurity is very technical, operating within the complexities of the IT estate. The board requires stories, narratives of business risk it can understand and manage. This translation is heavily manual and resource intensive. One customer shared that they were using an AI-fueled component to create the initial storyboard for directors; with human review and edits added, the AI delivered considerable time savings. Board members and C-suite executives should expect more to come.
At this critical inflection point for board-level GRC, this advisor strongly suggests taking new steps to include AI in the board process. Most likely through a vendor partner, secure it with serious default positions and govern it rigorously. A humble next step with AI is far better than taking no step at all. The market is speeding up and risk is elevating.
Learn more about leveraging AI in the boardroom.