Keith Fenner
SVP & General Manager, EMEA

Governing AI and AI in governance: Balancing risks and rewards

July 8, 2024

“Governance has become an exercise in risk management”. That’s what a highly experienced board director told our CEO, Brian Stafford, recently when reflecting on the evolution of governance in the past decade. While boards still spend time discussing the business’s performance and strategic goals, risk identification and mitigation have shot up the agenda. It’s not hard to understand why.

The world is more volatile, uncertain, complex and ambiguous than ever before. The window of time between risks being identified and becoming reality has shortened dramatically, and external, hard-to-control factors are exerting growing pressure. Risk registers are broadening and deepening as we better understand the interdependencies inherent in our digitally, socially, environmentally and economically connected business landscape. Succeeding in business today is as much an exercise in understanding, managing, and mitigating risk as it is in capitalising on opportunities.

Enter AI

Into this febrile landscape comes artificial intelligence, which sits on both sides of the equation, generating excitement and concern in equal measure. Boards and business leaders face a twofold challenge at this pivotal moment in technological evolution: how to govern AI adoption in their business successfully and safely, and how to leverage AI effectively to govern their business.

On the first point, our recent panel session saw governance, ethics and technology experts share their views on implementing AI. They emphasised the importance of finding relevant use cases and not pursuing AI simply because of a fear of missing out.

Nevertheless, AI will be used in business — and in many cases already is. As a result, every organisation needs to know where it stands in terms of AI use and control, especially with regulation rapidly coming down the track. This brings us to the second point: using AI to enhance the practice of governance, risk and compliance (GRC) itself.

AI in governance technology — bringing clarity to chaos

At the highest level, AI offers the solution to the GRC information overload that has increasingly burdened boards and the business over the last decade. The digitalisation of business has created billions of data points, and among all that data is the information that tells leaders how the business is doing in GRC terms, where risks reside, and where opportunities lie.

The frustration lies in the mammoth task of extracting the relevant data from the silos that an organic approach to tech adoption has created. On average, organisations use six different GRC tools alone, not counting the valuable intelligence locked away in adjacent ERP, CRM and partner relationship management (PRM) software. Once data has been extracted from these disparate locations, it must be collated and sifted for insight. Done manually, this consumes enormous time and resources, but it is a task made for AI.
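As a rough illustration of the shape of that task, the sketch below collates risk records from several siloed tools into one normalised list before any AI analysis is applied. The source names, field names and the downstream summarisation step are assumptions for the example, not a description of how any specific product works.

```python
# Illustrative only: collating risk records scattered across several siloed
# GRC tools into a single normalised list before any AI analysis is applied.
# Source names, field names and the RiskRecord shape are hypothetical.
from dataclasses import dataclass

@dataclass
class RiskRecord:
    source: str    # which tool the record came from
    title: str
    severity: str  # e.g. "high", "medium", "low"
    owner: str

def collate(sources: dict[str, list[dict]]) -> list[RiskRecord]:
    """Flatten records exported from each siloed tool into one list."""
    records = []
    for source, rows in sources.items():
        for row in rows:
            records.append(RiskRecord(
                source=source,
                title=row.get("title", "untitled"),
                severity=row.get("severity", "unknown"),
                owner=row.get("owner", "unassigned"),
            ))
    return records

# Example: two hypothetical exports merged into one view. The combined list
# is what would then be handed to an AI model running inside the
# organisation's own security boundary, never a public chatbot.
combined = collate({
    "risk_register": [{"title": "Supplier concentration", "severity": "high", "owner": "CPO"}],
    "audit_tool": [{"title": "Access review overdue", "severity": "medium", "owner": "CISO"}],
})
```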

Nevertheless, before we place all our proprietary corporate data in AI's hands, we should understand the risks. The most basic of these is the risk of putting sensitive data into public platforms. Just as you shouldn't use WeTransfer to share your board book (that's what Diligent is for!), neither should your team ask ChatGPT to summarise it. An important first step, therefore, is ensuring that you have policies in place and training for teams, so they understand where they can, and can't, use AI in their roles.

Diligent’s approach to AI

But having AI help you summarise meetings and documents does seem like a good idea, right? You just need to be able to do it without compromising security.

At Diligent we are integrating AI into our Diligent One Platform in the use cases where it has the most impact on productivity and insight. Our platform approach already helps customers rationalise the tools they use to manage GRC into a single source of truth delivering accurate, real-time oversight of the business. Now, we are leveraging AI in all those places where analysing data and surfacing insights quickly will provide a competitive edge.

As we do this, we have implemented our own principles to ensure you have full transparency about how you are engaging with AI in our platform. They are designed to meet the highest safety, security and ethical standards:

  1. Propriety: Your data is your own, and we make sure it stays that way. None of the data you entrust to Diligent is ever publicly exposed, and what our AI learns from your data stays with you. It is never aggregated or mixed with other organisations’ data.
  2. Transparency: All AI-generated content is clearly labelled, so you can see instantly what AI has generated, and what it hasn’t. You’ll never be using AI without knowing that you’re using it.
  3. Choice: We will always ask before implementing AI with your data and give you the option to opt out. This allows you to conform to your own organisational AI policies and be intentional about AI use.

GRC AI use cases

Diligent is piloting or developing a variety of AI integrations across our platform. Let's go back to board book summaries: boards need specific information surfaced in any summary, and a generic LLM doesn't have the guardrails to deliver it. We have taken our knowledge of how boards need to receive insight and trained a proprietary AI to deliver summaries that meet directors' needs exactly.
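To make the idea of guardrails concrete, here is a minimal sketch of how a summarisation call can be constrained with a fixed prompt and structure. The headings, the generate() placeholder and the function name are illustrative assumptions, not Diligent's implementation.

```python
# Illustrative only: constraining a general-purpose model with fixed
# guardrails so a board book summary surfaces what directors actually need.
# The headings and the generate() placeholder are assumptions, not
# Diligent's implementation.

GUARDRAIL_PROMPT = """You are summarising a confidential board book.
Use only the text provided; do not invent facts.
Structure the summary under exactly these headings:
1. Decisions required at this meeting
2. Key risks and mitigations
3. Financial highlights and variances
4. Items for noting only
Mark anything ambiguous as 'requires clarification' rather than guessing."""

def summarise_board_book(pages: list[str], generate) -> str:
    """Summarise board book pages under fixed guardrails.

    `generate` stands in for a model call made inside the organisation's
    own security boundary, never a public chatbot.
    """
    document = "\n\n".join(pages)
    return generate(prompt=GUARDRAIL_PROMPT, context=document)
```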

But what about retrospective analysis? Did what the board agreed on actually happen? We are adding capabilities that analyse historical board books to identify the outcomes of decisions and resolutions. This could prove an insightful tool for analysing board performance and identifying some of the typical issues boards face, such as uniform thinking, director engagement and lack of challenge.
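As a deliberately naive sketch of the retrospective-analysis idea, the snippet below pulls recorded resolutions out of minutes and checks whether later documents appear to follow up on them using simple keyword matching. The capability described above would rely on AI-based analysis rather than pattern matching; the patterns and helper names here are assumptions.

```python
# Illustrative only: a deliberately naive, keyword-based sketch of checking
# whether past board resolutions are referred back to in later minutes.
# The real capability described above would rely on AI analysis, not regex.
import re

def find_resolutions(minutes: str) -> list[str]:
    """Pull out lines that look like recorded resolutions."""
    return [line.strip() for line in minutes.splitlines()
            if re.match(r"(RESOLVED|The Board resolved)", line.strip(), re.I)]

def outcome_mentioned(resolution: str, later_minutes: list[str]) -> bool:
    """Check whether any later document appears to follow up on the resolution."""
    key_terms = [w.lower() for w in resolution.split() if len(w) > 6][:3]
    if not key_terms:
        return False
    return any(all(term in doc.lower() for term in key_terms)
               for doc in later_minutes)
```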

Ensuring data and analytics integrity within AI

Artificial intelligence is only as powerful — and safe — as the data it engages with. Whether that is internal business and governance data or data from external sources, all information must be assured, validated and analysed properly if the business is to draw insight from it and base decisions on it. This is a crucial part of digital transformation and an important part of Diligent's focus on integrating the power of ACL Analytics throughout the Diligent One Platform.

Robust data and responsible AI have particularly strong potential in the controls environment, especially in light of corporate governance reforms and new regulation such as the Economic Crime and Corporate Transparency Act (ECCTA). We are implementing tools that analyse compliance standards against the organisation's control environment and call out the differences, ranking discrepancies into those needing an immediate response, less significant issues, and minor changes to review over time.

Such tools can also be used to propose and ultimately make changes to your controls framework, removing any duplicative work or “bloat” and creating a lean controls environment that reduces the burden on the team.
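A minimal sketch of what that kind of gap analysis and de-duplication could look like is below. The control names, urgency categories and both helper functions are assumptions chosen for illustration; they simply mirror the ranking described above.

```python
# Illustrative only: ranking gaps between a compliance standard's required
# controls and the controls actually implemented, and flagging duplicates.
# Control names, urgency categories and both helpers are hypothetical.

REQUIRED = {  # control id -> urgency of closing the gap
    "fraud-risk-assessment": "immediate",
    "whistleblowing-channel": "immediate",
    "third-party-due-diligence": "significant",
    "training-refresh-cycle": "minor",
}

def rank_gaps(implemented: set[str]) -> dict[str, list[str]]:
    """Group missing controls by how urgently they need a response."""
    gaps = {"immediate": [], "significant": [], "minor": []}
    for control, urgency in REQUIRED.items():
        if control not in implemented:
            gaps[urgency].append(control)
    return gaps

def find_duplicates(controls: list[str]) -> set[str]:
    """Flag controls listed more than once so the framework can be slimmed."""
    seen, dupes = set(), set()
    for control in controls:
        (dupes if control in seen else seen).add(control)
    return dupes
```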

AI for good governance

We are entering yet another critical period in the evolution of business and indeed humanity. We bear a duty to pursue a responsible approach to adopting the powerful capabilities of AI. At Diligent we firmly believe that AI can power good governance — including governance over AI adoption — balancing its risks and rewards.

Click here to discover more about Diligent AI.
