Phil Lim
Director, Product Management

Navigating AI: Legal, cybersecurity and ethical considerations for boards and leadership

August 6, 2024

AI is quickly becoming part of business as usual, and regulators are paying serious attention to its possibilities and ramifications. This edition of the Diligent Minute, written by Phil Lim, Director of Product Management & Global AI Champion at Diligent, explores the proactive measures boards need to implement to ensure the responsible use of AI. Subscribe to the Diligent Minute here.

AI is revolutionizing sector after sector, and it is imperative for boards of directors to oversee the ethical and legal use of AI technologies. A board's responsibilities extend beyond safeguarding the organization's assets; directors must also maintain its reputation for compliance and ethical integrity — while still encouraging the safe use of AI to increase productivity and create value.

5 steps for boards to create a safe, ethical environment for AI usage 

1. Invest in AI ethics training and education 

Boards recognize the need for specialized knowledge in AI rules, regulations and compliance obligations to successfully oversee their organization's AI usage. As such, they should assess their own collective expertise on the matter, upskill with AI ethics courses and stay up to date on the latest developments. By fostering their own AI knowledge, directors will be equipped to make informed decisions that align with legal standards and stakeholder expectations.

2. Bring in new perspectives 

Given the complexity and importance of AI ethics, many boards are looking at bringing on external experts to better oversee AI strategies. Be open to new insights and perspectives — both internal and external. While your chief technology officers (CTOs) and chief information security officers (CISOs) have valuable knowledge and familiarity with the organization's systems, which is key to understanding the risks and opportunities associated with AI, they may have blind spots. External consultants can provide an independent and objective perspective as well as best practices to help the board stay on top of responsible AI use.

3. Watch out for “AI-washing”

The recent boom in AI has led to instances of AI-washing — companies overstating or using vague or misleading language around their AI capabilities. AI-washing may be intentional or unintentional, as companies face pressure to attract investment or to quickly adopt AI. 

When companies falsely portray themselves as using advanced AI technologies without actually implementing them effectively, they may violate regulations that require transparency and accuracy in reporting technological capabilities. Not to mention, they risk damaging trust with customers and investors.

That’s why it’s important for a board to fully understand how their company is engaging with AI, as well as the various risks and opportunities. In addition to enhancing their knowledge through education and certifications, directors must incorporate AI into their organization’s bigger risk management picture. Additionally, by developing and maintaining policies around AI, boards can ensure adherence to regulations and maintain credibility, ultimately safeguarding their organization’s reputation and trustworthiness.

4. Address AI vulnerabilities in cybersecurity

Boards should consider AI risks in the context of cybersecurity, IT security and overall enterprise risk management. Given that 36% of board directors identified generative AI as the most challenging issue to oversee, it is crucial for boards to invest in specialized training and education to understand AI’s associated risks. Bringing in external expertise and meeting regularly with CISOs can help uncover risks and vulnerabilities that internal teams might miss. Additionally, boards should establish dedicated committees focused on AI and cybersecurity, and their organizations should utilize advanced tools for monitoring risk holistically. Regular audits and evaluations are essential to ensuring AI doesn’t introduce unforeseen risks.


5. Apply a principle-based approach to overseeing AI

Boards and leaders should acknowledge that feelings about AI vary widely; even the most dedicated AI evangelists may harbor fears about the technology. With generative AI, it can be tempting to quickly create an “acceptable AI use policy,” paste it into your policy management system, check the box and call it done.

But this inevitably leads to the policy going unread or misunderstood by the general employee population. It’s far more effective to derive a core set of critical principles that lay the foundation for meaningful AI policies.

For example, at Diligent, we have come up with these principles that guide all use of AI across all Diligent apps:

Principle 1: Never put customer or organizational confidential data in any unapproved AI tool

Unauthorized AI services — including free tools such as ChatGPT — may regurgitate confidential data entered into them; users should only use approved, protected AI tools that have been evaluated for security risks.
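
To make this concrete, here is a minimal, hypothetical sketch (not a Diligent feature) of how an organization’s internal tooling could enforce the principle: prompts are only sent to an allowlist of approved AI endpoints and are screened with rough patterns for obviously confidential content. The endpoint URL and patterns below are illustrative assumptions.

```python
import re

# Hypothetical allowlist of AI endpoints the security team has vetted.
APPROVED_AI_ENDPOINTS = {
    "https://ai.internal.example.com/v1/chat",
}

# Rough, illustrative patterns for data that should never leave the organization.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\b(?:confidential|internal only)\b", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # e.g., US Social Security numbers
]

def is_prompt_allowed(endpoint: str, prompt: str) -> bool:
    """Allow a prompt only if the endpoint is approved and the text
    contains no obviously confidential content."""
    if endpoint not in APPROVED_AI_ENDPOINTS:
        return False
    return not any(p.search(prompt) for p in CONFIDENTIAL_PATTERNS)

# A free, unapproved tool is rejected outright; an approved tool still
# rejects prompts that look confidential.
print(is_prompt_allowed("https://free-ai-tool.example.com", "Summarize this memo"))  # False
print(is_prompt_allowed("https://ai.internal.example.com/v1/chat",
                        "Internal only: Q3 board minutes"))  # False
```

A real deployment would pair a check like this with data loss prevention tooling and proper classification of documents, but the core idea — approved tools only, sensitive data stays inside — is the same.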

Principle 2: Humans are responsible and accountable for the outputs of AI

AI responses can be inaccurate or biased, so it is essential to thoroughly review and validate the information before relying on it for any decisions or actions.
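
As an illustration only (the class and function names below are assumptions, not part of any Diligent product), a simple human-in-the-loop gate can encode this principle in an internal workflow: AI output cannot be acted on until a named reviewer has signed off.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIDraft:
    """An AI-generated draft that must be reviewed before it is used."""
    content: str
    approved: bool = False
    reviewer: Optional[str] = None

def approve(draft: AIDraft, reviewer: str) -> AIDraft:
    """Record that a named, accountable human has reviewed the draft."""
    draft.approved = True
    draft.reviewer = reviewer
    return draft

def publish(draft: AIDraft) -> str:
    """Refuse to act on AI output that no human has signed off on."""
    if not draft.approved:
        raise ValueError("AI output requires human review before it can be used.")
    return draft.content

# The draft can only be published after a reviewer takes responsibility for it.
draft = AIDraft(content="AI-generated summary of the quarterly risk report")
approve(draft, reviewer="jane.doe")
print(publish(draft))
```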

Principle 3: Don't use AI for unethical or potentially harmful purposes

AI should not be employed in ways that could cause harm (e.g., generating spam) or in sensitive areas where the risks are too great (e.g., making personnel-related decisions).

To see the impact of ethical, purpose-built AI on your GRC processes, request a free demo of the Diligent One Platform today.
