By Lindsay Walker, Assistant General Counsel

Diligent Connections: Balancing AI innovation and ethics

September 17, 2025
[Image: Professionals discussing AI and ethics at Diligent Connections, Sydney]

It’s not every day you get to moderate a discussion that feels as timely and essential as our recent panel at Diligent Connections, Sydney, on “Navigating AI Governance: Balancing Innovation, Risk, and Compliance”. Artificial intelligence (AI) has firmly entered the mainstream, yet the pace of change leaves many of us wondering how to keep up while staying compliant and responsible.

Having worked in AI governance, I’ve seen firsthand how complex this space can be. Here are some of my key takeaways from our panel: insights I hope will help others begin or strengthen their own AI governance journey.

Why explainability and AI literacy matter

Throughout the panel, two recurring themes stood out: explainability and AI literacy.

Explainability is exactly what it sounds like. It’s the ability to clearly explain how an AI system works, including what data it uses, how it makes decisions and where human oversight fits in. It’s critical for building trust, ensuring accountability and meeting privacy and security requirements.

But explainability alone isn’t enough. You need to make sure people across your organisation fully understand the tools they’re using.

This is where AI literacy comes in. Whether you’re a board member, legal advisor or employee, you need to understand AI’s different facets to use it responsibly. This includes knowing whether a tool is designed for enterprise use or is more consumer-oriented, as well as its associated risks, benefits and limitations.

The takeaway here? Education is the foundation for transparency and compliance. People need to know when, where and how to use AI tools, and to be aware of their limitations. Without this shared understanding, even the best governance frameworks can fall short.

Getting started with AI governance: 5 practical steps

One of the most valuable parts of the panel was the practical advice on how organisations can get started with AI governance. Here are the five actionable steps that emerged:

1. Audit your AI use and literacy

Start by mapping where AI is already in use across your organisation, including any “shadow AI” tools that may have flown under the radar: AI tools or systems used without official approval, oversight or awareness from leadership or IT teams. Employees may adopt such apps or platforms for convenience or productivity, but without proper governance they can introduce significant risks around security, privacy and compliance.

Also, keep a close eye on how you purchase AI tools, conducting proper due diligence on your vendors and maintaining ongoing oversight. At the same time, assess your teams’ AI literacy levels and the risk appetite of your business.
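To make this step more tangible, here’s a minimal sketch, in Python, of what a simple AI-use register might look like once the audit is done. Everything in it (the field names, the example tools, the risk levels) is a hypothetical illustration rather than any particular product or standard; adapt the schema to your own risk taxonomy.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AIToolRecord:
    """One entry in an organisation-wide register of AI use (hypothetical schema)."""
    name: str                   # the tool, e.g. an internal chatbot or vendor platform
    owner: str                  # team accountable for the tool
    vendor: str | None          # None for tools built in-house
    enterprise_grade: bool      # designed for enterprise use vs consumer-oriented
    approved: bool              # passed official review, or "shadow AI"?
    risk_level: RiskLevel
    data_categories: list[str] = field(default_factory=list)  # e.g. ["customer PII"]


def shadow_ai(register: list[AIToolRecord]) -> list[AIToolRecord]:
    """Return tools in use without official approval or oversight."""
    return [tool for tool in register if not tool.approved]


register = [
    AIToolRecord("MeetingSummariser", "Legal", "VendorX", True, True,
                 RiskLevel.MEDIUM, ["meeting transcripts"]),
    AIToolRecord("FreeChatbot", "Marketing", "ConsumerAI", False, False,
                 RiskLevel.HIGH, ["customer PII"]),
]

for tool in shadow_ai(register):
    print(f"Unapproved AI tool: {tool.name} (risk: {tool.risk_level.value})")
```

Even a register this simple makes the due-diligence questions concrete: who owns each tool, who supplies it, and whether anyone has actually approved it.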

2. Engage your customers

If you’re developing AI products, consult your customers early. Understand their concerns, expectations and willingness to embrace AI-driven features.

3. Bring your board along

Share findings from your audit and customer conversations with your board to get buy-in on your AI strategy and the ethical principles you’ll uphold.

4. Leverage existing frameworks

Resources such as the OECD’s framework for classifying AI systems, ISO/IEC 42001 and the Australian Government’s AI Ethics Principles provide excellent starting points for developing a governance framework.
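As a hedged illustration of how a framework can shape your records, the sketch below profiles a hypothetical system against the five dimensions of the OECD’s classification framework (People & Planet, Economic Context, Data & Input, AI Model, Task & Output). The field names and example values are my own assumptions, not an official OECD schema.

```python
from dataclasses import dataclass


@dataclass
class OECDProfile:
    """Illustrative profile along the OECD classification dimensions.

    The dimension names follow the OECD Framework for the Classification
    of AI Systems; the free-text values below are hypothetical.
    """
    people_and_planet: str   # who is affected, and how
    economic_context: str    # sector and business function
    data_and_input: str      # provenance and sensitivity of the inputs
    ai_model: str            # model type and degree of autonomy
    task_and_output: str     # what the system does with its output


contract_review = OECDProfile(
    people_and_planet="internal legal staff; low direct consumer impact",
    economic_context="professional services; contract triage",
    data_and_input="confidential contracts; no special-category data",
    ai_model="large language model; a human reviews every output",
    task_and_output="flags risky clauses for a lawyer to check",
)

print(contract_review)
```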

5. Tailor your framework to your business

No two organisations are alike, and that’s especially true when it comes to AI governance. Every organisation faces different risks, operates with unique goals and adopts AI at its own pace. A one-size-fits-all approach simply won’t work in this space. Tailor your governance framework to your specific context, use cases and risk appetite.

Balancing innovation and compliance: A moving target

Striking the right balance between innovation and compliance is easier said than done. The key here is to develop a robust yet adaptable framework that can evolve in tandem with technological and legal changes. Here are some of the more practical strategies our panel shared, which any organisation can adopt:

  • Build AI ethics committees to evaluate new AI projects from multiple perspectives, and cross-functional working groups to ensure compliant adoption
  • Ensure top-down support from executives and boards
  • Identify and empower AI champions throughout the organisation
  • Focus on continuous monitoring and auditing, as AI governance isn’t a one-and-done exercise (a minimal monitoring sketch follows this list)
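Building on the register sketched earlier, here’s one illustration of what continuous monitoring could look like in practice: a recurring check that flags unapproved tools and overdue reviews. The cadence and the checks themselves are assumptions made for the example, not a prescribed audit programme.

```python
from datetime import date, timedelta

# A hypothetical register: each entry records approval status and the
# date of its last governance review.
register = [
    {"name": "MeetingSummariser", "approved": True,
     "last_reviewed": date(2025, 3, 1)},
    {"name": "FreeChatbot", "approved": False,
     "last_reviewed": date(2024, 11, 15)},
]

REVIEW_INTERVAL = timedelta(days=180)  # assumed six-monthly review cadence


def governance_findings(register, today=None):
    """Yield human-readable findings: unapproved tools and overdue reviews."""
    today = today or date.today()
    for tool in register:
        if not tool["approved"]:
            yield f"{tool['name']}: in use without approval (shadow AI)"
        if today - tool["last_reviewed"] > REVIEW_INTERVAL:
            yield f"{tool['name']}: review overdue (last reviewed {tool['last_reviewed']})"


for finding in governance_findings(register):
    print(finding)
```

Running a check like this on a schedule, and acting on the findings, is what turns a framework document into an operating practice.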

Remember: Innovation shouldn’t come at the expense of fairness, accountability or security. While your framework needs to be flexible, you shouldn’t bend to the point of compromising on these core values or failing to meet your adopted ethical principles.

A case for metrics and future-proofing

It’s not enough to simply set up an AI governance framework – you also need to measure whether it’s working.

During the panel, we explored some practical metrics that can help organisations track their progress and align their AI use with both business goals and ethical principles. Here are a few to consider, with a simple scorecard sketch after the list:

  • Can you keep explaining your AI models to customers in plain language as the technology develops?
  • Are your AI tools delivering as promised while maintaining privacy, security and compliance?
  • Are you regularly revisiting customer expectations and updating your practices?
  • Are you applying human-centred design principles to ensure AI tools are built around real user needs?
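One way to operationalise questions like these is a simple scorecard revisited on a fixed cadence. The sketch below is purely illustrative: the metric names and the pass/gap framing are my own assumptions, to be replaced with measures that fit your business and ethical principles.

```python
# A hypothetical governance scorecard: each metric pairs one of the
# questions above with the answer recorded at the latest review cycle.
scorecard = {
    "plain-language model explanations kept up to date": True,
    "tools delivering as promised with privacy, security and compliance intact": True,
    "customer expectations revisited this quarter": False,
    "human-centred design applied to new AI features": True,
}


def score(card: dict[str, bool]) -> float:
    """Share of governance metrics currently satisfied."""
    return sum(card.values()) / len(card)


for metric, ok in scorecard.items():
    print(f"[{'PASS' if ok else 'GAP '}] {metric}")

print(f"Governance score: {score(scorecard):.0%}")
```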

Looking ahead, I believe businesses must stay firmly focused on a human-rights-centred approach to AI ethics. As AI technologies continue to advance rapidly, ethical principles such as fairness, transparency, safety and accountability will remain relevant and provide a guiding light.

By grounding our approach in strong ethical principles, we can better navigate uncertainty and change. We must prepare for emerging challenges, such as fully autonomous decision-making that involves no human in the loop. It may sound futuristic now, but we’re already asking questions about accountability in highly autonomous systems. In time, this may require entirely new legal frameworks.

If there’s one thing I’d like readers to take away from this discussion, it’s this: put ethics at the heart of your AI governance. Technology will continue to evolve. By prioritising fairness, privacy, accountability, transparency and the elimination of bias, we can develop AI systems that not only comply with laws but also serve our communities responsibly. That’s the kind of future we can all work toward.

Make your next big move with confidence

Elevate your board’s AI literacy and ethical oversight. Discover how Diligent empowers directors and executives to govern emerging technologies with confidence, transparency and accountability.

Book a demo today to see how your leadership team can set the standard for responsible AI governance.
