The Diligent team

Diligent Connections 2025: How AI is reshaping GRC practices in APAC

August 29, 2025

Diligent Connections returned to Sydney, Singapore and Manila for its third year, bringing together APAC’s GRC leaders for a focused, real-world look at artificial intelligence’s (AI) growing impact. The event convened expert panellists, including CISOs, legal counsel, policy advisors and board directors, who offered practical and strategic insights into how AI is transforming the role of GRC professionals, from detecting risks to delivering faster, smarter decisions, all while balancing AI innovation with compliance and regulatory expectations.

As data volumes surge and compliance demands intensify, AI is becoming a powerful force in governance. Through expert panels and live demonstrations, attendees explored how to apply AI responsibly, align it with business goals, and keep human judgment at the centre of oversight.

Navigating AI governance: Balancing innovation, risk & compliance

Across all three Connections events, the panellists identified growing threats such as ‘shadow AI’, AI tools used by employees without approval, and ‘AI creep’, where AI is integrated into legacy systems without oversight. One panellist highlighted that AI systems can sometimes operate with only 60% accuracy, producing erroneous results or ‘hallucinations’. This inaccuracy can obscure critical information from leaders, introducing unforeseen risks and vulnerabilities for sensitive data.

Data sovereignty emerged as another critical concern. Australia’s ban on DeepSeek, a China-based AI tool, illustrated the danger of foreign-hosted tools that may mishandle local data. Similarly, panels in Singapore and the Philippines raised concerns about cross-border data transfers in the face of evolving regulations. To mitigate these risks, GRC teams across the region must have a clear understanding of where their data resides, how it is used, and the governing rights that apply to it.

The board’s role in AI governance

As AI risks escalate, governance has become a critical concern for boards. Senior executives must now prioritise AI governance, developing a comprehensive strategy that guides and controls AI use across their organisations.

Panellists emphasised that current laws already regulate AI use. For instance, in Australia, directors' duties under the Corporations Act, along with the Privacy Act and anti-discrimination and copyright laws, are applicable. Singapore's Personal Data Protection Act and Model AI Governance Framework promote responsible AI practices, while the Philippines' Data Privacy Act and AI advisories focus on data rights, transparency, and accountability.

Boards must help their companies keep up with changing regulations and prepare for new legal requirements. This involves not only understanding current laws but also anticipating future changes that may impact their AI governance strategies. By doing so, boards can help their organisations navigate the complex AI landscape effectively.

Establishing a robust AI governance framework

To build a strong AI governance framework, organisations must prioritise AI literacy. Currently, a significant gap exists, with 64% of organisations not offering AI training and 45% having no plans to do so in the near future.

Effective AI governance is rooted in ethical principles. Panellists advocated grounding AI governance in the principles of FATE: Fairness, Accountability, Transparency, and Explainability. To uphold these principles, leaders should establish cross-functional task forces, implement top-down strategies, and oversee AI use within enterprise risk and ESG frameworks. By taking these steps, organisations can ensure responsible AI use and mitigate associated risks.

Reimagining integrated assurance with AI: From risk mitigation to strategic insight

Leveraging real-time insights for enhanced governance

Organisations are now exploring AI-powered tools for data-driven audits, continuous risk monitoring, and effective compliance. By leveraging hands-on demos and peer case studies, they are adopting a more practical approach to GRC. The shift towards real-time assurance and predictive analytics enables organisations to identify potential issues before they escalate. As noted by the panellists, auditors must continue working towards integrated assurance.

The integration of AI brings numerous benefits, including richer insights, faster audits, and more effective detection of anomalies and fraud. By adopting AI, organisations can reduce human error by 30%, compliance risk by 50%, review time by 60%, financial losses by 40%, and incident response time by 80%.

AI enables 100% population testing, where it reviews every transaction, rather than relying on limited samples. Traditionally, auditors reviewed 20 to 30 samples; now, they can test every entry in a dataset, revealing issues at scale. Additionally, AI links structured and unstructured data to identify trends and risks, empowering auditors to shift from retrospective reviews to predictive assurance and strategic conversations with management.

Evolving role of GRC professionals in the AI era

The future of GRC professionals is deeply intertwined with audit teams’ transition from a policing function to a consultative role as integrated risk partners. Panellists projected that by 2035, 85% of audit roles will require AI proficiency, making training a strategic imperative.

APAC companies are upskilling auditors across the entire lifecycle, from planning and risk assessment to reporting, often starting small with low-risk pilots to build momentum. A cultural shift is necessary, where teams begin to ask, ‘Why not use AI for this audit?’ This mindset change is crucial for embracing AI’s potential. However, adoption must be balanced with ethics and governance considerations, including copyright risks associated with third-party content and the rapid obsolescence of AI tools. GRC professionals must choose tools that align with their data strategy and legal obligations.

Importantly, GRC professionals now wear two hats – they apply AI to improve their work, and they assess how management uses AI. They must champion AI, but at the same time, act as professional sceptics. This dual role requires regular documentation of how AI applies judgement and how human oversight validates it.

Responsible AI is a strategic imperative

AI is a force multiplier if governed proactively. Organisations must adopt a Responsible AI approach that touches every part of the business, shaped by an understanding that AI strategy and data strategy are intertwined. Organisations need to know what data trains their model, what data monitors it, and how it interacts with legacy systems.

As panellists noted, AI governance is about being prepared, not perfect. Organisations that embed AI governance gain a competitive edge and avoid blind spots that can lead to legal or reputational damage.

See the difference responsible AI can make

Discover how Diligent AI streamlines board reporting, drives audit efficiency, and elevates governance.

Request your demo today.



© 2025 Diligent Corporation. All rights reserved.