
IN-DEPTH: How can investors best oversee AI-related risk?

September 26, 2025
6 min read
Antoinette Giblin

Editorial Manager

This article first appeared on Diligent Market Intelligence's Voting newswire. To register for a demonstration and trial of the product, click here.

An interview with Caroline Escott, head of investment stewardship at Railpen, on the launch of a new framework to guide investors in addressing the growing risks and opportunities posed by artificial intelligence.

Railpen's new framework on AI oversight was rolled out in early August. What drove the initiative?

The framework builds on Railpen's longstanding focus on responsible technology as a system-wide and financially material issue. This specific initiative was driven by the growing recognition that AI's disruptive potential presents both significant opportunities and considerable long-term uncertainties, and that good AI governance practices are essential. As stewards of capital, we have a responsibility to our members to proactively identify and manage the emerging risks and opportunities that could impact our portfolio companies, as well as broader financial markets. While our work is driven by what will make the most difference to financial outcomes for members, our decision to act on this topic is also driven by rising member interest in the use of emerging technologies, with insights from our 2024 member survey confirming that responsible technology has become a higher priority for many.

A study cited in the report found that over 60% of S&P companies believe they face material risks related to AI. Can you expand on some of these risks?

The AI value chain introduces a wide spectrum of risks, spanning from the inputs that feed AI systems to the outputs they generate. Our report identifies 10 key risks to companies developing or deploying AI tools, including infringement of intellectual property rights, litigation due to adverse generated content, and liabilities arising from poorly overseen AI-driven decisions. Beyond these direct risks, AI can also amplify systemic threats.
For example, AI’s integration into core business functions is significantly expanding companies’ cybersecurity attack surfaces, while the use of AI by malicious actors can enable new and more sophisticated attack methods. The consequent risks of AI model exploitation and data breaches underscore the need for robust governance and technical controls. It's important to note that as AI continues to evolve, so too will the nature and scale of the risks it presents.

The report references different branches of AI, including the fast-moving field of generative AI, where only 9% of companies feel prepared to manage potential threats. What challenge does this pose for investors?

As the long-term capabilities of AI are virtually unknown, it is particularly important for companies to establish effective governance frameworks from the outset in order to prepare for these uncertainties. However, in a landscape where generative AI is advancing at pace, investors are confronted with the reality that many companies remain unprepared to manage its associated risks or harness its opportunities. Investors face the challenge of identifying where AI risks and opportunities are most financially material for their portfolio companies, and therefore where they should prioritize stewardship efforts. In our report, we share an approach to prioritizing stewardship efforts by assessing AI significance. Key factors include a company's role in the AI value chain, its operational dependency on AI, and how its sector uses AI.

How does the framework set out to support investors in assessing their portfolio companies’ approaches to AI risk management?

To support investors in assessing the materiality of AI risks, the framework translates high-level principles of responsible AI into practical expectations across four key pillars: governance, strategy, risk management, and performance reporting.
It is designed to be applicable not only to large technology firms but to a wide range of AI developers and deployers. Companies with medium or high AI significance should be assessed against these pillars, with more advanced expectations for those with greater exposure. While we recognize that not all companies will be in a position to meet every expectation immediately, implementation should reflect organizational maturity and the nature of AI deployment. Investors can reasonably expect to see a phased and deliberate approach to embedding these practices over time as companies progress in their AI governance journey.

You have called on fellow investors to engage with their portfolio companies on AI risks. What type of risk controls can be put in place for the responsible use of AI?

The AI Governance Framework (AIGF) outlines a comprehensive set of internal risk controls spanning governance, strategy, risk management, and external reporting. From a governance standpoint, companies should ensure senior and board-level oversight of AI risks, alongside investment in training to build organizational capacity around AI principles, responsibilities, and emerging threats. Strategically, firms are expected to assess how AI may reshape their business models, identifying both risks and opportunities. In terms of risk management, robust controls involve mapping AI-related risks across operations and developing targeted action plans to address the most material concerns. Finally, external reporting plays a vital role in transparency: companies should disclose the steps taken to mitigate AI risks and report incidents in a timely manner, alongside regular engagement with shareholders, who can offer governance expertise.

Are there any sectors that you see as particularly vulnerable?

High AI significance exists in certain sectors, including IT, healthcare and finance.
This is due to the extensive use of sensitive data in AI applications, alongside the heightened impact of decisions made on the basis of AI-driven insights. For other sectors, AI risks can’t be generalized: dependence on AI varies considerably between companies, meaning every sector could contain companies with high, medium or low AI significance. This underscores the need for a nuanced approach to engagement, one that considers company-specific exposure and preparedness rather than relying on sector-level assumptions.

As regulation has yet to keep pace with the growth of AI, how can investors work to bridge the gap?

Effectively managing systemic risks like AI requires collective action. Railpen’s AI Governance Framework is designed not only to strengthen our internal stewardship practices but also to foster broader industry alignment. We encourage investors to use the framework as a foundation for assessing and engaging with companies at the forefront of AI development and deployment. The report offers practical tools, such as example engagement questions and decision-useful disclosures already being published by companies, to support meaningful dialogue. As governance experts, investors can bring particularly valuable insights into their conversations with companies, helping them work in true partnership.

For more on the report, published in partnership with Chronos Sustainability, click here.