“FOMO is not a business strategy”: Navigating ethical, leadership and operational challenges in AI and governance
The path to adopting artificial intelligence (AI) raises numerous complex and interdependent challenges for governance, risk and compliance (GRC) professionals. Its sheer scale and potential demand close attention, while its ethical implications add further dimensions to decision-making. As a result, it is fair to say that AI is generating excitement and concern in equal measure, often in the same people, as boards and business leaders confront the dual challenges of how to govern the deployment of AI in their organisations and how they might use it to power governance activities.
This combination of excitement and concern was powerfully evident in the exclusive panel session offered to delegates attending the London leg of the Diligent One Platform World Tour. Michael Lucas of Brave Within Consultancy was joined by his colleague Carolyn Clarke, an experienced board director and GRC expert, alongside Annabel Gillard, a member of the International Advisory Council of the Institute of Business Ethics; Simon Persin of Turnkey Consulting; and Amra Mirza, Chief Auditor, DigitalX and Corporate Functions, NatWest Group.
Leading with a risk-based approach to AI adoption and governance
Every organisation will ultimately engage with AI, whether by directly developing AI-powered products and services, or by using products from third parties that utilise AI. Indeed, the accessibility of generative AI and the introduction of AI features across Microsoft and AWS enterprise products, for example, mean most companies are already using AI — they just may not know it at a corporate level.
With regulation on the horizon in the shape of the EU AI Act, ungoverned, unsanctioned and unmanaged AI use represents a key corporate risk. It is therefore prudent to consider the ethical and compliance aspects of AI deployment at the point of adoption, building AI into the organisation’s risk management framework in the same way as any other risk.
Carolyn Clarke advises: “Governance by design is important — it is the same principle of thinking ahead as with all good governance. Often, with big technology developments and implementations, the thought process about how to embed it in a way that is safe, efficient, and appropriate tends to come after the enthusiasm for the ‘new shiny thing’.”
Annabel Gillard agrees, noting that businesses may be so preoccupied with the perceived urgency to implement AI — because all their competitors seem to be doing it — that they don’t adequately consider important ethical aspects. As she memorably put it: “FOMO [fear of missing out] is not a business strategy.”
These ethical aspects stem from inherent features of AI, including the significant risk that bias in the data used to train it can result in prejudices being sustained and amplified within AI-based decision-making. Gillard advises businesses to use the OECD’s five principles for trustworthy AI development as a starting framework when designing a governance environment for AI use in the business. These state that AI use and development should promote:
- Inclusive growth, sustainable development and wellbeing
- Human-centred values and fairness
- Transparency and explainability
- Robustness, security and safety
- Accountability
Incorporating these principles, in combination with the terms of the EU AI Act, will help organisations understand where guardrails and controls must be established during AI deployment.
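To illustrate how the principles might translate into concrete guardrails, here is a minimal sketch of an AI controls register, assuming a simple Python representation. The principle names come from the OECD list above; the example controls and the `unaddressed_principles` helper are hypothetical, not a prescribed framework.

```python
# Illustrative sketch only: one way to record which guardrails cover
# which OECD principle for a given AI deployment. The control entries
# and the gap-check helper are hypothetical examples.

OECD_PRINCIPLES = [
    "Inclusive growth, sustainable development and wellbeing",
    "Human-centred values and fairness",
    "Transparency and explainability",
    "Robustness, security and safety",
    "Accountability",
]

# Hypothetical register: principle -> controls established so far
controls_register = {
    "Human-centred values and fairness": ["Bias testing of training data"],
    "Transparency and explainability": ["Decision logs reviewed quarterly"],
    "Accountability": ["Named executive owner per AI system"],
}

def unaddressed_principles(register):
    """Return principles with no recorded controls, i.e. candidate gaps."""
    return [p for p in OECD_PRINCIPLES if not register.get(p)]

for gap in unaddressed_principles(controls_register):
    print(f"No controls recorded for: {gap}")
```

Run as written, the sketch flags the two principles with no recorded controls, mirroring the kind of gap analysis a GRC team might perform before deployment.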
Looking beyond the hype towards realistic AI adoption
AI is the hottest topic in business right now, but amid all the excitement it is easy for companies to get carried away — potentially in the wrong direction. Amra Mirza advises: “You’ve got to look at AI not as a trend, but in terms of what value you can extrapolate from it. Don’t just do it because everyone else is doing it. Instead, think about how to make it meaningful for the business.”
Simon Persin agrees, warning: “Innovating for innovation’s sake is a fool’s errand. You end up with half-baked vanity projects that are more likely to fail. And if it doesn’t fail quickly, people get overly indoctrinated in one (wrong) approach, and that’s also problematic. You need to know when to call a halt.”
Intentionality, culture and information challenges on the AI journey
In light of the above, how should businesses tackle what, for most organisations, is a highly exploratory phase?
From an AI governance perspective, Clarke advises organisations to take an intentional approach that starts by identifying what they are trying to achieve with AI through the lens of how the organisation operates. Leaders must then determine an approach within the context of the risk appetite of the wider organisation, analysing how AI influences and interfaces with that appetite. From there, the business must identify what it needs to document and record, and where controls should be embedded and monitored.
As organisations continue their AI adoption journey, they are also likely to encounter barriers in diverse areas, from legal and regulatory issues to practical and cultural challenges.
These cultural challenges, in particular, have the potential to derail AI deployments. Mirza advises that boards and leadership teams have a major role to play in managing AI implementation, saying: “Foster a culture of AI adoption. The more familiar the employees are with AI, the more open they’ll be to adoption […] Leaders need to role model this and embody some of the behaviours around collaboration and trust. The more leaders talk about it and provide transparency and education around it, the more staff will buy into it.”
Clarke points out that delivering this leadership represents a challenge if the board is not getting all the information it needs, saying: “We have to come back to the fact that as board directors we are responsible. This often gets lost in big technology evolutions, but whatever happens, the board has to take responsibility for the things that are most strategically important. That’s why we mustn’t get drawn into siloes around how AI is being implemented. There are multiple stakeholders from CEO and CIO to legal teams, all with different viewpoints. The risk is that AI becomes a black box where directors don’t fully understand the risks being taken that could have huge consequences for the business.”
A critical moment for AI integrity
Summing up the historic significance of AI adoption and the responsibility resting on the shoulders of business leaders, Gillard concludes:
“We are at a moment in time at the beginning of a big transition and transformation, as digital and AI become a major part of our lives and business. We know from case studies around the adoption of social media that the way technology is set up and incentivised commercially can have unintended consequences. For example, the model for social media commerce is based on engagement, and this has driven technology addiction.”
She continues: “How we set AI up will create commercial incentives for the kind of AI we get in the future. We need to raise our game and make sure human impact is at the heart of how we develop AI.”