
Responding to AI’s changing regulatory landscape: Legal and compliance perspectives

The regulatory environment around the era-defining challenge of artificial intelligence is moving at pace. As countries and industries seek to balance rapid and much-needed innovation with the novel and complex risks of artificial intelligence, businesses are grappling with the need to leverage AI in a way that will meet governance, risk and compliance requirements — even as those requirements remain uncertain.
Antony Cook, Corporate Vice President and Deputy General Counsel at Microsoft, and Nasser Ali Khasawneh, Global Head of AI at Eversheds Sutherland, recently joined us to share their perspectives and experience of helping organisations navigate the emerging AI era. We’ve captured some key insights below, and you can watch the full webinar on-demand here.
Setting the scene: Regulatory scale and complexity
The EU AI Act is the most high-profile example at present, but there is a raft of regulations in development worldwide. The OECD is currently tracking 61 countries in the process of developing AI policies. Alongside these sit a wealth of sector-specific initiatives (approximately 393 in progress) and governance authority programmes; the OECD is aware of 760 governance initiatives currently in the pipeline.
The scale, breadth and depth of regulatory attention confirm, if confirmation were needed, the importance being placed on ensuring the path to AI adoption has sufficient guardrails. Nevertheless, there is diversity in the way different jurisdictions are approaching the task.
Cook summarised the three main approaches:
- Safety first: The recognition that AI can be used to achieve malicious as well as positive outcomes is front and centre in some territories. The White House commitment on AI safety and the U.K.’s Bletchley Park AI safety summit are examples of this approach, bringing stakeholders together to discuss the threats of AI and how to contain them.
- Broad legislation: The EU AI Act is the most prominent example of an attempt to create broad legislation that covers a large number of issues that AI creates. It seeks to establish guardrails without compromising the progress that AI can deliver.
- Sector- or issue-specific approach: Countries without the resources for broad legislative development, or those wishing to assess the likely impact of AI in their territory before legislating, are addressing specific issues. For example, Japan has amended its intellectual property laws to address issues of copyright infringement in AI training data.
Some territories are employing a mix of approaches, or shifting between them as political leadership changes, such as in the U.K.
Harmonisation and international cooperation are critical challenges
AI regulation is a jurisdictional, nation-state-focused challenge (or supranational in the case of the EU), but ultimately a degree of harmonisation will be essential to help multinational organisations maintain a compliant approach, as Khasawneh explained: “We always have to work within jurisdictions, within national laws, but it's fair to say that AI knows no boundaries. It is a technology that flies across boundaries so the need for harmonization could not be greater as we consider various aspects of law that are affected by AI.”
He welcomes the U.K.’s initiative at Bletchley Park of bringing together a number of countries and organisations to work towards standardisation and harmonisation, wondering: “Will we move towards a global body that is the AI equivalent of the World Intellectual Property Organisation, for example?”
However, Khasawneh acknowledges that geopolitical issues are likely to be a barrier to international cooperation, preventing any kind of global treaty on AI.
Key AI themes emerging in legal departments
As Global Head of AI at Eversheds Sutherland, Khasawneh is well placed to give an overview of the common themes and issues on which clients are seeking external counsel. These include:
- Operational and policy guidance: Clients want help devising governance policies guiding employees on the do’s and don’ts of AI use and how they are expected to minimise harmful consequences when employing or developing AI.
- Contracting support: How to structure contract terms with partners and suppliers, taking into account the nuances of artificial intelligence and GenAI.
- Interacting with IP laws: Organisations using and developing their own AI want to understand legal risk in terms of IP rights and copyright, and whether the platforms they use or develop might infringe those rights.
- Interacting with data law: Similarly, businesses seek to understand the data privacy risks introduced by AI systems and providers, in order to avoid infringement and protect any proprietary data that is exposed to AI.
- Understanding bias and other risks in employment law: Companies want to unlock the benefits of AI to support employees while mitigating risks around worker rights and bias in applications such as employee screening.
These topics demand a wide range of expertise and, because few organisations have the depth of experience in this new and expanding area, they underline the necessity of seeking external advice. Alongside that advice on practical elements of AI adoption, businesses need to focus on developing their own framework for responsible AI governance.
Responsible AI governance: Microsoft’s approach
Cook shared how Microsoft responded to the challenge of responsible AI. The company’s approach was rooted in the realisation that, while the engineers and developers who create AI systems and applications think about the technology through a certain lens, it is vital to go beyond these specific perspectives to establish globally applicable parameters for its ethical application and use.
Microsoft convened a multi-disciplinary and diverse set of stakeholders to explore responsible AI development and use. It included lawyers, humanists, sociologists and computing engineers with the goal of establishing how to ground technology development appropriately.
The result was a set of principles focusing on reliability, safety, privacy, security, accountability and transparency. These amount to an AI standard that is applied across the business and operationalised through engineering practices that put each principle into effect.
Once principles and frameworks have been developed, the next challenge is implementation, and leadership is critical.
Leading on AI: Board accountability
The EU AI Act already includes obligations for AI literacy among boards and leadership teams. Khasawneh believes: “AI accountability is going to become an absolute requirement for boards to comply with, and for CEOs to lead with.” He has witnessed growing focus from boards: “One thing that's changed in the past 18 months is the attention that boards are putting on making sure they have the right approach to governance and that it reflects the implication of the technology across their organisation because this is a technology which is changing go-to-market, it's changing research and development, it's changing supply chain management, it's changing employee productivity and workforce development. So it has a very broad implication across organisations, which I think means that boards are just much more focused.”
Cook acknowledges the scale of the task boards face in assimilating all the information they need to move forward on AI, but cautions that trying to figure everything out before acting is not a competitive approach: “The technology is so important to competitive differentiation and opportunity, so companies need to be involved in AI. The question is how do they do that appropriately?”
He advises boards to draw on the expertise of large companies that are spearheading AI, like Microsoft, and also trade associations: “There's a lot of the trade associations, which are creating the sets of materials you can leverage in order to be able to get yourself across the issues. Making sure that you're aware of what the technology is doing and how it's being used in your organization is a big way that you can manage the sorts of risks that you may be exposed to.”
Perhaps the greatest business risk around AI right now is the risk of doing nothing. As Cook concluded: “You can decide how you'd like to approach it, but what I think every company needs to do is have a considered approach, decide what their ambitions are, and then start a journey.” Sitting and watching and waiting is not an option, because “this is not a fad, it's not going away.”
For more insights including the panel’s thoughts on the role of General Counsel, the interplay between AI and privacy regulation, and the issue of trust in AI, watch the on-demand webinar.