Diligent Q&A – Dominique Shelton Leipzig on compliance and trust
Dominique Shelton Leipzig, Privacy & Cybersecurity Partner at Mayer Brown, discusses why compliance is critical to driving trust, and how organizations can balance AI and ethics.
Tell us about Mayer Brown and your role there.
I am a member of Mayer Brown's Cybersecurity & Data Privacy practice, and I founded and lead the firm’s Global Data Innovation Team. The GDI team is the first practice to be devoted to C-level and board advisory services for data leadership with emerging technologies. Mayer Brown is a global law firm, uniquely positioned to advise the world’s leading companies in the tech, healthcare, life sciences and financial industries on their most complex deals and disputes. We help organizations future-proof their investments in tech by enabling them to see around corners, looking at legislative trends around the world to identify what they need to be aware of to make their tech programs amazing.
You were recently recognized as a Modern Governance 100 leader in the Compliance & Ethics category. Tell us what this award means to you and why compliance is critical to driving trust.
To be recognized by Diligent as a Modern Governance 100 recipient is a transformational honor. The recognition aligns with my personal values and passion for giving companies the insights needed to drive real, measurable impact within their organizations.
Compliance can generally be seen as a check-the-box exercise, but I have a different take, as I discuss in my forthcoming Forbes book titled “Trust. Responsible AI, Innovation, Privacy and Data Leadership” that comes out on December 26th! Looking at the digital legislative frameworks, there is a huge opportunity for companies to build trust with their customers, shareholders and employees — particularly as it relates to AI, cyber and privacy. After all, our legislatures have gotten together with Large Language Model (LLM) experts to put the appropriate guardrails in place so that companies and people can use this technology safely and reliably. Trusted companies are proven to outperform their peers by 400%, so I’m excited to bring this message about data leadership through trust to our CEO and board community.
What is your advice to organizations looking to build a culture of ethics internally?
Keep a pulse on legislative trends around the world, whether or not these laws have become final yet. Again, this is especially true for emerging technology, like generative AI, where there is currently draft legislation across 6 continents and 78 countries. This is the perfect time to begin to conform operations to those trends ahead of time. There’s the old saying, "if you fail to plan, you plan to fail," and in this case, nothing could be more true. Companies that map to the future and look at legislative trends are some of the most successful in the world.
What do organizations and their leaders need to keep in mind as they explore AI and balance the ethics around those initiatives?
AI is going to bring $7 trillion to our global economy in the next 10 years — the opportunity for our society is enormous! On the other hand, AI has risks. The technology isn’t new (generative AI has been around for the past 10 years or so) but what is new is the commercialization of generative AI.
As I mentioned earlier, legislatures around the world have met with LLM experts, researchers and AI experts to determine the best way to deploy this technology safely. This has been very well codified in trustworthy AI legal frameworks. Uniformly, they call for companies to rank their AI according to risk categories: prohibited, high risk, limited risk and minimal risk. The meat of governance comes into play when you’re dealing with high-risk AI. There are over 50 examples of high-risk AI, but these generally include things that involve children, healthcare, life sciences, financial and other critical infrastructure, employment or consumer behavior.
Governance for AI is actually a great thing for businesses, because it allows them to discover if there are any issues with their operations from the get-go. If you are dealing with a high-risk AI, governments want companies to embed continuous testing, monitoring, mitigation and auditing into the AI itself, both before and after deployment. We missed the opportunity to build our tech stacks in accordance with regulatory expectations for cyber and privacy with Web 1.0 and 2.0, and we know now that hanging back and waiting doesn’t change the law one iota. Now, organizations have a golden opportunity to build AI from the ground up to align with forthcoming regulations, and in so doing reap the benefits of being trusted digital partners.
What are some of the most high-risk areas organizations are overseeing today?
Aside from AI governance, which we have already discussed, number two is cyber. Organizations need to decide what poses a material risk to their operations, revenue and reputation in terms of cyber, well in advance of an attack. We are also seeing growing regulation and legislation around cyber: there are numerous cyber laws and regulations around the world, like NIS2 and DORA in the EU and the SEC and NY DFS cyber rules in the US. In addition, 161 countries around the world have data protection laws similar to the GDPR. With these regulations, the board plays an increasingly prominent role in overseeing cyber risk. Boards need to have a data-driven view of cybersecurity performance, and to build their own competency around cybersecurity (as well as emerging tech, like AI) in order to have informed conversations with CISOs and management.
Learn about how Diligent can help you stay on top of regulatory compliance obligations, prioritize your response to changes in regulations and centralize compliance management here.