Phil Lim
Director, Product Management

Addressing the challenges of AI and cyber resilience regulations

June 28, 2024

Artificial intelligence (AI) and cyber resilience have become top-of-mind for regulators, who are calling on organizations to adopt new tools and systems in a responsible and secure manner. Protecting sensitive data and guarding against potential breaches are critical in today's high-risk business environment.

Diligent's 2024 midyear guidance provides a roadmap for board members and executives to navigate the complex and evolving landscape of AI and cyber resilience regulations. Our in-depth analysis of these regulations helps you understand the key issues and make informed decisions to ensure compliance and long-term success.

As technology continues to transform industries and introduce new risks, organizations must remain vigilant and proactive in addressing AI and cyber resilience challenges. Leaders have a duty to remain informed if they want to effectively mitigate risks, safeguard their operations and seize the opportunities presented by these emerging technologies.

AI: A double-edged sword of innovation and risk

In boardrooms across the globe, discussions on digital transformation inevitably focus on AI. Governments are monitoring AI's rapid development, aiming to balance its potential benefits and risks. According to Dale Waterman, Solution Designer, EMEA Market at Diligent, there is a critical need to create a regulatory environment that fosters AI innovation and commercial activity while ensuring its use is responsible, ethical and safe.

The EU has recently made strides in this area with the approval of the AI Act by the Council of the EU. This legislation seeks to position Europe as a leading global hub for AI excellence, aligning the development and deployment of AI technologies with European values and regulations. The Act establishes a unified framework for AI usage and supply within the EU and introduces a risk-based classification system for AI systems to assess potential risks related to health, safety or fundamental rights of EU citizens.

Additionally, the AI Act outlines specific regulations for General Purpose AI (GPAI) models, particularly those that present systemic risks. This pioneering legislation is expected to set a precedent for AI regulation in other global jurisdictions, guiding how AI technologies are governed to ensure safety and adherence to ethical standards.

One key trend is a particular focus on data privacy and protection. The European Union's General Data Protection Regulation (GDPR) is a leading example, setting strict standards for the collection, processing and storage of personal data. Organizations that operate in the EU or handle the personal data of EU citizens must comply with the GDPR or face heavy fines and reputational damage.

A second key trend is the push for algorithmic transparency and fairness. Regulators are increasingly requiring that AI systems be transparent and explainable, to ensure that the decisions made by algorithms are not biased or discriminatory. This trend is driven by the need to protect consumers and ensure ethical AI practices.

Cyber resilience: A critical imperative in the digital age

It's well known that organizations are more exposed to cyber threats than ever before. Cyberattacks have evolved in complexity, aiming at critical infrastructure, sensitive data and financial resources. Regulators have taken a stand, implementing rigorous protocols to bolster cyber resilience and safeguard organizations from potential breaches.

One such significant legislative stride is the EU's Network and Information Security Directive (NIS2), which took effect in January 2023. NIS2 mandates stringent security protocols for essential and important entities within critical sectors such as energy, transportation and healthcare. Entities under the purview of NIS2 are required to uphold robust cybersecurity measures, regularly assess risks and promptly report security incidents.

Supply chain security has also emerged as a major regulatory theme. Given that cyberattacks often target supply chain vulnerabilities, regulators are underscoring the importance of assessing and mitigating risks from third-party vendors and suppliers. This is evident in regulations such as the U.K.'s Supply Chain Security Regulations and the U.S. Cybersecurity Maturity Model Certification (CMMC) framework.

What's next for AI and cyber resilience regulations?

Organizations must adopt a proactive approach to compliance and risk management, and staying informed is the first step. In our regulatory roundup of the first half of 2024, we discuss next steps for organizations that want to stay a step ahead of AI and cyber resilience developments, and we provide similar insights for other trending topics such as corporate transparency, climate and supply chain.

Discover how to help ensure compliance and prime your organization to seize the opportunity to foster growth and prosperity.

Download Mastering regulatory compliance at midyear: Essential 2024 guidance for directors and executives today!
