The Diligent team

The EU Artificial Intelligence Act: How this pioneering legislation impacts your organization

March 19, 2024

The European Union has taken a momentous step in shaping the future of artificial intelligence (AI) by introducing the EU Artificial Intelligence Act. This comprehensive legislation, approved by both the European Parliament and the Council, marks a significant milestone in the global regulation of AI systems.

While other jurisdictions such as the U.S., China and the UK are considering AI legislation, the EU is the first to pass a comprehensive regulation addressing artificial intelligence risks. It establishes a set of obligations and requirements aimed at safeguarding the health, safety and fundamental rights of EU citizens, as well as carrying with it a significant global impact on AI governance.

In this article, you'll discover:

  • What the EU Artificial Intelligence Act is
  • The implications of the Act for organizations
  • Steps to promote the responsible use of AI

What is the EU AI Act? 3 main components

The EU AI Act defines AI systems based on their potential to cause harm, following a risk-based approach. It ensures that AI technologies placed on the European market and used within the EU adhere to safety standards and respect human rights. By striking a delicate balance between fostering innovation and safeguarding fundamental rights, the Act sets a global standard for AI regulation.

The Act itself has three main components:

1. High-impact AI models and risk assessment: The Act covers high-impact general-purpose AI models that could pose systemic risks in the future. These models, if misused, could have far-reaching consequences. Additionally, the Act introduces a revised system of governance with enforcement powers at the EU level, ensuring effective oversight and accountability.

2. Prohibitions and safeguards: The Act extends the list of prohibited AI practices, addressing concerns related to AI deployment. Notably, it permits remote biometric identification by law enforcement authorities in publicly accessible spaces only under strict safeguards and narrowly defined exceptions. Balancing security and privacy remains a priority.

3. Fundamental rights impact assessment: Deployers of high-risk AI systems must conduct a fundamental rights impact assessment before putting an AI system into use. This assessment evaluates potential risks to privacy, non-discrimination and other fundamental rights, underscoring the EU’s commitment to responsible AI deployment.
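The risk-based approach behind these three components can be pictured as a simple triage: each AI use case maps to a risk tier, and each tier carries a different set of obligations. The sketch below is illustrative only; the tier names follow the Act's broad structure, but the example use cases and obligation summaries are assumptions for demonstration, not a legal classification tool.

```python
# Illustrative sketch of the EU AI Act's risk-based triage.
# Use-case categories and obligation summaries are simplified assumptions,
# not legal advice or an official classification.

RISK_TIERS = {
    "social_scoring": "prohibited",          # banned practice
    "critical_infrastructure": "high",       # safety-critical systems
    "medical_device": "high",
    "chatbot": "limited",                    # transparency obligations
    "spam_filter": "minimal",
}

OBLIGATIONS = {
    "prohibited": "must not be placed on the EU market",
    "high": "conformity assessment, human oversight, fundamental rights impact assessment",
    "limited": "transparency: users must be told they are interacting with AI",
    "minimal": "no mandatory obligations; voluntary codes of conduct",
}

def triage(use_case: str) -> str:
    """Map a use case to its assumed risk tier and obligations."""
    tier = RISK_TIERS.get(use_case, "unclassified")
    duties = OBLIGATIONS.get(tier, "needs individual legal review")
    return f"{use_case}: {tier} risk ({duties})"

print(triage("medical_device"))
print(triage("social_scoring"))
```

In practice, classification depends on the system's intended purpose and deployment context, so any real triage would be performed with legal counsel rather than a lookup table.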

The EU Artificial Intelligence Act puts responsible use at the forefront for business

Organizations operating within the EU or dealing with EU citizens must comply with the Act's provisions. They need to assess their AI systems, ensure transparency and implement safeguards. While regulations may seem restrictive, they encourage responsible innovation: businesses that prioritize ethical AI will attract investment and gain a competitive edge. Moreover, the EU AI Act sets a precedent that other jurisdictions are likely to follow, and organizations should prepare accordingly.

The Act primarily focuses on high-risk AI systems, with stringent guidelines applicable to AI systems used in critical infrastructure, healthcare and transportation. Employers deploying high-risk AI at the workplace must ensure human oversight, emphasizing transparency and accountability to prevent unintended consequences. Additionally, the Act encourages thorough testing of AI systems in real-world scenarios to identify and mitigate potential risks before widespread deployment.

Users and consumers have the right to know when AI makes decisions affecting them, underscoring the transparency and accountability obligations of organizations. The Act emphasizes responsible AI use, requiring companies to be transparent about how their systems operate, including data sources, algorithms and decision-making processes. However, the treatment of biometric AI systems remains contentious, necessitating careful consideration of definitions and restrictions around biometric data.

Fostering a culture of responsible AI use

Navigating the complexities of AI governance will be tricky for most organizations, especially as we’re all trying to get to grips with what is likely to be an onslaught of new legislation in the coming months and years.

"If your board or leadership does not have an AI framework in place, they should. This should include strategy as well as policies. Get some training, get some help. Take a look at the NIST framework — it has a 7-page playbook with an AI framework you can use as a starting point." — Richard Barber, CEO and Board Director, Mind Tech Group

With the passing of the EU Artificial Intelligence Act, we recommend organizations focus on these critical areas to incorporate AI into their overall risk strategy:

  • Knowledge enhancement: To lead your organization toward sustainable and trustworthy practices, start by enhancing your own knowledge base. Consider enrolling in an AI certification course to enhance your knowledge about compliance frameworks, regulatory considerations, how to structure board oversight and more.
  • Secure collaboration: Collaborating on AI-related matters requires a centralized and secure approach. You can strengthen your organization's security by avoiding discussion of sensitive topics over email chains and by not using personal devices for work-related matters. For even greater assurance, specialized governance software allows boards to collaborate in real time, communicate securely and store files in a cloud-based environment.
  • Effective policy management: Boards are responsible for crafting internal rules governing AI use. Technology can streamline policy development and maintenance: customizable workflows, access management and approval processes ensure transparency and compliance.

What's next: Embracing AI ethically

As technology evolves, more AI regulations are inevitable. Organizations should stay informed about emerging legislation, invest in AI ethics and compliance training and collaborate with policymakers and industry peers to shape responsible AI practices.

The EU Artificial Intelligence Act is a testament to the European Union’s commitment to shaping AI’s future responsibly. The universal goal is to unlock AI’s potential while safeguarding our societies and fundamental rights, and Europe is leading the way.

Get more advice on meeting the challenge of EU Artificial Intelligence Act compliance in our blog 'AI is here. AI regulations are on the way. Is your board ready?'



© 2024 Diligent Corporation. All rights reserved.