
What is the EU AI Act?

This article is a comprehensive guide to Europe's new AI regulation: its risk categories, the compliance requirements it places on businesses, and when its provisions take effect.

The European Union's AI Act, formally known as Regulation (EU) 2024/1689, is landmark legislation that establishes a comprehensive legal framework for artificial intelligence (AI) within the European Union.

Its primary purpose is to create consistent rules governing the development, marketing, and use of AI systems, with a focus on protecting fundamental rights and safety while fostering innovation.

The EU AI Act traces back to the European Commission's proposal of April 2021, which aimed to address growing concerns about the ethical implications and risks associated with AI technologies.

The framework aims to make AI systems safe, transparent, traceable, non-discriminatory, and environmentally friendly.

Regulation of this kind is important and acts as a catalyst for innovation: it mitigates the risks posed by AI systems, builds public trust, harmonizes regulations, and sets a global standard while fostering fertile ground for new ideas and advancements.

What are the EU AI Act risk categories?

Under the EU AI Act, AI systems fall into four distinct risk categories. Each category dictates specific regulatory obligations and implications for organizations. 

Here is a brief overview:

1. Unacceptable risk

This category includes AI systems deemed a severe threat to safety or fundamental rights, which are outright banned. Examples include cognitive behavioral manipulation that targets vulnerable groups and social scoring systems that classify individuals based on personal characteristics.

Real-time biometric identification in public spaces with limited exceptions for law enforcement also falls under the unacceptable risk category.

2. High risk

High-risk AI systems, such as those used in critical infrastructure (e.g., healthcare and transportation), can significantly impact safety or fundamental rights, so they are subject to stringent regulations and must undergo conformity assessments before market entry.

Applications used in education, employment, law enforcement, and migration management also fall under this category.

3. Limited risk

AI systems in the limited risk category carry certain transparency obligations but are less heavily regulated than high-risk systems. Businesses must ensure users know they are interacting with AI (e.g., chatbots and deepfake technologies).

4. Minimal risk

This category encompasses most AI applications available today, such as spam filters and AI-enabled video games, which remain largely unregulated.

Enterprises must adapt their cyber compliance strategies based on the risk classification of their AI systems. For unacceptable-risk systems, organizations must abstain from deployment altogether.

Providers of high-risk AI systems must implement rigorous compliance measures, including technical documentation and risk assessments. Limited and minimal risk systems carry less stringent obligations but still require transparency toward users.
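As a rough illustration of how these tiers translate into duties, the following Python sketch models the four categories and pairs each with a simplified summary of the obligations described above. The category names follow the Act, but the obligation strings and the RiskCategory/OBLIGATIONS names are illustrative assumptions, not legal text.

from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable risk"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

# Simplified, non-authoritative summaries of the duties outlined above.
OBLIGATIONS = {
    RiskCategory.UNACCEPTABLE: ["prohibited: must not be deployed"],
    RiskCategory.HIGH: [
        "conformity assessment before market entry",
        "technical documentation",
        "risk management across the system lifecycle",
    ],
    RiskCategory.LIMITED: ["transparency: inform users they are interacting with AI"],
    RiskCategory.MINIMAL: ["no specific obligations (voluntary codes of conduct)"],
}

def obligations_for(category: RiskCategory) -> list[str]:
    """Return the simplified obligation summary for a given risk tier."""
    return OBLIGATIONS[category]

if __name__ == "__main__":
    for tier in RiskCategory:
        print(f"{tier.value}: " + "; ".join(obligations_for(tier)))

In practice, the mapping from a concrete system to a tier depends on its intended purpose and context of use, so this kind of lookup table is only a starting point for an internal classification exercise.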

Cybersecurity requirements under the EU AI Act

The EU AI Act imposes specific cybersecurity obligations, primarily on high-risk AI systems, to ensure their safe deployment. For example, organizations must establish comprehensive risk management frameworks to identify, assess, and mitigate risks posed by high-risk AI systems throughout their lifecycle.

High-risk AI systems should incorporate security measures from the initial design phase to ensure robust protection against potential vulnerabilities and threats. Enterprises must also implement protocols for promptly reporting serious incidents involving high-risk AI systems to the relevant authorities. This helps ensure accountability and transparency in the event of a security incident.

Why is it important to integrate cybersecurity measures in AI development?

Integrating cybersecurity measures into AI development is crucial because it protects sensitive data and maintains user trust. By embedding security into the design and operational processes of AI systems, companies can better defend against threats that exploit vulnerabilities unique to AI technologies.

To comply with the EU AI Act, companies must undertake several key steps:

  • Map and document AI systems in use: Companies need to maintain an inventory of all deployed AI systems, categorizing them according to the risk levels defined by the Act (see the sketch after this list).
  • Conduct periodic risk assessments: Regular assessments must be conducted to evaluate the risks associated with high-risk AI applications and ensure compliance with regulatory requirements.
  • Establish incident response plans and policies: Businesses must develop comprehensive incident response strategies to address potential security events effectively.
  • Leverage existing compliance frameworks: Organizations can utilize established frameworks such as GDPR and NIS2 to facilitate compliance with the EU AI Act's requirements.
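The first step above, mapping and documenting AI systems, can start as a simple structured inventory. The Python sketch below shows one hypothetical way to record each system with its assigned risk tier and to flag high-risk entries whose periodic assessment is missing or stale. The AISystemRecord fields, the overdue_assessments helper, and the one-year review interval are assumptions for illustration, not terms or deadlines taken from the Act.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    name: str                              # internal system name
    purpose: str                           # what the system is used for
    risk_category: str                     # "unacceptable" | "high" | "limited" | "minimal"
    owner: str                             # accountable team or role
    last_risk_assessment: date | None = None
    incident_contacts: list[str] = field(default_factory=list)

def overdue_assessments(inventory: list[AISystemRecord], today: date,
                        max_age_days: int = 365) -> list[AISystemRecord]:
    """Flag high-risk systems whose periodic risk assessment is missing or stale."""
    return [
        rec for rec in inventory
        if rec.risk_category == "high"
        and (rec.last_risk_assessment is None
             or (today - rec.last_risk_assessment).days > max_age_days)
    ]

if __name__ == "__main__":
    inventory = [
        AISystemRecord("resume-screener", "candidate shortlisting", "high",
                       "HR IT", date(2023, 1, 15)),
        AISystemRecord("support-chatbot", "customer FAQ", "limited", "Support"),
    ]
    for rec in overdue_assessments(inventory, date(2025, 8, 2)):
        print("Assessment overdue:", rec.name)

An inventory like this also gives incident response and audit teams a single place to look up ownership and risk classification when an event occurs.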

Compliance challenges and considerations

Businesses may face several challenges in meeting their compliance requirements under the EU AI Act:

  • Complexity of regulations: Understanding and implementing the diverse obligations across different risk categories can be daunting for many, especially small and medium-sized businesses.
  • Resource allocation: Complying with stringent requirements may require significant investment in technology and personnel.
  • Ongoing training and awareness programs: Continuous training is essential to inform employees about compliance requirements and best practices related to AI technologies. 

The significance of the EU AI Act lies in its comprehensive framework for regulating AI while emphasizing cybersecurity compliance. As organizations prepare for these upcoming changes, proactive measures are essential to align with regulatory expectations and defend against potential security risks associated with AI technologies and AI-powered cyberattacks.

Enterprises are encouraged to map their current cybersecurity practices against the new requirements to ensure a smooth transition into this evolving regulatory landscape.

Key dates in the implementation timeline include:
  • February 2, 2025: The Act's general provisions and its prohibitions on certain AI practices take effect.
  • August 2, 2026: Most of the Act's provisions, including those governing high-risk AI systems, will become fully applicable.

