
EU AI Act is now in force – The world’s first Artificial Intelligence regulation

The EU AI Act, the world’s first artificial intelligence regulation, officially came into force on 1 August 2024, setting a global benchmark for AI governance.

With the introduction of the AI Act, the EU aims to strike a balance between fostering AI adoption and ensuring individuals’ right to responsible, ethical, and trustworthy use of AI.

The EU Artificial Intelligence Act was initially proposed by the European Commission in April 2021, formally approved by the European Parliament in March 2024, and published in the Official Journal of the European Union in July 2024.

The AI Act classifies AI systems according to the risk they pose, with obligations and compliance requirements varying by the tier a system falls under (a simple lookup sketch follows the list):

  • Minimal risk: Unregulated with no obligations
    • AI-enabled video games
    • Spam filters
  • Limited risk: Subject to transparency obligations (Inform & disclose)
    • AI chatbots
    • Biometric-categorization systems
    • Generative AI
    • Emotion-recognition systems
    • Systems generating ‘deep fake’ content
  • High risk: AI systems that pose a high risk to health, safety, the environment, or fundamental rights. These systems must be assessed before deployment, and certain high-risk systems must also be registered in the EU database
    • Credit scoring AI systems
    • Product safety components
    • Analysis of job applications or evaluation of candidates
  • Unacceptable risk: AI systems considered a threat to people are banned, with narrow exceptions for law enforcement purposes
    • Real-time remote biometric identification for law enforcement
    • Behavioral manipulation
    • Social scoring by public authorities
    • Exploitation of vulnerable characteristics of people
    • Criminal profiling
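
To make the tiered structure concrete, here is a minimal Python sketch mapping each tier to the headline obligation summarised above. The dictionary layout, the example entries, and the obligation_for helper are hypothetical names chosen purely for illustration; this is not an official classification tool.

```python
# Illustrative only: a minimal lookup mapping the AI Act's four risk tiers to
# the headline obligation summarised in this article. Tier names, example
# systems, and the obligation_for helper are hypothetical, for demonstration.

RISK_TIERS = {
    "minimal": {
        "examples": ["AI-enabled video games", "spam filters"],
        "obligation": "No obligations under the AI Act",
    },
    "limited": {
        "examples": ["AI chatbots", "generative AI", "deep-fake generators"],
        "obligation": "Transparency: inform users and disclose AI involvement",
    },
    "high": {
        "examples": ["credit scoring", "product safety components", "CV screening"],
        "obligation": "Assessment before deployment; EU database registration where required",
    },
    "unacceptable": {
        "examples": ["social scoring by public authorities", "behavioural manipulation"],
        "obligation": "Banned, with narrow law-enforcement exceptions",
    },
}


def obligation_for(tier: str) -> str:
    """Return the headline obligation for a given risk tier."""
    return RISK_TIERS[tier]["obligation"]


if __name__ == "__main__":
    for tier in RISK_TIERS:
        print(f"{tier:>12}: {obligation_for(tier)}")
```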

The AI Act imposes strict obligations not only on the ‘provider’ of a high-risk AI system, but also on the ‘importer’, ‘distributor’, and ‘deployer’ of such systems.

Penalties for non-compliance with the AI Act range from €7.5 million or 1% of global annual turnover to €35 million or 7% of global annual turnover, depending on the severity of the infringement.
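
To see how these caps scale with company size, here is a minimal Python sketch. It assumes, as the Act generally provides for larger companies, that the applicable ceiling is the higher of the fixed amount and the turnover percentage; the band labels, the example turnover figure, and the fine_ceiling helper are illustrative, not taken from the Act.

```python
# Illustrative only: a rough sketch of how the AI Act's headline fine ceilings
# scale with company size. The two bands below are the endpoints mentioned in
# this article (the Act defines further intermediate bands); the band labels
# and the fine_ceiling helper are hypothetical names for illustration.

PENALTY_BANDS = {
    "most severe infringements": (35_000_000, 0.07),   # up to €35M or 7% of turnover
    "least severe infringements": (7_500_000, 0.01),   # up to €7.5M or 1% of turnover
}


def fine_ceiling(band: str, annual_turnover_eur: float) -> float:
    """Upper bound of the fine, assuming the higher of the two caps applies."""
    fixed_cap, turnover_share = PENALTY_BANDS[band]
    return max(fixed_cap, turnover_share * annual_turnover_eur)


if __name__ == "__main__":
    turnover = 2_000_000_000  # hypothetical €2 billion global annual turnover
    for band in PENALTY_BANDS:
        print(f"{band}: up to €{fine_ceiling(band, turnover):,.0f}")
```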

To ensure proper enforcement, several governance bodies have been set up, specifically the AI Office, a scientific panel of independent experts, an AI Board, and an advisory forum for stakeholders.

Although the EU AI Act came into force on 1 August 2024, compliance is staggered according to the following deadlines, all counted from entry into force (a rough date calculation is sketched after the list).

  • 6 months for prohibited AI systems
  • 9 months for codes of practice to be ready
  • 12 months for General-purpose AI (GPAI)
  • 24 months for the majority of high-risk AI systems
  • 36 months for certain high-risk AI systems intended for use by public authorities
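
For orientation, the sketch below approximates each milestone date by adding the stated number of months to the entry-into-force date of 1 August 2024. The exact application dates are fixed in the Act itself, so treat the output as a back-of-the-envelope estimate; the milestone labels and the add_months helper are illustrative.

```python
# Illustrative only: approximate the staggered compliance dates by adding the
# stated number of months to the entry-into-force date (1 August 2024).
# The Act itself fixes the exact application dates, so these are estimates,
# not legal deadlines.
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)

MILESTONES_MONTHS = {
    "Prohibited AI systems": 6,
    "Codes of practice ready": 9,
    "General-purpose AI (GPAI)": 12,
    "Majority of high-risk AI systems": 24,
    "Certain high-risk systems used by public authorities": 36,
}


def add_months(start: date, months: int) -> date:
    """Shift a date forward by a whole number of months, keeping the day."""
    total = start.month - 1 + months
    return start.replace(year=start.year + total // 12, month=total % 12 + 1)


if __name__ == "__main__":
    for milestone, months in sorted(MILESTONES_MONTHS.items(), key=lambda kv: kv[1]):
        print(f"{add_months(ENTRY_INTO_FORCE, months)}  (+{months} months)  {milestone}")
```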