
AI Act: Comprehensive Legal Framework and Implementation 


On 12 July 2024, the Artificial Intelligence Act (AI Act) was published in the EU Official Journal; it will enter into force on 1 August 2024.

The AI Act establishes the world’s first comprehensive horizontal legal framework for AI. Initially proposed in 2021, the AI Act classifies AI technologies into four risk categories: unacceptable, high, limited, and minimal, ensuring a structured approach to regulation.

Implementation Timeline 

Following the Council of the EU’s final approval on 21 May 2024, the AI Act was published in the EU Official Journal on 12 July 2024 and will enter into force 20 days later, on 1 August 2024. This date starts the clock for organisations within the scope of the AI Act to prepare for compliance.
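The 20-day entry-into-force rule is simple date arithmetic, and can be checked with a quick sketch in Python (dates taken from the article above):

```python
from datetime import date, timedelta

# Publication in the EU Official Journal, per the timeline above
publication = date(2024, 7, 12)

# The Act enters into force 20 days after publication
entry_into_force = publication + timedelta(days=20)

print(entry_into_force.isoformat())  # 2024-08-01
```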

How long organisations have to comply with the relevant provisions of the AI Act will depend on the role they play under the Act, as well as on the risk level and capabilities of their AI systems.

As a preliminary step, organisations should determine whether and how they fall within the scope of the AI Act in order to assess which timelines apply to them. While 2 August 2026 is the date of the AI Act’s general applicability, organisations may have a shorter compliance period, particularly if they are providers of general-purpose AI models or use, or intend to use, AI systems deemed to pose the highest risk under the AI Act.

This ambitious legislative agenda positions the EU as a global leader in digital governance, unmatched by any other country or bloc in its thorough approach. 

Key Aspects of the AI Act 

Definition of AI System 

The AI Act includes a definition of an AI system, which “aims to be as technology neutral and future proof as possible”:

Software which, for a given set of human-defined objectives, generates outputs that influence the environments it interacts with by using one or more of the following techniques: machine learning; statistical, logic- or knowledge-based approaches.

Risk-Based Classification System 

The AI Act includes a risk-based classification system, which categorizes AI technologies into four risk levels: unacceptable, high, limited, and minimal. Unacceptable technologies, such as social scoring by governments, are prohibited. High-risk applications, like AI in critical infrastructure and law enforcement, are subject to strict obligations and oversight, including:

  • Adequate risk assessment and mitigation systems 
  • High quality of the datasets feeding the system 
  • Detailed technical documentation on the system and its purpose 
  • Record-keeping (logging of activity to ensure traceability) 
  • Clear and adequate information to the user 
  • Appropriate human oversight measures to minimize risk 
  • High level of robustness, cybersecurity and accuracy 
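
The obligations above can be read as a compliance checklist. A minimal sketch in Python, where the list entries and function names are invented for illustration and are not terminology from the Act itself:

```python
# Illustrative checklist for a high-risk AI system, mirroring the
# obligations listed above. Labels are simplified for this sketch.
HIGH_RISK_OBLIGATIONS = [
    "risk assessment and mitigation system",
    "high-quality training datasets",
    "detailed technical documentation",
    "record-keeping / activity logging",
    "clear information to the user",
    "human oversight measures",
    "robustness, cybersecurity and accuracy",
]

def missing_obligations(satisfied: set) -> list:
    """Return the obligations not yet covered by a given system."""
    return [o for o in HIGH_RISK_OBLIGATIONS if o not in satisfied]

# Example: a system with only documentation and logging in place
print(missing_obligations({"detailed technical documentation",
                           "record-keeping / activity logging"}))
```
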

Safeguards and Consumer Rights

The Act ensures that general-purpose AI complies with fundamental rights and ethical standards, imposes restrictions on biometric identification by law enforcement, and bans AI that manipulates user vulnerabilities or performs social scoring. Consumers can file complaints and receive explanations for decisions made by high-risk AI systems, enhancing transparency. 

Roles and responsibilities 

The AI Act applies extraterritorially to both public and private actors, meaning it covers all AI systems that are either placed on the EU market or whose use affects people located in the EU.

The obligations outlined in the AI Act affect all parties involved: the provider, importer, distributor and user: 

  • Providers must ensure overall compliance of high-risk AI systems with AI Act requirements, including: 
      ◦ Mandatory requirements 
      ◦ Ex-ante conformity assessment 
      ◦ EU declaration of conformity 
      ◦ CE marking of conformity 
      ◦ Post-market monitoring system 
  • Importers/distributors must ensure that the high-risk AI system has been brought into conformity by the provider before making it available on the market. 
  • Users must use high-risk AI systems according to the accompanying instructions of use. 

Regulatory Framework 

The AI Act establishes clear requirements for AI developers and deployers, aiming to reduce administrative and financial burdens, particularly for SMEs. The European AI Office, established in February 2024, will oversee enforcement and foster collaboration, innovation, and research in AI across Europe. 

Penalties 

The AI Act defines non-compliance penalties. For prohibited AI systems, the fines can be as high as €35 million or seven percent of worldwide annual turnover for the preceding financial year, whichever is higher. 

Providers of general-purpose AI models may be fined three percent of their annual total worldwide turnover in the preceding financial year or €15 million, whichever is higher, for non-compliance. 

Union institutions, bodies, offices and agencies found to be non-compliant with the prohibition of AI practices will be subject to fines of up to €1.5 million, while any other non-compliance will be subject to fines of up to €750,000. 

Supplying incorrect, incomplete or misleading information to notified bodies or national competent authorities in response to a request can result in fines of up to €7.5 million or, if the offender is an undertaking, up to one percent of its total worldwide annual turnover for the preceding financial year, whichever is higher. 
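The “whichever is higher” rule in the penalty provisions is the maximum of a fixed amount and a percentage of worldwide annual turnover. A minimal sketch, using the figures from the article above (the function name is ours, not a term from the Act):

```python
def fine_ceiling(fixed_eur: float, turnover_eur: float, pct: float) -> float:
    """Maximum possible fine: the higher of a fixed amount
    or a percentage of worldwide annual turnover."""
    return max(fixed_eur, turnover_eur * pct)

# Prohibited AI practices: EUR 35m or 7% of worldwide annual turnover.
# For an undertaking with EUR 1bn turnover, 7% = EUR 70m > EUR 35m.
print(fine_ceiling(35_000_000, 1_000_000_000, 0.07))  # 70000000.0
```

For a smaller undertaking whose 7% share falls below €35 million, the fixed amount becomes the ceiling instead.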

Ieva Žilionienė, Consulting Business Lead at NRD Companies:

“While the AI Act tackles the crucial issues of transparency and ethical concerns in artificial intelligence, it also places significant regulatory burdens on AI developers and deployers, especially startups and SMEs. Although the Act attempts to alleviate some of these challenges with measures like regulatory sandboxes, the overall compliance requirements remain extensive. As a result, there is a risk that these stringent regulations could inadvertently push AI research and development outside the EU, making close monitoring of these dynamics essential. 

With the Act now approved and published, the EU’s focus will shift towards effective implementation and enforcement. This will necessitate a coordinated approach, including the strengthening of complementary legislation such as the AI Liability Directive, which is essential for addressing liability issues arising from damage caused by AI-powered products and services.” 
