23 October 2024

AI Act | Pioneering AI Regulation in Europe

On 12 July 2024, Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (the “AI Act”) was published in the Official Journal of the EU. As part of a wider package of EU policy measures, the AI Act forms the first comprehensive legal framework regulating the use of artificial intelligence (AI) across Europe. Its purpose is to strike a balance between regulating the use of potentially harmful AI systems and allowing innovation in the sector to thrive.

1. What does the AI Act regulate?

The AI Act primarily (i) lays down harmonised rules for the placing on the market, the putting into service and the use of AI systems in the EU, (ii) prohibits certain harmful AI practices and (iii) sets out specific requirements for high-risk AI systems.

In addition, it regulates the placing on the market of so-called general-purpose AI (“GPAI”) models, introduces various transparency rules for AI systems and contains rules on market monitoring and enforcement. Lastly, it contains several measures to support innovation, particularly focused on SMEs.

Among several exceptions to its material scope, the AI Act does not apply to AI systems or models which are specifically developed and put into service for the sole purpose of scientific research and development (Article 2.6 AI Act), nor does it apply to natural persons using AI systems in the course of a purely personal, non-professional activity (Article 2.10 AI Act).

2. What are AI systems?

An AI system is defined as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments” (Article 3.1 AI Act).

3. To whom does the AI Act apply?

The AI Act introduces obligations for providers, deployers, importers, distributors and product manufacturers of AI systems. The territorial scope of the AI Act depends on their role(s) and the targeted market (Article 2.1 AI Act).

For providers, for example, the territorial scope of the AI Act extends to providers located in third countries when they place AI systems or GPAI models on the market or put them into service in the EU (Article 2.1(a) AI Act). Deployers of AI systems, by contrast, in principle only fall within the scope of the AI Act if they are established in the EU, unless the output produced by their AI systems is used in the EU (Article 2.1(b)-(c) AI Act).
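
Purely by way of illustration, the following Python sketch models how the provider and deployer scope rules described above interact. The role labels and attribute names are our own simplification, and the sketch ignores the other operators (importers, distributors, product manufacturers) and the remaining scope limbs of Article 2 AI Act:

```python
from dataclasses import dataclass

@dataclass
class Operator:
    role: str                  # "provider" or "deployer" (simplified labels)
    established_in_eu: bool    # place of establishment or location in the EU
    places_on_eu_market: bool  # places AI systems/GPAI models on the EU market
                               # or puts them into service in the EU
    output_used_in_eu: bool    # output produced by the AI system is used in the EU

def in_territorial_scope(op: Operator) -> bool:
    """Simplified reading of Article 2.1(a)-(c) AI Act (illustrative only)."""
    if op.role == "provider":
        # Providers are covered when they place AI systems or GPAI models on the
        # EU market or put them into service in the EU, irrespective of where
        # they are established (Article 2.1(a)), or when the output produced by
        # the AI system is used in the EU (Article 2.1(c)).
        return op.places_on_eu_market or op.output_used_in_eu
    if op.role == "deployer":
        # Deployers are covered when established in the EU (Article 2.1(b)), or
        # when the output of their AI system is used in the EU (Article 2.1(c)).
        return op.established_in_eu or op.output_used_in_eu
    return False

# A third-country deployer whose system's output is used in the EU is in scope:
print(in_territorial_scope(Operator("deployer", False, False, True)))  # True
```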

4. Prohibited AI practices

The AI Act bans several AI practices which are deemed unethical and harmful to society (Article 5 AI Act). Such practices include, for example, the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage (Article 5.1(e) AI Act).

There is however an exception to this ban for the use of real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, provided that strict conditions are respected (Article 5.2-5.6 AI Act).

5. High-risk AI systems

The AI Act introduces a risk-based approach to the classification of AI systems. AI systems which are deemed to be “high-risk” are subject to specific requirements, which also extend to the providers and deployers of such systems (Article 6 AI Act).

An AI system is considered high-risk if it:

  • is intended to be used as a safety component of a product, or is itself a product, covered by the harmonised EU health and safety legislation listed in Annex I of the AI Act; or
  • falls within one of the eight categories listed in Annex III of the AI Act (unless it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making). Examples of such high-risk categories include AI systems in the areas of biometrics, critical infrastructure and border control management.

This exception is however not available for AI systems which perform profiling of natural persons: those systems are always classified as high-risk (Article 6.3 in fine AI Act), as the sketch below illustrates.
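
To make the two-limb classification test and the profiling carve-out easier to follow, here is a minimal, purely illustrative Python sketch of the logic of Article 6 AI Act. The field and function names are our own simplification, and the sketch deliberately omits procedural details such as the third-party conformity assessment condition of Article 6.1(b):

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    annex_i_safety_component: bool  # safety component of (or itself) an Annex I product
    annex_iii_category: bool        # falls within one of the eight Annex III categories
    significant_risk: bool          # poses a significant risk to health, safety
                                    # or fundamental rights of natural persons
    performs_profiling: bool        # performs profiling of natural persons

def is_high_risk(system: AISystem) -> bool:
    """Simplified reading of Article 6.1-6.3 AI Act (illustrative only)."""
    # First limb: safety component of (or itself) a product covered by the
    # Annex I harmonisation legislation (Article 6.1).
    if system.annex_i_safety_component:
        return True
    # Second limb: Annex III categories, subject to the significant-risk
    # filter of Article 6.3...
    if system.annex_iii_category:
        # ...except that profiling systems are always high-risk
        # (Article 6.3 in fine).
        if system.performs_profiling:
            return True
        return system.significant_risk
    return False

# An Annex III system that performs profiling is high-risk even if it would
# otherwise not pose a significant risk:
print(is_high_risk(AISystem(False, True, False, True)))  # True
```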

6. GPAI models

A GPAI model is defined as “an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market” (Article 3.63 AI Act).

Providers of GPAI models must comply with several obligations, such as drawing up technical documentation of the model, putting in place a policy to comply with EU copyright law and publishing sufficiently detailed summaries of the content used for training the GPAI model (Article 53.1 AI Act). The documentation obligations do not apply to providers of GPAI models that are released under a free and open-source licence allowing the access, use, modification and distribution of the model and whose parameters (including the weights) are made publicly available, unless the model qualifies as a GPAI model with systemic risk (Article 53.2 AI Act).

The AI Act also contains specific rules on GPAI models which are classified as ‘GPAI models with systemic risk’. That is the case if the GPAI model (i) has high impact capabilities, evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks, or (ii) is designated as such by the Commission (Article 51.1 AI Act). A GPAI model is presumed to have high impact capabilities when the cumulative amount of computation used for its training, measured in floating point operations, exceeds 10^25 (Article 51.2 AI Act).
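
As a purely illustrative aid, the following Python snippet applies the training-compute presumption of Article 51.2 AI Act. The threshold is the one stated in the Act, while the constant and function names are our own; designation by the Commission is modelled only as a comment:

```python
# Presumption of "high impact capabilities" based on training compute
# (Article 51.2 AI Act): a GPAI model is presumed to have high impact
# capabilities where its cumulative training compute exceeds 10**25 FLOPs.
SYSTEMIC_RISK_FLOP_THRESHOLD = 10**25

def presumed_high_impact(training_flops: float) -> bool:
    """Illustrative check of the compute-based presumption only; a model can
    also be classified as systemic-risk by Commission designation
    (Article 51.1(b) AI Act), which this sketch does not cover."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# Example: a model trained with 5 * 10**25 FLOPs triggers the presumption.
print(presumed_high_impact(5e25))  # True
```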

Providers of GPAI models with systemic risk must notify the Commission without delay, and in any event within two weeks after the relevant classification criteria are met or it becomes known that they will be met, that their GPAI model qualifies or will qualify as a GPAI model with systemic risk (Article 52.1 AI Act). Providers of GPAI models with systemic risk also need to comply with specific obligations, such as performing model evaluations and ensuring adequate levels of cybersecurity protection (Article 55.1 AI Act).

7. Sanctions

Penalties for non-compliance with the AI Act vary greatly depending on the nature of the infringement and the relevant circumstances of each individual case. Non-compliance with the rules on prohibited AI practices is subject to administrative fines of up to EUR 35 million or, in the case of an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher (Article 99.3 AI Act). Non-compliance with most other provisions is subject to a maximum fine of EUR 15 million or 3% of total worldwide annual turnover, whichever is higher (Article 99.4 AI Act). Supplying incorrect, incomplete or misleading information to notified bodies or national competent authorities is subject to administrative fines of up to EUR 7.5 million or up to 1% of total worldwide annual turnover, whichever is higher (Article 99.5 AI Act).

For SMEs and start-ups, each fine is capped at the same percentages or amounts listed above, but whichever thereof is lower (Article 99.6 AI Act).
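
The interplay between the fixed amounts and the turnover percentages can be summarised in a short, purely illustrative Python sketch. The tier labels are our own shorthand, while the figures are those of Article 99.3-99.6 AI Act:

```python
# Maximum administrative fines per tier (Article 99.3-99.5 AI Act):
# (fixed cap in EUR, share of total worldwide annual turnover)
FINE_TIERS = {
    "prohibited_practices":  (35_000_000, 0.07),  # Article 99.3
    "other_provisions":      (15_000_000, 0.03),  # Article 99.4
    "incorrect_information": (7_500_000,  0.01),  # Article 99.5
}

def max_fine(tier: str, worldwide_turnover: float, is_sme: bool = False) -> float:
    """Illustrative calculation of the fine cap for a given infringement tier."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    percentage_cap = turnover_share * worldwide_turnover
    # Undertakings: whichever of the two caps is higher (Article 99.3-99.5);
    # SMEs and start-ups: whichever is lower (Article 99.6).
    pick = min if is_sme else max
    return pick(fixed_cap, percentage_cap)

# A company with EUR 100 million turnover infringing the prohibition rules:
print(max_fine("prohibited_practices", 100_000_000))               # 35000000.0 (fixed cap)
print(max_fine("prohibited_practices", 100_000_000, is_sme=True))  # 7000000.0 (7% of turnover)
```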

8. When will the AI Act start applying?

The AI Act entered into force on 1 August 2024, and the bulk of its provisions will apply from 2 August 2026 (Article 113 AI Act).

Chapters I and II of the AI Act (general provisions, definitions and provisions on prohibited AI practices) will already apply from 2 February 2025.

Some other provisions (such as the rules on GPAI models, governance and penalties) will for the most part apply from 2 August 2025. Member States will also have to designate at least one notifying authority and one market surveillance authority (as well as their single point of contact) by that date.

The rules on high-risk AI systems covered by the harmonised EU legislation listed in Annex I (Article 6.1 AI Act) will apply from 2 August 2027, which is also the deadline for providers of GPAI models placed on the EU market before 2 August 2025 to achieve full compliance with the AI Act.
