The EU AI Act: the countdown begins – what you need to know

Overview

The EU's Artificial Intelligence Act (AI Act) was finally published in the Official Journal of the EU on 12 July 2024. This means that the AI Act will now enter into force on 1 August 2024, 20 days after publication.  The AI Act will have implications for businesses around the world, not just in the EU. Most obligations under the AI Act apply from 2 August 2026, but different obligations come into play at different stages.  This briefing explains which obligations apply when and suggests some practical steps for businesses to take.

What and who is within the scope of the AI Act?

The obligations in the AI Act apply to providers (such as developers), deployers (users), importers, distributors, and product manufacturers of "AI systems" and providers of "general-purpose AI models".  The most onerous obligations are borne by providers. 

What counts as an "AI system"?

"a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

The definition is broad, but its focus on autonomy and inference capabilities is intended to distinguish AI from conventional software that operates by reference to predetermined rules. We can expect guidelines from the Commission on the application of this definition by 2 February 2026.

In a similar way to the GDPR, the AI Act applies to entities established outside the EU, as well as to those within the EU, if they put AI systems on the market or into service in the EU, or if the output of the AI system is used in the EU.

The obligations under the AI Act vary according to how the AI is categorised:

  • prohibited risk – AI systems that pose an unacceptable risk and so are banned under the AI Act.

  • high risk – AI systems that pose a high risk to the health and safety or fundamental rights of individuals and so are subject to some of the most onerous obligations.

  • limited risk – AI systems that pose a transparency risk.

There are separate sets of obligations for general-purpose AI models and systems ("GPAI") – systems such as ChatGPT, Siri or Alexa, trained on large amounts of data, that have a wide range of uses and are often integrated into other downstream systems and applications. The two regimes are not mutually exclusive: a GPAI model can become part of a high-risk system and be subject to the high-risk obligations in addition to the GPAI obligations. There are two tiers of obligations for GPAI: a set that applies to all GPAI models and an additional set that applies to a subset of GPAI models with "systemic risk". Models with systemic risk are those which have high-impact capabilities (presumed if the computational power used in training exceeds a designated threshold) or which are designated as such by the Commission.

Many AI systems will not fall into any of these categories and are considered low risk. Organisations developing and using such systems are encouraged to adhere to voluntary codes and will be subject to an "AI literacy" obligation (in essence, to inform and train their staff and supply chain engaging with AI on the risks and impacts of AI, dependent on the relevant context), but the AI Act will not otherwise apply to such systems. Implementing AI "Responsible Use" policies will play an important role in helping to meet the "AI literacy" requirement.

Other carve-outs from the AI Act's scope

Certain AI systems fall outside the AI Act's scope: for example, AI systems used solely for scientific research and development, for personal use, or for research and testing prior to a system being put on the market, as well as those released under open source licences (unless they are classified as prohibited or high-risk).

What happens and when?

The AI Act has a staged application, with different timings for the different categories of AI. The following sets out the timings of key obligations.

2 February 2025
Prohibited AI systems are banned and the AI literacy obligation applies

Prohibited AI systems are banned completely. In-scope organisations should ensure before 2 February 2025 that no prohibited AI systems are being used, whether as a product offering or within the business. The vast majority of organisations will have had no engagement with these banned activities, although one to watch for some is the ban on emotion recognition in the workplace and in education.

Which systems are banned?

In summary, AI systems are banned for:

  • manipulating and distorting people's behaviour in ways that cause significant harm.

  • exploiting people's vulnerabilities in ways reasonably likely to cause significant harm.

  • social scoring.

  • predictive policing based solely on personality traits and characteristics. (This does not apply to individuals who have already been linked to criminal activity on the basis of objective and verifiable facts.)

  • creating or expanding facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.

  • emotion recognition in a workplace or educational setting, except where used for medical or safety reasons.

  • biometric categorisation to deduce an individual's race, sexual orientation, political opinions or other such sensitive data – with a carve-out for law enforcement contexts.

  • 'real-time' remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless certain exceptions apply.

The AI literacy obligation also applies to providers and deployers of AI systems from this date.

2 August 2025
Rules for new GPAI models and systems

Requirements on GPAI models that apply from 2 August 2025 will only apply to new GPAI models put on the market on or after that date; GPAI models already on the market before that date will have a further two years to comply (i.e. until 2 August 2027). GPAI models must meet additional transparency requirements, and providers must maintain technical documentation recording the training and testing process of the underlying model, as well as documentation to supply to downstream providers which allows them to understand the capabilities and limitations of the system. GPAI models with systemic risk are subject to a further layer of obligations, e.g. providers must perform adversarial testing to identify and mitigate systemic risks, document those risks, and track and report incidents to the EU AI Office.

The deadline for codes of practice for GPAI models (to help providers demonstrate compliance) is only a few months earlier – 2 May 2025.

2 August 2026
(Most of) the high-risk framework and the transparency risk obligations apply

The majority of the AI Act will apply from 2 August 2026. This includes the obligations on most high-risk systems.

High-risk systems fall into two categories:

1. any AI system which is a safety component of, or is itself, a product subject to EU product safety legislation and required to undergo a third-party conformity assessment pursuant to that legislation. The full list of legislation is set out in Annex I of the AI Act and covers medical equipment, toys, radios, PPE etc.; or

2. any AI system that is specifically designated at Annex III.  Annex III sets out areas such as:

  • certain biometrics use cases.
  • critical infrastructure.
  • access to educational and vocational training.
  • certain employment contexts, e.g. systems used in a recruitment or selection process, or to make decisions during the working relationship, including performance monitoring, work allocation, promotion or termination.
  • credit checking and life and health insurance risk and pricing assessments.
  • various public sector applications, such as assessing eligibility for benefits, border control and asylum, law enforcement and administration of justice and elections.

Even if a system falls into the Annex III areas, it may be possible to demonstrate that the system is not in fact high-risk by reference to certain criteria, for example, if the use case is limited to performing a narrow procedural task or involves refining the results of tasks previously completed by a human.

 

The 2 August 2026 enforcement date does not apply to all high-risk systems:

  • There will be no enforcement in respect of any high-risk system in the private sector on the market prior to that date, provided it is not significantly changed after that date;

  • Pre-existing high-risk systems intended for public authority use will have until 2 August 2030 to comply; and

  • High-risk AI systems under Annex I of the AI Act (products subject to product safety legislation) that are put on the market on or after 2 August 2026 will have until 2 August 2027 to comply. 

Providers of high-risk AI systems will be subject to a range of obligations, including:

  • conformity assessments.

  • registration of the system in a new EU AI database.

  • implementing detailed rules on areas such as human oversight, data quality and governance, transparency, accuracy and cybersecurity.

  • post-market monitoring system and incident reporting.

An organisation can also be deemed a provider if it puts its trade mark on, or substantially modifies, a high-risk system that is already on the market, or if it alters the purpose of an AI system (including a general-purpose AI system) so that it becomes high-risk.

 

Users of high-risk AI systems will have fewer, but still considerable, obligations, e.g.:

  • using technical and organisational measures to comply with a provider's instructions for the use of the system.

  • assigning human oversight to competent, trained personnel.

  • ensuring relevant and representative input data for the AI system's intended purpose.

  • monitoring the operation of the system, keeping logs of its operation and reporting risks and incidents.

  • informing workers' representatives when using high-risk systems in the workplace.

The obligations relating to AI systems classified as a transparency risk will also apply from 2 August 2026.

Providers of AI systems intended to interact directly with people, or which generate synthetic audio, image, video or text content, will be subject to transparency obligations. Users of emotion recognition or biometric categorisation systems, or of AI systems used to generate deep fakes, will also be subject to transparency obligations. Disclosing to end-users that content has been generated or manipulated by AI, and that they are interacting with an AI system, will be key to compliance.

2 August 2027
High-risk obligations in respect of Annex I systems

High-risk AI systems under Annex I of the AI Act (products subject to product safety legislation) put on the market on or after 2 August 2026 must comply with applicable obligations by 2 August 2027.

Enforcement

The European Commission has established an Artificial Intelligence Office to enforce the AI Act in EU Member States. An EU AI Board, made up of representatives from the Member States, will be established to help ensure consistent implementation of the AI Act, and Member States will designate national authorities to enforce the regulation.

There is a tiered approach to penalties, with maximum fines of up to €35 million or 7% of worldwide group turnover (whichever is the greater) for breaches of the provisions on banned AI, up to €15 million or 3% of worldwide group turnover for certain violations relating to other systems, and up to €7.5 million or 1% of worldwide group turnover for certain false reporting breaches.

What's happening in the UK?

The previous UK government proposed relying on existing regulators and regulatory structures, rather than establishing broadly applicable AI-specific regulations or a dedicated AI regulator. The new Labour government seems to have a greater appetite for legislating, on a targeted basis at least. However, no specific AI bill was mentioned in the King's Speech on 17 July 2024, only that the new government will "seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models", so this does not seem to be an immediate priority. It remains to be seen whether the UK will adopt a similar approach for these models to the EU's requirements in respect of GPAI models with systemic risk (drawing on Big Tech's efforts to comply with the AI Act) or mark out its own path.

What practical steps can businesses take?

Businesses impacted by the AI Act are encouraged to start complying on a voluntary basis ahead of these deadlines. That said, many of the obligations are very high-level – in some cases, a product of uncomfortable compromises reached following heavy negotiation of the text. Secondary legislation, codes of practice, guidelines, standards and templates will help to clarify what practical and technical steps organisations are expected to take, but these are only likely to emerge some way down the track (the deadline for the Commission's guidelines on the definition of an AI system, for example, is 2 February 2026).

Businesses using AI should be taking steps to:

  • train staff on the implications of the AI Act and AI's risks.

  • review and risk assess current and prospective AI products and use cases against the requirements of the EU AI Act.

  • develop governance processes, documentation and policies, including a Responsible AI Policy or similar.

  • update contracts with suppliers and terms of business to address AI requirements and risks.

  • monitor for secondary legislation and guidance.

Remember that readying your business for compliance with the EU AI Act is only part, albeit an important part, of managing risks associated with AI.   You should also:

  • check that you are complying with existing legislation – in particular, the GDPR (as we explain further in this briefing).

  • be clear in staff policies and contracts with suppliers on how your data can be used, and specify what can and cannot be input into AI systems, to prevent the leaking of confidential and proprietary information and the personal data of staff and customers.

  • check who owns the intellectual property rights in the output produced by an AI system and whether your business is protected in respect of third-party intellectual property infringement claims (as we explain in this briefing).

  • appropriately allocate responsibility for AI in your contracts with suppliers.

The Technology & Commercial Transactions team at Travers Smith, alongside experts from around the firm, are helping clients to address the complex legal challenges that AI technology poses. For more information about risks associated with AI, please listen to Travers Smith's podcast series, AI Insights.

Key contacts

Louisa Chambers
Helen Reddish