EU Agreement on the Text of a New AI Act

On December 8, 2023, provisional agreement was reached between the European Union (EU) Parliament and the EU Council on the basic content of the new AI Regulation (the “AI Act”) to be implemented as legislation in the EU. The text is not yet publicly available, as it is subject to further refinement over the coming weeks. However, information is available in the public domain (including press releases issued by the European Union) as to the likely format of the AI Act. Additional background on the legislative process towards the AI Act is available here.

Prohibited uses of AI

In our earlier article, we detailed that certain high-risk uses of AI were simply not to be permitted within the EU, and by all accounts this approach has been maintained in the proposed AI Act. It is reported that banned applications of AI will include:

  • using facial recognition systems (otherwise known as “remote biometric identification systems”) in publicly accessible spaces for law enforcement purposes, although there will be some exceptions;
  • using AI to influence and overcome the free will of individuals – or “cognitive behavioural manipulation”;
  • using AI in workplace or education settings to infer the emotions that individuals are experiencing;
  • carrying out “social scoring” based on behaviour or characteristics of persons;
  • AI that exploits vulnerabilities in people, such as their age, socio-economic circumstances or disabilities;
  • biometric categorisation based on sensitive characteristics such as political views, sexual orientation or philosophical beliefs; and
  • some types of predictive policing.

Untargeted scraping of facial images from the internet or CCTV footage for the purpose of producing facial recognition systems is also stated to be prohibited. However, while facial recognition systems are generally to be prohibited, there are likely to be certain exceptions for law enforcement use in public spaces, provided there has been judicial authorisation and the use relates to certain specific and very serious crimes. It is also likely to be permissible to use such systems to prevent certain matters, such as an imminent terrorist threat, or to investigate serious crimes.

High Risk AI

The AI Act is likely to regulate certain high-risk uses of AI, namely, uses that give rise to a high risk to safety, health, fundamental rights, the environment, democracy or the rule of law.

It is likely that “high risk” types of use will encompass the use of AI in the context of education, employment and recruitment, critical infrastructure, access to essential public and private services, law enforcement, border control, democratic process and administration of justice.

It appears likely that the AI Act will provide for:

  1. developers of AI systems in this category to carry out assessments to ensure that their systems meet key requirements for trustworthy AI, which will include ensuring that quality data is employed and the system is properly documented, as well as safeguarding traceability, transparency, human oversight, accuracy, cybersecurity and robustness;
  2. an obligation to undertake a “fundamental rights assessment” in certain contexts – likely to be an assessment of the circumstances in which the AI system could be used, for how long and how often, the categories of people that may be affected, what risks of harm it gives rise to, how it will be overseen by humans, and what steps will be taken if the identified harms materialize;
  3. a right to complain about an AI system; and
  4. a right to receive an explanation about a decision taken by an AI system that affects a person’s rights.

General-Purpose AI (‘Foundation Models’)

The AI Act is likely to regulate “general-purpose AI models”: models trained on large amounts of data, able to undertake a wide variety of tasks, and capable of being integrated into a variety of downstream AI applications. Generative AI would fall within this category. Depending on the risk level of the application, this type of AI will be subject to two tiers of regulation, distinct from the general controls that otherwise apply under the AI Act.

At the first tier, providers of general-purpose AI will be required to:

  1. maintain technical documentation and provide information about their model so that downstream users can incorporate it into their systems and comply with their own AI Act obligations;
  2. have in place policies to ensure that EU copyright rules are complied with and, in particular, to ensure that where copyright holders have opted out of allowing their data to be available for text and data mining, this is respected; and
  3. prepare and publish statements about the data used to train their general-purpose AI models.

Where general-purpose AI has a potential “high impact” – that is to say, it poses a “systemic risk” – there will be further, second-tier obligations upon providers, including:

  1. to undertake model evaluation, including adversarial testing, to assess and mitigate possible systemic risk at the level of the EU;
  2. to monitor and report to the European Commission on serious incidents;
  3. to ensure adequate cybersecurity for the model and its physical infrastructure; and
  4. to report on and assess the amount of energy consumed by the model.

The AI Act will contain a provision that will introduce a presumption that a system poses such a “systemic risk”, which will be based on the cumulative computing power that was used to train the system (which it appears will be set at a level to capture only the very largest models).

Free and Open-Source AI Systems

As matters stand, these will have a limited exemption from the scope of the AI Act.

However, in so far as they fall within the prohibited or high-risk category, or pose a risk of manipulation or are a general-purpose AI model with systemic risks, they will not be able to rely on the exemption.

Providers of such open-source AI systems will also have to comply with obligations in respect of copyright and transparency.

Enforcement and Penalties

While the European Commission will establish an “AI Office” to oversee general purpose AI in the EU, enforcement of the AI Act will likely be the responsibility of designated authorities at state level.

The AI Act remains likely to impose serious penalties for non-compliance with its requirements, with maximum fines now being raised to 7% of annual group turnover or €35 million (about USD 38.258 million), whichever is higher.

Timings Towards Full Implementation

As stated above, considerable further work needs to be undertaken to finalize and publish the actual agreed text of the draft legislation.

The view appears to be that it might take around 5-6 months before the final text is published in the Official Journal of the EU whereupon it will become binding legislation 20 days after its publication. Hence, it appears unlikely to come into force before the summer of 2024.

Even then, it is not likely to be the case that the entire EU AI Act will immediately come into force.

Rather, it looks likely that the prohibitions on certain banned categories of AI will come into force within six months of the AI Act becoming law, so perhaps towards the end of 2024.

Twelve months after the Act becomes law, perhaps by summer 2025, the provisions concerning high-impact general-purpose AI systems with systemic risk, and the provisions on the obligations of high-risk AI, will come into force. At the same time, we can likely expect the provisions on governance and conformity bodies to come into force.

Thereafter, all the other provisions will come into force two years after the AI Act becomes law, and so likely in or around summer 2026.

What Steps Should Businesses Take Now?

While the precise text of the EU AI Act is yet to be finalized, it is now apparent how the Act is likely to operate once it is in force.

Organizations that develop or use AI systems should begin to consider whether the Act is likely to extend to their business operations and, if so, whether they can continue to operate as they are or whether steps will need to be taken to ensure compliance with the likely requirements. Issues relating to privacy, training data, and copyright will all likely come into play.

In certain organizations, a sensible step may be to establish an internal role of “AI regulatory officer” to lead the process of ensuring that the whole business operates in a manner which ensures compliance with what will likely be contained in the AI Act in due course. An AI regulatory officer can also help ensure that AI systems are used safely within the organization and that appropriate safeguards are included when they are sold to third parties.

We will continue to monitor developments and report on these in due course.



