The EU AI Act: What multinational organisations should know

11 September 2024
The European Union’s AI Act came into force on 1 August 2024 and is a milestone in artificial intelligence regulation.

As the first legislation of its kind, the AI Act aims to balance promoting innovation with safeguarding public safety, transparency and ethical standards.

For multinational organisations with EU activities, understanding and complying with the AI Act is crucial for lowering financial and reputational risks. While the act is now in force, many of its provisions will take effect in stages over the next three years.

This article offers an overview of the AI Act, highlighting key components for multinationals.

Background and AI risk categories

The EU AI Act regulates AI systems, which the European Parliament defines as systems “capable of adapting their behaviour to a certain degree by analysing the effects of previous actions and working autonomously.”

The act applies to various AI applications and industries but does provide exemptions, such as for AI systems used for military, defence or national security purposes, or those developed and put into service for the sole purpose of scientific research and development.

A critical feature of the legislation is its classification of AI systems into different risk categories, as follows:

  • Unacceptable risk. The AI Act bans AI systems in this category, for example those used for social scoring by governments or in toys that use voice assistance to encourage dangerous behaviour.
  • High risk. High-risk AI systems include those used in the following areas. (High-risk obligations are discussed in the next section.)
    • critical infrastructures, such as transportation
    • educational or vocational training, such as exam scoring
    • the employment and management of workers, such as resume-sorting software for recruitment
    • essential services, such as credit scoring
    • law enforcement that may interfere with fundamental rights, such as evidence evaluation
    • migration, asylum and border management, such as visa application examination
    • justice and democratic-process administration, such as searching for court rulings
  • Limited risk. These AI systems are subject to lighter transparency obligations relative to high-risk systems. Under the act, providers and deployers must ensure that end users are aware they are interacting with AI, for example that they are interacting with a chatbot as opposed to an actual person, or watching a deepfake as opposed to an unaltered video recorded from an actual event. In addition, AI-generated text “published with the purpose to inform the public on matters of public interest” must be labelled as AI-generated.
  • Minimal risk. These AI systems include AI-enabled video games or spam filters and are unregulated under the act.

High-risk systems: Provider and user obligations

Most of the language in the act addresses high-risk systems, and most obligations fall on system providers, also known as developers. Providers are those that intend to place high-risk systems on the EU market or put them into service in the EU; they can be based in the EU or in non-EU countries (referred to as third countries). Providers also include those based in third countries whose AI systems’ outputs are used in the EU.

Under the act, providers of high-risk systems must implement ongoing risk management protocols, document their compliance, give authorities the information needed to assess compliance, provide instructions for use that enable user compliance and take other measures to ensure conformity with the act.

Providers must also ensure that their AI systems mark outputs in a machine-readable format so that the outputs are detectable as AI generated or manipulated.

Users are persons who deploy an AI system in a professional capacity (as distinct from end users or consumers). Users include persons in the EU as well as in third countries when an AI system’s output is used in the EU.

Users of high-risk AI systems have obligations under the act, though fewer than providers. Most significantly, the act indicates that users of high-risk systems must monitor the system’s operation based on the provider’s instructions for use. If a user has reason to believe that using the AI system in accordance with provider instructions may result in the system presenting a risk as defined in the act, the user must inform the provider and stop using the system.

Users of AI systems that generate or manipulate images and other content that constitutes deepfakes must visibly disclose that the content has been AI generated or manipulated.

General purpose AI (GPAI) and transparency

The new legislation distinguishes general-purpose AI models (including large generative AI models such as ChatGPT) from other models. A GPAI model can be trained using large amounts of data, can perform a wide range of tasks and can be integrated with other systems or applications. A GPAI system is based on a general-purpose AI model and can be used as, or integrated into, high-risk AI systems.

To promote transparency, GPAI model providers must disclose certain information to downstream providers. They must also have policies to comply with copyright laws when training the models, among other obligations.

To protect against systemic risks, the act also sets a threshold for the cumulative amount of computing power used to train a model (currently 10²⁵ floating-point operations). The provider of a GPAI model that meets this threshold must notify the European Commission. Providers of GPAI models with systemic risk must also perform model evaluations, assess and mitigate risks, report serious incidents and implement adequate cybersecurity controls.

Implementation timeline

The EU AI Act will be fully applicable 24 months after its entry into force on 1 August 2024, with some exceptions, and its provisions will roll out in stages. Here are some highlights:

  • Ban on unacceptable-risk AI systems: Effective six months after entry into force (2 February 2025).
  • Codes of practice: Ready nine months after entry into force (by 2 May 2025).
  • Transparency requirements for GPAI: Effective 12 months after entry into force (2 August 2025).
  • High-risk AI systems under Annex III, such as systems used in biometrics, education and employment: Effective 24 months after entry into force (2 August 2026).
  • High-risk AI systems under Annex I, related to areas such as machinery, toys and gas-burning appliances: Effective 36 months after entry into force (2 August 2027).

Penalties for non-compliance

Penalties for non-compliance will be set by individual EU member states, but the act provides thresholds that member states must take into account. These include fines of up to €35 million or 7 percent of a company’s total worldwide annual turnover for the preceding financial year (whichever is higher) for engaging in prohibited practices or failing to comply with the act’s data requirements.

The European Commission can also impose fines on providers of GPAI models of up to €15 million or 3 percent of a company’s total worldwide annual turnover for the preceding financial year, whichever is higher.
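To make the “whichever is higher” logic concrete, the short sketch below computes the applicable ceiling for hypothetical turnover figures. The numbers are illustrative only; actual fines are determined case by case by member-state authorities or the European Commission.

```python
# Illustrative sketch only: the turnover figures below are hypothetical,
# and actual penalties are determined case by case by the relevant authority.

def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, turnover_share: float) -> float:
    """Return the higher of the fixed cap and the share-of-turnover cap."""
    return max(fixed_cap_eur, turnover_eur * turnover_share)

# Prohibited-practice threshold: up to EUR 35 million or 7% of worldwide annual turnover
print(fine_ceiling(2_000_000_000, 35_000_000, 0.07))  # EUR 140 million for a EUR 2 billion company
print(fine_ceiling(100_000_000, 35_000_000, 0.07))    # EUR 35 million for a EUR 100 million company

# GPAI-provider threshold: up to EUR 15 million or 3% of worldwide annual turnover
print(fine_ceiling(2_000_000_000, 15_000_000, 0.03))  # EUR 60 million for a EUR 2 billion company
```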

Cross-border compliance and other considerations for multinational organisations

It will be challenging for multinational companies operating in different EU countries to ensure compliance with the AI Act. Even though the regulation aims for uniformity across the EU, local interpretation and enforcement of the rules may vary. These uncertainties represent risks that any multinational operating in the EU must account for.

In July, for example, Meta announced it would not release its AI model in the EU, citing the unpredictable regulatory environment. In June, Apple decided to delay the release of an AI product in the EU, citing regulatory uncertainties. That said, given the size and importance of the EU market, it’s inevitable that major AI developers and users will eventually implement policies and procedures that allow them to comply with the AI Act and other EU regulations that affect AI providers and users, such as the GDPR.

Multinational organisations must also account for new, emerging and evolving AI regulations in countries outside the EU. These are sure to proliferate given ongoing, rapid advances in AI technology. In 2023, for example, US President Joe Biden issued an executive order establishing safety standards and requiring large AI developers to share safety test results and other information with the government. And this year, China released draft security requirements for AI service providers.

To return to the EU AI Act: Multinational organisations should understand that providers and users under the act represent a large and growing number of businesses. Clearly, AI developers must understand the act’s provisions and fulfil its obligations if their systems or outputs are used in the EU. Just as critically, multinational employers that may not be directly involved in developing AI should understand any obligations they may have as users (i.e. deployers) of AI under the new regulation.

To take what should be a common example, a multinational employer using high-risk AI systems for recruitment in the EU must understand and follow the AI provider’s instructions for use. If the employer has reason to believe that using the AI system in accordance with provider instructions may result in the system presenting a risk as defined in the act, it must notify the provider and stop using the system. It must also ensure human oversight of its recruitment processes, among other obligations.

Given the complexities of the AI Act, and the proliferation of AI regulations in other major economies, most multinationals will want to hire a third-party expert with a large global footprint to provide ongoing information and advice to lower related compliance and reputational risk in all countries of operation.
