Exploring the legislative landscape for AI – the EU's AI Act
The commercial potential of artificial intelligence ("AI") is immense, and the impact it will have on how we live, work and interact is still unknown (other than that it will be significant). It will also have a huge economic impact: research by the UK Government estimates that the AI market will account for between £200 billion and £600 billion of spending by UK businesses by 2040.
As the reliability of AI systems improves and use cases grow and mature, legislative bodies around the world are seeking to establish regulatory frameworks in which AI technology can thrive and individuals are protected from potential harms. The rationale is that regulation will drive trust in AI, which in turn will facilitate its development and adoption.
In this insight, we consider the current state of the EU's proposed Regulation on Artificial Intelligence (the "AI Act").
Background and status update
On 21 April 2021, the European Commission (the "Commission") introduced the draft AI Act, with the goal of ensuring that AI systems placed on the market in the EU are safe and respect existing laws on fundamental rights and values. Following various responses to this proposal, the Council of the EU adopted a compromise version of the text (its "general approach") on 6 December 2022, and on 14 June 2023 the European Parliament (the "Parliament") voted to adopt its own negotiating position on the AI Act.
The AI Act is now subject to negotiations between the Parliament, the Council and the Commission (the so-called "trilogue" process), and a significant amount of effort and compromise will be needed to close out major differences.
Effective date
Currently unknown: depending on progress through the EU institutions, the final AI Act may be adopted at the end of 2023 or in early 2024 (ahead of the next Parliament elections). It will then likely have an 18-24 month lead-in period before it is in force (although some parties have been lobbying for up to 36 months). Realistically, it is unlikely to apply until mid-to-late 2025 or early 2026.
Diverging approach – EU vs UK
On 29 March 2023, the Department for Science, Innovation and Technology ("DSIT") published a white paper, 'A pro-innovation approach to AI regulation' (the "White Paper"), setting out the UK Government's principles-based framework for the regulation of AI. This light-touch approach forms part of the UK Government's National AI Strategy, the UK's vision for unlocking the potential of AI with a "pro-innovation regulatory environment". It relies on compliance with the following key principles:
- safety, security and robustness
- transparency and explainability
- fairness
- accountability and governance
- contestability and redress
However, whilst the White Paper does not currently envisage that the Government will regulate AI through an Act of Parliament (DSIT having voiced its preference for a decentralised framework built on guiding principles), more recent announcements by the Prime Minister, Rishi Sunak, suggest that this strategy may be re-assessed.
Irrespective of the path to be taken by the UK, there are separate regimes on the horizon for the UK, EU and the rest of the world. Therefore, it is critical that businesses developing and offering AI products and systems, and those implementing AI into their operations, are aware of how the proposed rules will apply. Understanding the diverging regimes will be important in developing commercial approaches and preparing for compliance.
Summary overview
The AI Act seeks to categorise and regulate AI systems based on their potential risk to humans. It would prohibit outright AI systems that pose an "unacceptable risk", while stringent regulatory requirements would be imposed on 'high-risk' AI systems. 'Low-risk' AI systems, conversely, would only be subject to minimal transparency obligations.
Interestingly, the Parliament's proposals also call for the regulation of 'foundation models' (defined by MEPs as AI models that are trained on vast data sets, designed for broad applicability and adaptable to a multitude of specific tasks). It proposes that additional obligations will apply to providers of 'generative foundation models', which are used in AI systems primarily to generate content (such as ChatGPT). Providers and users of these AI systems will need to indicate clearly when content has been created by an AI system rather than a human, and will be required to disclose a detailed summary of any training data used that may be protected by copyright.
Deep dive
What is its primary purpose?
The stated aims of the AI Act are to prohibit certain practices, impose requirements and obligations on high-risk AI systems, harmonise transparency rules, and provide rules on market monitoring and surveillance.
Who does it apply to?
The AI Act is set to impact a broad spectrum of stakeholders engaged in the creation, implementation and application of AI systems within the EU. The primary obligations fall on organisations placing AI systems on the EU market or putting them into service in the EU. However, manufacturers, importers, distributors, users and other third parties will also be subject to compliance obligations.
What does it apply to?
Initially, the Commission characterised AI through a list of specific techniques. However, in response to concerns that a broad definition could inadvertently capture traditional computer processes or software, the Council and Parliament have chosen to tighten the definition. The focus is now primarily on machine-learning capabilities, aligning with the definitions used by the Organisation for Economic Co-operation and Development (OECD) and the U.S. National Institute of Standards and Technology (NIST): "a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments".
Territorial scope
The AI Act has a wide "extraterritorial" scope, with the focus on whether the impact of the AI system occurs within the EU, regardless of the location of the provider or user. It applies to organisations that place AI systems on the market or put them into service in the EU (irrespective of where those organisations are established), to users of AI systems located in the EU, and to providers and users outside the EU where the output of the AI system is used in the EU.
Key provisions / requirements
The obligations under the AI Act concentrate on the following key themes:
- transparency: providing transparency information when the AI interacts with users;
- examination and testing: putting measures in place to examine, test and validate their AI systems at all stages of development; and
- recording compliance: making available to competent authorities specific compliance documentation to demonstrate adherence to the AI Act.
The AI Act categorises AI systems into different levels of risk and regulates them accordingly. The most harmful AI systems will be banned, while a defined list of "high-risk" AI systems will need to comply with strict requirements. Transparency requirements will apply to AI systems with limited risks, while minimal-risk AI (such as AI-enabled video games or spam filters) will not be subject to any obligations.
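For readers who find a structured restatement useful, the short sketch below (in Python, purely illustrative) models the draft Act's four risk tiers and their broad regulatory treatment. The tier labels and descriptions paraphrase the draft text and are not statutory language:

```python
from enum import Enum

class RiskTier(Enum):
    """The draft AI Act's four-tier risk classification (paraphrased)."""
    UNACCEPTABLE = "unacceptable risk"  # e.g. social scoring
    HIGH = "high risk"                  # e.g. systems listed in Annex III
    LIMITED = "limited risk"            # e.g. chatbots, deepfakes
    MINIMAL = "minimal risk"            # e.g. spam filters, video games

# Broad regulatory consequence of each tier (our paraphrase of the drafts).
TREATMENT = {
    RiskTier.UNACCEPTABLE: "Prohibited: cannot be placed on the EU market.",
    RiskTier.HIGH: "Permitted, subject to strict requirements and conformity assessment.",
    RiskTier.LIMITED: "Permitted, subject to transparency obligations.",
    RiskTier.MINIMAL: "Permitted, with no additional obligations.",
}

for tier in RiskTier:
    print(f"{tier.value}: {TREATMENT[tier]}")
```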
Prohibited AI systems
The AI Act provides a list of certain particularly harmful AI systems that are prohibited and cannot be placed on the market or put into service. This category includes AI systems that use "subliminal techniques beyond a person's consciousness", exploit the vulnerabilities of a specific group of persons, or are used for social scoring (classifying people based on their social behaviour, socio-economic status or personal characteristics).
MEPs also amended the text to ban other intrusive and discriminatory uses, such as the use of real-time biometric identification systems (e.g. facial recognition) by law enforcement in public spaces, other than where strictly necessary within a small number of limited exceptions.
The Parliament's negotiating position on prohibiting real-time biometric identification in public spaces is likely to be a point of contention in the trilogue process, as certain Member States in the Council (and the original Commission text) wish to allow it.
High-risk AI systems
High-risk AI systems are those that pose a risk of harm to health or safety, or an adverse impact on fundamental rights; they are listed in Annex III of the Act. Examples of high-risk AI include systems already subject to harmonised EU safety requirements or conformity assessments (such as in transportation, vehicles or machinery), and specified applications including recruitment systems, biometric ID systems, and systems used for critical infrastructure, performance appraisals and credit checks.
These systems are subject to a broader and stricter range of requirements, reflecting the more serious consequences that may result from their use. Key obligations include requirements to:
- ensure data sets (used to train, validate and test AI systems) are reliable; in particular, they need to be "relevant, representative, free of errors and complete";
- create and maintain a risk management system, maintain technical documentation and other records, and log events;
- implement human oversight, so that a human can understand the AI system, monitor it for unintended or unexpected performance, and step in to override outputs or halt the system if needed; and
- put in place additional transparency disclosures to users, including on the intended purpose and on the levels of accuracy, robustness and cybersecurity against which the AI system has been tested.
The EU Parliament has proposed that the scope of high-risk systems be limited to only where such AI poses a “significant risk” to an individual’s health, safety, or fundamental rights. Other changes include a requirement to conduct “Fundamental Rights Impact Assessments” for high-risk systems.
Low-risk AI systems and minimal-risk AI
The AI Act introduces transparency requirements for certain low-risk AI systems. These apply to AI systems that engage with humans, including chatbots, emotion recognition systems, biometric categorisation systems, and systems that generate or manipulate content to mimic existing people, objects, or locations, also known as 'deepfakes'. AI systems that pose minimal risk are essentially unregulated.
“General Purpose AI” and Generative AI
There is significant debate as to whether "general purpose" AI ("GPAI") should be captured within the high risk category, along with the degree to which foundation models and generative AI ought to be regulated.
Under the Parliament's position, GPAI systems would not automatically fall within the high-risk category, and would instead be subject to separate, specific independent testing and transparency requirements.
The Parliament also suggests a framework for foundation models akin to that for high-risk AI. These models are defined as ones "trained on wide-ranging data at scale, designed for diverse output, and adaptable to an array of distinct tasks". For generative AI, more stringent requirements are proposed: providers would need to notify users when content is AI-generated, implement sufficient safeguards in training and design, ensure the lawfulness of generated content, and make public a "sufficiently detailed summary" of copyrighted data used in model training.
Regulatory supervision
Member States will be required to designate or create national regulatory bodies responsible for enforcing the AI Act, and the Commission will co-ordinate EU-wide issues, advised by a new European Artificial Intelligence Board. This is very similar to the supervision and enforcement model under the General Data Protection Regulation ("GDPR").
There is ongoing debate about whether more supervision and enforcement should be carried out centrally by the Commission and a newly formed European Artificial Intelligence Office, so this is an area where movement is possible as the AI Act is finalised.
Enforcement and fines
The most serious breaches, including in relation to data issues with high-risk AI, will be subject to fines of up to the higher of 6% of worldwide turnover or €30 million. Other substantive provisions will be subject to fines of up to 4% of worldwide turnover or €20 million, with a lower category of fines of up to 2% of worldwide turnover or €10 million for administrative failures in dealing with enforcement bodies.
However, the Parliament's proposal increases the maximum penalty to the higher of €40 million or 7% of annual worldwide turnover.
Other regulatory developments to be aware of
A number of other regulatory developments in the UK, the EU and other jurisdictions will also affect the development, use and rollout of AI systems. Impacted businesses must therefore understand how these may affect their current and future AI endeavours. These developments include (but are not limited to):
- The UK's White Paper setting out the UK Government's principles based framework for the regulation of AI.
- The EU's Data Governance Act – which will provide more opportunities and structure in regard to data sharing – as well as relevant parts of the Digital Services Act, Data Act and Cyber Resilience Act.
- The EU's Artificial Intelligence Liability Directive – an EU proposal that will provide uniformity in rules for non-contractual civil liability for damage caused with AI involvement.
- The UK's Data Protection and Digital Information Bill – set to change (among other things) the rules on how personal data is processed by automated systems. Our data protection bulletin (published regularly on our hub here) is tracking the progress of this new legislation.
- Canada's proposed Artificial Intelligence and Data Act – which would establish common requirements for the design, development, and use of artificial intelligence systems, including measures to mitigate risks of harm and biased output. It would also prohibit specific use of AI systems that may result in serious harm to individuals or their interests.
- US State Bills and Federal AI laws – there are a number of laws and regulatory orders on the cards in the US, including the Federal Trade Commission's expansion of its rulemaking into AI enforcement. To date there have been over 40 bills introduced across US States that would regulate in some way the use or deployment of AI. In June 2023, U.S. senators introduced two separate bipartisan artificial intelligence bills amid growing interest in addressing issues surrounding AI technology.
AI providers must also consider non-legal instruments that may influence AI systems, such as AI assurance frameworks and regulatory guidance.
What you should be doing now in preparation
Whilst the regulatory frameworks in the UK and EU, and around the world, are yet to be finalised, there is sufficient information, and there are sufficient common themes, for organisations to take steps now to prepare for the requirements that lie ahead. In fact, making headway on these now will almost certainly ease the compliance burden down the line. Actions to consider include:
- develop an AI asset register – knowing where and how AI is used in your organisation will enable you to assess risks and implement policies and compliance requirements (an illustrative sketch of what such a register might record follows this list);
- implement policies that govern how AI should be developed or implemented. These should address how to manage risks such as bias, dataset integrity, and transparency;
- carry out and document risk assessments, in particular where the AI could present higher risks. Start building these into standard operating procedures (in the same way data protection impact assessments are implemented);
- establish an AI governance body – if AI forms, or will form, a key part of your product portfolio or internal operations, establish a governance structure to guide the business and act as gatekeeper; and
- track legislative progress so you keep up to date with developments. As AI becomes deeply embedded in your organisation, you will need to plan well ahead.
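By way of illustration of the asset register point above, the sketch below (in Python, purely hypothetical) shows the kind of information such a register might capture for each AI system. The field names are our own suggestions and are not prescribed by the AI Act or the White Paper:

```python
from dataclasses import dataclass, field

@dataclass
class AIAssetRecord:
    """One illustrative entry in an AI asset register (field names are our own)."""
    system_name: str
    business_owner: str
    intended_purpose: str
    provider: str                       # built in-house or procured?
    processes_personal_data: bool       # flags GDPR overlap
    provisional_risk_tier: str          # e.g. "high", "limited", "minimal"
    training_data_sources: list[str] = field(default_factory=list)
    human_oversight_measures: list[str] = field(default_factory=list)

# Example: a recruitment screening tool, an application the draft Act
# lists among its high-risk categories.
example = AIAssetRecord(
    system_name="CV screening assistant",
    business_owner="HR Director",
    intended_purpose="Shortlisting job applicants",
    provider="Third-party vendor",
    processes_personal_data=True,
    provisional_risk_tier="high",
    training_data_sources=["historic hiring decisions"],
    human_oversight_measures=["recruiter reviews every shortlist"],
)
print(example.system_name, "->", example.provisional_risk_tier)
```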
Conclusion and our thoughts
AI is changing the way businesses operate, enabling them to save time and money through automation and to identify trends and patterns that have previously gone unnoticed. Businesses are already seeing these benefits, but they now need to keep in mind the likely regulatory landscape when developing, procuring and implementing AI (and not lose sight of the current laws, such as the GDPR, that already apply).
We continue to monitor the ongoing negotiations on the AI Act and the fast-evolving regulatory landscape globally, and are well placed to advise clients on their compliance efforts. Although the proposed frameworks are still subject to further consideration, it is likely that the final obligations will closely follow, in some form, those imposed in the drafts discussed above.