The Neural Network – July 2024

Welcome to the first edition of the Neural Network, Stephenson Harwood's monthly round-up of developments in AI, covering three key areas relevant to professionals working in the AI space:

  1. New regulation and public body updates;
  2. Enforcement and litigation; and
  3. Updates and case studies on the use of AI technology.

In this edition, we look at key AI developments from June and July 2024. We examine the final text of the EU AI Act, recently published and coming into force on 1 August 2024; the UK AI Bill expected in this week's King's Speech (drawing on the AI aspects of the Labour Party manifesto for clues as to what to expect); the suspension of Meta's proposed AI training plans across several jurisdictions; and Colorado's AI Act.

In AI enforcement and civil litigation news, Clearview AI settled its US privacy class action, the European Commission prepared to launch an antitrust investigation into the partnership between OpenAI and Microsoft, two AI music start-ups were sued by top record labels for copyright infringement in the US, and the Court of Appeal considered whether an invention incorporating an Artificial Neural Network was eligible for patent protection.

In technology developments, Apple debuted its new AI functionality but postponed its EU rollout due to regulatory uncertainty under the EU's Digital Markets Act, and OpenAI's former chief scientist launched a new AI company focused on the development of safe AI.

AI developments from prior months can be found in our data protection newsletter.

Regulation and government updates

EU AI Act to come into force on 1 August 2024

On Friday 12 July, the final text of the EU Artificial Intelligence Act (the "EU AI Act") was published in the Official Journal of the European Union, meaning the EU AI Act will come into force on 1 August 2024.

The majority of the EU AI Act's provisions will apply after a 24-month transitional period, from 2 August 2026 (general applicability). However, implementation will follow a phased approach, with several deadlines falling before that date and some extending beyond it.

Below are key deadlines:

  • 2 February 2025: The list of prohibited AI uses will apply. Prohibited uses include social scoring and certain biometric categorisation, biometric identification and facial recognition.
  • 2 August 2025: Requirements for general-purpose AI models will start to apply.
  • 2 August 2026: General applicability of the EU AI Act's provisions, including requirements on high-risk AI systems in Annex III.
  • 2 August 2027: Requirements for a subset of high-risk AI systems in Annex I will apply (36 months after the Act comes into force).

For more information on the EU AI Act, please see our AI Act Quick Guide.

UK AI Bill in King's Speech

A UK AI Bill is set to be announced this Wednesday during the King's Speech, as the new UK Prime Minister follows through on the Labour Party's manifesto published on 13 June.

Overall, the Labour manifesto suggested that Labour will seek to balance regulation and innovation, encouraging the development of AI in the UK while introducing targeted regulation.

The manifesto foreshadowed regulation for companies developing "the most powerful AI models", although this was not expanded on further. Labour also recognised that regulators are currently ill-equipped to deal with the rapid developments in AI: the current patchwork approach, under which each agency has some oversight of AI, creates a lack of cohesion. To address this, the Party proposed the creation of a "Regulatory Innovation Office" to help regulators across all sectors and industries update regulation and speed up approvals.

More details on the UK AI Bill to follow.

Meta's plans to use personal data of users for AI training purposes suspended in Brazil, EU and UK

On Tuesday 2 July, Brazil's national data protection agency ("ANPD") announced that it had taken the preventative measure of suspending Meta's updated privacy policy, which covered the use of user data to train its generative AI system. The ANPD also found that the potential use of personal data from children's and teenagers' posts to train Meta's AI systems could breach the country's data protection law. Meta was given five days to comply with the suspension or face a daily fine of R$50,000 (£6,935); it complied.

This follows the Irish Data Protection Commission's ("DPC") announcement, on Friday 14 June 2024, that Meta had decided to stop its plans to use users' posts on Instagram and Facebook across the EU and EEA to train its AI system. The decision came after the DPC raised concerns about these plans, following 11 complaints from noyb (Max Schrems' privacy rights organisation) and other European data protection authorities. Please read our article here for further insight on this topic. Similarly, the UK's Information Commissioner's Office ("ICO") asked Meta to pause and review its plans after concerns were raised by users of its services in the UK, to which Meta also agreed.

DSIT announces expansion under new Government

Shortly after the Labour Party had formed a new UK Government, the Department for Science, Innovation and Technology ("DSIT"), the department with overall responsibility for AI regulation and policy, announced that it would be expanding its scope and size.

DSIT will bring in experts from other organisations such as the Government Digital Service, the Central Digital and Data Office and the Incubator for Artificial Intelligence to consolidate the "digital transformation" of public services under one Department.

With this expansion, DSIT aims to accelerate the digital changes needed to improve the public's experience of interacting with the Government, for instance by helping civil servants to use AI in their work. DSIT also aims to ensure that the Government keeps pace with recent technological developments, so that it can develop appropriate infrastructure and regulation.

Colorado AI Act passes

On 17 May 2024, Governor Jared Polis signed the Colorado AI Act into law (the "Act"), which establishes a cross-sectoral framework for AI governance. Most of the Act's provisions will take effect on 1 February 2026, with ongoing legislative reviews and potential revisions anticipated before that date. The Act is among the first of many proposed US state laws targeting the regulation of AI.

The Act focuses on automated decision-making systems, particularly those deemed high-risk. A system falls under this category if it significantly influences consequential decisions. A decision is consequential if it has a "material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of, education enrolment or opportunity, employment or an employment opportunity, a financial or lending service, an essential government service, healthcare services, housing, insurance, or a legal service". Certain AI systems are exempt from this high-risk classification, ranging from calculators and spreadsheets to antifraud technologies (unless they use facial recognition).

A central focus of the Act is preventing algorithmic discrimination: both developers and deployers of AI systems must exercise reasonable care to avoid biased outcomes. This includes the developer providing documentation on training data, intended uses, and potential risks of algorithmic discrimination to the deployer.

From February 2026, the Colorado Attorney General will have exclusive enforcement authority. Non-compliance can result in substantial penalties, with violations treated as consumer protection infractions. Each violation could incur fines of up to $20,000, and if a violation impacts multiple consumers or transactions, each is counted as a separate violation. A single non-compliant system affecting 100 consumers could therefore, in principle, attract fines of up to $2 million.

Enforcement and civil litigation

Clearview AI settles US privacy class action case

Clearview AI ("Clearview"), a facial recognition company, has agreed to settle a class action lawsuit that accused it of violating the privacy rights of millions of Americans. The lawsuit, filed by Illinois residents in January 2020, claimed Clearview violated the Illinois Biometric Information Privacy Act ("BIPA") by scraping billions of facial images from the internet and selling the information without consent, in breach of many websites' terms of service. Clearview denied any wrongdoing.

The proposed settlement is estimated at nearly $52 million, representing as much as 23% of Clearview's value on the basis of recent valuations.

A "unique structure" was created for the settlement. If Clearview goes public or is liquidated, the settlement fund would be based on a percentage of the company's value. Alternatively, a court-appointed settlement master could require Clearview to pay 17% of its revenue since the settlement approval date or sell the settlement rights to a third party.

This creative solution allows class members to share in any future profits from Clearview, thereby regaining some control over their biometric data. Please see here to read our article on the disputed enforcement action against Clearview in the UK.

European Commission set to launch antitrust investigation into OpenAI and Microsoft partnership

The European Commission (the "Commission") has announced that it is preparing for an antitrust investigation into Microsoft’s $13 billion investment in OpenAI. The concern is that this alliance may be harming competition in the AI market. This follows the Commission's decision not to proceed with a merger review of the powerful alliance between Microsoft and OpenAI (together the "Companies") due to insufficient evidence that Microsoft controls OpenAI.

Margrethe Vestager, the EU’s competition chief, highlighted concerns about “certain exclusivity clauses” in the Microsoft-OpenAI agreement that could negatively impact competitors. Vestager also revealed that the Commission had previously sent inquiries to Microsoft and other tech companies in March to assess whether AI market concentration might prevent new entrants. This probe follows similar scrutiny from US and UK regulators, who are also examining the alliance.

The investigation into the relationship between the Companies began after OpenAI's board dismissed its CEO, Sam Altman, in November 2023, only to rehire him days later. This incident highlighted the close ties between Microsoft and OpenAI, as Altman briefly joined Microsoft as head of a new AI research unit.

Antitrust investigations typically last several years and focus on practices that might undermine rivals. Companies found in violation risk significant fines and may be required to alter their business practices. The Commission is also examining "acqui-hires", where a company acquires another primarily for its talent, as seen in Microsoft's recent hiring from AI start-up Inflection.

AI music startups sued by top record labels for copyright infringement

Major record labels, including Sony Music Entertainment, Universal Music Group Recordings and Warner Records, are suing two AI music start-ups for copyright infringement in the US. These start-ups, Suno and Udio, both produce AI models that create songs based on prompts; the lawsuits allege that copyrighted music was used to train these models.

The complaints assert that these companies' software can generate vocals that are nearly indistinguishable from those of renowned artists whose music is copyrighted, such as Michael Jackson and ABBA. The record labels are seeking damages of up to $150,000 per song allegedly used, as well as an injunction to prevent any future infringement.

AI music generation is a hot topic in the creative sector, with YouTube recently entering into talks with major record labels to allow YouTube to train its new generative AI tool on their music.

These lawsuits highlight the ongoing tensions between AI innovators and artists working in the creative field, with many artists complaining that their protected work has been scraped by AI companies without permission. For organisations developing AI models, these claims underline how vital it is to have visibility over the data being used to train models, so as to confirm that it is not subject to IP rights and restrictions.

Will Emotional Perception impact the patentability of AI in the UK?

On 14 and 15 May, the Court of Appeal heard the case of Emotional Perception AI Ltd v Comptroller-General of Patents.

The appeal arises from a decision delivered by the High Court last year (Emotional Perception AI Ltd v Comptroller-General of Patents [2023] EWHC 2948 (Ch)), in which the High Court considered, for the first time, whether an invention incorporating an aspect of AI, namely an Artificial Neural Network ("ANN"), was eligible for patent protection.

The proceedings initially came before the High Court in the form of an appeal from an earlier decision of the UK Intellectual Property Office ("UKIPO"). The UKIPO had refused to grant a patent for an invention as claimed, on the basis that the invention was 'a computer program as such' (and made no technical contribution) and was therefore excluded from patentability in accordance with s1(2)(c) Patents Act 1977.

After detailed consideration of the authorities on excluded subject matter, and contrary to the position taken by the UKIPO, the High Court concluded that the statutory exclusion to patentability in s1(2)(c) Patents Act 1977 ought not to be invoked, either because the invention as claimed was not 'a computer program as such' or, if it was, because the invention as claimed demonstrated a technical effect.

Following the High Court judgment, the UKIPO updated its guidelines for examining patent applications relating to inventions involving ANNs. The updated guidelines confirmed that the computer program exclusion should not be invoked for inventions which involve ANNs implemented as physical hardware or emulated using software.

In practical terms, this means that pending the outcome of proceedings before the Court of Appeal, the UK approach to certain AI inventions now differs from the approach adopted by the European Patent Office.

It remains to be seen whether the Court of Appeal will overturn the judgment of the High Court or confirm it. If the Court of Appeal is minded to adopt the more permissive approach to the statutory exclusions to patentability in s1(2)(c), it will mark the beginning of a new approach to the patentability of ANN inventions in the UK and perhaps signal a fresh approach to AI inventions more generally.

Technology developments

Apple debuts new AI system but postpones EU rollout

On 10 June, Apple debuted "Apple Intelligence", a new personalised AI system. This new system includes a wide range of generative AI features, such as tools to help proofread text written by users and rewrite it to change the tone, summarise emails and draft responses to emails. Apple also announced that it would integrate ChatGPT's technology into a new version of Siri.

Apple's AI rollout has sparked data privacy concerns, particularly with regard to the safety of users' personal data that will be sent to ChatGPT.

In response to these concerns, Apple highlighted that it has approached AI with privacy at the forefront. For instance, it introduced Private Cloud Compute, which will encrypt user data when an Apple device connects to one of Apple's AI servers and will delete that data once the task is finished. Additionally, users will have to give permission before any data is shared with OpenAI.

Later in June, Apple announced that this new function would not be rolled out to iPhone users in the EU this year, due to the risks of "regulatory uncertainties" created by the EU's Digital Markets Act. Apple claimed that the interoperability requirements of this legislation would force them "to compromise the integrity of [their] products in ways that risk user privacy and data security". The company is currently working with the European Commission to find a solution to these concerns.

Former OpenAI chief scientist launches AI company

Ilya Sutskever, co-founder and former chief scientist of OpenAI, has launched a new company, Safe Superintelligence Inc ("SSI Inc"), dedicated to the development of a safe AI system. In a statement posted to X, SSI Inc said: "SSI is our mission, our name, and our entire product roadmap, because it is our sole focus."

Sutskever left OpenAI in May 2024, several months after his failed attempt to remove CEO Sam Altman. Several other prominent employees left the company around the same time, reportedly in connection with the failed ousting of Altman as well as diverging views on AI safety.

Our recent publications

If you enjoyed these articles, you might also be interested in our recent publications on AI and the outsourcing industry:

Current and future trends in, and implications for, AI in outsourcing

AI and outsourcing: What's the future for relationships and contracts?