The Neural Network – October 2024

In this edition of the Neural Network, we look at key regulatory, legal and market developments in AI from September and October.

In regulatory and government updates, the UK Government has indicated that it will intervene "by the end of this year" to resolve the impasse between AI developers and rightsholders of copyrighted works, California Governor Gavin Newsom has vetoed an AI safety bill despite overwhelming support from the state legislature, and the UN has announced new forums for AI governance work and plans to collaborate more closely with the OECD in the AI governance space.

In AI enforcement and litigation news, US privacy regulators have signalled an enforcement focus on consumer harms caused by AI-enabled devices, the Center for Investigative Reporting is alleging deliberate copyright infringement by Microsoft and OpenAI in the course of their AI model training, and LinkedIn has U-turned on by-default collection of UK users' data for AI training after an intervention by the ICO.

In technology developments and market news, OpenAI has raised $6.6 billion in a new funding round, valuing the company at $157 billion.

More details on each of these developments are set out below.

Regulatory and government updates

UK Government will bring forward initiative to resolve impasse between AI developers and rightsholders "this year"

The UK's Minister for AI and Digital Government, Feryal Clark MP, has signalled that the UK Government will bring forward an initiative to resolve an impasse between rightsholders and AI developers over the use of copyrighted materials for AI training "by the end of this year".

It is not yet known what form the initiative will take, with the Minister (speaking at the recent Times Tech Summit in London) stating that "legislation or amendment to a policy" are both options that remain on the table.

The issue has been contentious for some time. Getty Images has brought a claim in the English High Court against Stability AI, developer of the image-generation platform Stable Diffusion, alleging among other things that Getty's intellectual property rights have been infringed by the use of its images to train the Stable Diffusion model. The case is currently pending trial.

The UK Intellectual Property Office had previously convened a working group in an attempt to draw up a voluntary code of conduct to bridge the gap between protecting IP rights and ensuring availability of large, rich datasets on which to train AI models.

This working group included representatives from both sides of the divide: industry associations including the Alliance for IP, the Publishers Association and the News Media Association; major rightsholders such as the BBC and the Premier League; and AI developers including Stability AI, DeepMind and Microsoft, among others.

However, the working group drew to a close in February this year without reaching agreement, following which the Government signalled that it would intervene.

When it does, it will face a difficult balancing act. Lack of available data for AI training is seen as a key potential impediment to the UK's future international competitiveness in AI, with then-chief scientific advisor to the Government (now himself Minister for Science, Research and Innovation) Lord Vallance warning last year that delays in resolving the issue would put the UK at a disadvantage. Conversely, strong protection for intellectual property is seen as critical to the strength of the UK's creative, publishing and media industries, themselves an important component of the UK economy.

California Governor Gavin Newsom vetoes AI safety bill

Governor of California Gavin Newsom has vetoed a bill that would have imposed more stringent requirements on developers of AI models.

The bill, sent to the Governor after an overwhelming vote in favour by both houses of the California legislature, would have imposed liability for harms caused by AI directly on the tech companies that develop those AI models. It would have also mandated the inclusion of a human-operable shutdown feature – a so-called "kill switch" – available to be used in the event of a model being misused or "going rogue".

Newsom justified his decision to veto the bill by stating that although it was "well-intentioned", it placed too much emphasis on regulating the largest and most powerful AI models at the expense of properly considering "smaller, specialized" models which might prove "equally or even more dangerous", and that the bill risked unduly constraining AI innovation.

Co-author and introducer of the bill, California Senator Scott Wiener, wrote in response to the Governor's veto that "voluntary commitments from industry are not enforceable and rarely work out well for the public" and that the veto was therefore a "setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet."

The bill was heavily lobbied against by several major AI developers and the trade groups representing them, on the basis that it would slow AI development, make it more burdensome, stifle early-stage AI companies, and could even prompt a brain drain from California to rival AI development hubs internationally.

UN adopts framework for AI governance and commits to future OECD collaboration

The United Nations has adopted a new "Global Digital Compact", including a series of new panels and working groups on AI and a requirement on member countries to take "concrete" actions in respect of data governance by 2030.

Coming as part of a broader "Pact for the Future", the new Global Digital Compact is described as the "first comprehensive global framework for digital cooperation and AI governance". Among other measures, it will introduce an "International Scientific Panel" and a "Global Policy Dialogue on AI".

Alongside this, the UN and the Organisation for Economic Co-operation and Development (OECD) have jointly announced "enhanced collaboration" in global AI governance. This will entail "regular science and evidence-based AI risk and opportunity assessments". Each organisation has also committed to "leverage their respective networks, convening platforms and ongoing work" on AI governance, with a stated aim of "help[ing] governments improve the quality and timeliness of their policy response to AI’s opportunities and its risks."

Enforcement and civil litigation

US federal and state privacy regulators identify AI-related consumer harms as a key enforcement priority

Speaking on a panel at a recent IAPP conference, US officials have made clear that prosecuting those who use AI to cause harms to consumers is an enforcement priority for federal and state privacy regulators.

Concerns at federal level

Maricela Segura, regional director of the US Federal Trade Commission's ("FTC") Western Region Los Angeles office, stated that this is a particular concern at the federal level.

Segura stated that “we’re going to address false claims people make about AI”, as well as monitoring privacy concerns with AI and algorithmic discrimination. The FTC is also focusing on data brokers, with Segura commenting: “we’re focused very much on those data brokers who are selling very sensitive pieces of information and looking at whether or not doing so is deceptive or unfair”.

As an example of the FTC’s work in this area, Segura pointed to the Commission’s enforcement action against the anonymous messaging app NGL, which had been accused of, among other things, using misleading marketing language that falsely promoted its AI content-moderation capabilities. NGL settled with the FTC earlier this year, agreeing to pay $5 million and to implement age-gating to prevent under-18s from using the app.

Priorities at state level

Concerns around AI have also been highlighted as a priority in Texas. Speaking on the same panel as Segura, Tyler Bridegan, the director of data protection and security enforcement at the Texas Attorney General’s Office, stated that they would continue looking into cases entailing “tangible harms with AI”.

On 18 September, Texas announced a settlement with Dallas-based AI healthcare technology company Pieces Technologies, after the company was accused of making false and misleading statements as to the safety of its products, which were deployed at various hospitals in the state.

Like the FTC, Texas is also focusing on data brokers. It is currently preparing a data broker registry alongside other US states, following which the regulator will, in Bridegan's words, look at “what steps need to happen with data brokers”.

Also speaking on the panel was Jill Szewczyk, assistant attorney general for data privacy and cyber security at the Colorado Attorney General’s office. She highlighted that anti-discrimination in AI was a priority for the state, noting in particular the law on deepfakes passed in Colorado earlier this year.

Center for Investigative Reporting alleges large-scale copyright infringement by OpenAI and Microsoft in large language model training

The Center for Investigative Reporting (“CIR”), a nonprofit news organisation in the US, has filed a lawsuit against OpenAI and Microsoft, claiming that these organisations have used its copyrighted works of journalism without permission or compensation to train their AI products, including ChatGPT and Copilot.

In the claim, CIR alleges that OpenAI and Microsoft have copied, abridged, and displayed thousands of its articles, which are original and creative works protected by copyright law, and that in the course of doing so, the two defendant companies have removed the author, title, copyright notice, and terms of use information from CIR's articles. These are forms of copyright management information that serve to identify the owner and the terms of use of the works. CIR argues that this removal has been done intentionally to facilitate and conceal the defendant companies' own copyright infringement and that of their users.

CIR is seeking damages and various injunctions, including one that would require the defendants to remove all copies of the “registered works”, and any derivatives, from their training sets and other repositories. CIR is seeking a jury trial.

This lawsuit is one of several that have been filed by news publishers against OpenAI and Microsoft in the US, raising complex legal and ethical issues about the use of copyrighted works to train AI systems. The outcome of these cases could have significant implications for the future of journalism and AI, and may also be relevant to how governments ultimately choose to address the broader issue of use of copyrighted works to train AI models.

LinkedIn will no longer collect UK users' data by default for AI training

Following a U-turn by Microsoft-owned social media platform LinkedIn, the UK now joins the EEA and Switzerland among the regions in which the platform will not collect user data for AI training by default.

This follows a backlash from advocacy groups after data collection for AI training was switched on by default for UK users of the platform on 18 September.

Following LinkedIn’s change of policy, the ICO’s executive director for regulatory risk, Stephen Almond, said “We are pleased that LinkedIn has reflected on the concerns we have raised about its approach to training generative AI models with information relating to its UK users. We welcome LinkedIn’s confirmation that it has suspended such model training pending further engagement with the ICO. In order to get the most out of generative AI and the opportunities it brings, it is crucial that the public can trust that their privacy rights will be respected from the outset."

Almond went on to emphasise that the ICO is keeping a watching brief in respect of "major developers of AI, including Microsoft and LinkedIn", and in particular the safeguards implemented by these developers to protect UK users' information rights.

Other social media platforms have also rolled out similar plans to collect user data for training purposes, with Meta recently announcing that it is resuming plans for a data-scraping exercise involving UK users of Facebook and Instagram. We covered this story in more detail in last month’s data protection bulletin, which you can read here.

Technology developments and market news

OpenAI raises $6.6 billion in latest funding round, reaches $157 billion overall valuation

ChatGPT developer OpenAI has raised $6.6 billion from investors in its latest funding round. The round, led by US venture capital firm Thrive Capital, drew investment from Microsoft (already a major stakeholder and to date the largest overall investor in the company), Nvidia and SoftBank, among others.

OpenAI has been undergoing significant internal upheaval in the past year. Chief executive Sam Altman was briefly unseated in November last year in the course of an internal power struggle and has recently faced criticism from some quarters over plans to remove the company's non-profit board and to restructure the company into a for-profit entity. The terms of the latest funding round give investors a right to claw back their invested funds if the restructuring is not completed within two years and a cap on returns to investors is not removed within that time.

The company has also been hit by a spate of recent senior departures, including former chief scientist Ilya Sutskever and former chief technology officer Mira Murati. Despite this, the latest funding round valued OpenAI as a whole at $157 billion.

Our recent AI publications

If you enjoyed these articles, you may also be interested in our recent article on the fine imposed on Clearview AI by the Dutch data protection authority for illegally gathering and using personal data for its facial recognition technology, which you can read in full here.