Neural Network - May 2025

In this edition of the Neural Network, we look at key AI developments from April and May.

In regulatory and government updates, a new version of AI copyright amendments for the Data (Use and Access) Bill has been blocked by the House of Commons; an AI Privacy Risk and Large Language Model report was published by the EDPB; updated European Commission guidelines on the responsible use of generative AI in research were released; and China is set to introduce draft guidelines to protect minors from generative AI.

In AI enforcement and litigation news, Musk uses a lawsuit to obstruct OpenAI's conversion to a for-profit entity; a South Korean regulator takes action against DeepSeek; the Italian Data Protection Authority, the Garante, hopes to enforce a fine against Clearview AI; and there is a possible $1 billion fine for the Taiwan Semiconductor Manufacturing Company.

In technology developments and market news, the Solicitors Regulation Authority approved an AI law firm; AI is set to write the UAE's legislation; AI model safety testing is slashed by OpenAI; and the cost of being polite to ChatGPT is explored.

More details on each of these developments are set out below.

Regulatory and Government updates

New version of AI copyright amendments for the Data (Use and Access) Bill blocked by the House of Commons

Since the inception of the Data (Use and Access) Bill (the "DUA Bill"), MPs have been calling for the creation of a clear and enforceable licensing model to protect creative industries from the unchecked use of their works in training artificial intelligence ("AI") models. Celebrities such as Elton John and Dua Lipa have taken up the cause, urging the prime minister to give appropriate consideration to the protection of artists' copyright in legislation.

In a recent development on this topic, the Lords voted 272 to 125 in favour of amending the DUA Bill to, among other things:

  • require that the government make regulations ensuring AI companies "provide an effective mechanism" for rights-holders to identify which copyrighted materials they have used to train their AI models;
  • include a non-exhaustive list of the information AI companies need to reveal about bots (autonomous software applications that can interact with systems or users) used in information gathering, including: the name of the bot, the legal entity responsible for it, and the specific purpose for which each bot is used;
  • give the government twelve months to make the regulations implementing these provisions; and
  • allow for the rules to be applied in a modified manner for SMEs and UK-registered companies (as opposed to foreign-registered companies).

The House of Commons rejected this amendment on the ground that it would impose a "charge on public funds", meaning that there is no budget to implement the new regulations.

To provide some context to the "ping-pong" of copyright amendments: during its initial House of Lords stages, the Lords amended the DUA Bill to require the Secretary of State to make regulations ensuring that operators of web crawlers and general-purpose AI models (i) comply with UK copyright law, regardless of where the copyright-relevant acts take place; and (ii) observe specific transparency obligations. These amendments also provided rights-holders with a right of action against an AI operator.

In March 2025, the House of Commons voted to reject these amendments, replacing them with reporting obligations which would only have committed the government to producing a report, within a year of the DUA Bill's Royal Assent, on the key policy areas of debate regarding AI and copyright set out in a recent consultation (the "Consultation"). The Consultation, which proposed to allow AI developers to train on material to which they have lawful access whilst also providing an "opt-out" for copyright holders, received 11,500 responses (mainly from creators), highlighting the scale of concern. MPs from all parties criticised the opt-out proposal and, as expected, integrating the Consultation into the DUA Bill in this manner did not placate the Lords or creatives.

The House of Commons' removal of the initial amendments was premised on the argument that the DUA Bill is not the right vehicle for copyright reform. While this might be the case, it can be argued that any vehicle for enforceable copyright reform in relation to AI is preferable to none, given the industry's pace of change. The DUA Bill will return to the House of Lords next week, with a new amendment from the Lords expected to be tabled for further debate.

Our most recent article in the DUA Bill series can be read here.

AI Privacy Risk and LLM report published by EDPB

On 10 April 2025, the European Data Protection Board ("EDPB") released a report titled "AI Privacy Risks & Mitigations - Large Language Models (LLMs)" (the "Report"). The Report sets out an in-depth risk management methodology for LLM systems, aimed at assessing and mitigating the privacy and data protection risks associated with LLMs, and may be beneficial to both developers and users of LLMs.

A key focus of the Report is risk management: it identifies privacy risks in data handling processes (for example, a lack of human intervention in processing can have significant consequences for data subjects), links these risks to potential breaches of the EU's General Data Protection Regulation ("GDPR"), and then offers risk mitigation recommendations to ensure robust data protection.

Importantly, the Report also sets out case studies applying the risk management framework to real-world scenarios, covering the use of: (i) a virtual assistant (chatbot) for customer queries; (ii) an LLM system for monitoring and supporting student progress; and (iii) an AI assistant for travel and schedule management. Each case study is accompanied by potential data flows and a system architecture, followed by risk analyses and, finally, suggested practical steps to identify and address privacy risks effectively. The Report advocates continuous monitoring and assessment during the development, deployment and operation phases of the AI lifecycle.

Finally, the Report offers a framework to identify and categorise potential threats to data protection and privacy, assessing the severity and likelihood of those threats. Through this framework, the Report clarifies stakeholder roles and responsibilities under the EU's Artificial Intelligence Act (the "AI Act") and the GDPR, outlining legal and regulatory obligations to ensure compliance with data protection laws.
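
Purely by way of illustration (and not drawn from the Report itself), the short Python sketch below shows one simple way a severity-and-likelihood assessment of LLM privacy threats could be recorded and scored in practice; the threat names, scoring scale and thresholds are assumptions made for this example only.

```python
# Illustrative sketch only - not the EDPB's methodology. The threat names,
# scoring scale and thresholds below are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    severity: int    # assumed scale: 1 (negligible) to 4 (critical)
    likelihood: int  # assumed scale: 1 (rare) to 4 (frequent)

def risk_level(threat: Threat) -> str:
    """Combine severity and likelihood into a coarse risk rating."""
    score = threat.severity * threat.likelihood
    if score >= 12:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

threats = [
    Threat("Training data memorised and regurgitated in outputs", 4, 2),
    Threat("Prompt inputs retained beyond their stated purpose", 3, 3),
    Threat("Inaccurate output with consequences for a data subject", 3, 2),
]

for t in threats:
    print(f"{t.name}: {risk_level(t)} risk")
```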

The Report gives data protection authorities useful insights into the operation of LLM systems and their associated risks, while also promoting methods for providers and deployers to tackle those risks. Although the Report is only guidance for organisations, it should prove helpful as a reference point for good practice.

The Report can be accessed here.

Updated European Commission guidelines on the responsible use of generative AI in research

The European Commission (the "Commission") has updated its Living Guidelines on the responsible use of generative AI in research (the "Living Guidelines"). Initially published in March 2024 and updated in April 2025, the Living Guidelines aim to provide actionable guidance to help the European research community adopt generative AI ("genAI"), a technology that offers opportunities across sectors but also poses risks, including disinformation, data protection concerns and unethical uses with societal and environmental impacts.

The current version of the Commission's Living Guidelines:

  • sets out common limitations that can be exhibited by genAI, such as prompt and training bias;
  • recommends that researchers aim to minimise the environmental impact of AI by evaluating which AI tool is best suited for the task;
  • recommends that research organisations facilitate training on genAI for those at all levels and promote an atmosphere of trust in which the use of genAI is disclosed; and
  • recommends that research funding organisations use genAI transparently and promote the same atmosphere of trust: genAI may be used to improve funders' internal processes, but it should not take a role in the assessment or evaluation of the scientific content of projects.

These updates improve potential accountability measures for researchers and emphasise the need to appreciate genAI's growing environmental impact. The Living Guidelines create a single, comprehensive and updated source of information. This should prove useful for the research community as research, particularly scientific research, is one of the sectors that could be most significantly disrupted by genAI.

China to introduce draft guidelines to protect minors from genAI

The director of the Internet Law Research Center at the University of Chinese Academy of Social Sciences announced that China is preparing to introduce new guidelines (the "Guidelines") to protect minors in the realm of genAI services. The current measures mandate that providers take "effective measures" to prevent minors from becoming overly dependent on AI services; the Guidelines will attempt to refine these measures, which are not prescriptive.

Drafted by the Cyber Security Association of China at the Cyberspace Administration's request, the draft Guidelines emphasise adherence to "core socialist values", prioritising minors' best interests while balancing their protection with technological development. Key focus areas include content safety, data security, and personal information protection.

GenAI service providers will, among other things:

  • be required to secure guardian consent for any personal data collection from minors and to align AI services with children's cognitive development;
  • be required to form dedicated working groups that focus on the protection of minors, appoint security officers, and establish robust systems for security evaluation and emergency responses; and
  • face increased responsibilities, including verifying the identity of minors and implementing measures to curb online addiction.

It is clear that China is trying to set standards around the ethical use of AI because the nation does not merely want to be viewed as an AI developer, but also as a rule-setter. From a geopolitical angle, by being an early adopter of regulatory norms, China can seek to exert soft power on other nations by influencing their regulation. This allows it to rival countries like the US, which has yet to adopt similar guidelines at a federal level.

It is expected that the Chinese public will be invited to comment on the draft Guidelines in June.

Enforcement and Civil Litigation

Musk uses lawsuit to obstruct OpenAI's conversion to a for-profit entity

Elon Musk is suing OpenAI, accusing the company of betraying its original mission of developing AI for the benefit of humanity. Musk's lawsuit alleges breach of contract, arguing that OpenAI's shift to a profit-driven entity contradicts its founding principles. Musk claims that OpenAI was founded as a non-profit with a commitment to creating artificial general intelligence in an open-source and safe manner. Musk was one of the co-founders of OpenAI in 2015, alongside current CEO Sam Altman – Musk stepped down from the board in 2018.

In a surprising turn of events, OpenAI recently reversed course, announcing that its non-profit arm would continue to control the company that develops ChatGPT and other AI products. Despite this, Musk's legal team argues that the revised governance structure still favours CEO Sam Altman and investors retaining control, with an emphasis on commercial interests.

OpenAI has dismissed the lawsuit as a "baseless" attempt to obstruct progress. The dispute is now set for a jury trial in California in March 2026, reflecting growing tensions between key personalities, transparency and control within the rapidly evolving AI sector.

South Korea regulator takes action against DeepSeek

DeepSeek, a Chinese AI service provider, has come under scrutiny from South Korea's privacy watchdog, the Personal Information Protection Commission ("PIPC"), for unauthorised data transfers and privacy lapses. In its initial investigation, the PIPC found that DeepSeek transferred user data, including device information and AI prompt inputs, to companies in China and the US without user consent, violating South Korean privacy laws. The PIPC has urged DeepSeek to delete the user data transferred overseas without proper consent, marking the first major regulatory action against DeepSeek globally.

The following issues were revealed as part of the PIPC's investigation:

  • DeepSeek transferred personal data to, among others, a subsidiary of ByteDance for security and service improvement purposes. The PIPC deemed this unnecessary and requested the deletion of all such transferred data;
  • DeepSeek's privacy policy lacked crucial elements required by South Korean law, namely (i) data deletion procedures; (ii) security measures; and (iii) information about how user data was used for AI training. On the third point, DeepSeek had been using both public data and user-submitted prompt data to train its system, but did not provide users with an option to opt out of having their data used for this purpose - this has since been remedied; and
  • DeepSeek lacked age verification safeguards to restrict users under 14, which it has since implemented. In response, DeepSeek has pledged to improve its policies, including providing a clearer privacy policy in Korean and establishing legal grounds for overseas data transfers.

The PIPC has issued corrective recommendations, including deleting previously transferred data, implementing stronger security measures, and appointing a local legal representative. If DeepSeek fails to accept the recommendations, a formal investigation may be launched.

In UK regulatory news in relation to DeepSeek, you can read our recent update on the request from the Information Commissioner's Office (the "ICO") for more information about the company here.

Garante hopes to enforce fine against Clearview AI

The Italian Data Protection Authority (the "Garante") has issued a data deletion order to Clearview AI ("Clearview"), a US-based facial recognition software provider, following breaches of the GDPR. Clearview mainly provides its software to law enforcement and government agencies.

In March 2022, Clearview AI was fined €20 million for unauthorised biometric monitoring of Italian individuals and was instructed to delete related data and cease further collection. To date, this fine remains unpaid, and the data undeleted.

The Garante is collaborating with the US Federal Trade Commission ("FTC") to enforce the decision, despite the challenges posed by the absence of an international enforcement convention, which have left the enforcement process at an impasse due to jurisdictional limitations. As an initial step, the Garante is working with the FTC Commissioner to notify Clearview, emphasising the necessity of finding a viable method of enforcement.

A two-step enforcement process is proposed to be initiated in the coming weeks: notifying Clearview and securing a US judicial enforcement order. The situation illustrates the complexities of cross-border data protection enforcement, which will only increase with technological development, evidencing the need for international cooperation on both compliance and enforcement.

We previously reported on the implications of Clearview's successful appeal of the fine against it by the ICO here.

Possible $1 billion fine for TSMC for AI processing chip production

The Taiwan Semiconductor Manufacturing Company ("TSMC") could face a fine of over $1 billion to settle an ongoing investigation by the US Department of Commerce for potentially violating export controls. This reflects US regulations that allow companies to be fined up to twice the value of transactions that breach export control rules.

The investigation centres on the fact that chips TSMC produced for Chinese technology company Sophgo allegedly matched chips found in one of Huawei's AI processors. Huawei is, as is widely known, on a US trade restriction list, largely due to national security concerns.

TSMC is caught by US regulations because its use of US technology in its Taiwanese factories subjects it to certain controls. Importantly, chip production for Huawei is strictly controlled, if not outright prohibited, under US rules.

The situation is critical for US-Taiwan relations, especially as TSMC has expanded its semiconductor prowess on US shores – TSMC has made significant investments in Arizona and plans to build five additional chip facilities in the US in the years to come. China has made it clear that it views Taiwan as a breakaway Chinese province, making the US Taiwan's key Western ally. Deteriorating relations between the US and China have improved US-Taiwan relations. The possibility of the US imposing export violation penalties on TSMC complicates these relations, and it remains unclear how the issue will be resolved. The Commerce Department typically issues a "proposed charging letter" detailing alleged violations and penalties, giving the company 30 days to respond. TSMC has suspended shipments to Sophgo and halted advanced chip shipments to China.

This situation highlights US rivalry with, and concern about, China, further underscored by the recent federal implementation of the Final Rule restricting and prohibiting the transfer of personal data in certain transactions involving China. This was reported in our most recent Data Protection update here.

Technology developments and market news

Solicitors Regulation Authority approves AI law firm

The Solicitors Regulation Authority ("SRA") has approved a new law firm called Garfield AI. The firm is built around, and uses, AI to offer legal services: AI guides claimants through the small claims process and can even produce arguments for use at hearings.

The approval of Garfield AI highlights the way that AI models are beginning to encroach on the sensitive and high-fee legal industry. Costs start from £2 for "polite chaser" letters and rise to £50 for filing legal documents. One of Garfield AI's co-founders estimated that the service will help reduce the £6-£20 billion of debts that go unpaid because of the time and costs of court proceedings.

With its strategic use of AI technology, Garfield AI aims to increase access to justice and assist in clearing court backlogs.

Initially, lawyers will review all AI-generated outputs, with plans to transition to a sampling system for quality assurance. While this still maintains an element of human oversight, it remains to be seen whether a sampling system will allow for a satisfactory level of accountability in the output process.

AI to write UAE's legislation

The United Arab Emirates ("UAE") is pioneering the integration of AI into its legislative process, marking a significant move towards "AI-driven regulation". Thus far, AI has largely been used by other governments to summarise legislation and to assist with drafting once laws have been decided by legislators; this new, more active role goes beyond the typical applications of AI in government.

A database of UAE federal and local laws, together with publicly available information such as court judgments and data from government services, will be used by the AI system to track the way laws affect the UAE public and economy. The AI system will then propose updates to the law, with the UAE government expecting AI to accelerate the lawmaking process by 70%.

Given both the novel nature and planned scale of the initiative, frameworks for oversight of any AI systems will be critical. The newly established Regulatory Intelligence Office will oversee this initiative, displaying the UAE’s commitment to enshrining AI's role in governance.

The need for strong human oversight as a risk mitigation measure to ensure that any proposed legislative changes properly align with societal need cannot be overemphasised.

AI model safety testing slashed by OpenAI

OpenAI has decreased the time and resources allocated to safety testing of its AI models. Previously, OpenAI testers had several months to evaluate models like GPT-4 (the LLM used in the fourth iteration of ChatGPT), but they are now given only days. This accelerated timeline is driven by the constantly evolving AI industry, which creates pressure to release new models quickly as OpenAI competes with major tech companies and start-ups; competitors of ChatGPT championed by tech behemoths include Google's Gemini, Microsoft's Copilot and Anthropic's Claude. The reduced timeline raises concerns about potential risks due to insufficient safeguards.

While no global standard for AI safety testing currently exists, the AI Act will begin requiring companies to test their most powerful models later this year. Testers and experts worry that the reduced testing time increases the risk of overlooking AI's dangerous capabilities, which could lead to the "weaponisation" of AI technology; notably, the GPT-4 model's specific dangerous capabilities were only discovered after two months of testing.

The US-based Center for AI Safety sets out four key categories of catastrophic AI risk, which include malicious use (for example, the use of AI for propaganda, censorship and surveillance) and rogue AIs, meaning the risk of losing control over AIs as they become more capable.

OpenAI has previously committed to customising models to assess the risk of potential misuse. However, it has thus far opted to do so only for older models, raising concerns that OpenAI could be underestimating the risks of its newer models. One of the gravest risks of testing only older models is that safety testing is not conducted on the model actually released to the public.

The cost of being polite to ChatGPT

Being polite to AI comes at a cost. OpenAI CEO Sam Altman recently revealed on social media site X that saying "please" and "thank you" to ChatGPT has potentially added "tens of millions of dollars" to the company's electricity bill. His response came after an X user asked how much money OpenAI had lost to these minor but frequent niceties. While the unverified figure might be hyperbole, it is a reminder of the resource-intensive nature of LLMs, as high energy consumption is a hallmark of genAI, especially during model training.

However, politeness also boosts AI performance. Microsoft Copilot design director Kurtis Beavers notes that polite language can influence how genAI responds, encouraging more collaborative answers. Research supports this: a 2024 study from Japan's Waseda University found that rudeness toward AI reduced its performance by 30%, while politeness led to fewer errors and richer responses.

These findings suggest that LLMs respond to tone and, until further research indicates otherwise, that there is a balance to be struck between maintaining politeness and avoiding unnecessary formality when interacting with AI.

Our recent AI publications

If you enjoyed these articles, you may be interested in our podcast series covering various aspects of the evolving world of AI and its implications for businesses and broader society. The full podcast series is available for listening here.

You might also be interested in our Data Protection update, the most recent of which can be found here.