The Neural Network – September 2024
In this edition of the Neural Network, we look at key AI developments from August and September.
In regulatory and government updates, the UK is among the first countries to sign the first-ever binding international AI treaty; two AI bills were considered by the California Senate; and the Australian Government has published a new set of voluntary AI standards, alongside proposals for mandatory standards for high-risk use cases.
In AI enforcement and litigation news, the Irish Data Protection Commission has launched a statutory inquiry into whether Google met its regulatory obligations in processing personal data for AI model training; Anthropic was sued by authors for alleged copyright infringement; the Brazilian data protection authority lifted its ban on Meta's AI model; and Max Schrems's group noyb filed nine complaints against X's collection of user data to train its AI chatbot.
In technology developments and market news, X announced a new AI chatbot, OpenAI and Condé Nast announced a multi-year partnership, and the CEOs of Meta and Spotify published a joint statement criticising AI regulation in the EU.
More detail on each of these developments is set out below.
Regulatory and government updates
Several states sign Council of Europe treaty on AI risks
On 5 September 2024, the Council of Europe opened the landmark Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law (the "Convention") for signature. The UK is among the first signatories to the new Convention, alongside the USA, EU, and several other countries and jurisdictions.
The Convention, which was originally adopted by the Council on 17 May this year, represents the first-ever binding international treaty with AI as its subject matter. Its overarching principle is that signatory States (as well as private-sector actors acting on behalf of those States) must not carry out acts "within the lifecycle of AI systems" which would be inconsistent with protecting and respecting human rights, the rule of law and democratic processes. The Convention sets out seven key principles:
- Respect for human dignity and individual autonomy;
- Fit-for-purpose transparency and oversight, bearing in mind the specific context in which AI is being used in a given instance and the risks associated with that use;
- Accountability and responsibility, in the event of any adverse impact on the rule of law, democracy and/or human rights;
- Respect for equality and the prohibition of discrimination (within the meaning of international and domestic law, as applicable) throughout the AI lifecycle;
- Respect for privacy rights and the protection of personal data;
- Promotion of AI reliability, quality and security and of trust in AI output; and
- Establishing – and thereafter maintaining – controlled environments, supervised by competent authorities, in which AI development, experimentation and testing can take place.
The Convention adopts the OECD definition of an AI system, which is itself aligned with the relevant definitions used in the EU AI Act, President Biden's US Executive Order on AI, and Colorado's AI Act.
Certain AI activities carried out by signatory States – in particular, activities related to national security and defence, as well as R&D in respect of AI systems not yet in public use – fall outside the Convention's scope. Even here, though, signatory States must still fulfil the basic requirement not to impinge on human rights, the rule of law or democracy in their AI activities.
For AI activities which are in scope, the Convention takes a risk-based approach, requiring signatory States to take account of, among other things: the context in which, and purposes for which, an AI system will be used; the potential risks involved; the severity of any harms that might result, and the probability of those harms actually occurring; and the perspectives of stakeholders generally, in particular those whose rights may be impacted.
The UK Government has committed to working with relevant regulators, local authorities, and the devolved administrations of the UK nations in ratifying the Convention and giving effect to it in UK domestic law, although no firm timescale has yet been set.
California AI safety bill passes Senate
On 29 August, the California Senate voted to pass a controversial AI safety bill, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (the "Bill").
Under the Bill, AI models operating in California that cost more than $100 million to develop, or that require a high level of computing power, would have to undergo safety testing. Developers would also need to create a "kill switch" that could turn off the AI model if it went awry. The Bill would also allow the Attorney General to take action against AI developers that fail to comply, or whose models are used to threaten public safety.
The Bill also aims to prevent AI systems from causing or materially enabling a "critical harm" (which includes the creation or use of biological and nuclear weapons in a manner that results in mass casualties, or at least $500 million of damage resulting from cyberattacks on critical infrastructure).
Many technology companies, including Google, Meta and OpenAI, have opposed the Bill, arguing that it will be detrimental to AI research and innovation in California, and could lead technology companies to leave the state.
Another AI bill, the California Digital Content Provenance Standards Bill (the "Watermarking Bill"), was also considered by the California Senate in August, but was not passed after being ordered to the inactive file. The Watermarking Bill would require companies to watermark any AI-generated content. For more information on the Watermarking Bill, please see our insight here.
Unlike the controversial AI safety bill, the Watermarking Bill has the support of many leading technology companies. For instance, OpenAI has announced its support for the Watermarking Bill, stating that it will help people "understand the origin of the content they find online, and avoid confusion between human-generated and photorealistic AI-generated content".
The next step is for the Bill to be considered by the Governor of California, who has until 30 September to decide whether to sign it into law or to veto it.
Australian Government publishes two sets of AI standards
The Department of Industry, Science and Resources of the Australian Government has published two new sets of standards governing the use of AI in various contexts – one mandatory set of standards in draft form, applicable in "high-risk" settings, and another more general set of standards in finalised form, which are available for use on a voluntary basis.
The voluntary standards, published on 5 September, include 10 "guardrails" which organisations may use when developing and deploying AI in Australia, alongside guidance as to when and how to apply them. These guardrails entail, among other things: requirements as to accountability, risk management, and data protection and governance; stipulations as to the testing of AI models and 'human in the loop' best practice, so that a person can always control or intervene in a given AI model; and transparency and disclosure principles, including towards end users and other organisations in the supply chain.
The voluntary standards have been designed to conform to international AI standards, such that organisations complying with the Australian voluntary standards can have a high degree of assurance that they are simultaneously aligned with international standards and other national regimes.
Meanwhile, on 4 September, the Australian Government also published proposals for a mandatory set of AI standards, to apply when AI is used in "high-risk" settings. The detailed proposals are available to read in full here, and are subject to a call for views, which closes for submissions on 4 October 2024.
As drafted, the proposals provide a series of principles which may be used to judge whether a particular use of AI (or the context in which it is used) would constitute "high-risk AI". These include: the risk of adverse impacts on the rights of individuals under Australian and international human rights law; the risk of adverse impacts on an individual's physical or mental health and safety; the risk of adverse legal effects on an individual (such as defamatory statements); the risk of adverse impacts on the broader economy, society and the rule of law; and the severity and extent of those potential impacts.
If a use case is judged, with reference to these principles, as being high-risk, then a series of mandatory "guardrails" would apply to that use of AI. In broad terms, the proposed guardrails focus on addressing testing, transparency and accountability.
Enforcement and civil litigation
Irish DPC launches statutory inquiry into Google's AI model training
On 12 September, the Irish Data Protection Commission (the "Irish DPC") announced that it had begun a cross-border statutory inquiry into Google Ireland Limited ("Google") under Section 110 of the Irish Data Protection Act 2018.
The purpose of the inquiry is to determine whether, under Article 35 of the General Data Protection Regulation ("GDPR"), Google was under any obligation to undertake a Data Protection Impact Assessment ("DPIA") before processing the personal data of EU/EEA data subjects in connection with the development of its AI model, Pathways Language Model 2 ("PaLM 2") – and, if so, whether Google met that obligation.
Article 35 of the GDPR requires a DPIA where a type of processing is likely to result in a high risk to the rights and freedoms of data subjects. The underlying rationale for conducting a DPIA in such instances is to ensure that the processing is "necessary and proportionate" and, in light of the risks inherent in the particular type of processing being contemplated, that adequate safeguards are put in place to guard against those risks.
This inquiry is part of a wider effort by the Irish DPC, in conjunction with its EU/EEA peers, to regulate the processing of personal data of EU/EEA data subjects in the development of AI models and systems.
The Irish DPC has published guidance on DPIA requirements which can be found here.
Authors sue Anthropic for copyright infringement
Several authors have launched a class action lawsuit in California against Anthropic for allegedly misusing their books to train its AI chatbot, Claude.
The complaint, filed on 19 August, alleges that Anthropic used pirated versions of their works to train Claude to generate "lifelike, complex, and useful text responses to prompts".
This lawsuit is one of many recent actions against AI companies by artists and other creatives for allegedly infringing on their work. For instance, in July, major record labels sued two music AI startups for copyright infringement. See this article for more information.
This is also not the first legal action that Anthropic has faced in the last year. Anthropic is being sued by record labels for allegedly misusing copyrighted song lyrics to train Claude, and its partnership with Amazon is currently being investigated by the Competition and Markets Authority in the UK.
The key takeaway for organisations is to understand what data is being used to train their AI systems, and to ensure that they have the necessary rights and permissions for any training data that is subject to IP rights or restrictions.
Brazilian DPA lifts ban on Meta's AI model
The Brazilian national data protection agency (the "ANPD") has lifted its suspension of the Meta privacy policy that allows Meta to use users' personal data to train its AI tool.
In July, the ANPD had suspended the policy due to concerns around children's safety. See this article from the first edition of the Neural Network for more information on the initial suspension.
Following the suspension, Meta entered into talks with the ANPD to address the authority's concerns, and provided an updated compliance plan, which the ANPD has approved.
Meta will be required to notify Brazilian users of Facebook and Instagram about Meta's proposed use of their data to train its AI model. Users must also be clearly informed of their right to object to this, and they must be able to easily exercise this right.
However, this may not be the end for Meta's AI-related troubles in Brazil. The company is also facing a $3.6 million fine for allowing paid advertisements that fraudulently used the name of a department store chain, Havan. Reuters' Brazilian fact-checking service found that all the false Havan ads it reviewed showed signs of AI being used to imitate the voice of Havan's owner.
noyb files nine GDPR complaints against X's plans to utilise users' data to train AI chatbot
On 12 August, noyb filed GDPR complaints with nine data protection authorities against X for its processing of user data to train its AI chatbot, Grok. Complaints were filed in Austria, Belgium, France, Greece, Ireland, Italy, the Netherlands, Poland and Spain.
For further details on this story, see our article here.
Technology developments and market news
Elon Musk's xAI releases new AI chatbot
Elon Musk's AI company, xAI, launched its new AI chatbot, Grok-2 ("Grok"), claiming that it matches the performance of rival models from OpenAI, Google and Anthropic. Grok is used as a search assistant on the X platform, and has an added image generation tool similar to OpenAI's DALL-E and Google's Gemini but with fewer restrictions on the types of images it can generate, including prompts involving political leaders.
The new Grok is ranked by independent AI benchmark sites among the top five chatbots globally. Grok, alongside its sibling Grok-2 Mini, has been introduced in beta on the X platform, and is available to X users who pay for a 'Premium' or 'Premium+' subscription.
The training of this AI has been highly controversial, however, amid allegations that the use of data pulled from users' public posts on the X platform was turned on automatically, without users' explicit consent. As a result, the Irish Data Protection Commission applied to the High Court to suspend the service, over concerns that the processing of personal data contained in the public posts of X's EU/EEA users for the purpose of training Grok gave rise to a risk to the fundamental rights and freedoms of individuals.
Following the application, X agreed to partially suspend its processing of the personal data contained in the public posts of its EU/EEA users. X has since agreed to comply with the terms of this undertaking permanently, leading the High Court to discontinue the proceedings on 4 September 2024.
Grok is just getting started, with plans underway to enhance its capabilities, including improved search functionality, post analytics, and reply features on X.
OpenAI signs deal with Condé Nast
On 20 August 2024, OpenAI announced a multi-year partnership with Condé Nast to feature content from the publisher's brands, including Vogue and The New Yorker, within its AI products, such as ChatGPT and the SearchGPT prototype. The financial terms of the agreement were not disclosed.
This partnership is part of OpenAI's ongoing effort to secure content for training its AI models and to provide real-time information through its AI-powered search engine, SearchGPT, launched in July. The deal allows SearchGPT, which has real-time access to information from the internet, to display content and quotes from Condé Nast articles, potentially challenging Google’s dominance in online search.
In recent months, OpenAI has secured similar agreements with Time magazine, the Financial Times, Axel Springer, Le Monde, and Prisa Media. However, not all publishers are on board; The New York Times and The Intercept have sued OpenAI over copyright issues for using their articles. Condé Nast's CEO, Roger Lynch, emphasised that the partnership with OpenAI could help recover some of the revenue that publishers have lost to technology companies over the past decade, as their ability to monetise their content has declined.
OpenAI's COO, Brad Lightcap, stated that the company is committed to working with publishers like Condé Nast to maintain “accuracy, integrity, and respect for quality reporting” as AI becomes more central to news discovery and delivery. OpenAI's collaborations with its news partners will also involve gathering feedback to refine SearchGPT's design and performance.
Meta and Spotify CEOs publish joint statement criticising EU AI regulation
Mark Zuckerberg and Daniel Ek, the CEOs of Meta and Spotify, have released a joint statement criticising the EU's approach to AI regulation, arguing that the complex and fragmented framework is hindering Europe's innovation and global competitiveness. They cautioned that the EU's inconsistent and risk-averse regulations are creating obstacles that could cause the region to fall behind in the AI race.
The CEOs highlighted their belief that the next generation of ideas and startups will use open-source AI (where models and their underlying code are made freely available), as it allows developers to incorporate the latest innovations at low cost and gives institutions more control over their data. Both Meta and Spotify have invested in AI to enhance their services.
Meta has open-sourced several of its AI technologies, including its advanced Llama large language models, which are being used by researchers to advance medical research and preserve languages. The two companies assert that with "more open-source developers than America," Europe is well-positioned to capitalise on the open-source AI wave, but it needs to adopt simpler, more harmonised regulations since it is a single market.
The statement also criticised the EU's uneven implementation of the GDPR, arguing that EU privacy regulators have created delays and uncertainty through an inability to agree on how the GDPR's obligations should apply in an AI context. For example, Meta has faced regulatory hurdles in training its models on content from its platforms. The CEOs concluded by urging the EU to adopt a more unified regulatory approach, emphasising the need for clear and consistent policies so that Europe can become a global leader in tech innovation.
Our recent AI publications
If you enjoyed these articles, you may also be interested in our recent publication on the ICO's fifth (and final) call for evidence on generative AI, our article on the roles of the provider and deployer in AI systems and models, or our recent piece on the role of the Authorised Representative under the EU AI Act.
If you're interested in learning more about the challenges and opportunities presented by AI, we invite you to join us at one of our upcoming in-person events. See here for more information.