Welcome to the fifth and final edition of the AI Summer Roundup! Over the summer, we have been reporting on recent legal developments in the rapidly evolving landscape of AI, including generative AI. Past editions are available on our Tech Newsflash website.
We round out the series with a summary of several developments around the world concerning the adequacy of various jurisdictions' laws in addressing the opportunities and risks arising from generative AI.
In the United States, the United States Copyright Office has issued a notice seeking public views on a number of copyright issues raised by recent advances in generative AI (see Update 2), while the US Patent and Trademark Office has received an open letter from Google urging more AI training for patent examiners (see Update 4). AI continues to be a subject of discussion in the US Congress and state legislatures (see Updates 5 and 6). As legislative and administrative bodies grapple with these questions, certain industries are addressing the opportunities and risks in their own ways. YouTube and Universal Music Group are collaborating on ways to support AI innovation while respecting artists (see Update 3), and author Jane Friedman is seeking to remove AI-generated fake books published under her name, highlighting the limitations of existing US law: US copyright law does not extend to an author's writing style (only to their tangible expression), and the "right of publicity" varies from state to state.
In the rest of the world, the UK House of Commons Science, Innovation and Technology Select Committee is urging the UK Government to introduce AI-specific legislation in the next session of Parliament, warning that the absence of such legislation may put the UK at risk of falling behind the regulatory efforts of other jurisdictions (see Update 7). In Germany, the Federal Minister of Education and Research announced an AI Action Plan that dedicates additional public funding to AI research, while the country's Independent Federal Anti-Discrimination Commissioner highlighted the need to address systemic discrimination arising from the use of AI (see Updates 8 and 9). Spain has established what is said to be the first European agency dedicated to the supervision of AI (see Update 10). In France, the Minister of Digital Transition and Telecommunications announced substantial investments in AI, and the French Data Protection Authority is seeking public comment on its draft data processing guidelines, which likely cover data processed by certain AI models (see Updates 11 and 12).
In this fifth edition, we highlight 12 key developments we've identified from the United States, Europe and APAC between August 19 and August 31, 2023.
1. United States: OpenAI moves to dismiss Sarah Silverman and Paul Tremblay's copyright infringement lawsuits
In Edition 1, we reported on two lawsuits brought by authors against OpenAI, Inc. (OpenAI): Silverman et al. v. OpenAI, Inc. et al., No. 3:23 Civ. 3416 (N.D. Cal. Jul. 7, 2023) and Tremblay et al. v. OpenAI, Inc. et al., No. 4:23 Civ. 3223 (N.D. Cal. Jun. 28, 2023). On August 28, OpenAI moved to dismiss the majority of the plaintiffs' claims in both lawsuits. The plaintiffs had alleged that: (i) OpenAI infringed their copyrights by making copies of their copyrighted books to train OpenAI's large language models (LLMs); and (ii) the LLMs constitute infringing derivative works because they cannot function without the expressive information extracted from the plaintiffs' works. In its motions to dismiss the Silverman claims and the Tremblay claims, OpenAI did not deny copying the authors' books for training purposes, but argued that such use did not constitute copyright infringement because the plaintiffs failed to show that particular outputs generated by the LLMs were substantially similar to their copyrighted works.
2. United States: The US Copyright Office requests comments and input from the public regarding copyright and policy considerations arising from AI
Following the recent district court decision (Stephen Thaler v. Shira Perlmutter, No. 1:22-cv-01564 (D.D.C. August 18, 2023)) affirming that AI-generated works do not qualify for copyright protection (as reported in Edition 4), the United States Copyright Office (USCO) issued a notice seeking public views on issues raised by AI-generated art and copyright interests. US copyright law requires human authorship, and the USCO hopes to gain deeper insight into where to draw the line between human and AI-generated creations. The USCO seeks input on a range of topics, including the use of copyrighted works to train AI models, the appropriate levels of transparency and disclosure with respect to the use of copyrighted works, and the legal status of AI-generated outputs. Written comments are due by October 18, and replies to comments are due by November 15.
3. United States: YouTube and Universal Music work together to help singers and creators amid the rise of viral AI-generated music
YouTube and Universal Music Group (UMG) have announced a collaboration promoting an "artist-centric approach to AI innovation." Their first step is to launch the "Music AI Incubator," a working group of leading UMG artists, songwriters and producers across multiple genres that will explore, experiment with and offer feedback on AI-related music tools and products. According to the announcement, written by Sir Lucian Grainge, Chairman and CEO of UMG, "Central to our collective vision is taking steps to build a safe, responsible and profitable ecosystem of music and video—one where artists and songwriters have the ability to maintain their creative integrity, their power to choose, and to be compensated fairly." As you may recall, earlier this year "Heart on My Sleeve," a purported Drake – The Weeknd duet, went viral for several hours and reached millions of listeners before the public realized that the song had been generated by AI appropriating the singers' voices.
4. United States: Google urges US Patent and Trademark Office to train patent examiners on AI
On August 23, Google sent a letter to the Director of the United States Patent and Trademark Office (USPTO), as well as to various Senate and House judiciary committees and subcommittees on intellectual property, noting that patent claims on AI technology are increasingly common and urging the USPTO to provide a comprehensive technical training program for all of its patent examiners. Google states that such training "will help to ensure that deserving AI-related patent applications are granted, while those that would hinder further AI innovation—like patents that simply 'apply AI' to basic ideas—are not." Google also notes that mistakenly granted AI-related patents can hinder innovation for years, and calls on the USPTO to withdraw its proposed changes to the Inter Partes Review program, which Google states are designed to restrict access to that program. Finally, Google notes that instituting comprehensive technical training requires significant resources and suggests that the USPTO increase patent filing fees for large companies, including Google itself.
5. United States: Legislation proposed in the House of Representatives and Senate to increase risk assessment standards
On July 18, Representative Anna G. Eshoo (D-CA-16) introduced H.R. 4704, and on July 19, Senator Edward J. Markey (D-MA) introduced S. 2399. If passed, these bills would require the Assistant Secretary for Preparedness and Response, who leads the Administration for Strategic Preparedness and Response (ASPR), to conduct risk assessments and implement strategic initiatives or activities to address threats to public health and national security arising from technical advancements in artificial intelligence that can be used, intentionally or unintentionally, to develop pathogens, viruses, bioweapons or chemical weapons. The ASPR is responsible for leading medical and public health preparedness for, response to and recovery from disasters and public health emergencies in the US.
6. United States: Committee on Homeland Security and Governmental Affairs report recommends adoption of a bill requiring greater transparency about the use of AI
On August 22, the Committee on Homeland Security and Governmental Affairs (Committee) issued a report to accompany S. 1865, the Transparent Automated Governance Act (Bill). Among other things, the Bill requires governmental agencies to (a) notify persons when they interact with certain AI or other automated systems, and (b) institute an appeal process enabling persons who believe an adverse critical decision affecting them was made in error by such a system to seek alternative human review of the decision. The Committee warned about risks posed by AI technologies, such as lack of accuracy, bias in decision-making and breaches of privacy, identifying concrete examples of such harms (e.g., the failure of facial recognition AI to register those with darker skin tones, resulting in bans on their entry into the United States). The Committee noted that transparency regarding the use of AI is critical and will help correct harms caused by its use, particularly as AI use continues to grow. The Committee recommends that the Bill be passed with a minor amendment modifying the definition of "artificial intelligence."
7. UK: Committee report on artificial intelligence governance urges introduction of AI regulation to avoid falling behind
On August 31, the UK House of Commons Science, Innovation and Technology Select Committee released an interim report entitled "The governance of artificial intelligence," calling on the UK Government to take a leading role in regulating AI and to introduce AI-specific legislation in the next session of Parliament. The report warns that the UK will otherwise fall behind the regulatory efforts of other jurisdictions: "Without a serious, rapid and effective effort to establish the right governance frameworks—and to ensure a leading role in international initiatives—other jurisdictions will steal a march and the frameworks that they lay down may become the default even if they are less effective than what the UK can offer." The UK Government's present principles-based approach (as set out in a March 2023 White Paper) does not include a plan to introduce specific new AI legislation. The report also highlighted key recent developments and potential benefits of AI, and identified "twelve challenges of AI governance, that policymakers and the frameworks they design must meet," including intellectual property, privacy and data protection, liability, accessibility, bias, transparency, global regulatory coordination and the potential for fundamental disruption to UK society and the economy in their current form. In light of the UK's AI Safety Summit, which will take place in early November, the report adds: "The challenges highlighted in our interim Report should form the basis for discussion, with a view to advancing a shared international understanding of the challenges of AI—as well as its opportunities."
8. Germany: Federal Minister of Education and Research announces AI Action Plan and ramps up public funding for AI
On August 23, the German Federal Minister of Education and Research, Bettina Stark-Watzinger, announced an "AI Action Plan," which identifies 11 fields with a concrete need for action, including strengthening the AI research foundation, targeted expansion of AI infrastructure and utilizing AI for growth and economic opportunities. Stark-Watzinger notes that the goal of the Federal Ministry of Education and Research is "for Germany and Europe to be able to take a leading position in a world 'Powered by AI.'" She said that European and international cooperation should be pursued more intensively, but that "Europe will need to be able to and will have to follow its own path." The ministry is also ramping up public funding for AI initiatives, allocating €1.6 billion during the current legislative period (through 2025), including almost €500 million for 2024 alone. The AI Action Plan, of which only an executive summary has been published so far, was reportedly discussed at a closed cabinet meeting of the German government at the end of August and will be published in full in September. Stark-Watzinger has described the AI Action Plan as her ministry's update to the German government's AI strategy, which was published in 2018 and supplemented in 2020.
9. Germany: Independent Federal Anti-Discrimination Commissioner highlights discrimination risks of AI, calling for enhanced transparency and stronger statutory protections
According to an article in Der Spiegel, at an August 30 press conference in Berlin, Germany's Independent Federal Anti-Discrimination Commissioner, Ferda Ataman, pointed out discrimination risks posed by AI systems, called for improved statutory protections and presented an expert report commissioned by her Federal Anti-Discrimination Agency. The expert report identified deficiencies in the protections of the German General Equal Treatment Act (German Act) against discrimination by algorithmic decision-making systems, stating that "discrimination by statistics" "perpetuates (historical) structural inequalities and creates new ones." According to the article, Ataman therefore urged the German government to: (1) explicitly include "acting through algorithmic decision-making systems" as a possible cause of discrimination in the German Act; (2) impose new information and disclosure obligations on the operators of algorithmic decision-making systems; (3) shift the burden of proof in court to those in charge of AI systems where their system is alleged to have caused discrimination; and (4) introduce a conciliation office as part of the Federal Anti-Discrimination Agency, as well as a mandatory conciliation procedure in the German Act.
10. Spain: Spanish Agency for the Supervision of Artificial Intelligence (AESIA) created, said to be the first of its kind in Europe
According to a press release, on August 22 the Spanish Council of Ministers approved a Royal Decree establishing the Spanish Agency for the Supervision of Artificial Intelligence (AESIA). In line with Spain's Digital Agenda 2026 and National Artificial Intelligence Strategy (ENIA), the new agency is intended to promote the development of "inclusive, sustainable and citizen-centered" AI. According to the press release, Spain is the first European country to introduce an agency of this kind. The move precedes the adoption of the EU's Artificial Intelligence Act, which may take place towards the end of this year.
11. France: Minister of Digital Transition and Telecommunications announces substantial investments in AI
During an interview on August 30, Jean-Noël Barrot, the Minister of Digital Transition and Telecommunications, emphasized the French Government's intent to make France a leader in AI. Barrot announced substantial investments in AI, including €500 million for the formation of five to ten "clusters" (i.e., AI training centers), €50 million to build AI training databases representative of the cultures of the French-speaking world, and €50 million to enhance France's computing capabilities, primarily by improving the capacity of France's supercomputer "Jean Zay." In addition, to boost investment in French startups, the French Government has asked insurance companies to commit to an overall budget of €7 billion for French innovation capital funds.
12. France: French Data Protection Authority, CNIL, publishes draft guidelines on the security of critical data processing
On August 28, the French Data Protection Authority, the Commission Nationale de l'Informatique et des Libertés (CNIL), issued its draft guidelines on critical personal data processing for public comment. The guidelines compile the CNIL's recommended security practices for critical personal data processing, which covers "large-scale" data processing under the GDPR. Although the guidelines do not specifically target data processed by AI systems, the activities of certain AI systems would likely be deemed critical given the scope of the data they process. The public consultation period closes on October 8, 2023, and the CNIL plans to publish the finalized guidance in early 2024.
Ajita Shukla (Counsel, Washington D.C.), Ketan Pastakia (Counsel, New York), Agathe Malphettes (Counsel, Paris), Caroline Lyannaz (Counsel, Paris), Sahra Nizipli (Associate, New York), Jack Hobbie (Law Clerk, New York), Felix Aspin (Associate, London), Louise Mouclier (Associate, Paris), Laura Tuszynski (Associate, Paris), Alexandre Ghanty (Associate, Paris), Rachael Stowasser (Associate, Sydney), Charlie Hacker (Graduate, Sydney), and Timo Gaudszun (Legal Intern, Berlin) contributed to the development of this publication.
White & Case means the international legal practice comprising White & Case LLP, a New York State registered limited liability partnership, White & Case LLP, a limited liability partnership incorporated under English law and all other affiliated partnerships, companies and entities.
This article is prepared for the general information of interested persons. It is not, and does not attempt to be, comprehensive in nature. Due to the general nature of its content, it should not be regarded as legal advice.
© 2023 White & Case LLP