AI Legal News Summer Roundup: Edition 2
Welcome to the second edition of our AI Legal News Summer Roundup!
In the first edition of this series, we discussed the surge of class action litigation in the U.S., along with coordinated efforts by governmental bodies around the world to collaborate, consult, and seek feedback on regulation and rules concerning generative AI. As highlighted in this second edition, this flurry of activity continues.
Around the world, and across both the public and private sectors, there are resounding calls on AI companies to pursue safety and security when developing and commercializing AI. This was a dominant theme of the United Nations Security Council's July 18 meeting on the risks AI poses to global peace and security (see Update 10 below). Safety, security, and trust were also the three core values at the center of the voluntary commitments regarding AI made by seven leading U.S. tech companies to the White House on July 21 (see Update 1 below). We also saw a continuing focus on privacy and personal data rights: the California Privacy Protection Agency (whose main responsibility is to implement and enforce the California Consumer Privacy Act) previewed key issues for its Board to consider in drafting future regulatory text for automated decision-making technology; the Singaporean Personal Data Protection Commission published proposed guidelines on the use of personal data in AI recommendation and decision systems, along with a consultation paper seeking views on those guidelines (see Update 7 below); and the principal executive organ of the Government of India approved a bill that will likely limit the ability of AI companies to scrape large amounts of personal data without individual consent (see Update 8 below).
The issues surrounding copyright and the right to use training data for generative AI models remain hotly contested in the U.S. and will likely remain so for some time as we watch and wait for decisions from the courts. One case we will be following closely is UAB "Planner5D" v. Meta Platforms, Inc. et al., No. 3:19-cv-03132 (N.D. Cal. Jun. 5, 2019), in which the United States District Court for the Northern District of California has been asked to grant summary judgment on copyright and trade secret claims relating to the scraping of data used for training AI. In the meantime, copyright owners and the creative community are making their objections known: the Writers Guild of America West (the labor union representing writers in film, television, radio, and online media) and the Screen Actors Guild – American Federation of Television and Radio Artists (the labor union representing actors and other media professionals) are jointly striking in protest of proposals for using AI tools in production; media executive Barry Diller has announced his plan to initiate legal proceedings, together with certain unspecified "leading publishers," against the use of published content in training datasets; and the Authors Guild has issued an open letter to AI developers calling for compensation and consent for the use of their works in generative AI programs and AI outputs (see Update 3 below).
In this second edition, we highlight key legal developments we've identified from the United States, Europe, and APAC, as well as from the United Nations, covering July 13 to July 21, 2023:
1. United States: AI companies make voluntary commitments to AI safeguards at the White House's request
On July 21, the White House issued a Fact Sheet stating that it had secured voluntary commitments from seven leading U.S. tech companies with respect to AI: Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. The commitments include: (1) internal and external red-teaming (i.e., the practice of rigorously challenging plans, policies, systems, and assumptions by adopting an adversarial approach) of their AI systems before their release, (2) sharing information across the industry and with governments, civil society, and academia, (3) investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights (i.e., determinants of how much influence an input will have on the output), (4) facilitating third-party discovery and reporting of vulnerabilities in their AI systems, (5) developing robust technical mechanisms to ensure that users know when content is AI-generated (e.g., a watermarking system), (6) publicly reporting their AI systems' capabilities, limitations, and areas of appropriate and inappropriate use, (7) prioritizing research on the societal risks that AI systems can pose (including on avoiding harmful bias and discrimination and protecting privacy), and (8) developing and deploying advanced AI systems to help address society's greatest challenges. Notably, the commitments do not require companies to disclose information about their training data. While there have been criticisms that the commitments are not legally enforceable, the Federal Trade Commission and National Advertising Division in the U.S. may take these commitments into account in connection with a false advertising or unfair or deceptive acts or practices claim. Further, insufficient observance of the commitments may provide additional impetus for legislative action by Congress and foreign governments.
2. United States: The Associated Press (AP) to license its news archive to OpenAI for AI training
On July 13, AP issued a statement saying that it has reached an agreement with OpenAI to license AP's archive of news stories to help train OpenAI's generative AI systems, including ChatGPT. Under the arrangement, OpenAI will be licensing "part of AP's text archive, while AP will leverage OpenAI’s technology and product expertise." From AP's perspective, this agreement will help "ensure intellectual property is protected and content creators are fairly compensated for their work." Additionally, this collaboration is part of AP’s recent practice of using "automation to make its journalism more effective."
3. United States: Open letter organized by the Authors Guild to Generative AI leaders
On July 18, the Authors Guild and over 9,000 authors signed an open letter to OpenAI, Alphabet, Meta, Stability AI, IBM, and Microsoft calling for compensation and consent for the use of their works both in generative AI programs and in AI outputs. The letter cites the recent U.S. Supreme Court case Warhol v. Goldsmith (see our Tech Newsflash article for more information), arguing that "no court would excuse copying illegally sourced works as fair use," especially given the "high commerciality" of the AI companies' use. The letter adds that "[t]he introduction of AI threatens to tip the scale to make it even more difficult, if not impossible, for writers—especially young writers and voices from under-represented communities—to earn a living from their profession."
4. United States: Artists given opportunity to submit a revised complaint for their class action lawsuit against Stability AI and other generative AI art platforms
In a hearing on July 19, U.S. District Court Judge William Orrick stated that he was inclined to dismiss "almost everything" alleged in a proposed class action brought by artists against AI art platforms Stability AI, Midjourney, and DeviantArt in Andersen et al v. Stability AI Ltd. et al., No. 3:23-cv-00201 (N.D. Cal. Jan. 13, 2023). Judge Orrick raised various concerns about the Plaintiffs' complaint, including its unclear explanation of whether Midjourney's and DeviantArt's generative AI systems train on images themselves or rely on Stability AI's training, and of "how each is liable for each of the claims because the defendants do different things." The Plaintiffs will be allowed to submit a revised complaint.
5. United States: Senator introduces the AI Leadership To Enable Accountable Deployment Act (AI LEAD Act)
On July 13, Senator Gary Peters (D-MI) introduced the AI LEAD Act, S. 2293, a federal bill that would establish a Chief Artificial Intelligence Officers Council to promote coordination of agency practices relating to AI, and would require the head of each agency to hire or designate a Chief Artificial Intelligence Officer (CAIO) to manage, govern, and oversee AI-related agency processes. As currently introduced, the AI LEAD Act has a sunset provision taking effect 10 years after the date of enactment.
6. France: The French Digital Council studies the impact of generative AI on society
On July 13, the French Digital Council (FDC) – an advisory commission for the French government – issued a press release highlighting studies it has undertaken regarding the impact of generative AI on society, with the aims of better understanding AI and proposing solutions for using AI "peacefully" to serve citizens' needs. According to the FDC, "[t]he issues linked to the development and use of generative AI are multiple and closely interlinked, touching on essential areas of our societies: the construction of knowledge, information, work, privacy, the representation of the world, etc. The questions currently driving public debate point to a key question: how can we put technologies at the service of our priorities?" As part of its studies, the FDC has: (1) conducted several interviews with AI experts from academia and industry (including Laure Soulier, professor in the Machine Learning team at Sorbonne University, and Thomas Wolf, co-founder of Hugging Face), (2) participated in several events focused on AI issues, including the LaborIA COMEX (LaborIA is a research partnership between the French National Institute for Research in Digital Science and Technology and the French Ministry of Labor that aims to identify issues linked to the use and impact of AI on work, employment, skills, and social dialogue, in order to drive public debate and enlighten public and private decision-makers), and (3) given several media interviews on AI-related subjects (including on ChatGPT).
7. Singapore: Personal Data Protection Commission proposes advisory guidelines on use of personal data in AI
On July 18, the Singaporean Personal Data Protection Commission published a public consultation paper seeking views on the proposed "Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems" (Advisory Guidelines). The focus of the Advisory Guidelines is to clarify how the Personal Data Protection Act 2012 (PDPA) applies to the collection and use of personal data by organizations in developing and deploying AI systems that are used to make decisions autonomously or to assist human decision-makers through recommendations and predictions. The Advisory Guidelines also aim to provide baseline guidance and best practices for organizations on how to be transparent about whether and how such AI systems use personal data to make recommendations, predictions, or decisions, including guidance on consent and notification obligations. The Advisory Guidelines will not be legally binding and will not affect the application of the PDPA. Submissions are due by August 31, 2023.
8. India: India's Union Cabinet approves the Digital Personal Data Protection (DPDP) Bill, 2023, to be tabled in Parliament in the current Monsoon session
India's Union Cabinet, a subset of the Union Council of Ministers, the principal executive organ of the Government of India, approved the DPDP Bill to be tabled in Parliament in the current Monsoon session (which began on July 20 and is expected to continue until August 11). According to The Economic Times,1 a leaked version of a draft of the Bill omits a clause that previously allowed data-collecting entities to process publicly available personal data of Indian Internet users. A technology expert at an unnamed public policy think tank opined that this will likely limit the ability of companies with generative AI systems to scrape large amounts of personal data unless an individual has provided consent.
9. India: Telecom Regulatory Authority of India releases AI recommendations
Following an extensive consultation process, on July 20 the Telecom Regulatory Authority of India (TRAI) released recommendations on "Leveraging Artificial Intelligence and Big Data in Telecommunication Sector." The recommendations note the "urgent need" to adopt a broad, risk-based regulatory framework across all sectors, with high-risk AI use cases regulated through legally binding obligations. TRAI has recommended the establishment of an independent statutory authority to ensure the development of responsible AI and the regulation of AI use cases. The new statutory authority would exercise regulatory and recommendatory functions, including framing regulations, developing a model AI governance framework and model ethical codes, and monitoring and making recommendations on the enforcement framework for AI applications.
10. Global: United Nations (UN) Security Council discusses AI risks in relation to global peace and security
On July 18, the UN Security Council (Council) held its first meeting to address the potential risks AI poses to international peace and stability. The 15-member Council was briefed by UN Secretary-General António Guterres, Jack Clark (co-founder of Anthropic), and Professor Zeng Yi (co-director of the China-UK Research Center for AI Ethics and Governance). In his briefing, Guterres warned of the harms that can result when AI tools are used by those with malicious intent, and accordingly called for the establishment of a new UN entity to support collective efforts and lead global standards and approaches to "maximize the benefits of AI for good, to mitigate existing and potential risks, and to establish and administer internationally-agreed mechanisms of monitoring and governance." Clark called on the "Governments of the world" to "come together, develop State capacity and make the development of powerful AI systems a shared endeavor across all parts of society, rather than one dictated solely by a small number of firms competing with one another in the marketplace."2 In a similar vein, Yi proposed that the Council consider creating a working group on AI for peace and security, and said that the UN must play a central role in setting up a framework on AI development and governance.3
Charlie Hacker (White & Case, Graduate, Sydney), Emma Hallab (White & Case, Vacation Clerk, Sydney), and Avi Tessone (White & Case, Summer Associate, New York) contributed to the development of this publication.
1 Personal data of Indians in public domain may get shielded from AI, The Economic Times (July 17, 2023).
2 International Community Must Urgently Confront New Reality of Generative, Artificial Intelligence, Speakers Stress as Security Council Debates Risks, Rewards, UN Press (July 18, 2023).
3 Id.
White & Case means the international legal practice comprising White & Case LLP, a New York State registered limited liability partnership, White & Case LLP, a limited liability partnership incorporated under English law and all other affiliated partnerships, companies and entities.
This article is prepared for the general information of interested persons. It is not, and does not attempt to be, comprehensive in nature. Due to the general nature of its content, it should not be regarded as legal advice.
© 2023 White & Case LLP