Legal perspectives on developing and deploying ‘responsible AI’


White & Case LLP has partnered with the Financial Times on the publication of its Moral Money Forum reports, which explore key issues from the ESG debate. This article has been reproduced with permission from the Financial Times.

Despite previous calls for an "ethical pause" on AI development, corporate investment in AI is expected to reach $200bn by 2025. Board oversight in this area is therefore evolving at pace, particularly in light of concerns raised about AI, including: the potential for misuse of AI technologies; the amplification of bias; a potential lack of transparency and accountability; and the treatment of personal data and intellectual property used in AI systems.

The growing demand for generative AI, together with its increasing availability, impact and stakeholder scrutiny, is pushing it higher up the board agenda across all sectors. Businesses stand to benefit from embracing AI — for example, it could improve efficiency, assist decision-making and contribute to risk management. Yet, to realise AI's benefits safely and responsibly, boards must be able to navigate the associated legal and compliance, shareholder activism, ethical and reputational risks.

AI is technically complex and fast-moving, making it challenging for governments to develop effective regulation, standards and/or guidance. As a result, the international landscape of legal frameworks governing AI is fragmented — with even the definition of AI differing across jurisdictions. However, a new, global phase of AI regulation is starting to emerge, as indicated by the publication of the G7 AI principles and Hiroshima AI Process, the Bletchley Declaration on AI safety, the Blueprint for an AI Bill of Rights, the "first-of-a-kind" Framework Convention on AI, Human Rights, Democracy and the Rule of Law, the UN's B-Tech Project on Generative AI with the UN Guiding Principles on Business and Human Rights, and the UN General Assembly's recently adopted resolution on "safe, secure and trustworthy" AI that will also benefit sustainable development for all (backed by more than 120 States).

The EU's proposed AI Act is designed to provide a horizontal legal framework for the regulation of AI systems across the EU. Once in force, the AI Act's risk-based approach — which has fundamental rights protection at its core — will have global reach and affect actors across the entire AI value chain. However, several of the concepts set out in the AI Act will require clarification by courts and regulators to provide businesses with greater certainty regarding their compliance obligations. Alongside the AI Act, companies operating in the EU must still consider obligations under other applicable instruments, such as the General Data Protection Regulation, the Digital Services Act and the forthcoming AI Liability Directive. Companies should also be aware of regulatory initiatives at national level in the Member States in which they operate. For example, in February 2024, France's competition authority announced that it would investigate competitive functioning in the generative AI sector, with a focus on big tech, and would issue an opinion in the coming months.

The UK has taken a different approach from the EU, declining to issue new legislation at this stage and instead adopting a flexible framework of AI Regulatory Principles that will be enforced by existing regulators. This framework is intended to be both pro-innovation and pro-safety. In February 2024, a Committee of the House of Lords published a report cautioning the Government against a regulatory approach too narrowly focused on AI safety. Days later, the Government published: (i) its consultation response to its AI Regulation White Paper, articulating a principles-based (rather than the EU's risk-based) approach towards regulating AI; and (ii) guidance for regulators on implementing the AI Regulatory Principles.

On the other side of the pond, in late 2023, President Biden signed Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of AI. In contrast to the EU's risk-based regulatory approach, however, the Order places near-equal emphasis on the pressing need to (responsibly) develop and harness the potential benefits of AI and on the need to understand and mitigate novel risks. Initiatives are also unfolding at State level, including California Senator Scott Wiener's recent proposal for sweeping safety measures for AI in SB 1047, while New York City has already introduced Local Law 144 to regulate the use of AI in hiring decisions.

Shareholders are also becoming active, with AI-focused resolutions having made their debut in the US (for example, calling for tech and motion picture companies to publish an "AI transparency report"). Such resolutions are expected to feature more prominently at future AGMs.

Boards should also be alive to the types of AI-related disputes and class actions being filed in national courts, as judgments in these early legal actions will be instructive in evaluating a company's potential exposure to litigation risk. Disputes have already arisen in relation to issues such as whether training data fed into AI systems infringes copyright, other IP rights and/or personal data protections; alleged bias in the output of AI tools; misrepresentation of AI systems' capabilities; and whether an AI system can itself be an "inventor" under patent law or an "author" under copyright law.

To mitigate the risks explored above, companies should implement effective AI governance. This may involve:

  1. developing clear and robust policies which govern — and embed ethical practices into — the use of AI; 
  2. developing strategies for negotiating AI-specific contractual clauses, including in relation to policies, procedures and testing addressing the concerns outlined above, and the attribution of liability for AI failures; 
  3. establishing a cross-functional team of specialists from legal, compliance, ethics, data science and marketing (among others) to oversee, and report to management on, AI governance; and 
  4. undertaking regular risk assessments and audits of AI models and data sets to remediate legal and ethical concerns (e.g., bias).


This publication is provided for your convenience and does not constitute legal advice. This publication is protected by copyright.
