Artificial intelligence (AI) has made enormous strides in recent years and has increasingly moved into the public consciousness.
Increases in computational power, coupled with advances in machine learning, have fueled the rapid rise of AI. This has brought enormous opportunities, as new AI applications have given rise to new ways of doing business. It has also brought potential risks, from unintended impacts on individuals (e.g., AI errors harming an individual's credit score or public reputation) to the risk of misuse of AI by malicious third parties (e.g., by manipulating AI systems to produce inaccurate or misleading output, or by using AI to create deepfakes).
Governments and regulatory bodies around the world have had to act quickly to try to ensure that their regulatory frameworks do not become obsolete. In addition, international organizations such as the G7, the UN, the Council of Europe and the OECD have responded to this technological shift by issuing their own AI frameworks. But they are all scrambling to stay abreast of technological developments, and already there are signs that emerging efforts to regulate AI will struggle to keep pace. In an effort to introduce some degree of international consensus, the UK government organized the first global AI Safety Summit in November 2023, with the aim of encouraging the safe and responsible development of AI around the world.
Most jurisdictions have sought to strike a balance between encouraging AI innovation and investment and creating rules to protect against possible harms. However, jurisdictions around the world have taken substantially different approaches to achieving these goals, which has in turn increased the risk that businesses face from a fragmented and inconsistent AI regulatory environment. Nevertheless, certain trends are becoming clearer at this stage.
Businesses in almost all sectors need to keep a close eye on these developments to ensure that they are aware of the AI regulations and forthcoming trends, in order to identify new opportunities and new potential business risks. But even at this early stage, the inconsistent approaches that jurisdictions have taken to the core question of how to regulate AI are clear. As a result, it appears that international businesses may face substantially different AI regulatory compliance challenges in different parts of the world. To that end, this AI Tracker is designed to provide businesses with an understanding of the state of play of AI regulations in the core markets in which they operate. It provides analysis of the approach that each jurisdiction has taken to AI regulation and provides helpful commentary on the likely direction of travel.
Because global AI regulations remain in a constant state of flux, this AI Tracker will develop over time, adding updates and new jurisdictions when appropriate. Stay tuned, as we continue to provide insights to help businesses navigate these ever-evolving issues.
The African Union's Continental AI Strategy sets the stage for a unified approach to AI governance across the continent.
Voluntary AI Ethics Principles guide responsible AI development in Australia, with potential reforms under consideration.
The enactment of Brazil's proposed AI Regulation remains uncertain with compliance requirements pending review.
AIDA is expected to regulate AI at the federal level in Canada, but provincial legislation has yet to be introduced.
The Interim AI Measures are China's first administrative regulation specifically governing the management of generative AI services.
The Council of Europe is developing a new Convention on AI to safeguard human rights, democracy, and the rule of law in the digital space, covering governance, accountability and risk assessment.
The successful implementation of the EU AI Act into national law is the primary focus for the Czech Republic, with its National AI Strategy being the main policy document.
The EU introduces the pioneering EU AI Act, aiming to become a global hub for human-centric, trustworthy AI.
France actively participates in international efforts and proposes sector-specific laws.
The G7's AI regulations mandate Member States' compliance with international human rights law and relevant international frameworks.
Germany evaluates AI-specific legislation needs and actively engages in international initiatives.
National frameworks inform India’s approach to AI regulation, with sector-specific initiatives in finance and health sectors.
Israel promotes responsible AI innovation through policy and sector-specific guidelines to address core issues and ethical principles.
Japan adopts a soft law approach to AI governance but lawmakers advance proposal for a hard law approach for certain harms.
Kenya's National AI Strategy and Code of Practice are expected to set the foundation for AI regulation once finalized.
Nigeria's draft National AI Policy is underway and will pave the way for a comprehensive national AI strategy.
Position paper informs Norwegian approach to AI, with sector-specific legislative amendments to regulate developments in AI.
The OECD's AI recommendations encourage Member States to uphold principles of trustworthy AI.
Saudi Arabia has yet to enact AI regulations, relying instead on guidelines to establish practice standards and general principles.
Singapore's AI frameworks guide AI ethical and governance principles, with existing sector-specific regulations addressing AI risks.
South Africa is yet to announce any AI regulation proposals but is in the process of obtaining inputs for a draft National AI plan.
South Korea's AI Act is expected to serve as a consolidated body of law governing AI once approved by the National Assembly.
Spain creates Europe's first AI supervisory agency and actively participates in EU AI Act negotiations.
Switzerland's National AI Strategy sets out guidelines for the use of AI, and aims to finalize an AI regulatory proposal in 2025.
Draft laws and guidelines are under consideration in Taiwan, with sector-specific initiatives already in place.
Turkey has published multiple guidelines on the use of AI in various sectors, with a bill for AI regulation now in the legislative process.
Mainland UAE has published an array of decrees and guidelines regarding regulation of AI, while the ADGM and DIFC free zones each rely on amendments to existing data protection laws to regulate AI.
The UK prioritizes a flexible framework over comprehensive regulation and emphasizes sector-specific laws.
The UN's new draft resolution on AI encourages Member States to implement national regulatory and governance approaches for a global consensus on safe, secure and trustworthy AI systems.
The US relies on existing federal laws and guidelines to regulate AI but aims to introduce AI legislation and a federal regulation authority.
The UK prioritizes a flexible framework over comprehensive regulation and emphasizes sector-specific laws.
The UK government's AI Regulation White Paper1 of August 3, 2023 (the "White Paper") and its written response of February 6, 2024 to the feedback it received as part of its consultation on the White Paper (the "Response")2 both indicate that the UK does not intend to enact horizontal AI regulation in the near future. Instead, the White Paper and the Response support a "principles-based framework" for existing sector-specific regulators to interpret and apply to the development and use of AI within their domains.3
The UK considers that a non-statutory approach to the application of the framework offers "critical adaptability" that keeps pace with rapid and uncertain advances in AI technology.4 However, the UK may choose to introduce a statutory duty on regulators to have "due regard" to the application of the principles after reviewing the initial period of their non-statutory implementation.5
The UK Government's Office for Artificial Intelligence, which was set up to oversee the implementation of the UK's National AI Strategy, will perform various central functions to support the framework's implementation. Such support functions include (among other things): (i) monitoring and evaluating the overall efficacy of the regulatory framework; (ii) assessing and monitoring risks across the economy arising from AI; and (iii) promoting interoperability with international regulatory frameworks.6
However, on July 17, 2024, the King’s Speech7 proposed a set of binding measures on AI, a departure from the previous agile, non-binding approach. Specifically, the government plans to establish "appropriate legislation to place requirements on those working to develop the most powerful [AI] models".8 The Digital Information and Smart Data Bill was also announced, which will be accompanied by reforms to data-related laws, to support the safe development and deployment of new technologies (which may include AI).9 It is not yet clear exactly how this will be implemented.
On July 26, 2024, the Department for Science, Innovation and Technology commissioned an "AI Action Plan"10 to leverage AI for economic growth and improved public services. The plan will evaluate the UK's infrastructure needs, attract top AI talent, and promote AI adoption across the public and private sectors. Evidence will be gathered from academics, businesses, and civil society to create a comprehensive strategy for AI sector growth and integration. Recommendations are expected in Q4 2024, and an "AI Opportunities Unit" will be established to implement these recommendations.
In February 2024, the UK government wrote to a number of regulators whose work is impacted by AI, asking them to publish an update outlining their strategic approach to AI.11 The regulators' subsequent responses contained (among other things) plans on regulating AI, actions they have already taken, and expressed their support and adherence to the White Paper’s five principles (see section titled "Key compliance requirements" below for more details). Most notably:
On September 5, 2024, the Council of Europe’s Framework Convention16 on AI was signed by Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino, the United Kingdom17, Israel, the United States, and the European Union. The treaty will enter into force on the first day of the month following the expiry of a period of three months after five signatories, including at least three Council of Europe Member States, have ratified it. Countries from all over the world will be eligible to join and commit to its provisions.
There are several domestic laws that will affect the development or use of AI, including but not limited to:
The White Paper describes "AI," "AI systems" and/or "AI technologies" as "products and services that are 'adaptable' and 'autonomous'" but stops short of providing an exhaustive definition.18
The proposed regulatory framework applies to the whole of the UK and states that the UK will continue to consider the impacts of devolution as the AI regulatory framework further develops.19
The White Paper also notes that, as the UK is not currently proposing the introduction of new statutory requirements, the current principles-based AI framework will not change the territorial application of existing legislation applicable to AI (including, for example, data protection legislation). The Response notes that as the UK's approach develops, the government will continue to assess the territorial reach of its AI regulatory framework.20
As noted above, sector-specific regulators will be interpreting and applying the UK's overall principles-based AI framework to the development or use of AI within their respective domains. To date, limited sector-specific guidance has been published. We expect regulators will continue to publish updates outlining their respective strategic approach to AI in the near term.
There are two key compliance roles that will be impacted by the UK's AI regulatory framework:
The White Paper identifies a range of high-level risks that the principles-based AI framework seeks to mitigate with proportionate interventions.22 These include:
The White Paper states that the UK's AI regulatory framework will adopt a context-specific approach instead of categorizing AI systems according to risk. Thus, the UK has decided to not assign rules or risk levels across sectors or technologies.23 The White Paper also notes that it would be neither proportionate nor effective to classify all applications of AI in critical infrastructure as high risk, as some uses of AI in relation to critical infrastructure (e.g., the identification of superficial scratches on machinery) can be relatively low risk.24 Essentially, the UK's context-specific approach to risk categorization is expected to allow regulators to respond to the risks posed by AI systems in a proportionate manner.25
The Response highlights the UK's continued commitment to a context-based approach "that avoids unnecessary blanket rules that apply to all AI technologies, regardless of how they are used", noting that such an approach is the "best way" to ensure an agile approach that stands the test of time.26
The White Paper establishes five cross-sectoral principles for existing regulators to interpret and apply within their respective domains:
Principle 1: Regulators should ensure that AI systems function in a robust, secure, and safe way throughout the AI life cycle, and that risks are continually identified, assessed and managed.
To implement this principle, regulators will need to consider:
Principle 2: Regulators should ensure that AI systems are appropriately transparent and explainable. To implement this principle, regulators will need to consider:
Principle 3: Regulators should ensure that AI systems are fair (i.e., they do not undermine the legal rights of individuals or organizations, discriminate unfairly against individuals, or create unfair market outcomes).
To implement this principle, regulators will likely need to:
Principle 4: Regulators should ensure there are governance measures in place to allow for effective oversight of the supply and use of AI systems, with clear lines of accountability across the AI life cycle. To implement this principle, regulators will likely need to:
Principle 5: Regulators should ensure that users, impacted third parties and actors in the AI life cycle are able to contest an AI decision or outcome that is harmful or creates a material risk of harm, and access suitable redress.
To implement this principle, regulators will need to consider:
The Response notes that values and rules associated with human rights, operational resilience, data quality, international alignment, systemic risks and wider societal impacts, sustainability, and education and literacy are largely already enshrined in existing UK laws.
The UK does not have a central AI regulator, and the White Paper indicates that there are no existing plans to establish a central AI regulator either.27 As noted above, sector-specific regulators are expected to interpret and apply the principles-based AI framework within their respective domains.
Sector-specific regulators will need to ensure their regulations incorporate the principles of accountability and suitable redress with reference to the UK's principles-based AI framework.
1 See the White Paper (here).
2 See the Response (here).
3 See the White Paper (here), Section 3.2 (The proposed regulatory framework), and the Response (here), section 5 (A regulatory framework to keep pace with a rapidly advancing technology).
4 See the Response (here), paragraph 16.
5 See the Response (here), paragraph 109.
6 See the White Paper (here), paragraph 14.
7 The King’s Speech sets out the new Labour government’s proposed laws and its plans for the upcoming parliamentary term.
8 See the King’s Speech here.
9 See the King’s Speech background notes here, page 40.
10 See UK government press release (here).
11 See all the relevant regulator updates here.
12 See the FCA update here.
13 See the ICO strategic approach here.
14 See Ofcom’s strategic approach to AI 2024/25 here.
15 See the Competition and Markets Authority initial review of AI Foundation Models (here).
16 See Convention text here.
17 See Government press release here.
18 See the White Paper (here), Section 1.3 (A note on terminology) and Section 3.2.1 (Defining Artificial Intelligence).
19 See the White Paper (here), Part 5 (Territorial application).
20 See the Response (here), paragraph 78.
21 See the White Paper (here), paragraph 25.
22 See the Response (here), paragraph 11.
23 See the White Paper (here), Section 3.2.2 (Regulating the use – not the technology), paragraph 45.
24 See the White Paper (here), Section 3.2.2 (Regulating the use – not the technology), paragraph 45.
25 See the White Paper (here), Section 3.2.2 (Regulating the use – not the technology), paragraph 46.
26 See the Response (here), paragraph 11.
27 See the White Paper (here), paragraph 15.
White & Case means the international legal practice comprising White & Case LLP, a New York State registered limited liability partnership, White & Case LLP, a limited liability partnership incorporated under English law and all other affiliated partnerships, companies and entities.
This article is prepared for the general information of interested persons. It is not, and does not attempt to be, comprehensive in nature. Due to the general nature of its content, it should not be regarded as legal advice.
© 2024 White & Case LLP
Daniel Mair (Trainee Solicitor, White & Case, Paris) and Jeffrey Shin (Trainee Solicitor, White & Case, London) contributed to this publication.