Artificial intelligence (AI) has made enormous strides in recent years and has increasingly moved into the public consciousness.
Increases in computational power, coupled with advances in machine learning, have fueled the rapid rise of AI. This has brought enormous opportunities, as new AI applications have given rise to new ways of doing business. It has also brought potential risks, from unintended impacts on individuals (e.g., AI errors harming an individual's credit score or public reputation) to the risk of misuse of AI by malicious third parties (e.g., by manipulating AI systems to produce inaccurate or misleading output, or by using AI to create deepfakes).
Governments and regulatory bodies around the world have had to act quickly to try to ensure that their regulatory frameworks do not become obsolete. In addition, international organizations such as the G7, the UN, the Council of Europe and the OECD have responded to this technological shift by issuing their own AI frameworks. But they are all scrambling to stay abreast of technological developments, and already there are signs that emerging efforts to regulate AI will struggle to keep pace. In an effort to introduce some degree of international consensus, the UK government organized the first global AI Safety Summit in November 2023, with the aim of encouraging the safe and responsible development of AI around the world. The EU is also implementing the first comprehensive horizontal legal framework for the regulation of AI systems across EU Member States (the EU AI Act is addressed in more detail here: AI watch: Global regulatory tracker - European Union, and you can read our EU AI Act Handbook here).
Most jurisdictions have sought to strike a balance between encouraging AI innovation and investment and creating rules to protect against possible harms. However, jurisdictions around the world have taken substantially different approaches to achieving these goals, which has in turn increased the risk that businesses face from a fragmented and inconsistent AI regulatory environment. Nevertheless, certain trends are becoming clearer at this stage.
Businesses in almost all sectors need to keep a close eye on these developments so that they are aware of current AI regulations and forthcoming trends, and can identify new opportunities and potential business risks. Even at this early stage, the inconsistent approaches that jurisdictions have taken to the core questions of how to regulate AI are clear. As a result, international businesses may face substantially different AI regulatory compliance challenges in different parts of the world. To that end, this AI Tracker is designed to provide businesses with an understanding of the state of play of AI regulations in the core markets in which they operate. It provides analysis of the approach that each jurisdiction has taken to AI regulation and helpful commentary on the likely direction of travel.
Because global AI regulations remain in a constant state of flux, this AI Tracker will develop over time, adding updates and new jurisdictions when appropriate. Stay tuned, as we continue to provide insights to help businesses navigate these ever-evolving issues.
The African Union's Continental AI Strategy sets the stage for a unified approach to AI governance across the continent.
Voluntary AI Ethics Principles guide responsible AI development in Australia, with potential reforms under consideration.
The enactment of Brazil's proposed AI Regulation remains uncertain with compliance requirements pending review.
AIDA is expected to regulate AI at the federal level in Canada, but provincial legislation has yet to be introduced.
The Interim AI Measures are China's first specific administrative regulation on the management of generative AI services.
Despite congressional activity on AI in Colombia, regulation remains unclear and uncertain.
The Council of Europe is developing a new Convention on AI to safeguard human rights, democracy, and the rule of law in the digital space covering governance, accountability and risk assessment.
The successful implementation of the EU AI Act into national law is the primary focus for the Czech Republic, with its National AI Strategy being the main policy document.
The EU introduces the pioneering EU AI Act, aiming to become a global hub for human-centric, trustworthy AI.
France actively participates in international efforts and proposes sector-specific laws.
The G7's AI regulations mandate Member States' compliance with international human rights law and relevant international frameworks.
Germany evaluates AI-specific legislation needs and actively engages in international initiatives.
Hong Kong lacks comprehensive AI legislative framework but is developing sector-specific guidelines and regulations, and investing in AI.
National frameworks inform India’s approach to AI regulation, with sector-specific initiatives in finance and health sectors.
Israel promotes responsible AI innovation through policy and sector-specific guidelines to address core issues and ethical principles.
Japan adopts a soft law approach to AI governance, but lawmakers are advancing a proposal for a hard law approach to certain harms.
Kenya's National AI Strategy and Code of Practice are expected to set the foundation for AI regulation once finalized.
Nigeria's draft National AI Policy is underway and will pave the way for a comprehensive national AI strategy.
Position paper informs Norwegian approach to AI, with sector-specific legislative amendments to regulate developments in AI.
The OECD's AI recommendations encourage Member States to uphold principles of trustworthy AI.
Saudi Arabia is yet to enact AI Regulations, relying on guidelines to establish practice standards and general principles.
Singapore's AI frameworks guide AI ethical and governance principles, with existing sector-specific regulations addressing AI risks.
South Africa is yet to announce any AI regulation proposals but is in the process of obtaining inputs for a draft National AI plan.
South Korea's AI Act has been promulgated as the fundamental body of law governing AI.
Spain creates Europe's first AI supervisory agency and actively participates in EU AI Act negotiations.
Switzerland's National AI Strategy sets out guidelines for the use of AI, and aims to finalize an AI regulatory proposal in 2025.
Draft laws and guidelines are under consideration in Taiwan, with sector-specific initiatives already in place.
Turkey has published multiple guidelines on the use of AI in various sectors, with a bill for AI regulation now in the legislative process.
Mainland UAE has published an array of decrees and guidelines regarding regulation of AI, while the ADGM and DIFC free zones each rely on amendments to existing data protection laws to regulate AI.
The UK prioritizes a flexible framework over comprehensive regulation and emphasizes sector-specific laws.
The UN's AI resolutions encourage Member States to adopt national rules to establish safe, secure and trustworthy AI systems and create forums to advance global cooperation, scientific understanding, and share best practices.
The US relies on existing federal laws and guidelines to regulate AI but aims to introduce AI legislation and a federal regulation authority.
Italy is the first EU Member State to adopt a comprehensive national framework concerning the use of Artificial Intelligence.
On September 23, 2025, Italy adopted Law No. 132/2025 (the "National AI Law"), making it the first EU Member State to adopt a comprehensive national framework dedicated to Artificial Intelligence.
The National AI Law1 is intended to complement Regulation (EU) 2024/1689 (the "EU AI Act") by establishing sector-specific rules that fall within the scope of national law and by defining the institutional architecture responsible for overseeing AI in Italy, including designated authorities as well as administrative and criminal sanctions.
The EU AI Act is addressed separately here.
On October 10, 2025, the National AI Law entered into force.
Unlike the EU AI Act, which provides for gradual implementation and will only become fully binding over time, several provisions of the National AI Law are already in force and fully applicable.
However, the National AI Law also delegates to the Italian Government the power to adopt implementing decrees concerning the training of AI systems and civil redress (Article 16), the alignment of national legislation with the EU AI Act (Article 24(1)), specific civil procedure rules on damages (Article 24(5)(d)) and criminal sanctions (Article 24(5)(b)). The implementing decrees must be adopted within twelve months – i.e., by October 10, 2026.
On May 20, 2024, the Italian Data Protection Authority (DPA) adopted a Notice on the use of web scraping for the purpose of training AI models and on countering the misuse of personal data.2
Moreover, the DPA has had a specific organizational unit dedicated to artificial intelligence in place since 2021.
Furthermore, Italian courts and regulators have begun to interpret existing laws with regard to AI.3
Additionally, various laws that do not directly seek to regulate AI may nevertheless affect the development or use of AI in Italy.
Article 2 of the National AI Law refers to the definitions of "Artificial Intelligence" and "AI system" contained in the EU AI Act.
Interestingly, the parliamentary bill initially introduced standalone definitions of these concepts, but it was subsequently amended following the intervention of the European Commission, which noted that the proposal did not align with the EU AI Act and risked causing fragmentation.
The National AI Law highlights the importance of an "anthropocentric" use of artificial intelligence (Article 1). Additionally, the research, experimentation, development and application of AI systems and AI models shall respect human rights and the principles enshrined in the Italian Constitution and in EU law (Article 3).
The National AI Law is set to apply to the development and deployment of AI systems and AI models within the Italian territory.
The National AI Law applies to the following sectors:
National health service and healthcare: Article 7 permits the use of AI to strengthen the national health service and support disease prevention, diagnosis, and treatment, provided fundamental rights and data protection are respected. AI cannot be used to discriminate in healthcare access and must promote inclusion for persons with disabilities. AI systems may only "support" medical procedures; final decisions remain with medical professionals, whose clinical judgment and liability are unchanged. Patients must be informed when AI is used in medical assistance.
With respect to research and development, Article 8 enables public and non-profit entities to process personal data for AI development as a matter of "paramount public interest", requiring only general notice to individuals and anonymization. No additional consent is needed, but the DPA must be notified at least 30 days in advance and may exercise its inspection powers.
By February 2026, the Minister of Health is expected to issue guidelines on the processing of medical data for research purposes with regard to AI systems (Article 9). The National AI Law also establishes a national AI platform, which aims to assist healthcare professionals (by providing non-binding decision support), doctors in their daily clinical practice, and patients in accessing community services (Article 10).
According to the National AI Law, the development and implementation of AI systems must respect fundamental rights enshrined in the Italian Constitution and EU law, and be based on the principles of transparency, proportionality, safety, protection of personal data, confidentiality, accuracy, non-discrimination, gender equality and sustainability.
The Italian government released the National AI Strategy, which identifies three areas of action, namely: (i) to strengthen expertise and attract talent in order to develop an AI ecosystem; (ii) to increase funding for advanced research in AI; (iii) to encourage the adoption and the application of AI, both in public administration (PA) and in productive sectors in general.5
The Italian government has also stressed the need to ensure, inter alia, greater clarity with respect to coordination with sector-specific regulations, in particular in banking and insurance. It has also proposed a system of self-assessment of AI systems by companies, through guidelines or a repository of examples, and supported the definition of burdens and obligations along the AI value chain, especially for SMEs.
The Presidency of the Council of Ministers is competent for the enforcement of the National AI Strategy, together with the Agency for Digital Italy (AgID) and other sector-specific authorities.
The National AI Law does not establish a risk categorization different from that of the EU AI Act. Accordingly, there is currently no risk categorization of AI in Italy other than that introduced by the EU AI Act.
The National AI Law relates to some specific sectors, as outlined above.
In compliance with the designation of AI supervisory authorities required by the EU AI Act, the National AI Law designates the national authorities responsible for AI.
The National AI Law introduces criminal sanctions for AI-related offenses. This includes a new offense under the Italian Penal Code, "Unlawful dissemination of content generated or altered with AI systems" (Article 612-quater), which targets deepfakes; those who publish or distribute AI-altered images, videos or audio recordings that are likely to be misleading and cause unjust harm may face one to five years' imprisonment. Additionally, a new aggravating circumstance applicable to all criminal offenses increases sentences by up to one third if a crime is committed by using AI, or if AI is used to hinder defense or aggravate the consequences of the offense (Article 61 No 11-undecies).
Further criminal penalties apply to offenses such as "attacks on citizens' political rights", "market rigging", and "market manipulation" when committed using AI. The law also extends copyright sanctions under the Italian Copyright Law to unauthorized scraping and abusive text and data mining.
The Government must issue implementing decrees within twelve months, which may introduce new criminal offenses to address AI risks. By October 2026, the Government is also expected to adopt implementing decrees defining the sanctioning powers of the designated authorities, as required by the EU AI Act. Further implementing decrees are expected to govern the use of data, algorithms and mathematical methods for the training of AI systems, appropriate civil redress mechanisms, and a related system of sanctions.
The National AI Law also amends the Code of Civil Procedure by assigning disputes concerning the functioning of AI systems to specialized business sections of civil courts, thereby updating the procedural rules for AI-related civil claims.
1 See Law No. 132 of September 23, 2025, "Provisions and delegations to the Government on Artificial Intelligence" here.
2 Decision of May 20, 2024 of the Italian DPA, available here.
3 For instance, the Italian Supreme Court (Corte di Cassazione) has ruled on the principles governing the lawful use of artificial intelligence systems involving the processing of biometric data: Italian Supreme Court, judgment No. 12967 of May 13, 2024.
4 The Observatory identifies the sectors and professions most affected by AI and proposes practical solutions to manage these changes. It also helps shape national strategies to promote a balanced and responsible adoption of AI in companies and institutions. Moreover, the Observatory focuses on training, identifying the most requested skills on labor markets and promotes targeted initiatives to support reskilling and upskilling for workers and businesses. Please see here for more information.
5 See the National AI Strategy (2022-2024). In terms of AI regulation, the National AI Strategy calls for a radical update by: (i) strengthening the AI research base and associated funding; (ii) promoting measures to attract talent; (iii) improving the technology transfer process; and (iv) increasing the adoption of AI among businesses and public administration, fostering the creation of innovative companies. The National AI Strategy also aims to align all AI policies related to data processing, aggregation, sharing and exchange, as well as to data security, with the National Cloud Strategy and with ongoing initiatives at the European level, starting with the European Data Strategy and the recent proposals for a Data Governance Act and a regulation on artificial intelligence.
White & Case means the international legal practice comprising White & Case LLP, a New York State registered limited liability partnership, White & Case LLP, a limited liability partnership incorporated under English law and all other affiliated partnerships, companies and entities.
This article is prepared for the general information of interested persons. It is not, and does not attempt to be, comprehensive in nature. Due to the general nature of its content, it should not be regarded as legal advice.
© 2026 White & Case LLP