Ahead of the Paris AI summit scheduled for February 10-11, global powers have adopted vastly different approaches to artificial intelligence regulation.
From the United States’ hands-off policies to the European Union’s intricate legal framework, the landscape of AI governance remains fragmented.
Last month, returning President Donald Trump revoked Joe Biden’s executive order on AI oversight, issued in October 2023. The largely voluntary directive had required leading AI developers such as OpenAI to share safety assessments and crucial data with the federal government.
Backed by major technology firms, the order had been designed to protect privacy and prevent civil rights violations while imposing national security safeguards. Its revocation left the United States, which had no formal AI regulatory framework, relying solely on existing privacy laws.
Yael Cohen-Hadria, a digital lawyer at consultancy EY, stated that the United States had “picked up their cowboy hat again, it’s a complete Wild West.” She remarked, “The administration has effectively said that ‘we’re not doing this law anymore... we’re setting all our algorithms running and going for it.’”
China’s government continued formulating a legal framework governing generative AI. In the interim, a set of measures required AI services to respect personal and business interests, obtain consent before using personal data, label AI-generated images and videos, and ensure user safety.
Additionally, AI was mandated to “adhere to core socialist values,” thereby barring AI language models from criticising the ruling Communist Party or compromising China’s national security. Chinese startup DeepSeek, which recently gained attention for its powerful yet cost-efficient R1 model, demonstrated these restrictions by refusing to address questions about President Xi Jinping or the 1989 Tiananmen Square crackdown.
Cohen-Hadria predicted that while the Chinese government would strictly regulate businesses, particularly foreign-owned entities, it would grant itself “strong exceptions” to these regulations.
Unlike the United States and China, the European Union positioned ethical considerations at the forefront of its AI legislation. “The ethical philosophy of respecting citizens is at the heart of European regulation,” Cohen-Hadria said.
The bloc’s “AI Act,” passed in March 2024, was regarded as the world’s most comprehensive AI law. Certain provisions, effective from this week, prohibited AI applications such as predictive policing based on profiling and systems that inferred race, religion, or sexual orientation from biometric data.
The legislation introduced a risk-based approach, subjecting high-risk AI systems to more stringent compliance requirements. EU policymakers contended that these rules provided clarity for businesses, fostering both innovation and legal certainty.
Cohen-Hadria underscored the law’s strong intellectual property protections and its facilitation of controlled data circulation. “If I can access a lot of data easily, I can create better things faster,” she said.
Like China, India had not introduced a dedicated AI law, relying instead on existing legislation covering defamation, privacy, copyright infringement, and cybercrime to address AI-related harms.
Despite frequent government statements and media discussions regarding AI regulation, concrete legislative action remained absent. Cohen-Hadria noted that India’s high-tech sector played a crucial economic role, stating, “If they make a law, it will be because it has some economic return.”
Government intervention triggered a backlash in March 2024, when India’s IT ministry issued an advisory requiring firms to seek approval before deploying “unreliable” or “under-testing” AI models. AI companies, including Perplexity, opposed the directive, which followed a controversy in which Google’s Gemini AI accused Prime Minister Narendra Modi of implementing fascist policies. The government swiftly watered down the advisory, ultimately requiring only disclaimers on AI-generated content.
Britain’s centre-left Labour government incorporated AI regulation into its economic growth agenda. As the world’s third-largest AI market, after the United States and China, Britain sought a tailored approach to AI governance.
Prime Minister Keir Starmer introduced an “AI opportunities action plan” in January, outlining London’s independent regulatory strategy. Starmer stated that AI should be “tested” before formal regulation was imposed.
The action plan document asserted, “Well-designed and implemented regulation... can fuel fast, wide and safe development and adoption of AI.” By contrast, the document warned that “ineffective regulation could hold back adoption in crucial sectors.”
An ongoing consultation aimed to determine how copyright law should apply to AI while preserving protections for the creative sector.
The Global Partnership on Artificial Intelligence (GPAI), comprising more than 40 countries, sought to promote responsible AI use. The French presidency confirmed that members would convene on Sunday “in a broader format” to devise a 2025 action plan.
In May 2024, the Council of Europe adopted the world’s first legally binding AI treaty, later signed by the United States, Britain, and the European Union.
Despite these international initiatives, AI governance remained uneven. Of the 193 UN member states, only seven participated in all of the major AI governance initiatives, while 119, mostly in the Global South, belonged to none.