AI Laws Around the World: What are Each Country’s Regulations?

As the global artificial intelligence market expands – expected to crest $305 billion in 2024 – governments are taking aggressive steps to regulate the technology.

According to Stanford University's AI Index, 37 of 126 countries surveyed have taken action on AI regulation in 2024, up from just one in 2022. Exhibit "A" is the European Union's AI Act, which was voted into law on March 13 and looks to reshape the region's artificial intelligence landscape with a risk-based approach to compliance.

That makes early 2024 a good time to update the global AI regulation outlook. Here's a capsule look at what the major countries are doing on AI regulation.

Are there any global AI laws?

Currently, there are no legally binding global laws specific to artificial intelligence. International law is difficult to agree on and even more difficult to enforce, and because AI is still relatively new, any global legislation is likely many years away, if it ever arrives. Many countries, however, have started implementing their own laws, and we will take a look at them below.

United States

The United States has largely taken a “hands-off” approach to artificial intelligence oversight, allowing U.S. technology firms to self-police their AI operations. Over the past two years, the Biden Administration has hosted a series of meetings with companies like Alphabet, Meta, Microsoft, OpenAI, and Nvidia, which resulted in promises from all parties to develop guardrails on AI development and usage within American borders.

U.S. public policymakers have emphasized licensing and credentialing oversight, especially in high-risk industries like defense, finance and banking, transport and shipping, and consumer retail. Until the federal government and the leading technology companies come to an agreement on AI rules, federal agencies like the Federal Trade Commission and the Securities and Exchange Commission have directed companies to abide by existing U.S. law.

United Kingdom

The United Kingdom has echoed the U.S. approach of letting companies set their own artificial intelligence policies for now, with an eye toward developing new legislation later this year.

For now, the U.K. is establishing guardrails for AI compliance based on five key principles: safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

“Regulators will implement the framework in their sectors/domains by applying existing laws and issuing supplementary regulatory guidance,” Deloitte noted in a recent research note. “Selected regulators will publish their AI annual strategic plans by 30th April, providing businesses with much-needed direction.”

European Union

The EU has taken the most aggressive stance on artificial intelligence oversight, primarily through the AI Act, which imposes binding compliance obligations on companies that develop or deploy AI systems within the bloc.

The AI Act also sorts AI systems into a four-tier risk framework, with "unacceptable risk," "high risk," "limited risk," and "minimal risk" classifications that go into effect this May. Risk factors within those categories include hiring practices, data validation, and consumer profiling compliance.

China

The Chinese government falls on the more "restrictive" side of the AI compliance ledger. The Pacific Rim giant has already rolled out a plan requiring in-country AI companies to clear a security assessment that closely tracks social, cultural, regulatory, and government "values."

Surprisingly, foreign AI developers doing business in China aren't subject to the same assessment.

Japan

The Japanese government has been aggressive in helping in-country AI companies, particularly smaller ones, fund their operations.

Unlike the U.S., where The New York Times is battling OpenAI in court over AI copyright issues, the Japanese government, under business-friendly Prime Minister Fumio Kishida, allows technology companies to use copyrighted images to train artificial intelligence models without legal interference.

In the age of AI, that policy may not stand the test of time as copyright owners balk at handing over their content for free. However, it shows how far the Kishida government will bend over backward to use AI as a driver of the country's technological growth.

On a more restrictive note, the Japanese government is also exploring legislation to address disinformation shared on generative AI platforms such as ChatGPT. While only preliminary rules have been drafted as of mid-March, the legislation would penalize AI developers like OpenAI for operating outside its boundaries.

Brazil

The Brazilian government is also pivoting to a risk-assessment model to safeguard AI usage as new applications and tools hit the consumer marketplace. The proposed legislation, Senate Bill No. 2338/2023, favors a rights-based and risk-based approach to AI regulation.

“The Brazilian bill mirrors the EU AI Act in many aspects, as both texts reflect the OECD’s expanded definition of AI systems, encompassing not only decision-making but also model creation and data training,” according to Policy Review. “Additionally, (the legislation) takes a risk-based approach tailoring the regulatory obligations based on potential AI technology risks. This includes a similar list of high-risk applications and the prohibition of applications that pose excessive or unacceptable risks, such as those related to social scoring.”
