AI Act’s global effects might be overstated, experts say

Content-Type: Analysis. Based on factual reporting, but incorporates the expertise of the author and may offer interpretations and conclusions.

[Euractiv illustration by Esther Snippe, Photos by EPA/Shutterstock]


The policymakers behind the EU’s AI Act, passed by the European Parliament with a large majority on 13 March, aimed to set a new global standard for regulating the technology, but not everyone agrees the impact will be as vast as promised.

While the experts Euractiv spoke to do not doubt the legislation will have a worldwide effect, just how big it will be and whether the complex package of laws will keep investments in the bloc are up for debate.

Multinational companies often want to maintain access to Europe’s single market of 460 million people. In many cases, it is easier to implement the bloc’s strict standards across their global operations rather than compartmentalise to adapt to the EU market. This means Brussels-made rules become a reality worldwide.

Most experts Euractiv spoke to agreed that this ‘Brussels Effect’ will be felt with the AI Act.

Supporting this view is the fact that the EU has become very good at enforcing its rules. Over time, “EU regulators have really flexed and grown their muscles” to enforce their rules on overseas companies, said Joe Jones, research and insights director at the International Association of Privacy Professionals.

Just last week, EU regulators announced they are probing nine big tech platforms for various possible violations, including their use of generative AI.

Leading by example

The EU bureaucratic machine is often the first to develop comprehensive legislation with hundreds of pages and addendums, particularly on digital issues. This usually inspires other jurisdictions to follow suit.

The strategy to set a global benchmark in standards and regulation was expressed by Commission Executive Vice President Margrethe Vestager, who told an event in January 2024: “We [the EU] need to be much more present in standardisation for us” when it comes to technology.

Five years ago, the world took note of the EU’s General Data Protection Regulation (GDPR) in a major way, with some countries adding entire parts of the legislation into their own, Gabriela Zanfir-Fortuna, vice president for global privacy at think tank Future of Privacy Forum, told Euractiv.

Meanwhile, before the AI Act even passed in the European Parliament, some US states were “inspired” by it when drafting their own rules, she said.

But Jones and Zanfir-Fortuna are not certain the AI Act will serve as an example the way the GDPR did.

Unlike the GDPR, which applies to essentially every type of data processing, whether by your local plumber or a multinational corporation such as Google, the AI Act does not regulate any and all uses of AI, nor does it build on long-standing precedent.

The GDPR was “much more coherently framed within fundamental rights philosophy” whereas the AI Act “doesn’t have precedents” or a “genesis,” said Jones. He added that the AI legislation’s framework is a patchwork of “fundamental rights, a bit of product safety and product liability, and a bit of general digital safety”.

The AI law is innovative in regulating a specific technology, setting rules based on anticipated risk while weighing it against the need for innovation. Other countries will therefore take a closer look at the AI Act’s patchwork and make decisions that suit their own circumstances and goals.

And unlike with the GDPR, countries were already creating their own rules for AI before the act was voted on.

In a February 2024 white paper, the UK government said legislation for AI will “ultimately” be needed once the associated risks are understood, a point it argues has not yet been reached.

“Introducing binding measures too soon, even if highly targeted, could fail to effectively address risks, quickly become out of date, or stifle innovation and prevent people from across the UK from benefiting from AI,” said the document published by the UK government’s Department for Science, Innovation & Technology.

Startup impact

Whether the AI Act will nurture innovation in Europe is also debatable.

European AI companies have historically attracted a tiny amount of capital compared to their US or Chinese counterparts. The trend was less pronounced in 2023, thanks to mega investments in France’s Mistral AI and Germany’s Aleph Alpha.

Both companies lobbied during the policymaking process for the AI Act. Mistral AI Co-Founder and Director Arthur Mensch told Le Monde the law’s final form is “manageable”.

Kirsten Rulf, a partner and associate director at Boston Consulting Group, who negotiated the legislation in her previous role as advisor to the German chancellery, told Euractiv that the AI legislation would “absolutely” keep startups in Europe.

The law not only brings clarity to AI developers but can engender trust in consumers, she said.

But not everyone agrees, and other stakeholders insist that implementation of the AI Act will be key to innovation.

The “fact is that companies operating in Europe will face burdens that their competitors will not. This is not an attractive proposition to VCs [venture capital firms]. Implementing the law quickly and effectively will be key to getting the required legal certainty,” said Cecilia Bonefeld-Dahl, director-general of industry group DigitalEurope.


[Edited by Alice Taylor/Zoran Radosavljevic]
