EU’s much-heralded AI Act agreed by EU Parliament – but serious human rights holes in law remain

DISCLAIMER: All opinions in this column reflect the views of the author(s), not of Euractiv Media network.

The AI Act has been a long time coming. Whilst it’s a landmark piece of legislation, unfortunately, it fails to meet the bar on human rights protections, writes Laura Lazaro Cabrera.

Laura Lazaro Cabrera is the Counsel and Director of the Equity and Data Programme at the Centre for Democracy & Technology (CDT).

It can’t be denied that this is a historic moment, both in Europe and globally: the EU has agreed a law to govern artificial intelligence, the first of its kind in the world. It is a long-awaited, hard-fought-over and lengthy piece of legislation. But for CDT Europe, it is a mixed bag when it comes to protecting human rights – one of its key aims, after all.

The AI Act’s significance is clear: it will set the benchmark for AI regulation globally, in what has become a race against the clock as lawmakers grapple with the fast-moving development of a technology with far-reaching impacts on our basic human rights.

AI is increasingly being used in areas of profound importance to people’s lives: selecting which school your child may go to, helping employers select candidates, processing asylum cases… the list goes on.

Legislation is much needed, and the stakes are extremely high. With AI’s deployment going to the heart of key human rights such as the right to privacy and the freedoms of assembly and expression, lawmakers have had to strike a difficult balance.

Those of us, such as CDT Europe, who have been advocating hard for human rights to be at the core of the AI Act had high hopes, but the final text gave away too much in the last-ditch negotiations.

Whilst we can rightly celebrate that privacy and other fundamental rights are foregrounded in the law, there are too many exemptions which could allow harmful AI to pose serious risks to citizens – and, indeed, often to those in vulnerable situations.

One glaring failure, in our view, is that whilst the Act brings in important limitations on the use of AI by law enforcement, lawmakers did not heed the warning from us and other civil society organisations calling for a total ban on untargeted facial recognition.

This goes to the heart of what kind of society we want to live in. The limitations on the use of live facial recognition only apply to law enforcement use in publicly accessible spaces, and explicitly exclude borders, which are known sites of human rights abuse.

This is a law that is supposed to protect people’s most basic human rights, and yet its exemptions seem to allow the most nefarious kinds of AI – those that invade the privacy of the most marginalised and vulnerable groups.

As always, the devil is in the detail, and the many exemptions to what should be laudable provisions in the Act threaten to undermine its purpose. One obvious example is the exemption for national security.

The scope for misuse here is significant: one could easily imagine a scenario in which law enforcement claims that a use of AI is in the interests of national security, thereby exempting it from the Act. Similarly, the Act’s ban on emotion recognition applies only in education and the workplace, leaving emotion recognition free to be deployed elsewhere, such as at the border.

One big “win” for civil society was the Fundamental Rights Impact Assessments (FRIAs): deployers of high-risk AI will be obliged to conduct these assessments. But – and it’s a big ‘but’ – the obligation does not always extend to the private sector, so only those deploying AI in the public sector, together with a narrow subset of private companies, will have to assess the risks to human rights – leaving many people unprotected.

Under the AI Act, for example, a company would have no obligation to carry out a FRIA when deploying AI in one of its warehouses to increase the pace of work, even if doing so presents risks to workers.

On top of this, it’s not actually clear whether these FRIAs will be more than a box-ticking exercise: nothing in the Act makes the deployment of a high-risk AI system conditional on the FRIA being reviewed. So once it has been carried out and reported on, the FRIA does not seem to have any meaningful impact on the roll-out of a high-risk AI system. It’s obvious why human rights advocates’ fears about harmful AI are not assuaged by this law.

As the dust settles and attention turns towards the implementation of the Act, we face the difficult task of unpacking a complex, lengthy and unprecedented law. For us, the key to its success – and to overcoming its pitfalls – will be close coordination between those implementing the law, experts, and civil society. That is the only way to ensure that, in practice, the Act is consistent with its own articulated goals: protecting fundamental rights, democracy and the rule of law.
