Viral deepfake videos of Le Pen family a reminder that content moderation is still not up to par ahead of EU elections

Analysis: Based on factual reporting, although it incorporates the expertise of the author/producer and may offer interpretations and conclusions.

Jordan Bardella said on BFM TV that this is a “malicious use” of AI. The EU election candidate called for more regulation of AI-powered content. [EPA-EFE/Guillaume Horcajuelo]

Deepfakes of young women purported to be members of the Le Pen family promoting French far-right parties went viral online, escalating the debate around the effectiveness of content moderation ahead of the EU elections in June.

Amid the latest opinion polls predicting a rise for far-right groups, a series of fake videos went viral.

They purported to show young members of the Le Pen family dancing in their underwear, at the beach, or skiing while talking about EU elections and mocking people of colour in France.

France’s far-right National Rally (ID), which is leading in the polls, designated Jordan Bardella as its lead candidate for the EU elections, while Marion Maréchal, Marine Le Pen’s niece, is the lead candidate for Reconquête! (ECR).

In the past few weeks, deepfake videos purporting to show Le Pen’s nieces surged on TikTok, with some being viewed over 2 million times.

They show three supposed nieces of the Le Pen family, Amandine Le Pen, Chloé Le Pen, and Lena Maréchal Le Pen, none of whom exist.

Youthful versions of Marine Le Pen’s and Marion Maréchal’s faces were edited onto videos stolen from real female influencers.

The viral deepfakes appeared to promote the two far-right parties and mentioned the EU elections multiple times.

At least some of the TikTok accounts have been deleted as of Tuesday (16 April), according to Euractiv’s observations.

Le Pen’s family said they were “not very happy” with the fake accounts, while Reconquête! flagged the content to TikTok.

Speaking to BFM TV, an anonymous operator of the Lena Maréchal account justified the action as a “social experiment” that had “nothing to do with politics.”

“[This is] the dress I will wear for Jordan [Bardella]’s victory,” said a deepfake video of Amandine Le Pen dressed as Jeanne d’Arc, a French historical figure to whom the National Rally often refers.

“When you walk around Paris, you see more veils than baguettes,” or “Mohamed is the guy who insults the National Rally, but ends up in my DMs,” said the scantily clad fake niece.

Speaking on BFM TV on Saturday (13 April), Bardella called this a “malicious use” of AI. The EU election candidate called for more regulation of AI-powered content.

The irony

A sense of irony pervades the situation, since national MPs from the National Rally voted against a provision prohibiting the online publication of AI-powered deepfakes without consent, France’s Digital Secretary of State Marina Ferrari said on Monday (15 April).

The National Rally and left-wing parties opposed France’s umbrella digital bill last week at the National Assembly over concerns about “authoritarian measures.”

The SREN bill is intended to adapt the EU’s landmark online content moderation law, the Digital Services Act (DSA), into French legislation, among other legislative goals.

French Senators blame Macron government for long delay in adopting SREN digital bill

French Senators blamed the government for the eight-month freeze imposed by the European Commission on the legislative process for the digital bill, arguing that the government had failed to prepare the bill efficiently.

Lagging behind

France, along with close to a third of EU member states, is late in implementing this landmark law, despite the EU elections coming up in June. Member states were scheduled to designate their coordinating administrative authorities by 17 February 2024.

These national coordinating authorities are tasked, in turn, with designating organisations responsible for flagging illegal content to platforms, known as “trusted flaggers.”

Platforms have to prioritise making moderation decisions on content flagged by these organisations, which could include political content such as the videos of Le Pen’s fake nieces.

Although France has yet to formally designate its audiovisual authority Arcom as its DSA coordinating authority, the body has started to gather applications for trusted flaggers.

The real challenges for trusted flaggers will be recruitment, training, and securing budgets in the long run to stay operational, Julie Carel, partner at French law firm Momentum Avocats, told Euractiv.

Carel foresees a “legal latency period” from when Arcom is officially designated to when it picks the trusted flaggers, and an “operational latency period” during which organisations will have to train their moderation teams.

DSA enforcement

Despite the delays, Carel points out that the DSA is already in force. Even if national laws have not yet been adapted, companies are still bound by it because EU regulations apply directly.

Yet the viral videos show that content moderation is still far from perfect, even after notable past incidents.

In September 2023, a deepfake audio clip of Michal Šimečka (Progressive Slovakia, Renew), portraying the candidate as admitting to rigging the election, spread like wildfire days before the Slovak parliamentary election. Some claim this influenced the vote in favour of his opponent, Robert Fico (Smer).

In an attempt to address growing concerns over election manipulation by deepfakes, 20 platforms committed in February to “combat deceptive use of AI in 2024 elections” globally.

Separately, the EU executive named generative artificial intelligence as a risk to the integrity of the electoral process in its DSA guidelines.

Still, in March, the European Commission scrutinised nine of the largest platforms in the EU over their use of generative artificial intelligence to assess compliance with the DSA.

EU Commission scrutinises nine big tech platforms over targeted ads and generative AI

The European Commission requested information on Thursday (14 March) from nine big tech platforms on their use of targeted ads and generative artificial intelligence (AI) to gauge compliance with the Digital Services Act (DSA).

[Edited by Rajnish Singh]
