By Cristina Vanberghen | Est. 6min | 26-02-2024 | Opinion

[Photo caption: The Polish government said on 31 May 2024 that a false story on the state PAP agency stating that Poles would be mobilised to fight in Ukraine was likely a Russian cyberattack. Shutterstock/Gorodenkoff]

Effective enforcement mechanisms to combat deepfakes are vital, considering the transnational nature of deepfakes and the potential for circumventing regulations, writes Cristina Vanberghen.

Prof. Dr. Cristina Vanberghen is an international legal practitioner and academic in the area of digitalisation.

With elections looming for half of the world, the potential for deepfakes to sow discord and undermine trust in institutions is greater than ever. This is the terrifying reality of deepfakes: AI-generated content that can impersonate anyone with alarming accuracy. While developers argue that their technology is evolving and mistakes are inevitable, the rapid rise of deepfakes, particularly those used for sexual harassment, fraud, and political manipulation, poses an existential threat to democratic processes and public discourse.

Efforts are underway to combat this threat. Initiatives like the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections”, launched at the Munich Security Conference (MSC), represent a commendable effort to confront the challenges deepfakes pose to electoral processes. By uniting leading tech companies, the accord presents a unified stance against malicious actors, showcasing a shared determination to address the issue. However, the accord’s focus on the 2024 elections may overlook the ongoing evolution of the deepfake threat, potentially necessitating adjustments beyond the specified timeframe.
The efficacy of the accord hinges on its ability to keep pace with these developments. While the accord establishes guiding principles, it lacks concrete mechanisms for enforcement. Ensuring accountability among participating companies is essential for meaningful progress. Relying on self-regulation from tech companies raises concerns about potential biases in implementation, underscoring the need for transparent and impartial oversight.

In the EU, deepfakes are regulated by the AI Act. Instead of banning deepfakes entirely, the proposed AI Act takes a different approach. Under Article 52(3), it requires transparency from creators: anyone who creates or disseminates a deepfake must disclose its artificial origin and provide information about the techniques used. This aims to empower consumers with knowledge about the content they are encountering and make them less susceptible to manipulation. However, transparency alone might not be enough to address the malicious potential of deepfakes, especially if creators find ways to circumvent the disclosure requirements. Much remains uncertain, particularly regarding legal liability and whether the current framework is sufficient to address the evolving risks posed by deepfakes.

The establishment of the EU AI Office on 21 February 2024 marks a significant step in promoting responsible AI practices within the European Union. One of the AI Office’s key functions is to encourage and facilitate the development of codes of practice at Union level to support the effective implementation of obligations on the detection and labelling of artificially generated or manipulated content. Under this mandate, the Commission is empowered to adopt implementing acts approving these codes of practice. This regulatory mechanism ensures that codes of practice meet certain standards and effectively address the challenges posed by artificially generated or manipulated content.
Additionally, if the Commission deems a particular code of practice inadequate, it has the authority to adopt implementing acts to address any deficiencies. Overall, EU legislation suggests a proactive approach to addressing deepfakes and AI-generated text. However, contentious issues remain, notably around clarity and specificity. The definitions of “deepfake” and “artistic/creative work” could benefit from further clarification. The effectiveness of disclosure requirements also hinges on strong enforcement mechanisms, and balancing transparency with the potential stifling effect on artistic expression likewise requires careful consideration.

Deepfakes are currently classified as “limited risk” AI systems in the AI Act, meaning they face fewer regulations than “high-risk” systems like medical AI or facial recognition. Yet deepfakes can have significant harmful impacts and should be considered high-risk.

The AI Act does not currently establish a clear framework for legal liability for developers of deepfake technology. The Act emphasises preventative measures rather than punitive ones. This could involve mandating developers to implement technical safeguards against deepfake misuse, such as robust watermarking or detection algorithms. The lack of a clear liability framework leaves open questions about who holds responsibility for deepfake misuse.

While the EU AI Act represents a significant step towards regulating artificial intelligence systems, including those capable of generating deepfakes, it is understandable that some may view it as insufficient in addressing the specific challenges posed by malicious uses of deepfakes. Advocating for the criminalisation of deepfakes for end users could be one approach to mitigating the harmful impacts of this technology.
By imposing legal consequences on individuals who create or disseminate deepfakes with malicious intent, policymakers may deter the proliferation of harmful content and hold perpetrators accountable for their actions. Criminalising deepfakes could also serve as a deterrent against the misuse of this technology for fraudulent activities, political manipulation, or other malicious purposes.

Addressing this pressing challenge requires robust legal measures. For instance, there should be strict prohibitions on the creation and distribution of deepfake child pornography, even when it portrays fictional children. Criminal penalties must be imposed on those who knowingly create or facilitate the spread of harmful deepfakes. Furthermore, it is imperative to mandate that software developers and distributors incorporate measures to prevent the generation of harmful deepfakes through their audio and visual products, with accountability measures enforced to ensure these safeguards are effective and not easily circumvented.

Policymakers must carefully balance the need to protect individuals from the harms of deepfakes with considerations of free speech, privacy rights, and technological innovation. Effective enforcement mechanisms and international cooperation will also be crucial, given the transnational nature of deepfakes and the potential for circumventing regulations. Defining and identifying malicious deepfakes can be challenging, requiring careful legal frameworks and nuanced enforcement strategies. Thoughtfully crafted laws have the potential to cultivate socially responsible practices within businesses without unduly burdening them. It is time for cutting-edge AI-driven detection tools and stringent legal frameworks to hold perpetrators accountable for their nefarious actions.
Simultaneously, it is critical to empower the public with digital literacy and critical thinking skills to discern truth from manipulation effectively.