Securing fair elections: ban on voice cloning technologies
Esther (Eske) van Egerschot-Montoya Martinez
Executive & Director | Adjunct Professor & Researcher on AI Governance | Former Member of Parliament | UNESCO W4EAI Expert | Data & Risk Management | Policy | Governmental Affairs | former corporate lawyer
In an unprecedented move to protect the public against sophisticated robocall scams, the Federal Communications Commission (FCC) has taken a decisive stand: the use of AI-generated voice cloning in robocalls, a potent tool in the arsenal of fraudsters, has been declared illegal with immediate effect. This bold action empowers State Attorneys General throughout the United States to pursue those exploiting AI to perpetrate fraud and spread misinformation.
"Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters. We’re putting the fraudsters behind these robocalls on notice," proclaimed FCC Chairwoman Jessica Rosenworcel. "State Attorneys General will now have new tools to crack down on these scams."
This robust response follows alarming reports of fake robocalls, including calls mimicking President Biden that were intended to mislead voters and discourage them from voting during the primary season. With 64 national elections and critical EU elections making 2024 a significant year for democracy, the dangers that deepfakes, misinformation, disinformation, and voice cloning pose to democratic processes have never been more real.
The need for robust national and international safeguards against these threats is clear. The FCC's swift action not only demonstrates a commitment to protecting human rights and democracy but also underscores the need for immediate, tangible measures rather than reliance solely on warnings or Congressional hearings to secure accountability.
This American ruling resonates with the intent of the EU AI Act: prohibiting specific technologies and models to protect society at large. Now that the AI Act appears to be in its final stages and is unlikely to be altered (a small prediction on my part, after statements from the French and German governments signalling that they would abandon their earlier attempts to make changes), the prohibitions on unacceptable-risk AI look set to apply before the end of the year, since they take effect six months after the final publication and entry into force of the EU AI Act.
Yet the European Parliament's recent announcement that it will use TikTok in the upcoming election campaign, despite the app being banned from EU institutional devices over cybersecurity concerns, raises eyebrows. The Parliament's press service insists that this approach will counter disinformation while maintaining system security. The decision has nonetheless sparked public debate.
It is not enough to defend our democracies with quick actions and firm decisions; we must also critically evaluate our methods for combating misinformation. If a tool is deemed unsafe for institutional use due to cybersecurity risks, its promotion for campaign purposes contradicts the very essence of prudent decision-making. Indeed, this is not a case where Machiavelli's principle of "The end justifies the means" should apply.
As we navigate these complex issues, let's engage in a dialogue: How do you think we can best balance the use of technology in election campaigns with the need to ensure cybersecurity and protect democratic integrity?
And catch up on Imran Khan's AI-generated message from prison declaring victory in the Pakistani election: https://www.dailymail.co.uk/news/article-13067045/Imprisoned-Imran-Khan-posts-AI-video-declaring-victory-Pakistani-election-independent-candidates-backing-win-seats-vote-marred-mobile-shutdown.html