Artificial Intelligence: A Growing Concern in Election Integrity

Elections worldwide are encountering an increasingly potent adversary: artificial intelligence (AI). The specter of foreign interference in electoral processes gained notoriety in 2016, when Russian operatives launched social media disinformation campaigns targeting the U.S. presidential election. Over the ensuing seven years, China and Iran, among others, followed suit, using social media to sway foreign elections, including those in the United States. Heading into the elections of 2023 and 2024, this trend seems all but certain to persist.

However, a new player has entered the scene: generative AI and large language models. These technologies can swiftly generate vast volumes of text on any subject, in any tone, from any perspective. Security experts believe such tools are ideally suited to propaganda in the digital age. The development is still in its infancy: ChatGPT made its debut in November 2022, followed by the more powerful GPT-4 in March 2023, alongside similar AI models. How these technologies will be used for disinformation, how effective they will be, and what consequences they will have all remain uncertain, but we are about to find out.

The global calendar will soon be replete with elections in democratic nations. Approximately seventy-one percent of people living in democracies will vote in a national election between now and the end of next year. Among the notable elections are those in Argentina and Poland in October, Taiwan in January, Indonesia in February, India in April, and the European Union and Mexico in June. The United States will hold its presidential election in November, and nine African democracies, including South Africa, will have elections in 2024. While Australia and the U.K. lack fixed dates, elections are expected in 2024.

Many of these elections hold profound significance for countries with a history of social media influence operations. China, for example, closely monitors elections in Taiwan, Indonesia, India, and various African nations. Russia's interests extend to the U.K., Poland, Germany, and the European Union as a whole, and the United States remains a focal point for international attention. Moreover, with the diminishing financial barriers to foreign influence, more countries are entering the fray. Tools like ChatGPT have significantly reduced the costs associated with producing and disseminating propaganda, making them accessible to a broader spectrum of nations.

Election interference has become a persistent concern. At a recent conference, representatives from all the U.S. cybersecurity agencies discussed their expectations for election interference in 2024. In addition to the usual suspects (Russia, China, and Iran), "domestic actors" emerged as a significant new concern, a direct consequence of the lowered cost of entry.

Generating content is only part of the challenge of running a disinformation campaign; distribution is the harder problem. Propagandists need fake accounts to post the content and additional accounts to boost it into the mainstream, where it can go viral. Companies like Meta have become better at identifying and removing these accounts. Just last month, Meta revealed that it had removed thousands of accounts, spanning Facebook, TikTok, and other platforms, that were tied to a Chinese influence campaign. But that campaign predates the AI-driven disinformation era.

Disinformation campaigns are an arms race, with both attackers and defenders improving their techniques. The social media landscape has also changed significantly. Twitter, once a direct line to the media and a platform where political narratives were shaped, has been transformed. Many propaganda outlets have migrated from Facebook to encrypted messaging platforms like Telegram and WhatsApp, where they are harder to detect and combat. TikTok, a newer platform controlled by China, is suited to the short, provocative videos that AI makes much easier to produce. And the latest generative AIs are being connected to tools that make content distribution easier as well.

Generative AI tools have ushered in new production and distribution techniques, including low-level propaganda at scale. Consider a scenario where an AI-powered personal account on social media appears entirely normal, posting about everyday life and engaging in interest groups. Periodically, it subtly introduces or amplifies political content. While these persona bots, as they are called, may have limited individual influence, their cumulative impact can be substantial when deployed in the thousands or millions.
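
To make the mechanism concrete, here is a minimal sketch in Python of the posting logic such a persona bot might follow. Everything in it is a hypothetical assumption for illustration: the content pools, the five-percent political rate, and the function names are invented, not drawn from any observed campaign or any real platform's API.

    import random

    # Hypothetical persona-bot posting logic. All content pools, rates,
    # and names are illustrative assumptions, not a real campaign or API.

    BENIGN_POSTS = [
        "Tried a new pasta recipe tonight, highly recommend.",
        "Perfect weather for a morning run today.",
        "Anyone else rewatching old sitcoms lately?",
    ]

    POLITICAL_POSTS = [
        "Interesting take on the upcoming election, worth a read.",
    ]

    POLITICAL_RATE = 0.05  # roughly 1 post in 20 carries political content

    def next_post() -> str:
        """Pick the persona's next post: mostly everyday chatter, rarely political."""
        pool = POLITICAL_POSTS if random.random() < POLITICAL_RATE else BENIGN_POSTS
        return random.choice(pool)

    if __name__ == "__main__":
        # Simulate a short posting history for one persona. In the scenario
        # above, thousands or millions of such personas would run in
        # parallel, each individually unremarkable.
        for _ in range(20):
            print(next_post())

The point of the design is the ratio: a persona that is overwhelmingly mundane is hard to distinguish from a genuine account, and that is precisely what makes detection at scale so difficult.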

This is but one potential scenario, as military officials in countries like Russia and China are likely devising more sophisticated tactics than those employed in 2016. These nations have a history of testing cyberattacks and information operations on smaller countries before implementing them on a larger scale. Therefore, it is crucial to develop methods for identifying these tactics. Countering new disinformation campaigns necessitates the ability to recognize them, and early cataloging and analysis are essential.

In the realm of computer security, sharing attack methods and their effectiveness is vital to building robust defensive systems. A similar approach applies to information campaigns. By studying the techniques employed in distant countries, researchers can better prepare to defend their own nations.

As disinformation campaigns in the AI era become increasingly sophisticated, the United States must establish mechanisms to identify AI-generated propaganda within its borders and in locations such as Taiwan, where deepfake audio recordings have already been used to defame political candidates. Unfortunately, researchers studying these issues have faced harassment and targeting.

Some recent democratic elections in the generative-AI era have not yet experienced significant disinformation problems. Even so, it is crucial to anticipate the challenges ahead. By improving our understanding of possible future problems and preparing accordingly, we will be better placed to confront these evolving threats. This calls for sustained vigilance, thorough research, and international collaboration, key elements in countering the expanding threat of disinformation and protecting democratic processes.
