MISINFORMATION: AI and Misinformation in a Mega Election Year 2024
The convergence of artificial intelligence and misinformation presents unprecedented challenges to democratic integrity in a mega election year.

Last Friday, I had the honour of being invited by TEDxJohannesburg to the GIBS Business School (Gordon Institute of Business Science) as a panelist to discuss AI + Misinformation: AI's Impact on a Mega Election Year 2024. The event was held in collaboration with TEDx events in Germany, Zambia, Pakistan, and Guatemala, and aimed to contribute to a broader global discussion on the topic. It took the format of a moderated panel discussion with local speakers, bringing together thought leaders and professionals from various disciplines to share valuable insights.

The panel of experts consisted of individuals deeply engaged in AI, media, content creation, and politics. They not only shared their perspectives on the challenges and opportunities posed by AI and misinformation in the context of elections, but also offered valuable insights and facilitated an engaging discussion on strategies to safeguard the integrity of democratic processes in the face of AI-driven misinformation.

I just wanted to take this time to share my thoughts on AI + Misinformation in the 2024 Mega Election Year, mostly for those who could not attend the event.

Firstly, let's start with definitions. Misinformation is most commonly defined as false or inaccurate information; it is about getting the facts wrong. Disinformation, on the other hand, is a subset of misinformation that is deliberately intended to mislead.

I think we can all agree that misinformation has become a critical issue in today's interconnected world, with the deliberate spread of false or misleading information posing significant threats to democracy, social cohesion, and public trust. Unfortunately, the rise of digital platforms and social media has worsened the problem, making it harder to distinguish truth from falsehood and amplifying the impact of disinformation. This is especially true in recent years, where we have seen a proliferation of disinformation fueled by the consumption of news and information from unverified and unreliable sources online.

An important disinformation vehicle we need to address as part of this conversation on AI + Misinformation is deepfakes.

Deepfakes

The emergence of deepfakes has raised concerns about their potential misuse, especially for fraud, extortion, harassment, and misinformation.

Deepfakes are highly realistic, digitally manipulated media, including videos, audio, photos, and text, created with artificial intelligence (AI) software. Starting from real content (images and audio), this software can modify or recreate, in an extremely realistic way, the features and movements of a face or body, and faithfully imitate a given voice. It is unfortunate that the increasing quality, accessibility, and affordability of deepfake technology contribute to its widespread use and distribution.

Now, let's take a look at some of the crimes that deepfake technology can facilitate, including but not limited to:

o harassment and humiliation online

o extortion and fraud

o document fraud

o faking online identities

o fooling verification mechanisms

o disrupting financial markets

o falsifying electronic evidence

o stirring social unrest and political polarisation


Crime as a Service (CaaS)

Another topic to touch on under this subject is Crime as a Service.

This is where criminals sell access to tools, technologies, and knowledge for cyber and technology-enabled crimes. CaaS is expected to evolve alongside current technologies, automating crimes like hacking and adversarial uses of machine learning and deepfakes. Criminals tend to adopt new technologies early, putting them ahead of law enforcement in implementation and adaptation.

Now, because I work in financial services, it only makes sense to look at how misinformation affects the industry broadly. Regrettably, new technologies complicate the cyber landscape.

Deepfake incidents in the fintech sector increased 700% in 2023 from the previous year in the US alone. And according to UK Finance, 40% of all fraud losses in the UK in 2022 resulted from Authorised Push Payment (APP) fraud, where a person authorises a payment and the bank acts on that person's instructions, only for it to be discovered subsequently that the individual authorising the payment had been duped. This brings me to a cautionary tale, reported in February 2024 by CNN, in which a finance employee at a multinational firm was tricked into paying out US$25m to fraudsters who digitally posed as their boss, the CFO of the company, during a video conference call.

The truth of the matter is that hyper-realistic deepfakes pose a growing threat to the financial sector, as they allow cybercriminals to outsmart even the most security-conscious employees. A recent US study of 105 cybersecurity experts in the financial services industry found that 77% of financial sector Chief Security Officers are concerned about the impact of deepfake video, audio, and images.

I just wanted to share some examples of the impact of misinformation and deepfakes in financial services; this list is definitely not exhaustive:

o Personal banking and payment transfers are most at risk of deepfake fraud.

o Onboarding processes could be subverted and fraudulent accounts created to facilitate money-laundering.

o Payments or transfers could be authorised fraudulently.

o Synthetic identities could be created, whereby criminals take elements of a real or fake identity and attach them to a non-existent individual.

o Bad actors could use AI-generated audio to game the voice-authentication software that financial services companies use to verify customers and grant them access to their accounts (a minimal challenge-response sketch follows this list).
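
To make that last risk concrete, one common mitigation is a challenge-response liveness check: the customer is asked to speak a randomly generated phrase, so pre-recorded or pre-generated deepfake audio cannot simply be replayed. The sketch below is a minimal, hypothetical illustration of the server side of such a scheme; the word list, the four-word phrase length, and the 30-second expiry are assumptions for illustration, not a production design.

```python
import secrets
import time

# Hypothetical word list; a real system would draw from a much larger vocabulary.
WORDS = ["river", "amber", "falcon", "cobalt", "meadow", "quartz", "ember", "lagoon"]

def issue_challenge() -> dict:
    """Create a random phrase the caller must speak, plus an issue timestamp."""
    phrase = " ".join(secrets.choice(WORDS) for _ in range(4))
    return {"phrase": phrase, "issued_at": time.time()}

def is_fresh(challenge: dict, max_age_seconds: float = 30.0) -> bool:
    """Reject responses arriving after the challenge expires, which blocks
    replay of previously captured or pre-generated audio."""
    return (time.time() - challenge["issued_at"]) <= max_age_seconds

challenge = issue_challenge()
print("Please say:", challenge["phrase"])
# ...the caller's audio would be transcribed and matched against the phrase...
print("Challenge still valid:", is_fresh(challenge))
```

A transcription match alone is not sufficient, since real-time voice cloning may be able to follow the prompt, which is why such checks are layered with the anti-spoofing measures discussed below.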

Some Proposed Solutions

· Increased employee training efforts will be a crucial pillar of security controls.

· Multi-factor authentication also becomes more essential when seeing or hearing is no longer a guarantee of authenticity (a minimal TOTP sketch follows this list).

· Biometric authentication technology can be built with unique anti-spoofing capabilities that establish the ‘genuine presence’ of a customer.
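
To illustrate the multi-factor point, here is a minimal sketch of time-based one-time passwords (TOTP), one widely used second factor, using the open-source pyotp library. This is a toy example of the mechanism, not a recommendation of any particular product.

```python
# pip install pyotp
import pyotp

# Provisioned once per user and stored server-side; the user loads the same
# secret into an authenticator app (usually via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()  # six-digit code that rotates every 30 seconds
print("Current code:", code)

# valid_window=1 tolerates one 30-second step of clock drift between
# the server and the user's device.
print("Verified:", totp.verify(code, valid_window=1))
```

The point for deepfake defence is that the code comes from something the user possesses, so a convincing fake voice or face on a video call is not, by itself, enough to authorise a transaction.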


There are a number of things we need to remember as part of this AI + Misinformation topic, including the concern that the data these AI models use sometimes contains misinformation and biases, especially against people of colour and the Global South. To this I say: diversifying the creators of AI has a direct impact on the data that the algorithms produce. Because AI is built on past data, mirroring stereotypes and biases that are already present, it can unknowingly amplify those stereotypes in its algorithms. Therefore, the more diverse the teams that build large language models, chatbots, and vision-language models are in gender, race, ethnicity, background, upbringing, education, and so on, the better we can ensure that our own intrinsic biases are not modelled and trained into said AI.
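
One way to make this concern concrete is to check whether a trained model treats demographic groups differently, for example by comparing positive-prediction rates across groups, a quantity often called the demographic parity difference. The sketch below uses made-up predictions and group labels purely for illustration, and the 0.10 alert threshold is an arbitrary assumption rather than any standard.

```python
import numpy as np

# Hypothetical model outputs: 1 = approved, 0 = declined.
y_pred = np.array([1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0])
# Hypothetical demographic group for each prediction.
group = np.array(["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"])

rate_a = y_pred[group == "A"].mean()  # approval rate for group A
rate_b = y_pred[group == "B"].mean()  # approval rate for group B
gap = abs(rate_a - rate_b)            # demographic parity difference

print(f"Group A rate: {rate_a:.2f}, group B rate: {rate_b:.2f}, gap: {gap:.2f}")
if gap > 0.10:  # arbitrary illustrative threshold
    print("Warning: the model treats the groups very differently; audit the training data.")
```

A gap like this does not prove discrimination on its own, but it is a cheap early-warning signal that the training data deserves scrutiny.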

To reiterate the point further: there is no such thing as neutral technology. Technology is never neutral, because it is developed by humans, and humans, as we know, aren't neutral. This means misinformation in results in misinformation out, and bias in equates to bias out; learning machines can reinforce existing biases and also introduce new ones through their code.

Therefore, it is imperative that the people affected by these biases contribute to identifying the potentially harmful, unintentional side effects of these models, so that we can start to fix these problems and make AI more beneficial and less harmful. Algorithm developers should not only place emphasis on technological design but also closely monitor how data is prepared and processed. It is not just about feeding machines big data: if the data is inaccurate, unjust, or unrepresentative, biased behaviours perpetuated through the black box will discriminate against millions of people in the Global South.

We in the Global South also need to stop relying on the Global North for technological advancements in AI and start building our own. As AI producers, we are then better able to consider the communities we serve, directly and indirectly, when providing the data used in AI development.

That is why I am excited that a continent-spanning community is emerging to address the digital data scarcity problem we have for African languages. It embraces the bid to include Africa and other parts of the Global South in discussions about the responsible use and development of AI, and it is made up of African AI and NLP researchers interested in solving problems prevalent on the African continent; Vukosi Marivate is one such person forming part of this Global South inclusion project. These researchers rely heavily on the use, resharing, and reuse of African-language and context-focused data to fuel their innovations, analyses, and developments in AI.


In closing, when referencing AI + Misinformation in a Mega Election Year, it must be stated that bad data does not only produce bad outcomes; it can also help to suppress sections of society, for instance vulnerable women, children, and minorities. The crux of the matter is data representativity.

Data representativity depends on the political economy. The political economy of data representation for AI training is a complex subject that intersects with power dynamics, economic interests, and social structures. Entities with more resources and influence can frequently acquire, manipulate, and curate massive datasets, thereby moulding AI models trained on this data to represent their opinions and interests. This dynamic can result in a representativity gap, in which marginalised communities are either underrepresented or misrepresented in AI training datasets. As a result, the emerging AI systems may reinforce existing biases, exacerbate structural inequities, and fail to meet the diverse requirements of these communities.
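
One practical way to surface such a representativity gap before training is to compare group shares in the training dataset against a reference distribution, for example census figures. Below is a minimal sketch of that check; the reference shares, the sample, and the 20% shortfall tolerance are all made-up assumptions for illustration.

```python
from collections import Counter

# Hypothetical reference shares (e.g. from census data) and a training sample.
reference = {"group_a": 0.60, "group_b": 0.30, "group_c": 0.10}
sample = ["group_a"] * 70 + ["group_b"] * 28 + ["group_c"] * 2

counts = Counter(sample)
n = len(sample)

for group_name, expected in reference.items():
    observed = counts.get(group_name, 0) / n
    # Flag any group whose share falls more than 20% below its reference share.
    if observed < 0.80 * expected:
        print(f"{group_name} underrepresented: {observed:.1%} observed vs {expected:.1%} expected")
```

A check like this won't fix the underlying power dynamics, but it does make the gap visible before a model is trained to ossify it.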

So what solutions does one propose to this problem?

Recommendations

· Strengthen Collaboration: Platforms should adopt a collaborative approach involving various stakeholders, including governments, civil society organisations, and fact-checkers, to counter the spread and impact of disinformation. This can include sharing information, best practices, and resources to develop effective strategies.

· Implement Effective Content Moderation: Platforms need to allocate sufficient resources to monitor and moderate harmful content effectively. This includes investing in advanced AI systems and human moderation teams to detect and remove disinformation in a timely manner; a simple triage pattern pairing the two is sketched after this list. Transparent and consistent guidelines should be in place to ensure fairness and accountability in content moderation decisions.

· Promote Fact-based Information: Platforms should prioritise the promotion of fact-based information from reliable sources. This can be done by partnering with credible news organisations and fact-checkers to provide accurate information and combat false narratives.

· Strengthen Oversight: Platforms should proactively cooperate with regulatory authorities to enhance oversight, including by addressing specific issues on platforms like Twitter, Reddit, Telegram, Spotify, and TikTok. Platforms should take swift action to address shortcomings, such as the reinstatement of accounts involved in spreading disinformation or the tolerance of abusive content.
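
To illustrate the pairing of AI systems and human moderation teams mentioned above, here is a minimal sketch of a triage pattern in which a classifier's confidence score decides whether a post is removed automatically, queued for human review, or allowed. The keyword-based scorer and both thresholds are placeholders for illustration; a real deployment would use a trained model and carefully calibrated cut-offs.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str

# Placeholder scorer: a real system would call a trained misinformation
# classifier here and return its estimated probability of disinformation.
SUSPECT_PHRASES = ("miracle cure", "rigged ballots", "secret plot")

def model_score(post: Post) -> float:
    hits = sum(phrase in post.text.lower() for phrase in SUSPECT_PHRASES)
    return min(1.0, hits / len(SUSPECT_PHRASES) + 0.2 * hits)

def triage(post: Post, auto_remove: float = 0.95, needs_human: float = 0.40) -> str:
    """Route a post by classifier confidence: only very confident predictions
    are auto-actioned; the uncertain middle band goes to human reviewers."""
    score = model_score(post)
    if score >= auto_remove:
        return "remove"
    if score >= needs_human:
        return "human_review"
    return "allow"

print(triage(Post("1", "Local election results were certified today")))  # allow
print(triage(Post("2", "Proof of rigged ballots and a secret plot!")))   # remove
```

The design point is the middle band: automating only the high-confidence extremes keeps moderation timely without handing contested judgement calls entirely to a model.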

Lastly, just keep in mind that seeing and hearing are no longer believing. Doubt all information you consume, especially given the proliferation of misinformation and disinformation in this world of AI. More importantly, fact-check the information at your fingertips before believing it and/or passing it on for others to consume. So, literally, love it all but trust none of it.
