The Horsemen of the AI Apocalypse
Scott Weller
Founder, Product Leader, CTO, Investor. Building AI to find the truth in all things.
In the realm of digital transformation, artificial intelligence has emerged as a powerful harbinger of change. I am super bullish on the ways artificial intelligence can be used for good. However, as we all navigate this brave new world of generative capabilities, it is crucial to understand both the immense potential and the imminent dangers AI presents. There are several severe challenges and risks associated with the advancement of AI that I like to refer to as the Horsemen of the AI Apocalypse. By looking at risk through the lens of these entities, my hope is to bring a better and more universal understanding of these risks, so we can all prepare to leverage AI for good while mitigating its darker and more savage potential.
The Data Poisoner
In the past decade, as corporations have undergone digital transformations, data has become a pivotal asset in enhancing observability and creating value. The Data Poisoner epitomizes the risk of generated data compromising these valuable corporate data assets. With AI's ability to produce realistic yet entirely fabricated datasets, there is a tangible risk that such data could taint the purity of a corporation's most critical assets. This can lead to skewed analytics, flawed business decisions, poor product performance, and ultimately financial losses.
One example of this is the deliberate feeding of incorrect data into machine learning models, distorting their outputs. A recent study published at https://arxiv.org/pdf/2401.05566 describes LLMs trained with deliberately deceptive behavior. Some consider public LLMs, which have been trained on publicly available information, to already be significantly poisoned by the nature of their training: https://breakingdefense.com/2024/04/poisoned-data-could-wreck-ais-in-wartime-warns-army-software-chief/
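To make the mechanics concrete, here is a minimal sketch of the classic label-flipping attack: an adversary corrupts a fraction of the training labels, and the model's accuracy quietly erodes. The dataset, model, and poison fractions are arbitrary choices for illustration, not a reference implementation.

```python
# Minimal illustration of label-flipping data poisoning.
# Dataset and model are arbitrary choices; scikit-learn assumed installed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_poison(flip_fraction: float) -> float:
    """Train on a copy of the data with a fraction of labels flipped."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip binary labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)  # evaluate on clean test data

for frac in (0.0, 0.1, 0.3, 0.45):
    print(f"{frac:.0%} poisoned -> test accuracy {accuracy_with_poison(frac):.3f}")
```

Notice that the damage is gradual: at low poison rates the model still looks "mostly fine," which is exactly what makes this attack hard to spot in production.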
However, there is a much less deliberate, incremental, and “under the radar” type of poisoning that can occur: the kind where a significant and immeasurable amount of a company’s data IP, from both internal and external sources, is derived from automated generation. A recent article highlights that hundreds of academic research papers were generated using AI: https://punchng.com/over-100-academic-papers-written-using-artificial-intelligence-report/ . Everyone, whether an individual, a company, or a government agency, depends on truthful data to make critical decisions. As generative content creeps in, who is standing at the gate, determining the quality of that output before it seeps into our most critical systems?
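One practical answer to that gatekeeping question is a statistical admission check: before a batch of data enters critical systems, compare it against a trusted baseline and hold anomalous batches for review. The sketch below uses a two-sample Kolmogorov-Smirnov test per feature; the threshold and the choice of test are illustrative assumptions, not a standard.

```python
# Illustrative "gatekeeper" check: compare an incoming batch of numeric
# data against a trusted baseline before it enters downstream systems.
# The threshold and test are assumptions for illustration only.
import numpy as np
from scipy import stats

def passes_drift_gate(baseline: np.ndarray, incoming: np.ndarray,
                      p_threshold: float = 0.01) -> bool:
    """Reject the batch if any feature's distribution differs sharply
    from baseline (two-sample Kolmogorov-Smirnov test per column)."""
    for col in range(baseline.shape[1]):
        _, p_value = stats.ks_2samp(baseline[:, col], incoming[:, col])
        if p_value < p_threshold:
            return False  # suspicious shift: hold for human review
    return True

rng = np.random.default_rng(1)
baseline = rng.normal(0, 1, size=(5000, 3))
clean_batch = rng.normal(0, 1, size=(500, 3))
poisoned_batch = rng.normal(1.5, 1, size=(500, 3))  # shifted distribution

print(passes_drift_gate(baseline, clean_batch))     # typically True
print(passes_drift_gate(baseline, poisoned_batch))  # False
```

A check like this will not catch a careful poisoner who mimics the baseline distribution, but it raises the bar well above "anything goes," and it is cheap enough to run on every ingestion.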
The Impersonator
Digital impersonation allows for the creation of convincing digital replicas of real people using deepfakes or similar AI-driven methods. The Impersonator introduces a serious ethical and societal dilemma. Imagine a scenario where anyone could convincingly mimic a public figure to spread misinformation or commit fraud. The implications for personal privacy, security, and even democratic processes are profound, challenging the very fabric of our trust in digital communications.
Recent examples include a deepfake video of Ukrainian President Volodymyr Zelenskyy that circulated online in 2022, in which he appeared to tell Ukrainian soldiers to surrender. During the 2024 primary election in New Hampshire, voters received a deepfake robocall mimicking the voice of President Biden, spreading false information intended to discourage them from voting.
The speed at which multi-modal models have accelerated the ability to digitally impersonate has been staggering. The surge in business and consumer complaints has been so pronounced that the FTC recently issued a statement proposing new protections to combat the AI impersonation of individuals: https://www.ftc.gov/news-events/news/press-releases/2024/02/ftc-proposes-new-protections-combat-ai-impersonation-individuals .
Some argue they are not moving fast enough.
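While regulation catches up, technical mitigations are emerging, most notably cryptographic content provenance (the idea behind standards like C2PA): a publisher signs media at creation, and consumers verify the signature before trusting what they see or hear. The sketch below uses a shared-secret HMAC purely to illustrate the verify-before-trust pattern; real provenance systems use public-key infrastructure and signed metadata, not a hard-coded key.

```python
# Minimal sketch of content provenance: a publisher signs media bytes,
# and a consumer verifies the signature before trusting the content.
# Real systems (e.g., the C2PA standard) use PKI, not a shared secret;
# this HMAC version only illustrates the verify-before-trust idea.
import hmac
import hashlib

SECRET_KEY = b"publisher-signing-key"  # placeholder; never hard-code keys

def sign_media(media_bytes: bytes) -> str:
    return hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, signature: str) -> bool:
    expected = sign_media(media_bytes)
    return hmac.compare_digest(expected, signature)  # constant-time compare

original = b"...video frames from the official press briefing..."
tag = sign_media(original)

print(verify_media(original, tag))                # True: authentic
print(verify_media(original + b"edited", tag))    # False: tampered
```

The hard part, of course, is adoption: provenance only protects people if platforms check signatures by default and unsigned media is treated with suspicion.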
The Rapid Manipulator
This horseman leverages real-time data to predict and manipulate interactions, steering them toward specific outcomes. Modern real-time systems such as virtual salespeople and support agents were originally designed to enhance customer experience through faster service, but they can be readily exploited by automated generative AI. Fraudsters, for example, could use AI to predict and exploit individual vulnerabilities using public and stolen private information, personalizing the interactions enough to bypass critical checkpoints and leading to highly sophisticated scams. This not only poses a risk to individual security but also undermines the integrity of digital commerce and interactions.
In recent examples, fraudsters have used AI to file false car insurance claims (https://www.propertycasualty360.com/2024/05/08/fraudsters-using-ai-to-manipulate-images-for-false-claims/?slreturn=20240509-35617 ) and employed multi-modal models to bypass real-time security checks (https://www.cxtoday.com/contact-centre/fraudsters-are-targeting-contact-centers-with-deepfakes-heres-how/ ). In addition, chatbots armed with personal information are being used to convince unsuspecting consumers to reveal sensitive information about themselves (https://www.theguardian.com/technology/2023/apr/09/cybercrime-chatbot-privacy-security-helper-chatgpt-google-bard-microsoft-bing-chat ).
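A common defensive pattern is to score each interaction on simple behavioral signals and escalate high-risk requests to out-of-band verification before anything sensitive happens. The signals, weights, and threshold below are hypothetical, hand-tuned stand-ins for illustration; a production system would learn these from labeled fraud data rather than rules like these.

```python
# Illustrative risk scoring for a real-time support interaction.
# Signals and weights are hypothetical; a production system would be
# trained on labeled fraud data, not hand-tuned rules.
from dataclasses import dataclass

@dataclass
class Interaction:
    seconds_between_replies: float   # bots often respond inhumanly fast
    requested_sensitive_action: bool # e.g., password reset, payout change
    identity_fields_volunteered: int # scripted scams front-load details

def risk_score(event: Interaction) -> float:
    score = 0.0
    if event.seconds_between_replies < 1.0:
        score += 0.4
    if event.requested_sensitive_action:
        score += 0.3
    if event.identity_fields_volunteered >= 3:
        score += 0.3
    return score

event = Interaction(0.4, True, 4)
if risk_score(event) >= 0.7:
    print("Escalate to out-of-band verification (callback, hardware token).")
```

The key design choice is where the escalation goes: to a channel the attacker does not control, such as a callback to a number already on file, rather than another step in the same compromised conversation.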
The Immortal Autocrat
An ageless digital facade of an autocratic leader, the Immortal Autocrat represents a dystopian use of AI in which totalitarian regimes perpetuate their rule through an ever-present, agency-based simulation of their doctrine, deployed against a powerless populace. Supported by the capabilities of the Data Poisoner, the Impersonator, and the Rapid Manipulator, such leaders could remain virtually omnipresent and perpetually youthful, their policies and propaganda perpetuated indefinitely and automatically scaled to any machine, device, or digital space in real time. Always listening, always responding, always personalizing doctrine, always present. This scenario poses a chilling risk to freedom and autonomy worldwide, showing how AI could be used to bolster undemocratic governance.
Some experts believe this all starts with the basic use of fully automated “synthetic media”: https://en.wikipedia.org/wiki/Synthetic_media
For anyone interested in this particular topic, I highly recommend an article written back in 2020 (which seems so long ago now, and prophetic, on the “AI timeline”) titled “Artificial intelligence is a totalitarian’s dream – here’s how to take power back”: https://theconversation.com/artificial-intelligence-is-a-totalitarians-dream-heres-how-to-take-power-back-143722
The Digital Divider
The final horseman I will cover in this post is the Digital Divider, who highlights the socio-economic disparities amplified by AI. As AI technologies advance, there is growing evidence of a rapidly widening divide between high-skill jobs that leverage these advancements and low-skill jobs increasingly subject to automation. This divide raises significant concerns about equity, access to employment, and the broader implications for societal structure and economic stability.
A recent post by the World Bank provides a wealth of knowledge on this topic: https://blogs.worldbank.org/en/jobs/What-we-re-reading-about-the-age-of-AI-jobs-and-inequality
Navigating the Future
These horsemen are intended as cautionary tales. I am personally very bullish on all the different ways AI can be used for good; in many ways, AI can be used to combat these very horsemen if solutions and their use are oriented in the right direction. I believe we can prepare to harness the positive potential of AI while guarding against its risks through several proactive steps:
Educate and Empower People: Ensure that our society and workforce are educated about AI and its implications. Provide access to open source AI to scale visibility into how AI works, and spread a broader understanding of how to use it for good.
Regulatory Frameworks: We must, must, must move faster to develop robust regulatory frameworks that safeguard privacy, ensure data integrity, and maintain human oversight over AI. I cannot stress enough how important it is for governments and our representatives to introduce stricter penalties for the most critical infringements of our personal liberties, starting with Malicious AI Impersonation and Replication.
Ethical AI Development: We have spent the last decades feeling the aftermath of the race by social media companies to monetize consumer data through algorithms that have ultimately been proven to cause harm. We must use that learning to promote and uphold more ethical approaches to AI development, and to scale business models that bring joy and benefit to people’s lives.
There is huge potential energy for AI to generate one of the greatest accelerations in net benefits to society since the Industrial Revolution. However, I think it is pertinent to acknowledge that these horsemen could prevail if they are not recognized, stalled, and thwarted by a coalition of the aware and the willing.
(Disclosure: This post was edited, reviewed, and published with the help of an AI copywriter.)