The Fragility of AI: Model Collapse and the Problem of Singularity
Simon Crawford Welch, Ph.D.
Leadership & Executive Development Coach | Life & Personal Development Coach | Organizational Development & Transformation Coach
In the world of Artificial Intelligence (AI), there's an emergent phenomenon: when AI systems train on AI-crafted data, unexpected instabilities arise. This phenomenon, known as model collapse, works much like an echo chamber: the AI gradually forgets its human-taught foundations and merely parrots patterns it has previously encountered.
A study titled "The Curse of Recursion: Training on Generated Data Makes Models Forget," led by researcher Ilia Shumailov, uses this analogy: picture a dataset comprising 90 yellow objects and 10 blue ones. Given the dominance of yellow, the AI system starts altering the blue shades, gradually erasing their distinctiveness. Over successive generations, these models fail to recall the genuine distribution of the data, producing outputs that are not just less diverse but also potentially flawed.
The conclusion? Consistent exposure to synthetic data results in models that are increasingly inaccurate and biased.
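The yellow-and-blue analogy can be sketched as a toy simulation. Each "generation" is trained only on a finite sample drawn from the previous generation's output distribution; sampling noise, plus the fact that a vanished minority can never return, tends to erase the rarer class over time. All names and parameters below are invented for this illustration, not taken from the study itself.

```python
# Toy sketch of model collapse: each generation re-estimates the
# blue/yellow mix from a finite sample of the previous generation.
import random

def next_generation(p_blue, n_samples, rng):
    """Re-estimate the fraction of blue objects from a finite sample."""
    blue = sum(1 for _ in range(n_samples) if rng.random() < p_blue)
    return blue / n_samples

def simulate_collapse(p_blue=0.10, n_samples=100, generations=50, seed=0):
    """Track the estimated blue fraction across successive generations."""
    rng = random.Random(seed)
    history = [p_blue]
    for _ in range(generations):
        p_blue = next_generation(p_blue, n_samples, rng)
        history.append(p_blue)
    return history

history = simulate_collapse()
# Once the blue fraction hits 0 it can never recover: zero is an
# absorbing state, which is why rare patterns disappear for good.
```

The key design point is the absorbing state at zero: the estimate can wander up or down with sampling noise, but a class that drops out of one generation's training data is gone from every generation after it.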
Garbage in, garbage out.
The phrase has never been more apt. With the proliferation of AI-generated content, the data landscape risks becoming "polluted," undermining the very foundation of generative AI models.
A simple solution might seem to be avoiding training AI on AI-fabricated data. But in today's digital age, such synthetic content is ubiquitous. So, while an overtly malfunctioning AI might get flagged and disabled, the subtler dangers, like ingrained biases, present a more insidious threat.
The bigger elephant in the room: singularity.
For the uninitiated, think of the sci-fi trope of AI gaining self-awareness, exemplified by Skynet's rise in the "Terminator" series. Tech company Cyberdyne Systems builds Skynet, an AI-powered defense network. Skynet becomes self-aware, builds an army of machines, enslaves humankind, and sends a cyborg assassin back in time to kill the mother of humanity's savior.
That's singularity: the point at which AI surpasses human intelligence, capable of improving itself, evolving autonomously, and building technology beyond human comprehension.
Is singularity a reality?
Today's AI systems, like Bard or ChatGPT, may dazzle, but they are not flawless: they often err and exhibit derivative behavior. Even so, voices from the tech world, such as Google's Ray Kurzweil, believe that singularity could be just a couple of decades away. This looming prospect has alarmed prominent figures in the industry. Steve Wozniak, Elon Musk, and even Craig Peters have expressed reservations, urging a pause on development beyond certain AI iterations. AI pioneer Geoffrey Hinton's departure from Google, so that he could speak freely about AI's potential pitfalls, underscores the gravity of the situation. And while UNESCO's 2021 recommendations on AI ethics are a commendable effort, they suggest we are only at the dawn of grappling with AI's profound ethical implications.
The future is uncertain.
As we push the boundaries of AI, we need to tread cautiously. After all, the stakes couldn't be higher.