Emerging Risk: AI Model Collapse

I recently had a conversation with Kate Kuehn, the Chief Trust Officer at Aon's Cyber Solutions. During our discussion, we explored the future of AI, and to be honest, I was surprised by the weight of some of the challenges we talked about. One of the emerging concerns she shared was Model Collapse, a term I had heard before but never fully understood until now. I'm no stranger to those moments of confusion, and I always believe that when in doubt, it's best to research the topic thoroughly.

If you're interested in knowing how I conduct my research and writing, this article will provide you with a behind-the-scenes look. The piece is divided into three main sections: 'Crawl', where I attempt to grasp the basics, 'Walk', where I delve deeper into the underlying challenges, and 'Run', where I explore the impacts and safeguards that lie ahead.

If that journey sounds boring to you, feel free to jump to the end, where I share my verdict on this emerging risk.




1. Grasping the Basics (Crawl)

When I commit to writing about a topic, I immerse myself fully. I build my understanding from the ground up, layer by layer. Even if I end up discussing only 5% of a subject, I've likely touched on the other 95% beforehand. And this isn't limited to new subjects. I revisit topics I've written about or practiced daily, repeating this over and over again. This isn’t some badge of honor—it's exhausting, but it's how my brain works best. Everyone is different and I’m sure there is a better and less intensive way to go about this. But this stage is foundational and absolutely necessary to me.

1.1 Research Process

There is a certain excitement in exploring things on your own because of the unpredictability it brings. Three months ago, I started using ChatGPT and Bard, two AI assistants, to help me in my research. Although both have their own advantages and disadvantages, I have found that Bard works better for research-specific queries.

To illustrate, here's how I presented this topic to Bard:

[Screenshot of Kris's conversation with Bard]

I usually limit myself to a couple of exploratory follow-up questions. My main goal is to extract keywords for Google searches.

In my experience, while AI assistants can be incredibly powerful, it's essential to remember that they shouldn't be relied upon as the sole resource for research. These systems, as advanced as they are, might not always capture the most recent developments. While they provide data, the depth of understanding and critical thinking that comes from human expertise is irreplaceable. It's also crucial to note that every model, no matter how well-trained, carries the risk of reflecting certain biases. Therefore, it's essential to complement AI's output with diverse sources to ensure a comprehensive grasp of any subject. As always, the synergy of human and machine tends to yield the best results.

In this case, my initial Google searches might revolve around terms like "data poisoning," "model degradation," and "feedback loop." Once I have gone through a variety of articles and expert opinions, I tend to shift my focus towards ArXiv. In case you are not familiar with it, ArXiv is a platform where scholars share their research papers before they undergo the rigorous process of peer review and formal publication. Within ArXiv, I specifically look for papers related to computer science and machine learning.

Although this approach may not be suitable for every topic, I find it to be a good balance between efficiency and enjoyment.

1.2 Quick History on AI

For those who are not familiar with the history of AI, it is a fascinating subject! I highly recommend reading "Artificial Intelligence: A Very Short Introduction" by Margaret A. Boden. The current buzz and excitement around AI is mainly focused on the second wave of AI and to be more specific, generative AI. However, AI actually began in the 1950s with foundational concepts and early experiments. By the 1980s, systems like chess programs were able to compete with human players. The 2010s marked a significant shift with the emergence of deep learning. As we move further into the 2020s, models like ChatGPT not only generate human-like text but are also entering a truly multi-modal phase, integrating text, images, audio, and more. This expansion in capability means a broader understanding of the world, but it also brings in a diverse range of data into the system. The evolution from mid-century AI to the multi-modal marvels of the 21st century is truly remarkable.

1.3 How Data Powers AI

One of the most eye-opening realizations for me was the immense value of human data. I had assumed that the shiny new world of generative AI could simply churn out its own data and self-improve indefinitely. I still believe this is possible (if done the right way and with the right safeguards), but while it's tempting to think that specialized models cranking out synthetic data are the golden ticket, the reality is a tad more complicated. The quality of data is paramount; it's the richness, subtleties, quirks, and rawness of genuine human interactions that give these models their magic. It's like preferring a hand-written letter over a typed one; the essence, the authenticity, and the personal touch make all the difference.




2. Exploring the Challenges (Walk)

Having wrapped my head around the history and evolution of AI, I found that, as is often the case when I dig into a subject, every layer I peeled back revealed more beneath. Building on that historical context, a few pressing questions began to emerge: Why is high-quality data so crucial? What happens when models begin to lean too heavily on their own outputs or synthetic data? And what are the potential pitfalls if they adapt too swiftly? These questions naturally guided my exploration of the challenges below.

2.1 Feedback Loops

First on the list: feedback loops. This concept quickly surfaced in my research. When an AI model is consistently trained or fine-tuned using its own generated data, or a very narrow dataset, it's akin to the model hearing its own "voice" echoed back. This reinforcement can solidify certain perspectives and patterns in the model, even if they aren't entirely accurate or well-rounded. Here's a way to visualize it: AI models, continuously fed with their own outputs, run the risk of becoming insular. Picture them ensnared in an ever-tightening echo chamber.
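
To make that echo chamber a little more tangible, here's a minimal toy sketch, entirely my own illustration rather than code from any real training pipeline. A "language model" is reduced to a unigram word-frequency table and retrained, generation after generation, on its own finite samples; rare words occasionally fail to appear in a sample, drop out, and never come back.

```python
import random
from collections import Counter

random.seed(0)

# Toy "training corpus": a few common words plus a long tail of rare ones.
corpus = ["the"] * 500 + ["model"] * 200 + ["data"] * 100
corpus += [f"rare_word_{i}" for i in range(50)]   # 50 tail words, one occurrence each

def train(samples):
    """'Train' the toy model: just count word frequencies."""
    return Counter(samples)

def generate(model, n):
    """Sample n words in proportion to their learned frequencies."""
    words = list(model.keys())
    weights = list(model.values())
    return random.choices(words, weights=weights, k=n)

model = train(corpus)
print("generation 0: vocabulary size =", len(model))
for generation in range(1, 11):
    synthetic = generate(model, len(corpus))   # the model writes its own training set
    model = train(synthetic)                   # the next generation learns only from that
    print(f"generation {generation}: vocabulary size =", len(model))
```

Run it and the vocabulary typically shrinks generation after generation: the common words survive, the long tail does not, and that tail is exactly where the nuance lives.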

2.2 Training Biases

Generative AI undergoes an initial "pre-training" phase where models are trained on vast amounts of internet data. This process helps them learn grammar, facts, and some reasoning abilities. However, they can also inadvertently absorb biases from this data. To refine their outputs, these models then go through a "fine-tuning" phase, where they're trained on narrower datasets, typically with the help of human reviewers following specific guidelines.

Here lies a potential pitfall: If AI models are perpetually fed with their own outputs or a limited data subset, they can become entrenched in, or even amplify, certain biases. This is because AI doesn't discern truth in the way humans do; it identifies and follows patterns. If a particular pattern, even a biased one, is reinforced repeatedly, the AI starts perceiving it as the "norm."

To put this into perspective, using ChatGPT doesn't directly modify its foundational model weights. However, if organizations like OpenAI were to consistently utilize outputs from users for further training, without incorporating a broader context or balancing it with diverse inputs, they could inadvertently create a feedback loop. This loop might lead to potential biases and a certain narrowness in the model's knowledge and responses.
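
As a purely hypothetical illustration of that kind of loop (a toy of my own, not a description of how any production assistant is trained), here's a small classifier that keeps retraining on its own predictions instead of real labels. Its initial skew against a minority class becomes the new "ground truth", and nothing in the loop ever corrects it:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, minority_rate=0.10):
    """Imbalanced toy data: two overlapping classes, roughly 10% minority."""
    y = (rng.random(n) < minority_rate).astype(int)
    X = rng.normal(loc=y[:, None] * 1.5, scale=1.0, size=(n, 2))
    return X, y

X_seed, y_seed = make_data(2000)   # human-labeled seed data
X_new, _ = make_data(5000)         # later data, labeled by the model itself

model = LogisticRegression().fit(X_seed, y_seed)
print("true minority share in the seed data:", round(float(y_seed.mean()), 3))

for round_num in range(1, 6):
    pseudo = model.predict(X_new)                    # the model labels the new data itself
    model = LogisticRegression().fit(X_new, pseudo)  # ...then retrains on its own labels
    print(f"round {round_num}: minority share the model now 'sees' =",
          round(float(pseudo.mean()), 3))
# The minority share drops as soon as the model's own decisions replace real labels,
# and the loop has no outside signal to pull it back: the skew is locked in.
```

The cure is exactly what the paragraph above describes: keep anchoring updates with fresh, diverse, human-verified data rather than letting the model grade its own homework.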

It reminds me of hearing the song 'Hey There Delilah' covered by an AI version of Ye (Kanye West); it's fun to listen to, and the melody is there, but the original emotion and feeling behind the song are gone. If you haven't heard it, it's worth a quick search.

2.3 Continual Learning

Moving on, there's a concept called continual learning. Essentially, it means models are designed to perpetually update themselves, integrating new data and insights on the fly. It's an interesting idea, but AI models in production environments generally do not learn continually by default, largely because of the risks of updating models without oversight: continual learning could introduce unintended biases, degrade performance on certain tasks, or cause other unpredictable behaviors. Still, it helps shed some light on how a model may collapse. If AI models adapt too quickly, absorbing every new piece of data at an accelerated pace, they might start to overshadow or even forget the foundational knowledge and nuances they acquired from previous authentic human experiences.

Why is this? Think of it as our human memory. If we constantly cram new information without revisiting and reinforcing our previous knowledge, some of the older memories can become hazy or forgotten. Similarly, in AI's context, rapid adaptation without proper anchoring can lead to a "recency bias", where the model places undue emphasis on newer data while unintentionally sidelining or misinterpreting the older, yet crucial, information. This delicate balance between retaining learned knowledge and adapting to new information is pivotal in ensuring the model's robustness and accuracy.
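
Here's a small, hypothetical sketch of that dynamic using scikit-learn, a deliberately tiny stand-in for how large models are actually updated. A classifier first learns digits 0 through 4, then keeps updating on digits 5 through 9 only, and its accuracy on the old digits decays; replaying a slice of old data alongside each update is one simple way to keep the anchor in place.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
old_mask, new_mask = y_tr < 5, y_tr >= 5        # "old" knowledge vs. "new" data stream
old_test = y_te < 5
classes = np.unique(y)

def old_accuracy(model):
    """How much of the original (digits 0-4) knowledge is still intact?"""
    return round(model.score(X_te[old_test], y_te[old_test]), 3)

# Naive continual learning: keep updating on the new data only.
naive = SGDClassifier(random_state=0)
naive.partial_fit(X_tr[old_mask], y_tr[old_mask], classes=classes)
print("after learning old digits:", old_accuracy(naive))
for _ in range(20):
    naive.partial_fit(X_tr[new_mask], y_tr[new_mask])
print("after 20 new-only updates:", old_accuracy(naive))      # typically collapses

# Rehearsal: every update also replays a sample of the old data.
rng = np.random.default_rng(0)
anchored = SGDClassifier(random_state=0)
anchored.partial_fit(X_tr[old_mask], y_tr[old_mask], classes=classes)
old_idx = np.flatnonzero(old_mask)
for _ in range(20):
    replay = rng.choice(old_idx, size=200, replace=False)
    X_step = np.vstack([X_tr[new_mask], X_tr[replay]])
    y_step = np.concatenate([y_tr[new_mask], y_tr[replay]])
    anchored.partial_fit(X_step, y_step)
print("after 20 replayed updates:", old_accuracy(anchored))   # usually retained far better
```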

2.4 Model Collapse

Now, let's tackle the elephant in the room: how does an AI model collapse? I am going to open with a metaphor: Have you ever stood between two mirrors and seen an endless tunnel of reflections? It's mesmerizing but notice how each reflection gets a bit blurrier, a touch more faded. That's model collapse in a nutshell.

When AI models feed on 'reflected' data from previous models, they start to lose their clarity, focusing too much on certain patterns and missing out on others.

One research paper titled ‘The Curse of Recursion: Training on Generated Data Makes Models Forget’ by Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Yarin Gal, Nicolas Papernot, and Ross Anderson does a great job of breaking down this problem. The authors first categorize model collapse into two stages:

  • Early model collapse: akin to the first few mirror reflections, where the model starts to lose small but important details, the tails of its data distribution.
  • Late model collapse: when the data 'reflection' loses its distinctive patterns altogether, much like your form becoming nearly unrecognizable in the mirrors' farthest reflections.

These researchers put three distinct AI model types to the test, feeding them continuously with model-generated data. In each case, they observed the chilling onset of model collapse:

  • Gaussian Mixture Model (GMM): Designed to cluster data using Gaussian distributions, this model dramatically shifted its data distribution in just 50 re-generations. By the 2,000th cycle, the variance in its data had vanished. GMM is widely used in voice biometric solutions. (I've included a toy re-creation of this effect right after this list.)
  • Variational Autoencoder (VAE): Initially trained on authentic images of handwritten digits, the VAE's output grew increasingly distorted when retrained on its own generated data. Over time, the once-clear digits morphed into indistinct smudges. VAE is used in tools like MolGAN and similar tools for generating molecular structures.
  • Large Language Model (LLM): An Open Pre-trained Transformer (OPT) variant with 125 million parameters underwent fine-tuning with a blend of artificial and human-generated data. While there was a noticeable performance decline, the model retained some learning capacity. Yet its understanding went awry: when asked about medieval architecture in its fourth generation, it oddly rambled about jackrabbits. OPT models are notable for being open-sourced, meaning anyone can access and use them. This contrasts with other large language models, such as GPT-3, which are only available through commercial APIs.
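
Out of curiosity, I put together a scaled-down sketch in the spirit of the GMM experiment. This is my own toy approximation, not the authors' code, and the exact numbers will depend on seeds and sample sizes. The recipe: fit a two-component mixture to real data, have it generate a fresh dataset, fit the next generation on that synthetic data alone, and repeat.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

def mean_spread(gmm):
    # Average trace of the fitted component covariances: a rough measure of how
    # "wide" the learned distribution is in this toy example.
    return float(np.mean([np.trace(c) for c in gmm.covariances_]))

# Generation 0: real data from two well-separated clusters.
X_real, _ = make_blobs(n_samples=200, centers=2, cluster_std=1.5, random_state=0)
gmm = GaussianMixture(n_components=2, random_state=0).fit(X_real)
print(f"generation   0: spread = {mean_spread(gmm):.3f}")

for generation in range(1, 501):
    X_synth, _ = gmm.sample(200)                 # the model writes its own training set
    gmm = GaussianMixture(n_components=2,        # the next generation sees only that
                          random_state=0).fit(X_synth)
    if generation % 100 == 0:
        print(f"generation {generation:3d}: spread = {mean_spread(gmm):.3f}")
# Each refit sees only a finite sample of the previous generation's output, so
# estimation noise compounds; over many generations the fitted spread tends to
# drift downward and the tails of the original data are the first casualty.
```

It's a crude caricature of the paper's setup, but it captures the statistical heart of the problem: each generation inherits only what the previous one happened to sample.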

Due to their specific capabilities and characteristics, each of these models finds utility in various sectors. It's important to note that while these experiments illustrate potential risks, the conditions under which they were tested do not represent how modern AI models are trained and utilized. Robust training protocols, diversification of data sources, and continual oversight prevent such models from being overly reliant on their own outputs. Regularly integrating diverse and authentic human interactions ensures that these models maintain their richness and accuracy. Moreover, developers and researchers are acutely aware of these pitfalls and actively work to refine and improve models to counteract such issues.

That said, it's still worth understanding the nuances of model collapse, as its implications could reshape our trust and reliance on AI in the future.




3. Looking Ahead (Run)

Now it's time to explore the implications of a model collapse and investigate ways to circumvent it. The exercise is straightforward: first, contemplate the implications (answering "why do I care?"), then the mitigating factors (answering "how do I prevent this?"). Together, they paint a picture of what could go wrong and how to prevent those outcomes.

3.1 Implications of AI Model Collapse

If a model were to collapse, the consequences could be far-reaching. At a basic level, we'd see a degradation in the model's performance, making it less reliable and accurate. But the ripple effects would go beyond functionality: decisions based on such models, especially in critical sectors like healthcare, finance, or autonomous systems, could lead to detrimental outcomes. Let's remember that the United States National Security Agency (NSA) just announced the creation of an artificial intelligence security center that will oversee the development and integration of AI capabilities within U.S. defense and intelligence services. So, what are some of these implications?

  • Compromised Critical Decisions: In sectors like healthcare, a misdiagnosis based on flawed AI could be fatal. In finance, erroneous predictions could lead to massive financial losses.
  • Increased Reliance: Given the costs of training LLMs from scratch, many organizations rely on a shared pool of pre-trained models, and that reliance is growing as sectors like life sciences and supply chain management adopt LLMs. A collapse in a widely shared foundation model would therefore ripple across every downstream product built on it.
  • Exacerbated Human Dependency: I've spoken about the risk of enfeeblement in the past. There is no doubt in my mind that humanity at large will develop an over-reliance on tools and solutions coming out of the Second Wave of AI. These products will weaken our own capabilities, making the consequences of a failure even more impactful.
  • Eroded Public Trust: If AI consistently fails or makes erroneous decisions, it could precipitate a loss of trust in AI systems, stalling innovation and leading to missed opportunities for advancements that would greatly benefit society.

3.2 Mitigating Factors

Recognizing the potential pitfalls is the first step. Now, we need to explore the mitigating factors to prevent these scenarios. This next bit of information is mainly directed towards data scientists, AI practitioners, AI developers, tech companies, or anyone involved in developing, maintaining, and using AI models.

  • Data Archives: Preserving access to diverse data archives, especially datasets curated before 2023, offers a dual advantage. They act as an invaluable reference point that models can turn to for authentic, unbiased insights, and they serve as a fallback that ensures continuity in operations and decision-making if newer data sources become compromised.
  • Scheduled Model Health Checks: Think of this as a regular health check-up, but for models. Routinely evaluating a model's outputs and its alignment with genuine human experiences can help prevent undesired deviations (a small sketch of this idea, together with the next one, follows this list).
  • Cautious Use of AI-Generated Data: The quality and authenticity of data are paramount. If machine-generated data can maintain these standards, it could be a viable resource. Yet, a diversified data diet, with a mix of genuine human interactions, remains the best approach.
  • Broaden the Data Horizon: A varied, extensive data pool capturing the vastness of human experiences can act as a safeguard against model myopia and collapse.
  • Authenticity as the North Star: As highlighted in the research paper by Shumailov et al., the authenticity of content and a realistic data distribution are paramount. This involves rigorous checks and evaluations of data sources, ensuring they are not polluted by low-quality or synthetic data that could skew the model's understanding.
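
To ground a couple of these ideas (the health checks and the cautious use of synthetic data), here's a small hypothetical sketch. The function names, the thresholds, and the evaluate callback are all my own illustration, not part of any real MLOps product:

```python
import random

def health_check(model, reference_set, evaluate, baseline_score, max_drop=0.02):
    """Scheduled model health check (illustrative): score the model on a fixed,
    human-curated reference set and flag it if it falls too far below baseline."""
    score = evaluate(model, reference_set)
    healthy = score >= baseline_score - max_drop
    status = "OK" if healthy else "ALERT: investigate before the next release"
    return healthy, f"{status} (score={score:.3f}, baseline={baseline_score:.3f})"

def build_training_mix(human_examples, synthetic_examples,
                       max_synthetic_fraction=0.3, seed=0):
    """Cautious use of AI-generated data (illustrative): keep every human example
    and cap synthetic examples so they never exceed the chosen share of the mix."""
    rng = random.Random(seed)
    cap = int(max_synthetic_fraction / (1 - max_synthetic_fraction) * len(human_examples))
    synthetic_kept = rng.sample(list(synthetic_examples),
                                k=min(cap, len(synthetic_examples)))
    mix = list(human_examples) + synthetic_kept
    rng.shuffle(mix)
    return mix
```

The specifics would differ from team to team; the point is that both safeguards are cheap to automate once you have a trusted, human-curated reference set and provenance labels on your training data.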




4. Summing It All Up

This exploration reminded me of the many times I've been humbled by the depth and breadth of machine learning. This was a rich learning experience.

Taking everything into account, here is my verdict on this topic:

While there's a lot of buzz around the potential dangers of Model Collapse, it's essential to put things in perspective. Many of the alarming scenarios stem from conditions that don't reflect the practices of the AI developer community. Developers typically employ robust training methods, diversified data sets, and a suite of safeguards. They frequently validate models against unseen data, utilize ensemble techniques to avoid over-reliance on a single model, and leverage the stability of transfer learning by building upon pre-trained models. Active learning and human-in-the-loop systems ensure continuous refinement and oversight, while post-deployment monitoring and feedback loops allow for real-time adjustments and learning. Given these rigorous practices, the likelihood of a genuine model collapse in the real world is considerably reduced.

I'm optimistic, as I've seen firsthand how the AI and ML community iterates, refines, and ensures our models are not just brilliant but also safe. So, here's to more "what am I even looking at?" moments and the relentless pursuit of understanding!


