Where do we go from here?
Photo by cottonbro studio: https://www.pexels.com/photo/gray-and-black-wooden-wall-4874232/


What happens when whole generations and the knowledge capital they've accumulated are erased?

TL;DR

The news has been rife with articles about model collapse over the past several weeks. We don't know how severe pervasive LLM model collapse and synthetic data poisoning will be, or whether the damage already begun can be repaired. I remain optimistic, but not blind. Logically speaking, the internet as it was once imagined may be irretrievable now. To add insult to injury, the security vulnerabilities introduced may have foreclosed the possibility of a data-secure network for decades. The frustrating knowledge that many of us saw this coming long ago points to a broader problem in public discourse, one with multiple faces. Our network is filled with people who tried to speak out against poor data hygiene, cybersecurity recklessness, and unsustainable, low-accuracy language models, and found themselves with little voice. The recent flurry of publications is late in arriving, and these are not novel discoveries; the problems were known from theoretical models and experiments going back to the 20th century. Consequently, LLM model collapse is not the main issue.

It's knowledge model collapse.

Where do we go from here?
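The mechanism behind the model-collapse findings referenced above can be illustrated with a toy sketch, which is my illustration and not drawn from any of the papers mentioned: fit a simple distribution to data, generate synthetic samples from the fit, refit on those samples alone, and repeat. Estimation noise compounds across generations and the fitted distribution degenerates, which is the recursive-training failure mode in miniature.

```python
import random
import statistics

def collapse_simulation(n_samples=50, n_generations=2000, seed=0):
    """Toy model-collapse loop: each 'generation' is fit only to
    synthetic samples drawn from the previous generation's fit."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # stand-in for the original "real data" distribution
    history = [sigma]
    for _ in range(n_generations):
        # generate synthetic data from the current fitted model
        synthetic = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        # refit on synthetic data only; real data never re-enters the loop
        mu = statistics.fmean(synthetic)
        sigma = statistics.stdev(synthetic)
        history.append(sigma)
    return history

history = collapse_simulation()
print(f"sigma at generation 0:    {history[0]:.4f}")
print(f"sigma after {len(history) - 1} generations: {history[-1]:.6f}")
```

Because sampling error accumulates multiplicatively and is never corrected by fresh real data, the estimated spread drifts toward zero: the model's diversity collapses even though each individual refit looks locally unbiased.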


Epistemic amnesia

In the early days of my graduate research agenda, I studied historical models of knowledge that were disrupted and erased. Contrary to the Hegelian notion of thesis, antithesis, and synthesis, it appeared to me that modern history was more a process of rupture and disjuncture, in which whole pockets of knowledge and human advance were lost. I became fascinated with developing what I called a "hermeneutic of continuity." The most salient and concrete example I found was the Dark Ages, followed by the Renaissance and Enlightenment, which were immediately greeted with rupture and disjuncture again in the Reformation. In that period, just as older knowledge was brought back into contemporary understanding and synthesized after a period of social amnesia, it was once again contested and ruptured.

This study overlapped with AI because I held a theory that linguistic systems hold archival evidence of collapsed knowledge models, and I believed computational linguistics could help us perform a "cognitive forensics" of the past.

Part of that line of study included an immersion in the limits of computational linguistic models, limits that were already well established by the time I entered formal study (2001-2010). I thought we could use those limits to enhance human understanding rather than replace it. I had to argue, against humanities scholars on the opposite side of the divide I now inhabit, that computationalism, while limited, could produce data as tools for understanding. Humans would need to liberate that data so it could become insight; if we failed to act on it, our insight would be impoverished.

It is ironic in multiple regards that when I re-entered industry, after having children and suffering through my own rupture in my academic career, I found myself in business projects arguing about the limitations of computational data in relation to insight, and the folly of garbage-in, garbage-out data models in social data. So there is a bigger problem than LLM model collapse underlying all of this, and I've experienced it first-hand.

It's knowledge model collapse.

How did we go from large numbers of scholars at elite institutions around the world understanding these limitations to the large-scale implementation of flawed models in business, with no regard for the risk and cost relative to the return on investment?

Untapped knowledge capital is explosive.

Epistemic amnesia has happened before. We have some historic antecedents. And yet, have we learned preventative measures? Erasure and epistemic amnesia are relevant here in multiple dimensions, including how we got to this point, and what might come next if we find that the damage incurred is irreparable.

Suddenly there is an awareness of digital pollution, and a recognition that global public infrastructures will have to invest money in cleaning it up one way or another. This says nothing of the fallout and collateral damage from the security vulnerabilities created in the Great Scrape. Yet many people have written and warned of this, and have carried on explosive conversations inside their organizations about the stakes of enterprise carelessness and the lack of psychological safety. Those people are visibly punished, silenced, and inflicted with professional wounds that render their expertise even more inert than it was before.

The much bigger problem is that there are literally thousands of people who have been articulately explaining, educating, and sounding alarms. The number of people with advanced degrees is larger than at any point in human history, which some take as a sign of failing rigor rather than increased access. Sidestepping that complicated issue, here is the problem of knowledge model collapse emerging in negative space.

How might we enable larger public voice and enlarge dialogues that have influence and impact over how capital is invested for the common global good? This is not a new problem. But it is an increasingly perilous one.

How might we incentivize people to continue offering voice in actionable ways in the public square?

How might we incentivize business leaders and civil servants with gaps in knowledge and skill to listen to the knowledge capital we've accumulated but then rendered mute?

Knowledge Capital, the name of my newsletter and SXQ's embryonic online magazine, is being bound and gagged. If this many people in my immediate circles knew about this, could explain how and why it would happen, and could detail why these models are unsustainable and of limited utility, how did we suffer such an epic failure to safeguard and protect the Digital Commons?

We must address this with urgency.

Business leaders, you may already be facing a big clean-up effort for the meltdown of this tech and its failed investments. The knowledge capital you lack and still need is out here, waiting to be tapped. You are looking in the wrong places and enacting policies that make it impossible for the small outside innovators you need for survival to sustain themselves, and therefore you.

We need a new vision for a better tomorrow. An expanded vision free of cynicism and deception.

How might we activate the human knowledge capital lying in the weeds, the people unable to apply their minds and their learning because doing so interferes with short-term profits? It has always been a problem. It is now a crisis.

Singular XQ has modeled a solution with input from over 85 experts and business leaders around the globe, across industries and areas of expertise. If you'd like to receive a free copy of our white paper in advance of its formal publication, sign up for our newsletter on Ghost. Our website is in my profile.


Bruce LaDuke

Associate Director Medicare Reconciliation at Humana

3 months ago

What we now know as AI was founded on religious suppression. Religion is close-minded 'rightness' in any form. Spiritual people listen, learn, and grow. What people are calling AI has been a religion from the coining of the term AGI. There was never an intent to explore, discover, learn, or adjust. The intent has always been to support the narrative that was 'right.' But right people inevitably end up looking dumb.
