"How were we to know?"
Train wrecks and industrial revolutions aren't new. Just bigger and more expensive.

"How were we to know?"

All we knew two decades ago should have stopped this train wreck. Why didn't that knowledge stop it?

Goran's post

Goran S. Milovanović, PhD is SPOT ON:

And there's another dividing line, another median in this whole story: whether you were (a) educated in cognitive sciences before (around) 2010, or (b) directly educated in Data Science and ML/AI after 2010. Around 2010 was the moment it became clear that engineering breakthroughs in Deep Learning would begin to yield practical and truly incredible results compared to their predecessors in associative, statistical learning. At the same time, it was the year when hype overtook healthy scientific conservatism and the hacker mentality took precedence over scientific rationality. This has brought us to the point where even a script kiddie can deploy an LLM on data and introduce features into a production environment without even understanding the simple proofs of the limitations of neural networks, which Marcus, Pinker, Fodor, Zenon Pylyshyn, and others have been writing about for decades.

Lucy Suchman could have told you why this would fail in 1987.

I studied cognitive science and AI from 2002-2004 and wrote about it steadily from 2007-2010. I differ slightly on the idea that the engineering breakthroughs in "Deep Learning" were as clear-cut a breakthrough as that. While we were able to train algorithms for personalization to drive sales, actually liberating data to become insight remained elusive. We did advance mass surveillance pretty well with it.

The most impressive strides were happening in images only, in biotech and aerial surveillance. This led to abandoning the projects that held the most promise for overcoming the very limitations we are seeing in LLM failures today, which, let's be honest, is deploying text as image far more than anyone cares to address or admit. That has distinct problems for something that promises to do analysis. Yes, I am saying all the failures we are seeing now were known then, when I was studying, even through theory alone on napkin sketches. We had less computing power, and our models and experiments were smaller. But you could do the math. Literally. The barriers we were working on then were:

1. The impossibility of creating autonomy and agency with classical computationalism (Delancey 2004).

2. The way semantic systems constructed through computation would easily collapse under repeated use (Bishop 2006).

3. The limitations of repeated exposures of language for training any programmed model, whether robotic or strictly computational (Clark 1997).

4. The limits of classical computationalism and binary for language, intelligence, and analysis (Lucy Suchman 1987!).

These are just the highlights.

For me, I had a little future shock and thought for a hot second that maybe they had advanced in some unforeseen way, that some of these problems had been solved in ways I hadn't seen coming. I was a little excited, if confused and mystified. Just as I was when social data platforms began claiming to do automated sentiment analysis. A few months of testing--basic software testing--and we could see that they hadn't. And that the results were strangely poor and erratic, even worse than you'd expect from models so constructed.
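For readers who want a sense of what that kind of basic software testing looks like, here is a minimal sketch in Python. The score_sentiment function is a hypothetical stand-in for whatever platform API is under evaluation, not any vendor's real interface; the test itself is nothing more than checking whether the model agrees with itself on paraphrases and flips on simple negations.

```python
# A minimal sketch of the "basic software testing" described above, applied to a
# sentiment analysis service. score_sentiment() is a hypothetical stand-in for
# whatever platform API is being evaluated; swap in a real call to test a vendor.

def score_sentiment(text: str) -> float:
    """Placeholder scorer returning a polarity in [-1.0, 1.0].

    Replace this body with a call to the system under test.
    """
    positive = {"recommend", "quickly", "resolved", "great"}
    negative = {"not", "slower", "broken", "terrible"}
    words = text.lower().replace(".", "").split()
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    return max(-1.0, min(1.0, score / 3.0))

# Pairs a human reader would score the same way (paraphrases)...
PARAPHRASE_PAIRS = [
    ("The support team resolved my issue quickly.",
     "My issue was resolved quickly by the support team."),
    ("This update made the app noticeably slower.",
     "After this update the app is noticeably slower."),
]

# ...and pairs whose polarity should flip (simple negations).
NEGATION_PAIRS = [
    ("I would recommend this product.",
     "I would not recommend this product."),
]

def run_checks(tolerance: float = 0.25) -> None:
    """Report how often the scorer is consistent on trivially related inputs."""
    consistent = sum(
        abs(score_sentiment(a) - score_sentiment(b)) <= tolerance
        for a, b in PARAPHRASE_PAIRS
    )
    flipped = sum(
        score_sentiment(a) * score_sentiment(b) < 0  # opposite signs
        for a, b in NEGATION_PAIRS
    )
    print(f"paraphrase consistency: {consistent}/{len(PARAPHRASE_PAIRS)}")
    print(f"negation flips: {flipped}/{len(NEGATION_PAIRS)}")

if __name__ == "__main__":
    run_checks()
```

Run as written, the placeholder passes the paraphrase checks and fails the negation check, which is roughly the shape of the erratic behaviour we saw: fine on surface rewording, lost the moment meaning depends on structure.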

What I want to know is: did anyone truly believe that these limitations could be overcome by size and force, or did they just use it as an excuse to get people to hand over their data? Someone said to me early on: "You might not like the ethics or the elegance of the solution, but they did it and now look at this amazing thing we have!"

But: what do we have? Hacker folks and businessmen are the only ones impressed, because they don't know any better and usually have no means to validate what their systems produce. Quicker ways to cut and paste and override intellectual property rights and privacy laws? The entire digital commons repackaged and sold back to us? I'm not sure. The massive cybersecurity vulnerabilities this has created will be talked about for decades. Perhaps with pen and paper and postage stamps, if the worst-case scenario happens. I'm optimistic the pain of this bubble will force some people to get smart, and that better solutions will emerge.

As an assistive tech it has some utility. But at what cost?

We are about to find out, aren't we?

#ai #hypebubble

Bruce LaDuke

Associate Director Medicare Reconciliation at Humana

7 months

Correct, the model was broken at its onset, but those early thinkers had such God complexes that they wouldn't listen to anyone and suppressed all opposing viewpoints, including my own. The correct model is both left and right brain, and the current 'solutions' are all half-brained. Nothing is going to work properly in that context. They are always going to hit the right-brain wall until they acknowledge that there is a half of the mind they don't understand. These are the same people that originated the term AGI. I hear people on LI talking about how AGI is going to cure cancer, enable space travel, etc. Well, all of those things are innovations, and you don't get to innovation by pure reasoning. You have to understand how reasoning and innovation interact. This is why the term AGI is nonsense and why all left-brain-only models will never work.

Sean Kempton

Founder at Tisquantum Limited

7 months

"did anyone truly believe that these limitations could be overcome by size and force or did they just use it as an excuse to get people to hand over their data?" To be fair, the arguments about why it doesn't work and will never work take quite a lot of effort to understand. It is much easier to take things at face value, especially when it seems that the investment community, the majority of the press and even governments, are doing exactly the same thing. The 'strength in numbers' thing only works, however, if the thing the majority believe in has some grounding in reality. If it doesn't you tend to get a peak of belief followed by an increasingly strong tail off in behaviour. This is basic animal behaviour (101, I believe my US colleagues call it). I came to the conclusion a long time ago that there is little point in arguing against the majority on this. There are two alternative ways forward - the first is to wait for the inevitable slow motion crash (happening now), and the second is, if you have an idea of a paradigm which understands and accepts the limitations and then presents a solution to them, then quietly start working on it.

Debbie Reynolds

The Data Diva | Data Privacy & Emerging Technologies Advisor | Technologist | Keynote Speaker | Helping Companies Make Data Privacy and Business Advantage | Advisor | Futurist | #1 Data Privacy Podcast Host | Polymath

7 months

Jennifer Pierce, PhD, brilliant. I often find myself shaking my fist at the screen when it is evident that folks “don’t get it” about AI's limitations. These systems are more like machetes than scalpels. They are terrible at nuance and outliers. They are not and will never be like a human. These flaws can be forgiven when working on low-risk projects, but in high-risk use cases, we are flirting with disaster.

