ArtificialIntelligence #108: The moral panic behind AI and its implications for innovation

This was the week the moral panic around AI came into full force on both sides of the Atlantic

The phrase 'moral panic' has a specific meaning in history:

A moral panic is a widespread feeling of fear, often an irrational one, that some evil person or thing threatens the values, interests, or well-being of a community or society. It is "the process of arousing social concern over an issue", usually perpetuated by moral entrepreneurs and mass media coverage, and exacerbated by politicians and lawmakers. Moral panic can give rise to new laws aimed at controlling the community.

Now, why does this sound familiar?

Sam Altman

I was a bit disappointed by the testimony of Sam Altman.

There seems to be a collective focus on saying 'AI is bad'.

No one seems to be saying that AI is good for US, UK, and EU companies.

To put it another way: we are no longer talking about 'AI Superpowers', Kai-Fu Lee's book, which argued that China was the next AI superpower.

GPT changed that by moving the mantle of leadership firmly to the USA.

Yet we seem intent on banning the pioneers of AI (OpenAI, Microsoft, Google, and Facebook / #meta, as well as all the #opensource companies in #LLMs).

I was surprised to see Gary Marcus's statement below.

[Image: statement by Gary Marcus]


What does he mean by a new breed of AI systems?

He means symbolic AI (https://lnkd.in/e_UAcMyv), which he has long been talking about.

It is an idea that remains unproven, has not taken off at scale in decades, and has been superseded by the efforts of Hinton, LeCun, Bengio, et al.

I think this point needs to be made: critics may seem wise, but they should come up with solutions (other than regulatory ones), and especially ones that work!

I think the conversation should be more balanced; there are very few innovation-centric voices in AI today.

EU: open source, LLMs, and cybersecurity

The EU is never to be outdone when it comes to regulation :)

Supposedly, the AI Act targets US open source software:

https://lnkd.in/eNaMYpGs

And

https://medium.com/coinmonks/amended-eu-ai-act-takes-aim-at-american-open-source-ai-models-and-api-access-c515fe47e3d2

There is some ambiguity in this regulation, but there is a similar issue with the CRA (the Cyber Resilience Act) and open source.

I honestly don't see how this will work. As The Register says, "The road to hell is paved with good intentions": the issues raised below are very valid with respect to the CRA, and the same concerns affect almost anyone who uses open source and LLMs.

Ironically, open source has been one of the fastest-growing segments of LLMs, and also the most transparent!

https://lnkd.in/egeneDwV

All this creates so much ambiguity that companies, large and small, will simply do what Google did with #bard (at least for now): skip the EU altogether.

https://lnkd.in/eWvkRHsZ


Some Notes

a) The views presented here are my own and are not associated with any organisation, past or present, that I am associated with.


b) I am not arguing for no regulatory oversight. I am arguing that we need to balance the conversation.


c) I do not think anyone really knows exactly what the situation will be. For example, a Stanford study suggests that the so-called emergent properties of AI may be researchers hallucinating (pun intended!): https://lnkd.in/eqFtXcdh

The respected TechTalks blog by Ben Dickson also makes the same point: "The emergent abilities of LLMs are not what they seem".

Yet everyone is rushing to regulate, like a witch hunt in a moral panic.


d) Finally, there may be some merit in symbolic AI, more specifically in hybrid models: Stephen Wolfram has proposed symbolic models for GPT through the Wolfram Alpha plugin. Again, with regard to symbolic models, we need to put this in context, i.e. they are yet to be proven at scale.

e) I would much rather see a more nuanced (and innovation-focused) discussion of technical solutions that provide competitive advantages to US, UK, and EU companies, rather than a discussion focused only on banning things.

f) As a teacher of AI, I want my students to master the most disruptive technology to create the best innovations for the future. Today, that comes from OpenAI, Microsoft, Meta, and Google. I may not always agree with everything these companies do, but as a society we will benefit from the innovation arising from them.

g) Solutions exist; for example, Stephen Wolfram has been speaking of combining symbolic approaches with GPT. But these have yet to be proven at scale.

The operative phrase is 'at scale'. Even if such an approach did scale, there is no guarantee that it would embody a universal set of values, as Gary Marcus seems to be advocating. We saw this with the Internet: its culture started as the culture of Silicon Valley, but over time it fragmented as every government and region applied its own cultural perspective, which essentially fragmented the Internet itself. So the idea that we will magically get a values-based system if we abandon the current neural-network-based models in favour of some kind of symbolic model is flawed.
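The hybrid (neuro-symbolic) idea behind approaches like the Wolfram Alpha plugin can be illustrated with a minimal sketch. This is an assumption-laden toy, not any real API: a hypothetical router sends purely arithmetic queries down a deterministic symbolic path, and stubs out the neural path where a real system would call an LLM.

```python
import re

def route(query: str) -> str:
    """Toy neuro-symbolic router (illustrative only).

    Purely arithmetic queries go to a deterministic symbolic path;
    everything else is stubbed as an LLM call.
    """
    expr = query.strip().rstrip("?")
    if re.fullmatch(r"[\d\s+\-*/().]+", expr):
        # Symbolic path: exact, verifiable computation.
        # (eval is restricted to the validated arithmetic expression.)
        return str(eval(expr, {"__builtins__": {}}, {}))
    # Neural path (stub): a real system would call an LLM here.
    return "[LLM] " + query

print(route("(2 + 3) * 4"))   # symbolic path -> 20
print(route("What is AI?"))   # neural path (stubbed)
```

In the real plugin, the symbolic path is Wolfram Alpha itself and the neural path is GPT; the sketch shows only the routing idea, and says nothing about whether such hybrids work at scale, which is precisely the open question above.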

Ivan K. Ivanov, PhD

President and CEO, CFO, CSO etc. at R.E.D. - Retired and Extremely Delighted

1y

JIT - Just In Time to start the debate on AI and I expect follow-up discussions leading to reasonable and meaningful decisions and maybe even legislation.

Maryann F.

Project Coordination | Product Management

1y

I spoke with a course mate the other day and they admitted they feel cheated by recent progress in AI. They had expected 'more' from ChatGPT and were disappointed that it was 'just' a highly sophisticated LLM. I asked them what 'more' meant to them and they struggled to give a coherent response. I think it's stemming from the same place as the moral panic. There is an inherent problem with educating the general public on the "what is" and "what is not" of AI. As a result, large swaths of society largely view AI as a form of technological magic. And magic is hard to trust. So it's no surprise then that public opinion of AI is at best mixed, and at worst suspicious. Their interaction with this space comes from pop culture and click-bait articles. If we want people to truly comprehend this space, we've got to find ways to socialize it. This may include increasing funding, but also entails a holistic and comprehensive approach to AI education. By creating an environment in which AI is viewed with honest eyes and healthy, but not alarmist, levels of trepidation, we can demystify this space. This will make it more common, less unknown, and as a result less scary.

Nikhila Cholleti

Driven Professional Ready to Excel in New Challenges

1y

Insightful piece! A good reminder to avoid falling into the trap of moral panic surrounding AI. I opine that creating platforms for open dialogue, involving diverse stakeholders, and establishing collaborative initiatives can enable the development of responsible AI. By considering various perspectives and engaging in thoughtful conversations, companies can identify strategic opportunities, enhance their competitive advantage, and drive innovation in the AI landscape.

Munish Singh

MI Dashboard Specialist, CEO Bots, Future Tech Author and Podcaster

1y

I carefully observed the entire Congress hearing and discovered that not a single participant discussed the idea of banning AI. Surprisingly, Altman, in his responses, advocated for government regulation and oversight. During an exchange, one of the senators even proposed that Altman could potentially manage the oversight agency. The discussions primarily focused on the current state of AI and its usage, indicating that there is little cause for concern. The committee chairman concluded by emphasizing that it was unprecedented for the tech industry and corporate America to seek regulation voluntarily. Any conversations about banning AI had already concluded weeks ago, and even then, there was a general consensus that banning would not be effective. It is widely acknowledged that AI has already been unleashed and cannot be put back in the metaphorical genie box. Musk himself expressed scepticism about the effectiveness of the recently proposed 6-month AI moratorium during his CNBC interview yesterday.

Lindsay Hiebert

Product Manager, Invisinet, First Packet Authentication (FPA) Zero Trust Network Access security technology and capabilities. Where Zero Trust Begins! Let's Connect. Gen AI Expert, CISSP certified, M.B.A.; M.S.

1y

Thanks Ajit! Great observations and positive thoughts that you share in your article! Kudos! Please see a relevant and related LinkedIn article published by Christina Montgomery on this topic who was also quoted in the news article today. Christina’s post drew many comments with ample references shared by others and support for a reasonable and measured approach as you are conveying here. Thank you! https://www.dhirubhai.net/posts/christina-montgomery-8776b1a_ai-activity-7064307246796546049-CFAE?utm_source=share&utm_medium=member_ios
