A Quick-Draft Response to “Pause Giant AI Experiments: An Open Letter”

Prominent ‘experts’ recently released an open letter calling for a ‘pause’ on AI development, citing ambiguous risks and yet-to-be-proven ‘dangers’. I am surprised that exceptionally intelligent persons such as Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak, Yuval Noah Harari and others have signed it. Anyone can apparently sign on: https://futureoflife.org/open-letter/pause-giant-ai-experiments/. I wish they had a sign-on-with-comments option to accommodate partial support. If they had that option, I might have joined in too – with caveats as described below.

Before expressing disagreement with parts of this letter, let me say that I agree with this set of experts that AI could be misused or abused and could present future dangers. However, …

Artificial Intelligence (AI) is in itself neither evil nor good, neither safe nor dangerous. AI is simply a cluster of “technologies that mimic the functions and expressions of human intelligence, specifically cognition and logic” (https://scholars.org/contribution/call-proactive-policies-informatics-and).

Bengio, Russell, Musk, Wozniak and others, known for their exceptional intelligence and accomplishments, are probably well aware (or should be) that AI cannot and should not be paused or stopped. Neither should segments of AI that some perceive as ‘black boxes’ be paused. Instead, these experts should call for better governance of AI and support accelerated AI development.

It would have been better to exclude the emphatic “PAUSE” from the open letter’s proposed solution. WHY? Here are eight reasons that quickly come to mind:

1. Motivation and purpose matter: We can use AI to help society or to cause harm. Applications of a technology must be differentiated from the development of the technology itself.

2. Non-compliance from the highest-risk producers: Calls to pause, slow or stop AI development, if pursued and given the force of policy, will deter only those who intend to develop AI as a science or use AI for good; others (probably including malevolent actors) will continue development anyway.

3. Compliance from low-risk producers: When developers of AI foundation models and multimodal models on the compliant side pause or stop, they may fall technologically behind potentially malevolent forces who keep developing such technologies. A sufficiently long pause could put the compliant side at a lasting, perhaps irreversible, disadvantage.

4. Agnostic policy makers: Pausing AI development will not ensure the implementation of safeguards or appropriate policy development. There must be more effective ways to catalyze and promote safeguards and proactive AI policies (https://scholars.org/contribution/call-proactive-policies-informatics-and).

5. Implausibility of key suggestions: The open letter proposes to pause the AI race, yet its core suggestions appear implausible, or at best a way to establish pseudo-control mechanisms: for instance, the suggestion that experts should develop protocols that “ensure that systems adhering to them are safe beyond a reasonable doubt,” with ‘external’ experts ensuring this. Appointing tigers to guard deer?

6. Limited logic: The expectation that a general-purpose technology such as AI should be paused, on the precedent of partially implemented pauses to highly specialized endeavors such as human cloning, gain-of-function research and eugenics, appears fundamentally illogical. The open letter has an appearance of sophistication but appears to be without clear philosophical, logical or pragmatic foundations.

7. Unnecessary and unsustainable: The stated goals can be accomplished without the pause. In fact, there is scope to argue that the proposed pause could do more damage than good, as policy makers may come to treat the ‘pause’ as a repeatable solution, to be applied periodically and probably whimsically to perceived problems with AI technologies.

8. Untimely: Humanity is at a unique point in time; history is being made as we enter a new artificially intelligent era. This is the time for aggressive pursuit of AI for good, without pausing or even slowing down. A pause may allow for some management of corporate valuations and deal making that profits a few, with little benefit for the masses. The emerging multipolarity of global forces mandates an immediate and aggressively continuing pursuit of AI philosophy, AI science, AI technologies and AI education.

Please do not pause AI.

(Only sufficiently evident misuse or abuse of AI technologies should be paused or stopped.)

Probably the MOST important need now is to ensure full AI TRANSPARENCY. All data (training data in particular), foundation models and key algorithms must be mandatorily open sourced, and opacity should be penalized. Production-level AI systems can remain proprietary to support for-profit endeavors and entrepreneurial ventures. Most likely, it is the opaqueness of AI systems that will eventually breed bias, manipulation, hidden risks that foster destructive black-swan events, and many other adverse dynamics. Hugging Face is a great example of promoting transparent AI.
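To make the transparency point concrete, here is a minimal sketch of what openness enables in practice: anyone can programmatically fetch and audit an openly published model’s documentation and architecture configuration from the Hugging Face Hub. This is my illustration, not something from the open letter; the model id bigscience/bloom-560m is just an arbitrary example of an openly released foundation model, and the snippet assumes the huggingface_hub Python package is installed.

```python
# A minimal sketch of "transparent AI" in practice: with openly
# published models, anyone can download and inspect the model card
# and the architecture configuration, with no special access required.
# Assumes `pip install huggingface_hub`; the model id below is an
# illustrative choice of an openly released foundation model.
from huggingface_hub import hf_hub_download

REPO_ID = "bigscience/bloom-560m"  # example of an openly released model

# Fetch the model card (documents training data, intended use, limits)
# and the config (layer counts, hidden sizes, vocabulary size, etc.).
card_path = hf_hub_download(repo_id=REPO_ID, filename="README.md")
config_path = hf_hub_download(repo_id=REPO_ID, filename="config.json")

with open(config_path) as f:
    print(f.read())  # the full architecture is open for anyone to audit
```

Contrast this with a fully closed system, where none of these artifacts can be independently inspected.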

Maybe ‘AI Summer’ is here, but there is a hidden danger of falling behind in the global arena, and we can “reap the rewards” for “the clear benefit of all” WITHOUT a pause. It is the pause that has the most potential to cause us to slumber “unprepared into a fall”.

Anyway, AI is out of the bag: it cannot and will not be stopped or paused, for better or for worse. Our best bet is to develop proactive, before-the-event policies, risk management frameworks and safeguards, along with aggressive and accelerated development by the compliant side of AI developers, to ensure that the ‘good’ side stays ahead. Let the wisdom of the crowd prevail.

Prof. Samuel @ Rutgers, The State University of New Jersey, New Brunswick

This article was first posted on https://aboveai.substack.com/p/a-quick-draft-response-to-the-march/

Other reference links, #s and @s

https://scholars.org/contribution/call-proactive-policies-informatics-and

https://www.linkedin.com/company/rutgers-masters-in-public-informatics/

#ArtificialIntelligence #informatics #LLM #GenerativeAI #generativeart #FutureOfWork #AI #analytics #datascience #businessanalytics #NLP #NLU #NLG #ML #responsibleai #PublicInformatics

The Wall Street Journal, Reuters, WIRED, Newswise, Scientific American, ScienceDaily, AIMAGAZINEBOOKS, MITTECH LIMITED, Forbes, KDnuggets, Bloomberg News
