Open Letter To Pause AI Experiments Makes ‘ZERO’ Sense

The AI world is ablaze with controversy sparked by an open letter to pause giant AI experiments, particularly the training of models beyond GPT-4, unleashing a motley crew of fearmongers, doomsday prophets, and self-proclaimed experts. Amid the chaos, OpenAI, the company behind the firestorm, appears unfazed and is busy planning a world tour to study the impact of its innovations while rubbing elbows with influencers and power brokers.


To date, more than 2,500 people have signed this open letter. But why? The letter says that powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.

Further, it states that this confidence should be well justified and increase with the magnitude of a system’s potential effects.

Citing OpenAI’s blog post, Planning for AGI and Beyond, where it talks about having an independent review before starting to train future systems, alongside limiting the rate of growth of computing used for creating new models, the petitioners said that the time is now, calling on all AI labs – OpenAI and its ilk – to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4.

But why six months? The letter does not give a reason for this timeline. There are, however, plenty of hypotheses – some say it simply gives OpenAI’s competitors ample time to catch up with its technology.

A more logical way of looking at it: OpenAI released GPT-4 on March 14, 2023, almost exactly one year after GPT-3.5 (March 15, 2022). Before GPT-3, OpenAI took two years to launch a new version. With AI research accelerating, OpenAI would likely release the next version of GPT in just six months – ergo, six months.

Looking at the scale at which AI research is accelerating, many researchers are worried and believe the pause would help AI labs around the world reflect and reassess the situation, as well as avoid collateral damage. A few days ago, Goldman Sachs reported that more than 300 million jobs could be lost or diminished by generative AI.

The open letter is an effort in that direction, calling on AI labs to build accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal AI systems. If the pause cannot be enacted quickly, the petitioners say, governments should step in and institute a moratorium.

The outcome: A few days ago, the Italian government banned ChatGPT over privacy concerns. The Center for AI and Digital Policy has also filed a complaint arguing that OpenAI’s GPT-4 violates the Federal Trade Commission’s rules against unfair and deceptive practices.

Okay, so what happens after six months?

Let’s say, hypothetically, that all the AI labs decide to take a six-month sabbatical and not train models more powerful than GPT-4 – then what?

Hopefully, by then, all AI systems will be safe, and regulators across the world will have come up with AI policies and guidelines that can guarantee accurate, safe and transparent AI systems.

Stanford professor Erik Brynjolfsson believes that if a technology is potentially dangerous, it becomes imperative to spend more time, money and mental energy on improving its safety than on advancing its core capabilities.

To this, Meta AI chief Yann LeCun – who has not signed the open letter – responded that the aerospace industry arguably spends more on safety than on basic design, but since current AI systems have limited capabilities, asking for such safety measures is premature. “It’s as if we devoted enormous efforts to discuss aircraft safety before we knew how to build aeroplanes,” said LeCun.

Tesla chief Elon Musk – who has signed the open letter – countered that aerospace safety is overseen by the FAA because people have died due to shoddy manufacturing and maintenance, whereas there is no agency overseeing AI at all.

“Are you suggesting that R&D for basic AI technology should be regulated?” questioned LeCun, noting that there are already regulations and regulating agencies for *applications* of AI, such as driving assistance and medical image analysis.


Best Firms for Data Scientists >>

[Best Firms for Data Scientists is one of India’s biggest workplace certification platforms in data science. To nominate your organisation for the certification, you can register here.]


Top Stories of the Week >>

More of AI, Less of People: GitHub Fires India Engineering Team

Looks like Microsoft is at the epicentre of both AI advancements and the global recession. Recently, Microsoft-owned GitHub released the ChatGPT-integrated GitHub Copilot X, an upgraded version of its popular coding assistant. And then, a few days ago, the software code-hosting platform fired its entire engineering team in India. Read the full story here.


Stop Confusing Calculators with GPT-4

An age-old image of math teachers protesting against calculators has been mysteriously reappearing on the internet, making a case for the open letter to pause the training of AI systems more powerful than GPT-4. While many experts have already voiced their opinions on the issue, the question is – is the calculator analogy a fair comparison? Read to find out.


Google Brain Needs DeepMind

Google’s research arm seems stronger than ever. Recently, Google and DeepMind began collaborating on a project called Gemini. According to The Information, the two are ‘pausing grudges’ and joining hands to go beyond what OpenAI has achieved or plans to achieve. The project is said to be led by Jeff Dean, head of Google Brain. Read more here.


AIM Videos >>

AI Revolution in India Begins?

We are excited to announce that AIM recently unveiled the AI Forum for India, a community for AI developers and practitioners, in partnership with NVIDIA. The platform aims to help developers and AI professionals connect, discuss, collaborate, share projects and more.

Be part of India’s AI revolution now! Register here.


AIM Shots >>

Twitter open-sourced its recommendation algorithm, which decides what appears in users’ timelines. Click here to access the code.

According to an internal email accessed by CNBC, Google is in the process of reorganising its Google Assistant team to prioritise the development of its conversational AI technology, Bard. Read more here.

Wipro recently elevated Badri Srinivasan as the head of its India and Southeast Asia businesses within the APMEA (Asia Pacific, Middle East, India, and Africa) Strategic Market Unit.

Google Cloud recently partnered with AI coding platform Replit. With this, Replit gets full access to Google Cloud infrastructure and Google’s machine learning platform, Vertex AI. Read: Google and Replit’s Quest to Become the Next Copilot X

Google’s antitrust case takes a new turn. Recently, the NCLAT upheld a fine of INR 1,337.76 crore ($180 million) imposed by the country’s competition watchdog, CCI, on Google for abusing its dominant position in the Android mobile ecosystem. Read more here.

After launching the GPT-4-powered Copilot for developers and businesses, Microsoft recently unveiled Security Copilot, a copilot for cybersecurity professionals.

Disney laid off its metaverse team – roughly 50 members – which was in charge of next-generation storytelling and consumer experiences.

Chinese search engine giant Baidu recently scrapped the launch of its multimodal large language model, Ernie.

KRISHNAN N NARAYANAN

Sales Associate at American Airlines

1y

Thanks for sharing

José H. R.

Data Engineer · Python Developer

1y

You can distinguish people who know about AI by their opinion on this issue. Those who know don't want to stop it. Sad to see "experts" in Spain promoting a ban.

