With GPT-4, dangers of ‘Stochastic Parrots’ remain, say researchers. No wonder OpenAI CEO is a ‘bit scared’


Welcome back to VentureBeat Weekly!

In this week’s AI Beat, I looked back on what was an epic week in generative AI. After an incredible array of announcements from Google, Microsoft and Anthropic, and the launch of OpenAI’s GPT-4 model, I focused on what happened on Friday: a look back at an explosive paper that now seems to foreshadow the current debates around the risks of LLMs such as GPT-4. That is, the March 2021 AI research paper, ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’

Those dangers, said the researchers, remain. And in the context of OpenAI CEO Sam Altman’s comments over the weekend about being ‘a bit scared’ of it all, they are important to highlight.

Plus:

  • How to chat with OpenAI's GPT-4 and Anthropic's Claude right now
  • Generative AI output could be eligible for copyright protection. But there's a catch.
  • PyTorch 2.0 brings new fire to open-source machine learning

— Sharon Goldman, Senior Editor, VentureBeat

This is The AI Beat, one of VentureBeat’s newsletter offerings. Sign up here to get more stories like this in your inbox every week.


It was another epic week in generative AI. Last Monday, there was Google’s laundry-list lineup, including a PaLM API and new integrations in Google Workspace. Tuesday brought the surprise release of OpenAI’s GPT-4. On Thursday, Microsoft launched Microsoft 365 Copilot, which it said is meant to ‘change work as we know it.’

This was all before the comments by OpenAI CEO Sam Altman over the weekend that, just a few days after releasing GPT-4, the company is, in fact, ‘a little bit scared’ of it all.

By the time Friday came, I was more than ready for a dose of thoughtful reality amid the AI hype.

I got it from the authors of the March 2021 AI research paper, ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’

Two years after its publication led to the firing of two of its authors, Google ethics researchers Timnit Gebru and Margaret Mitchell, the researchers decided it was time for a look back at an explosive paper that now seems to foreshadow the current debates around the risks of LLMs such as GPT-4.

Read the full story.

Read more from Sharon Goldman, Senior Editor, on VentureBeat.com.



VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
