Bracing for AI Armageddon

Last week the Future of Life Institute, a non-profit organisation dedicated to protecting humanity from tech-instigated Armageddon, published an open letter calling on AI researchers to slow down.

The institute wants computer labs to stop work on cutting-edge AI tools for at least six months and to use that time to come up with rules for how to safely develop the technology.

So far the letter has attracted over 5,500 signatures, including from prominent individuals such as Tesla and Twitter CEO Elon Musk, Apple co-founder Steve Wozniak and former US presidential candidate Andrew Yang.

But it seems unlikely that any of the labs developing AI tools in any serious way are going to listen.?

Sam Altman, CEO of OpenAI, the company behind ChatGPT and Dall-E, did not sign the letter. And neither did Google’s CEO, Sundar Pichai, although he did tell the New York Times that he thought ‘the spirit of [the letter] is worth being out there’, even if he didn’t agree with everything it said.

The Future of Life Institute’s suggestion that governments should mandate a moratorium on AI research if the labs themselves are slow or unwilling to do so is probably just as fanciful. That said, regulators are beginning to pay more attention to the technology.

The European Commission’s Artificial Intelligence Act, which categorises AI systems into different tiers of risk and regulates them accordingly, is being ironed out, and last week the UK government published its own white paper proposing how it would like to deal with AI.

Meanwhile, Italy’s data protection authority has gone a bit feral and temporarily banned ChatGPT because of privacy concerns, a move which even the country’s deputy prime minister described as ‘excessive’.

The Financial Times’ John Thornhill has compared most efforts at regulating AI to ‘waving a small red flag at an accelerating train’, and suggests that labs will instead be kept in check by the fear that people will revolt against companies they think are not acting in their interest.

Levi’s got a taste of that backlash late last month when it announced that it was going to use AI-generated models to advertise its clothes and framed the innovation as a way to improve representation. The brand was criticised by people who thought it was stiffing real non-white models out of work, and it had to issue a clarification.

Still, the past decade or so suggests that relying on the public to know which technologies best serve its interests is not a fail-safe system.


What are the biggest challenges and opportunities in advertising in 2023? We interviewed leading industry executives to find out.

Download the Contagious Radar report for free to learn about marketing leaders' priorities in the year ahead.

Photo by Gertrūda Valasevičiūtė on Unsplash

To read this week's Contagious Edit in full, click here.

To receive the newsletter in your inbox every Wednesday, click here.
