Should powerful AI be developed in the open or behind closed doors?

Artificial intelligence is being hailed as a life-changing technology with the potential to do enormous good or horrendous harm. On the one hand, some believe it could help cure cancer or mitigate global warming. On the other, a number of people, including tech billionaire Elon Musk, have warned that AI could outsmart humans and decide we're no longer necessary. Ian Hogarth, who sold his AI startup to Warner Music Group, is worried that AI could eventually become "God-like".

With this in mind, should the AI labs building the most advanced AI systems share all the intricate details of their work for the rest of the world to see? Or should they keep certain lines of code or other data to themselves?

"This debate between open versus closed AI is the crucial discussion to have at the moment," says Philip Meissner , a professor at ESCP Business School in Berlin.?

OpenAI, one of the world's top AI labs and the creator of the GPT (generative pre-trained transformer) systems, has recently changed its tune in the open vs closed debate. OpenAI is backed by LinkedIn parent Microsoft.

Ilya Sutskever, OpenAI's chief scientist and co-founder, told The Verge last month that his firm's past approach of openly sharing everything was wrong, citing competition and safety as the two main reasons.

"It’s competitive out there," said Sutskever. "GPT-4 is not easy to develop. It took pretty much all of OpenAI working together for a very long time to produce this thing. And there are many, many companies who want to do the same thing."

OpenAI is competing with the likes of London AI lab Google DeepMind, which is owned by Google parent Alphabet, as well as startups such as Cohere, Adept and Inflection AI.

On safety, Sutskever added: "These models are very potent and they’re becoming more and more potent. At some point it will be quite easy, if one wanted, to cause a great deal of harm with those models. And as the capabilities get higher it makes sense that you don’t want to disclose them."

One issue with holding back information is that many AI researchers like to share the full details of their work with the wider community, as doing so often leads to more citations, potential collaborators, jobs and other opportunities. Emad Mostaque, the CEO of London AI lab Stability AI, knows this and recently tried to take advantage of OpenAI's pivot.

"Open offer to anyone @OpenAI who actually wants to work on Open AI," he tweeted. "We will match your salary, benefits etc but you can work on any open source AI projects you like, ours or others."?

Big tech vs AI startups

For some, OpenAI's decision to withhold information is a contradiction in terms.

"OpenAI was intended [to] structure artificial intelligence in an open way. It was founded to give everybody in the world personalised AI. If this vision became reality, AI could be the great equaliser. It could give people agency, and freedom," said Meissner. "That doesn't look like it's going to happen now, does it?"?

Big tech companies have deep pockets, and the amount of money that Microsoft and Google can pump into development far exceeds what an early-stage AI startup can raise, even amid the current AI boom. There is an argument, therefore, that big tech's interest in generative AI could lead to both technological breakthroughs and massive adoption via existing client bases.

But there are competition concerns when it comes to big tech and AI.


"If we only have one central player, or two or three central players, then basically the rest of the economy pretty much depends on their benevolence," he said.

But Meissner is quick to point to European generative AI startups such as Aleph Alpha, a Berlin-based OpenAI competitor that is committed to remaining open source. These firms may not be grabbing the headlines like the rivalry between the main players, said Meissner, but the work they are doing is just as important.

"There's much broader competition [...] and I think that will continue in the future," he said.?

If anything, the experience of OpenAI over the last six months is evidence of the power of small players to disrupt entire industries in a short space of time, he said.

"OpenAI is a relatively new company. It was started with relatively few people, with relatively few resources, and they have now built a product which is rather scary for a trillion dollar company, namely Google," Meissner said. "So I wouldn't underestimate the power of these small teams to create disruptive change."?

Still room for smaller players

For critics, big tech firms' investments in AI startups like OpenAI risk making the field a closed-off, highly commercialised space, rather than a technology that could ultimately help mankind.

But Herman Kienhuis, managing partner at Curiosity, an Amsterdam-based VC firm, said that while that risk undoubtedly exists, there will always be room for smaller players.

Kienhuis' portfolio includes Strise, which helps fintech firms meet know-your-customer (KYC) standards and prevent money laundering. Another, freesi, analyses air quality data in buildings and suggests ways to improve energy efficiency and indoor climate. Founded in 2017 and 2019 respectively, both firms predate the current AI boom by some years.

"It is a natural cycle. We see it every time [...]. [Big Tech] companies have deep pockets, but there's always room for innovation and for new companies to compete with [major firms] that may be slower and less innovative," he said.?

The buzz around generative AI has funnelled vast quantities of cash into AI generally. For solid startups with a good proposition, this can only be a positive, Kienhuis said.

"It is a good thing from the investor side, but also from the company's side. There is more interest from property companies, from financial companies, it generates this acceleration of adoption. And that's good for the companies we invest in," he said.?

Wendy Mason Smith

Personal Coach and Writer. I combine my deep life and professional experience with empathy to help my clients achieve happiness and success, not just one or the other.

1y

I think we need care in how we use language. Chatbots are not generative AI, although they have the power to learn. I guess the best response right now is to learn more about them. There is an IBM course on AI online at Coursera that I've just started to work my way through. Unfortunately, I can't see how developing in secret would work anyway. I can't see how you could possibly contain AI in that way.

Tom Berry

Independent Tuesday Walkers

1y

Couldn't care less. There's a lot of hype at present talking it up. If it helps solve poverty, homelessness, war, etc., then great. Otherwise, it's just another technical facility to use or not.

Insightful, thanks for sharing.

Liam Wright MIIOM

Senior Semiconductor Resilience Engineer and Obsolescence Management Practitioner

1y

Up to now, the biggest constraint on AI has not necessarily been the software (or algorithm), but how a conventional von Neumann computer works, in that it is designed to execute a set of instructions in a specific order. What has changed is that the power of the GPU has allowed computing to be distributed into many threads, which in turn has made parallel processing much more effective. So without the correct hardware, the algorithm will probably not work. However, the real problem isn't the algorithm; it is the source data used to train it. There is a need to ensure that sensitive data is kept off the Internet so it cannot be accidentally used by AI to draw unsafe conclusions.
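The distinction the comment draws between sequential execution and parallel processing can be sketched in a few lines. This is a toy illustration, not how GPU kernels are actually written: the function names and the thread-based approach are purely illustrative, and real neural-network speedups come from thousands of GPU cores doing the same arithmetic on independent slices of data at once.

```python
# Toy sketch: the same per-element workload run as one sequential stream
# versus split into independent chunks handed to parallel workers.
# Because no chunk depends on another, the work is embarrassingly parallel,
# which is the property GPUs exploit at massive scale.
from concurrent.futures import ThreadPoolExecutor

def apply_weights(chunk):
    # Stand-in for the per-element arithmetic a neural-network layer performs.
    return [x * 0.5 + 1.0 for x in chunk]

def sequential(data):
    # One instruction stream processes everything in order.
    return apply_weights(data)

def parallel(data, workers=4):
    # Split the data into independent chunks and process them concurrently.
    # Threads are used here only for simplicity; in CPython, real speedups
    # for numeric work come from processes or GPU cores, not threads.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(apply_weights, chunks)
    return [x for chunk in results for x in chunk]
```

Both paths produce identical results; the parallel version simply removes the "specific order" constraint the comment describes, which is what makes the hardware shift matter.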
