Artificial Intelligence #42: Can intelligence emerge on its own if we do nothing else but keep building larger models based on simple components?

Hello all

Welcome to #artificialintelligence #42

Some announcements: in 10 months, we crossed 40K subscribers. Thanks, as ever, for your support!

Also, this week, in addition to Oxford, I accepted a position as professor of AI and a seat on the advisory board at the Johannesburg Business School.

And finally, we launched a new metaverse course at #universityofoxford (great to work with Lars again!). We take a more complex perspective, i.e. the ability to model virtual worlds based on a physical ecosystem. The course is currently built around large-scale urban models, climate change and engineering problems with partners, using tools from Meta, Unity and Microsoft.

I welcome your thoughts.

In this newsletter, I would like to cover a unique perspective – the mystery of the large-scale AI models – a title inspired by one of my favourite childhood books, The Mystery of the Missing Necklace by Enid Blyton.

Late last year, I read a sentence so amazing that I had to read it again.

I don’t think many have realised the significance of it.

Or perhaps I am reading too much into it

The sentence was: “We thought we needed a new idea, but we got there just by scale,” said Jared Kaplan, a researcher at OpenAI (source below)        

Let's put this in context.

2021 was the year of massive AI models

Following on from GPT-3 in 2020, 2021 saw many more large language models that could do amazing things. The power of large language models to generalise across tasks does not lie in a new algorithm; rather, it is largely a function of scale (parameters and data). The more parameters a model has (GPT-3 has 175 billion parameters, a number already surpassed by other models), the more powerful it is.
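
To make the "more parameters, more power" claim concrete, here is a minimal sketch of the kind of power-law scaling curve reported in the scaling-law literature; the functional form and the constants below are illustrative assumptions, not figures from this newsletter.

```python
# Toy sketch of a power-law scaling curve: test loss falls smoothly as the
# parameter count grows. The constants are illustrative assumptions only.
import numpy as np
import matplotlib.pyplot as plt

def loss(n_params, n_c=8.8e13, alpha=0.076):
    """Assumed power law: L(N) = (N_c / N) ** alpha."""
    return (n_c / n_params) ** alpha

n = np.logspace(6, 12, 200)  # 1 million to 1 trillion parameters
plt.loglog(n, loss(n))
plt.xlabel("Model parameters N")
plt.ylabel("Test loss (illustrative)")
plt.title("Bigger models, lower loss: scale rather than new algorithms")
plt.show()
```

On this picture, capability improves smoothly and predictably with size, which is exactly why "we got there just by scale" is such a striking statement.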

But if scale alone is enough for generalisation across problem domains, that is a game changer, because it means we do not fully understand how exactly that intelligence originates. Other laboratories have also reported the same results. The results are impressive, but the researchers do not fully understand why they are seeing this 'AGI-like' ability to generalise across problem domains.

Why do I think this is so interesting?

Because complex systems display a fascinating property called emergence: an entity is observed to have properties its parts do not have on their own, properties or behaviours which emerge only when the parts interact as a wider whole.
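
A classic toy illustration of emergence (not from the article, just a standard example) is Conway's Game of Life: each cell follows two trivial local rules, yet a "glider" pattern travels across the grid, a behaviour no individual cell has on its own.

```python
# Conway's Game of Life: trivially simple local rules, emergent global
# behaviour (a "glider" that moves diagonally across the grid).
import numpy as np

def step(grid):
    """Apply the Game of Life rules once to a 2D grid of 0s and 1s."""
    # Count live neighbours by summing the eight shifted copies of the grid.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth with exactly 3 neighbours; survival with 2 or 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

grid = np.zeros((10, 10), dtype=int)
grid[1:4, 1:4] = [[0, 1, 0], [0, 0, 1], [1, 1, 1]]  # the glider pattern
for _ in range(4):
    grid = step(grid)
print(grid)  # after 4 steps the glider has moved one cell diagonally
```

None of the rules mention "movement", yet movement emerges from the interaction of the parts; the question in this newsletter is whether something analogous is happening as language models scale.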

This Nature article, "How brainless slime molds redefine intelligence", describes intelligence in mold: simple but ancient 'brainless' slime molds (an organism that is at least 600 million years old) can solve complex problems, such as navigating a maze, without a central governing intelligence.

So, could the same be happening with AI? In other words, can intelligence emerge on its own if we do nothing but keep building larger models based on simple components? Are large language models emergent?


Source: 2021 was the year of monster AI models

Image source: Enid Blyton – The Mystery of the Missing Necklace


Dr. PG Madhavan

Digital Twin maker: Causality & Data Science --> TwinARC - the "INSIGHT Digital Twin"!

1y

Scale and interaction with its environment - that will get you there! Just like the human brain does...

Ibrahim El Badawi

GovTech | Innovation | Storytelling | Leadership

2y

Congrats on the 40K milestone! Your question and the article reminded me of the Facebook story a few years ago, when they decided to shut down one of their AI systems because the chatbots started to speak in their own new language that used English words but could not be understood by humans. So maybe yes, an intelligence can emerge, but what kind of intelligence? And for what purpose?

Michael Zeldich

President at Artificial Labour Leasing, Inc

2y

Even if "intelligence emerges on its own", it will be hard to recognize, because there is no definition of "intelligence" that could be formulated. Any definition of "intelligence" will not belong to objective reality, because the content of such a definition will reflect the subjective criteria of its provider. The behavior of a system is a result of its organization. Behavior which could be tagged as "intelligent" is achievable for subjective systems. Subjectivity is the result of the isolation of a system's internal space from the environment, and the ability to form behavior in relation to the environment in its own internal space.

Marco Abiuso

CTO - R&D at Esplores - The Ultimate 'Big Data' Exploration Tool

2y

Probably it's not the full story, but I agree with the idea of new behaviour emerging just by increasing size. And there is already proof of that: https://www.quantamagazine.org/computer-scientists-prove-why-bigger-neural-networks-do-better-20220210/ For me, what suggests we're on the right track is the serendipity of it: it was unexpected at the beginning! Don't you agree Ajit Jaokar?

Krishnapriya C.R

Director of Engineering at Arm

2y

Congrats on your new position, Ajit Jaokar. Intriguing article; as I DM'd you, it will be interesting to see if a higher emerging intelligence can also explain its decisions, thus naturally solving the AI explainability issue :)
