What does "#artificialintelligence" do? Post No. 17 - What came first and fallibility
AI is developing very rapidly. We have all probably had a chance to "experiment" with ChatGPT. It does seem smart, and we may attribute some intelligence to it. But is it really intelligent?
We are on the path to general intelligence, yet here is an example of how we know we are not there yet. General intelligence, which we attribute to humans, is the ability to take input from our surroundings and extrapolate beyond the factual details. For example, if we asked a current AI to describe the pictures above, it would be correct in stating "a wall and a tree" or "a tree and a wall." However, the AI would not be able to tell us which came first, the tree or the wall. We humans can.
Here is another example of general intelligence. On a recent flight, I sat next to a 5-year-old boy. When the meal was served, his father purposely didn't help him get the folding table out. It took the boy a few minutes of playing and toiling, but he slowly figured it out. I asked the father whether this was the boy's first flight, and he confirmed it was. That is general intelligence.
So, today I read about an Israeli company, DLR Robotics, that is developing "watch & learn" robots: robots that pick up new skills in seconds.
This is certainly on the path to general intelligence.
There is some danger in that. We humans are well aware of the fallibility of our own intelligence. We are also quick to jump to conclusions and make assumptions about the world around us, or about our perceived understanding of it. Yet as a society we have developed rules to help us cope with this fallibility.
How will we handle it when AI is fallible?