Artificial General Intelligence and the Kitchen Sink
What exactly is artificial general intelligence? Some would have you believe it's some mystical science that's beyond our reach. You, me, and the rest of the world use general intelligence every day. Is it really that difficult? Not if you use the right approach.
It's important to acknowledge that people don't agree on the definition of intelligence, or on what constitutes intelligence or artificial intelligence. My definition is pretty straightforward:
Information is knowledge. Intelligence is applying knowledge.
That's it. It covers all forms of AI - past, present and future. But making that next big leap to AGI requires the ability to apply knowledge across a broad range of situations, problems and tasks, just as humans do.
So what does any of this have to do with the kitchen sink?
People take for granted the general intelligence they use every day to perform basic tasks: getting ready for work, doing laundry, cooking, and washing dishes. Most people don't even consider what they do to be a form of intelligence, but there's a lot more thinking going on than people realize. Let's use dishwashing by hand as an example.
Before you decide to do the dishes, you usually know or see that one or more dishes need to be washed. Do you go through the trouble of washing just one item? How about two? Take a second to think about the reasons why you do or don't.
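To make that concrete, here's a rough sketch in Python of the kind of rule at work. The item count, threshold, and context flags are my own illustrative assumptions - not a claim about how a brain (or an AGI) actually weighs the choice.

```python
# A rough sketch of the "is it worth washing now?" decision.
# The threshold and context flags are illustrative assumptions.

def should_wash_now(dirty_items: int, expecting_guests: bool, out_of_clean_dishes: bool) -> bool:
    """Decide whether washing dishes is worth starting right now."""
    if out_of_clean_dishes:      # necessity overrides everything else
        return True
    if expecting_guests:         # social context raises the priority
        return True
    return dirty_items >= 4      # otherwise, wait until a small pile forms

print(should_wash_now(dirty_items=1, expecting_guests=False, out_of_clean_dishes=False))  # False
print(should_wash_now(dirty_items=1, expecting_guests=False, out_of_clean_dishes=True))   # True
```

Even a one-item decision is knowledge being applied.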
The next thing you need to do is turn on the water. You:
- identify the type of faucet handle
- determine how to turn on the hot water
- twist, lift or pull the faucet handle in the correct direction
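Even this small step is really a lookup: identify the handle, then pick the right manipulation. Here's a rough sketch, with handle types and actions that are purely illustrative assumptions:

```python
# A rough sketch of the faucet step: identify the handle type, then choose
# the matching action. Handle types and actions are illustrative assumptions.

FAUCET_ACTIONS = {
    "single-lever": "lift the lever and rotate it toward the hot side",
    "two-handle": "twist the hot-water handle counter-clockwise",
    "pull-down": "pull the handle toward you, then tilt it toward hot",
}

def turn_on_hot_water(handle_type: str) -> str:
    action = FAUCET_ACTIONS.get(handle_type)
    if action is None:
        # the gap an AGI has to fill without being told
        return f"unknown handle type '{handle_type}': observe, experiment, or ask"
    return action

print(turn_on_hot_water("single-lever"))
print(turn_on_hot_water("push-button"))
```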
Are you ready to start washing? Not quite. You also need dishwashing soap. You:
- locate the bottle
- determine if you have enough liquid
- identify the type of cap
- flip, pull or twist open the cap - or is it already open?
- pour the liquid into... what?
The sink? Is it a double sink? We haven't even discussed the drain stopper. Or are you pouring the dishwashing soap into a small container that you'll fill with water? What small container?
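All of those soap questions amount to a checklist that has to pass before any washing starts. Another rough sketch - the checks and their outcomes are just illustrative assumptions, and each one hides further sub-steps (cap type, where to pour, and so on):

```python
# A rough sketch of the supply checks that happen before washing starts.
# The checks and pass/fail outcomes are illustrative assumptions.

soap_checks = [
    ("soap bottle located", True),
    ("enough liquid in the bottle", True),
    ("cap type identified and opened", True),
    ("somewhere to pour it (sink basin or a small container)", False),
]

blockers = [name for name, ok in soap_checks if not ok]
if blockers:
    print("Not ready to wash yet:", ", ".join(blockers))
else:
    print("Soap is ready - start washing.")
```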
Do you see where I'm going with this? Dishwashing is not as simple as it sounds. There's a lot of processing going on in your brain and you haven't even started washing any dishes.
For a task like dishwashing by hand, AGI embedded in a humanoid robot needs to know:
- the criteria for washing
- what supplies are required
- how to use the supplies
- how to use the sink and faucet
- how to pick up and wash each item
- when an item is actually clean
- how to rinse the item
- where and how to place the item for drying - and on what surface
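One way to picture that knowledge is as a structured task description. This is only a rough sketch - the field names and entries are my illustrative assumptions, not a claim about how an AGI must represent a task:

```python
# A rough sketch of how dishwashing knowledge might be organised.
# Structure and entries are illustrative assumptions.

dishwashing_knowledge = {
    "criteria_to_start": ["dirty items present", "enough items (or real need) to justify starting"],
    "required_supplies": ["dishwashing liquid", "cloth or sponge", "drain stopper", "drying surface"],
    "procedure": ["turn on hot water", "add soap", "wash each item", "rinse each item", "place on drying surface"],
    "completion_tests": ["no visible food residue", "no greasy film", "no soap left after rinsing"],
}

for step in dishwashing_knowledge["procedure"]:
    print("next:", step)
```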
But that's not all there is to dishwashing. Just like you, the robot also needs to deal with various situations that can arise:
- pots, pans, and dishes already piled up in the sink
- no water when it manipulates the faucet handle
- no hot water (it's cold)
- not enough or no dishwashing liquid
- no dishwashing cloth or sponge
- a worn or smelly dishwashing cloth or sponge
- missing drain stopper
- a blocked drain that's causing the water level to rise in the sink
- running out of drying space
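Each of these situations turns a tidy procedure into a loop of check, act, and recover. One more rough sketch, with situations and recovery actions that are illustrative assumptions:

```python
# A rough sketch of recovery knowledge for exceptional situations.
# The situations and recovery actions are illustrative assumptions.

RECOVERY = {
    "no water from the faucet": "check the shut-off valve, or stop and report the problem",
    "water not getting hot": "wait briefly, then fall back to cold water or stop",
    "out of dishwashing liquid": "find a substitute or add it to the shopping list",
    "sink already piled with pots and pans": "restack items safely before filling the sink",
    "drain blocked, water rising": "turn off the water first, then clear the drain",
    "drying space full": "dry and put away a few items, then continue",
}

def handle(situation: str) -> str:
    return RECOVERY.get(situation, "unrecognised situation: reason it out or ask for help")

print(handle("drain blocked, water rising"))
print(handle("cat jumped into the sink"))
```

Notice the fallback case: most of real life lands there, and handling it is exactly the applied knowledge I'm talking about.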
Does it seem impossible to document everything there is to know about dishwashing and dealing with the issues that may arise? Is all of this just mindless behavior, as some would lead you to believe, or is your brain actually analyzing what needs to be done and making decisions? Because making decisions is applying knowledge.
The expectation is that AGI will figure out what you don't document - and knowing how to do that is the secret sauce of AGI.
AI experts expect AGI to learn everything on its own - and that's a realistic expectation, but those of us in the U.S. spend 12 years in school just to be considered competent enough to be employed. What does that tell you about intelligence?
In order for AGI to understand the world as we do, and solve problems as we do, it has to possess the same detailed knowledge that we do - including how to use a kitchen sink.
If you want to learn more about my approach to AGI and why it's possible now, follow me here on LinkedIn. I'm releasing additional information over the coming months in preparation for a product launch.
Rainer, your ambitious attitude reminds me a lot of myself back in the 1980s, when I advocated for a "general-purpose problem solver" for our AI customers. (Back then we didn't have the term AGI.) What's frustrating for everyone in the industry is that things are always more difficult to implement than they feel like they should be. Everyone asks him/herself: why is this so difficult? I totally agree with the statement: "...it has to possess the same detailed knowledge that we do..."

Theoretically, the information is all there. With Wikipedia as a starting point, and all of the Internet behind it, including online dictionaries, and with Google as an AI-based search engine, AI software has the ability to use virtually all of human knowledge. So what's stopping AI researchers from just implementing an AGI? (1) All software must run fast enough and be scalable enough to handle ALL knowledge. (2) No one approach works for all situations, so we need to develop what Pedro Domingos calls "the master algorithm" to integrate the various approaches. Dr. Domingos is focusing on this, but still sees a lot of holes in his comprehensive plan that need to be filled in. (3) For scalability, knowledge is going to have to be preprocessed, so we need internal structures and code to manage that.

There are those who are hopeful that we can implement an AGI in the next decade. Some experts say it's impossible. However, the average of all the experts runs around 2 to 3 decades from now. Ray Kurzweil, a major figure in AI with numerous inventions and AI applications to his credit, has a computer model which suggests that the hardware will be fast enough (as fast as the human brain) by around 2029, but that the software will require further development, with his estimate being around 2045 (which he sometimes places between 2040 and 2050). In case you aren't aware, Ray Kurzweil is considered by insiders to be an extreme optimist in many respects, so his estimates are not what people would call "conservative."

With all of the above said, I strongly encourage you to continue challenging the establishment. I believe that even laypeople and sci-fi writers can contribute to the discussion, because they step back and have a different perspective. (Story: I got my AI job in 1986 because I told my boss that we could do what his team leader said was impossible. The team leader then followed my directions and implemented the feature. I continued as project leader with the big vision and modest implementation skills, and he continued to implement all the challenging features. The point is: outsiders are needed to challenge the experts.)

As a result, I'm focusing on the following missing components of current research: (1) We need to use AI to integrate the ideas of the various experts. That means we need to focus on a tool for integrating knowledge, and start with the narrow domain of artificial intelligence. (2) We need to incorporate average humans. Children learn from parents and other adults all the time. We need to expose some machine learner to as many ideas as possible. We need some game or social media device to ask humans for input in a form that the learner can readily digest, preferably using the object-like structures that you suggest in your last article.

We've already agreed to move forward together. I look forward to working with you on the above two projects, which I see as essential to accelerating the tedious process of designing and building an AGI.