Deconstructing reality for Artificial General Intelligence
Before an Artificial General Intelligence application can even begin to show any semblance of intelligence, it needs methods to store, retrieve, and apply a wide range of knowledge within its digital brain. As mentioned in my previous article, that means being able to store knowledge in whatever form it takes. But what forms of knowledge? Here's an example.
An apple has an outer skin, stem, inner flesh, core, and seeds. Although its parts can be described with words, an apple is a physical object. It has a physical form, appearance, and texture that change when it's sliced, sauced, juiced, dried, or baked. How it transitions into those other forms needs to be shown and documented, as does how it can be used as an ingredient. It has nutritional value, cellular structures, and chemical compositions. Its seeds can be planted to grow apple trees. And let's not forget which parts of the apple we do and don't consume, including why, and how that changes across its different forms.
Have I missed anything? Varieties? How it decomposes? Symbolism and historical context? I'm sure you can think of additional facts and details about apples that could be documented.
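To make this concrete, here is a minimal sketch of what a uniform record for an apple might look like. Everything in it (the KnowledgeFrame name, the slot names, the values) is my illustrative assumption, not a specification from this article:

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeFrame:
    """A uniform record for any documented thing: object, process, or event."""
    name: str
    kind: str                                  # e.g. "physical object"
    parts: list[str] = field(default_factory=list)
    properties: dict[str, str] = field(default_factory=dict)
    transformations: dict[str, str] = field(default_factory=dict)  # form -> how it is produced
    uses: list[str] = field(default_factory=list)

apple = KnowledgeFrame(
    name="apple",
    kind="physical object",
    parts=["outer skin", "stem", "inner flesh", "core", "seeds"],
    properties={"edible": "flesh and skin; core and seeds are discarded"},
    transformations={
        "sliced": "cut with a knife",
        "sauce": "peeled, cooked, and mashed",
        "juice": "pressed and strained",
        "dried": "sliced thin and dehydrated",
        "baked": "used as an ingredient, e.g. in pie",
    },
    uses=["eaten raw", "ingredient", "seeds planted to grow apple trees"],
)
```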
If the AGI is informed that a pear has similar qualities to an apple as an edible fruit, then it generally knows all the ways a pear can be prepared, eaten, and used as an ingredient. If you're thinking the AGI should be able to determine the remaining information on its own, consider how it would do that, to what level of detail, and from what sources.
Do a Google search for "chemical composition of a pear" and click on the first ten non-image results. Most people would find it challenging to make sense of and compile the information.
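Contrast that difficulty with what a single similarity statement buys when knowledge is stored uniformly. Continuing the KnowledgeFrame sketch above, a hypothetical inherit helper copies a prototype and overrides only what differs:

```python
def inherit(prototype: KnowledgeFrame, name: str, overrides: dict) -> KnowledgeFrame:
    """Copy a prototype frame, then overwrite only the slots that differ."""
    return KnowledgeFrame(
        name=name,
        kind=prototype.kind,
        parts=list(prototype.parts),
        properties={**prototype.properties, **overrides.get("properties", {})},
        transformations=dict(prototype.transformations),
        uses=list(prototype.uses),
    )

# One sentence of input ("a pear is a similar edible fruit") transfers
# every documented preparation and use from the apple record.
pear = inherit(apple, "pear", {"properties": {"shape": "narrow at the stem end"}})
print(sorted(pear.transformations))  # ['baked', 'dried', 'juice', 'sauce', 'sliced']
```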
If an orange is documented with the same consistency as the apple, the AGI can tell and show all the similarities and differences between apples and oranges. Keep in mind that the differences need to be conveyed to people at various levels of detail based on the user, context, and situation.
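Continuing the same sketch, the comparison falls out almost mechanically once both fruits share the same slots (the compare function below is, again, an assumption for illustration):

```python
def compare(a: KnowledgeFrame, b: KnowledgeFrame) -> dict:
    """Report similarities and differences between two uniformly stored frames."""
    differing = {
        key: (a.properties.get(key), b.properties.get(key))
        for key in set(a.properties) | set(b.properties)
        if a.properties.get(key) != b.properties.get(key)
    }
    return {
        "shared_parts": sorted(set(a.parts) & set(b.parts)),
        "shared_uses": sorted(set(a.uses) & set(b.uses)),
        "differences": differing,
    }

orange = inherit(apple, "orange", {"properties": {"peel": "usually discarded"}})
report = compare(apple, orange)
# A presentation layer could then filter this report by user, context, and situation.
```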
Imagine documenting knowledge in this manner for every organism, object, process (including thought), behavior (conscious and unconscious), event, and concept. But it can't just be free-form. It needs to be stored in a uniform and consistent manner across all forms and scopes of knowledge. That's what it takes for AGI to be able to learn, understand, and apply knowledge as we do.
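As a toy illustration of that uniformity, the same sketch record can describe a process as readily as an object; the slot choices here are mine, not a prescribed schema:

```python
make_applesauce = KnowledgeFrame(
    name="making applesauce",
    kind="process",
    parts=["peeling", "coring", "cooking", "mashing"],  # steps reuse the "parts" slot
    properties={"input": "apples", "output": "applesauce"},
    uses=["food preparation"],
)
```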
If you want to learn more about storing, retrieving, and applying knowledge for AGI, follow me here on LinkedIn. I'm releasing additional information over the coming months in preparation for a product launch.
Rainer, I'm not up on the latest detailed research, and that's probably impossible to do, but let me tell you what I know in relation to your proposals. The consistent representation approach is inherent to the Symbolist approach to machine learning. You are talking about defining "objects" with specific "properties." This doesn't work particularly well with the Connectionist approach (neural networks), which is increasingly popular and successful. Pedro Domingos documents his work on integrating the approaches in his book The Master Algorithm.

As I envision your concern, we would need to create an object structure for each item manually (that's the drawback of the Symbolist approach: the human-intensive input). Then the neural networks and other Connectionist-based AIs could try to fill in the properties of the objects using their machine-learning abilities. A pure Symbolist approach is not possible; it requires excessive amounts of human input. This bottleneck, dubbed the Knowledge Engineering Bottleneck in the 1980s, led to the second AI winter. AI companies could not satisfy the needs of businesses without charging exorbitant fees to hand-code the knowledge.

All of the AIs that are currently available are hard-coded for specific applications. An AGI would require true self-learning, which is still an open area of research. I agree that having the structure would help the learning, but how do you create the structure? Right now, it requires significant human intervention. What Pedro Domingos suggests is that, if we could create an AI with the abilities of a 3-year-old human, then the AI would be able to learn by itself. Creating such an AI is still an open area of research.
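A minimal sketch of the hybrid this comment describes, assuming the KnowledgeFrame and pear from the earlier sketches: hand-authored symbolic slots stay authoritative, and a stand-in for a learned model fills only the gaps (guess_property is purely hypothetical):

```python
def guess_property(entity: str, slot: str) -> str:
    """Stand-in for a trained (Connectionist) model; stubbed here."""
    return f"<model's best guess at '{slot}' for {entity}>"

def fill_missing(frame: KnowledgeFrame, wanted_slots: list[str]) -> KnowledgeFrame:
    """Human-authored values win; the learned model fills only empty slots."""
    for slot in wanted_slots:
        if slot not in frame.properties:
            frame.properties[slot] = guess_property(frame.name, slot)
    return frame

pear = fill_missing(pear, ["chemical composition", "nutritional value"])
```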