Someone thinks we are all stupid
Jim Brander
Director of Interactive Engineering, AGI for general-purpose problem solving
Dear oh dear - if this is one of the deepest minds we have, we really are in trouble.
Before we start making claims about what will happen, how about we look at our limitations. Four pieces, for one: we can only handle four pieces of information in our conscious mind at once, so if our conscious mind is telling us something about something complex, it is probably wrong. Looking back at all the waves of AI - Prolog, Expert Systems, Intelligent Agents - a few moments' thought should have told people each wave was garbage (not soundly based, if you prefer). Artificial Neural Networks - how could it have happened that something so stupid, a directed resistor network, came to be compared with a self-extensible active network that can appear to be undirected (the brain)?

Going back a little further - the Turing Machine. If you use an inconsistency, you can prove anything. If you add a state indicator (a light) to Turing's Machine so the two states are visible, the proof collapses (the proof was created to order, so that is some excuse for its foolishness).
The only way we will achieve AGI is by building a machine which is too complex for us to understand, but that is OK - we handle facets of complex things quite well. People are spending millions of man-hours on generative methods that require hundreds of billions of parameters (Google quotes 540 billion parameters for its PaLM), but the major problems we face (Climate Change) are new and rapidly evolving, so there is no stable body of past data to learn from - which makes data-driven methods like ML, DL, or LLMs useless against them.
Well, what might the machine look like? It will probably have nodes, operators, and links, with states and values propagatable in any direction, and self-extension. We could call it Active Structure, because the structure changes its shape as it works on a problem (it changes its connections).

How do we get it off the ground? By loading a hundred thousand definitions from a human-readable dictionary. Yes, a human-readable dictionary is pretty terrible - atrocious circularity (“mend” -> “repair” -> “fix” -> “mend”), and no attempt to describe emotional states (“angry” -> “annoyed” -> “angry”) - but we have to start somewhere (at least some of it is quite good). Many words have multiple meanings - “set” has 72, “run” has 82, “on” has 77 - and many words have figurative meanings - a barnacle, a bulldozer. We can't allow this level of uncertainty to infect the machine, so the machine has to establish the right meaning for each word in its context - in the beginning, it will require human help.

Humans also clump things - “he put the money from the bank on the table” or “an imaginary flat surface”. We do all this unconsciously, so we have to emulate something we don't understand to reach human-level performance. But we want to surpass human-level performance - the Four Pieces Limit is an absurdly low ceiling, and the machine won't have it.
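To make the idea concrete, here is a minimal sketch, in Python, of what "values propagatable in any direction" could mean. Everything in it - the names Node, PlusConstraint, propagate, and the toy design itself - is invented for illustration under the assumptions above; it is not Interactive Engineering's implementation.

```python
# A minimal sketch of an "active structure": nodes joined by undirected
# links, with values that can propagate in either direction.
# All names here are invented for this illustration.

class Node:
    def __init__(self, name, value=None):
        self.name = name
        self.value = value        # None means "not yet known"
        self.links = []           # undirected: links carry information both ways

    def connect(self, other):
        self.links.append(other)
        other.links.append(self)


class PlusConstraint(Node):
    """Represents a + b = c with no built-in direction: given any two
    of the three values, propagate() infers the third."""

    def __init__(self, a, b, c):
        super().__init__("+")
        self.a, self.b, self.c = a, b, c
        for term in (a, b, c):
            self.connect(term)

    def propagate(self):
        a, b, c = self.a.value, self.b.value, self.c.value
        if a is not None and b is not None and c is None:
            self.c.value = a + b          # forward
        elif c is not None and a is not None and b is None:
            self.b.value = c - a          # backward, through one input
        elif c is not None and b is not None and a is None:
            self.a.value = c - b          # backward, through the other


# Fix the "output" and one "input"; the value flows backward to the
# unknown input - something a directed network cannot do.
x, y, z = Node("x"), Node("y", 2), Node("z", 5)
plus = PlusConstraint(x, y, z)
plus.propagate()
print(x.value)  # 3, inferred from z and y
```

The point of the toy is its lack of direction: the same constraint can be driven from any side, unlike the one-way weights of an artificial neural network, and a fuller version would let operators splice new nodes and links into the structure (self-extension) as dictionary definitions are loaded or as a problem unfolds.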
But why make it “understand” a natural language? Because humans can describe a complex problem more completely using a natural language than in any other way.