Someone thinks we are all stupid
"Grady Booch, a Fellow at IBM, and one of the deepest minds on software development in the history of the field" – Gary Marcus


Dear oh dear – if this is one of the deepest minds we have, we really are in trouble.

Before we start making claims about what will happen, how about we look at our limitations? The Four Pieces Limit, for one. We can only handle four pieces of information in our conscious mind at once, so when our conscious mind tells us something about a complex system, it is probably wrong. Looking back at all the waves of AI – Prolog, Expert Systems, Intelligent Agents – a few moments' thought should have told people each wave was garbage (not soundly based, if you prefer). Artificial Neural Networks – how could something so stupid – a directed resistor network – be compared with a self-extensible active network that can appear to be undirected? Going back a little further – the Turing Machine. If you use an inconsistency, you can prove anything. If you add a state indicator (a light) to Turing's Machine so the two states are visible, the proof collapses (the proof was created to order, so that is some excuse for its foolishness).

The only way we will achieve AGI is by building a machine which is too complex for us to understand, but that is OK – we handle facets of complex things quite well. People are spending millions of man-hours on generative methods that require hundreds of billions of parameters (Google quotes 540 billion for its PaLM model), but the major problems we face (Climate Change, for one) are new and rapidly evolving, leaving little stable history to learn from – which makes data-driven methods (ML, DL, LLMs) useless against them.

Well, what might the machine look like? It will probably have nodes, operators and links, with states and values propagatable in any direction, and self-extension. We could call it Active Structure, because the structure changes its shape as it works on a problem (it changes its connections). How do we get it off the ground? By loading a hundred thousand definitions from a human-readable dictionary. Yes, a human-readable dictionary is pretty terrible – atrocious circularity ("mend -> repair -> fix -> mend"), and no attempt to describe emotional states ("angry -> annoyed -> angry") – but we have to start somewhere (at least some of it is quite good).

Many words have multiple meanings – "set" has 72, "run" has 82, "on" has 77 – and many words have figurative meanings – a barnacle, a bulldozer. We can't allow this level of uncertainty to infect the machine, so the machine has to establish the right meaning of each word in its context – in the beginning, it will require human help. Humans also clump things – "he put the money from the bank on the table", or "an imaginary flat surface". We do all this unconsciously – we have to emulate something we don't understand to reach human-level performance. But we want to surpass human-level performance – the Four Pieces Limit is an absurdly low limit, and the machine won't have it.
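To make the nodes-operators-links idea concrete, here is a minimal sketch of a structure whose values can propagate in any direction and which can grow new connections at runtime. All class and method names (`Node`, `Operator`, `Network.add_sum`, `settle`) are hypothetical illustrations invented for this sketch, not the actual Active Structure implementation.

```python
# Hypothetical sketch of an "Active Structure" style network: nodes,
# operators and links, with values propagatable in any direction,
# and self-extension (new structure can be added while it runs).

class Node:
    def __init__(self, name):
        self.name = name
        self.value = None      # unknown until propagation fills it in
        self.links = []        # undirected links to operators

class Operator:
    """A constraint such as a + b = c: given any two of the three
    values, it propagates the third, in whichever direction the
    information happens to flow."""
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c
        for n in (a, b, c):
            n.links.append(self)

    def propagate(self):
        a, b, c = self.a.value, self.b.value, self.c.value
        if a is not None and b is not None and c is None:
            self.c.value = a + b
            return True
        if a is not None and c is not None and b is None:
            self.b.value = c - a
            return True
        if b is not None and c is not None and a is None:
            self.a.value = c - b
            return True
        return False           # nothing new to add

class Network:
    def __init__(self):
        self.operators = []

    def add_sum(self, a, b, c):
        # Self-extension: the structure grows a new connection
        # while the network exists, changing its shape.
        self.operators.append(Operator(a, b, c))

    def settle(self):
        # Keep propagating until no operator can add information.
        while any(op.propagate() for op in self.operators):
            pass

# Usage: values flow backwards through the "+" just as easily
# as forwards, because the links are not directed.
x, y, z = Node("x"), Node("y"), Node("z")
net = Network()
net.add_sum(x, y, z)           # constraint: x + y = z
z.value, y.value = 10, 4       # supply the "output" and one input
net.settle()
print(x.value)                 # -> 6, propagated against the arrow
```

The point of the sketch is the contrast with a directed resistor network: an artificial neural network can only push values one way, whereas here the same operator yields any one of its three values from the other two.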

But why make it “understand” a natural language? Because humans can describe a complex problem more completely using a natural language than in any other way.

