Short statement about the imminent emergence of artificial general intelligence
Sam Altman recently announced that “We are now confident we know how to build AGI as we have traditionally understood it.” He may be confident, but I doubt very seriously that he and his colleagues do, in fact, know much of anything about accomplishing artificial general intelligence (AGI). I have just finished a paper on the topic, and while waiting for it to appear, I want to respond to Altman’s claim.
As we have traditionally understood it, AGI means what Newell and Simon described in 1958: “It is not my aim to surprise or shock you—but the simplest way I can summarize is to say that there are now in the world machines that can think, that can learn and that can create. Moreover, their ability to do these things is going to increase rapidly until – in a visible future – the range of problems they can handle will be coextensive with the range to which the human mind has been applied.”
Current AI models of practically every flavor are focused on well-structured problems. They are given a space of parameters and a tool for finding a configuration of that space that solves the problem. The core of the problem solving is provided by humans.
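A minimal sketch of that division of labor, using a toy line-fitting problem of my own invention (nothing here comes from any particular product or system): the human chooses the data, the model family, the loss, and the search procedure; the machine's only contribution is to nudge the parameter values.

    # Toy illustration (hypothetical example): fit y = w*x + b by gradient
    # descent. Every structural choice -- model family, loss, learning rate,
    # stopping rule -- is supplied by the human. The machine only adjusts
    # the numbers w and b inside that human-built frame.

    data = [(1.0, 3.1), (2.0, 5.0), (3.0, 6.9), (4.0, 9.2)]  # human-chosen problem

    w, b = 0.0, 0.0          # parameter space defined by the human
    lr, steps = 0.01, 5000   # search procedure defined by the human

    for _ in range(steps):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
        grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
        w -= lr * grad_w     # the machine's entire contribution:
        b -= lr * grad_b     # adjusting parameters it did not choose

    print(f"w={w:.2f}, b={b:.2f}")  # converges to roughly w=2, b=1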
What humans contribute to solving GenAI problems:
What the machine contributes to solving GenAI problems:
ChatGPT and other transformer-based models are also highly dependent on humans to create prompts. This human contribution is rarely acknowledged, but there would be no semblance of intelligence without it. All of this human contribution is anthropogenic debt, akin to technical debt. It will have to be resolved before a system can be autonomous. For now, and for the foreseeable future, there is no machine intelligence without human intelligence.
GenAI models are trained to fill in the blanks, a task invented by human designers. There is no theory for how one gets from a fill-in-the-blanks machine to cognition. In the absence of a theory, attributing cognition to emergence with scale is nothing more than wishful thinking. It is play-acting at science.
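To make the training objective concrete, here is a deliberately tiny, hypothetical fill-in-the-blank sketch (a word-count lookup I made up for illustration; real models are vastly larger, but the task is the same kind of prediction): the machine completes the blank because the pattern occurred in its training text, not because it understands anything.

    # Toy "fill in the blank" predictor (illustrative only): guess the next
    # word purely from counts of what followed it in the training text.

    from collections import Counter, defaultdict

    training_text = "the cat sat on the mat . the dog sat on the rug .".split()

    following = defaultdict(Counter)
    for word, nxt in zip(training_text, training_text[1:]):
        following[word][nxt] += 1            # record what tends to come next

    def fill_blank(word):
        """Return the most frequent continuation seen in training."""
        seen = following.get(word)
        return seen.most_common(1)[0][0] if seen else "<no pattern to recall>"

    print(fill_blank("sat"))      # -> "on"  (pattern recall, not comprehension)
    print(fill_blank("quantum"))  # -> "<no pattern to recall>" (never seen)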
The attribution of cognition to current models is based on a logical fallacy (affirming the consequent). The fact that a model succeeds at a test says nothing about how it succeeded. Did it succeed by being a stochastic parrot? By raw association? By narrow problem solving through parameter adjustment? Success does not allow one to determine which, if any, of these is true. Finding that cookies are missing from the cookie jar does not tell us who took them.
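The point can be made with a hypothetical benchmark of my own construction: a pure lookup table over previously seen question-and-answer pairs scores perfectly, and the score alone cannot distinguish it from a system that actually reasoned its way to the answers.

    # Hedged illustration (invented questions, invented "benchmark"): a model
    # that only memorizes gets a perfect score, just as a reasoning system
    # would. The score by itself reveals nothing about the mechanism.

    memorized = {                         # imagine these leaked into training
        "What is 2 + 2?": "4",
        "Capital of France?": "Paris",
        "Opposite of hot?": "cold",
    }

    def parrot(question):
        """Answer by retrieval alone; no arithmetic, geography, or semantics."""
        return memorized.get(question, "I don't know")

    test_set = list(memorized.items())    # the test overlaps the training data

    score = sum(parrot(q) == a for q, a in test_set) / len(test_set)
    print(f"accuracy = {score:.0%}")      # 100%, yet nothing was understood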
Natural problems are not structured in a way that today’s machines can solve. Among the biggest problems we face as a society, for example, is how to eliminate poverty. We do not know what the parameters are that would enable us to solve this problem, let alone how to adjust them.
When Einstein wrote about the equivalence of energy and matter, his idea was contrary to the general thinking of the time. It was revolutionary. Today’s models can parrot language patterns that are included in their training set, but they cannot produce insights that run contrary to those patterns.
These are just a few of the reasons why I doubt that we are on the threshold of general intelligence. These concerns are rarely even recognized, but unless they are addressed through new insights, discoveries, and inventions, there is no chance of achieving artificial general intelligence.
Senior Acquisitions Editor for Philosophy, Cognitive Science, Linguistics, and Bioethics at The MIT Press.
2 months ago
Didn't Herb Simon say pretty much the same thing like 70 years ago?
Founder of AIBoost Marketing, Digital Marketing Strategist | Elevating Brands with Data-Driven SEO and Engaging Content
2 months ago
Interesting perspective! It's crucial to address human impact on AI advancement. Let's navigate this together! #AGI #HumanInnovation
AGI Realist, Fellow@AAIH, Singapore
2 months ago
Models based on language alone cannot get us to AGI. Human intelligence relies on graphs, maps, images and videos to understand complex causations. LLMs currently represent the Chinese room paradox, where answering or predicting the next word does not represent reasoning. But what is reasoning? Let us first understand knowledge. Knowledge is a pattern which can give us a logical and systematic view of the models that exist in the universe. Reasoning is the transformation of this knowledge to sustain these existing models. The innovation of LLMs is great, but we need further innovation to conclude that LLMs can reason.