The AGI Illusion

I wrote this two years before the launch of ChatGPT. The more I hear about scaling laws, computational power, and super-intelligence, the more I am convinced that much of what we call AI today will eventually confuse, confound, and propagate error. Recently, I heard that today's large language models could actually represent a race to the bottom, rather than a path to the vast economic power some anticipate. If this turns out to be true, the stock market could look very different—the 'Magnificent 7' might be replaced by new players that disrupt the 'too big to fail.' Fascinatingly, AI or no AI, it seems we never learn from history.

Enjoy

https://www.dhirubhai.net/pulse/general-ai-mukul-pal/

Brain AI is an incomplete aspiration for intelligence.

There are many detractors of AI: a few are taking the field ahead by redefining cause and effect [1]; a few have called AI dead and believe it is nothing but an attempt to get computers to do cool stuff [2]. The web is full of garbage-in, garbage-out [3] references when it comes to deep learning. Despite the naysayers, proteins keep getting folded [4], robots keep getting smarter [5], and the timeline for surpassing human intelligence keeps getting closer.

With all this happening, one may wonder why spend time redefining General AI (Strong AI). With so much fuss about AI, why worry about its generality? Humans need smart machines to help them anyway; what’s the big deal in surpassing human intelligence? There are a few of us [6] who worry about the evil side of AI, and a redefinition could bring clarity.

Bounded Rationality

The father of AI [7] said human rationality is bounded by the limits of available information. With time the bounds extend, and rationality increases. If we extend the concept to intelligence (intelligence is processed information), then with time, intelligence will increase. This can either mean that the singularity [8] is near, or it could mean that intelligence will get commoditized sooner rather than later. Narrow AI is weak because it will do its bit, contribute a verse [9], and then die (get commoditized).

Three Pillars

A business of today capable of surviving tomorrow will rest on three pillars: community (machines are, and should be, designed to serve the community); General AI (entities which keep enhancing themselves, staying relevant and ahead of commoditization); and technology (which becomes redundant and commoditized the moment it is built).

Brain Impact!

In such a context, how relevant and far-reaching is building a brain (the current definition of General AI)? Like computation, building brains to see, walk, and learn is building a highway for intelligent machines to drive on. Brain AI is an incomplete aspiration for intelligence. Brain impact has limits. The Internet has made the greatest impact, and now its consciousness [10] is bringing it alive. Thinking machines must eventually become conscious, but even if they do, is it enough for impact?

We humans are conscious, and what’s our scorecard? Could conscious, quantum-computing, thinking brains improve our impact scores? If General AI will indeed commoditize intelligence while staying ahead of its own irrelevance, could it create a more balanced world? Could it solve the climate crisis? Avert the wars? Take us beyond the Information Age? Build an intelligent Web [11]?

The Game

If we look at the world through my Asimovian prism, scientific research can be viewed as a game of man against Nature [12].

“How could we consummate the victory of intelligence over Nature more gloriously than by passing our heritage in trim to a greater intelligence - of our own making?”

The prism should eventually break. Nature is a moving target, which is why General AI must think differently. “If ‘I’ (the Entity) can’t win the game, how should I become intelligent?” Post-consciousness, it may aspire to be wise and philosophical, and conclude that it was better to let Nature take its course, as Nature is an invincible contender. What would be a more non-philosophical, practical, impactful path this Entity could take?

Intelligence and Humans

I am biased in my vision and would naturally (or randomly) get pulled into the grandmaster’s confirmation [13], who says…

“We confuse performance - the ability of a machine to replicate or surpass the results of a human - with method, how those results are achieved. This fallacy has proved irresistible in the domain of higher intelligence that is unique to Homo Sapiens.

There are actually two separate but related versions of the fallacy. The first is “the only way a machine will ever be able to do X is if it reaches a level of general intelligence close to a human’s.” The second: “if we can make a machine that can do X as well as a human, we will have figured out something very profound about the nature of intelligence.”

The romanticizing and anthropomorphizing of machine intelligence is natural. It’s logical to look at available models when building something, and what better model for intelligence than the human mind? But time and again, attempts to make machines that think like humans have failed, while machines that prioritize results over method have succeeded.”

Error Propagation

I can go back to the history of probability and explain why the measure of chance used by civilization is flawed, owing to its stochastic inability [14] to converge (an equal number of heads and tails) or diverge (all heads), and cite other confirmations (biases), but you get the point. Nature is about error, and you cannot win against Nature. Our laws are prone to fail. Anything and everything super-intelligent will succeed and fail. So where does this leave us?
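The "inability to converge or diverge" claim can be illustrated with a short simulation (a sketch of the standard Law of Large Numbers picture, not the author's own calculation): the proportion of heads settles toward 1/2, yet the raw gap between heads and tails typically keeps wandering, on the order of the square root of the number of flips, neither settling at zero (equal heads and tails) nor running away to all heads.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def flip_walk(n):
    """Flip a fair coin n times; return (proportion of heads, heads minus tails)."""
    heads = 0
    diff = 0  # heads minus tails, a simple random walk
    for _ in range(n):
        if random.random() < 0.5:
            heads += 1
            diff += 1
        else:
            diff -= 1
    return heads / n, diff

# The ratio tightens around 0.5 as n grows, while the absolute gap
# between heads and tails generally keeps growing in magnitude.
for n in (100, 10_000, 1_000_000):
    ratio, diff = flip_walk(n)
    print(f"n={n:>9}  heads ratio={ratio:.4f}  heads-tails gap={diff}")
```

The gap `diff` is a symmetric random walk: its expected absolute size grows with the number of flips even as the proportion converges, which is the sense in which the sequence neither "converges" to perfect balance nor "diverges" to all heads.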

Definition

The definition of General AI then becomes: a conscious entity which bases itself on the premise that comprehending Nature completely is an unachievable frontier, one that can only be aspired to by understanding how Nature maintains itself by propagating error. This entity should estimate, reduce, and anticipate error. A probabilistic machine which builds a temporal map in any setting, irrespective of the informational content thrown at it, seeking behaviors that persist and are stronger than the laws identified over eons of human existence.
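To caricature "estimate, reduce, and anticipate error" in code (a hypothetical toy of my own construction, not the author's machine; the class name, the `alpha` parameter, and the update rule are all assumptions): an online estimator that tracks both a signal and a running measure of its own error, with `alpha` acting as a crude temporal bound on how fast old errors are forgotten.

```python
class ErrorAwareEstimator:
    """Toy online estimator that maintains a belief and a running error level."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha       # forgetting rate: the "temporal bound"
        self.estimate = 0.0      # current belief about the signal
        self.error_level = 0.0   # running estimate of our own recent error

    def update(self, observation):
        err = observation - self.estimate
        # anticipate: error_level summarizes recent error magnitude,
        # so the entity carries a model of how wrong it tends to be
        self.error_level += self.alpha * (abs(err) - self.error_level)
        # reduce: move the belief a fraction of the way toward the observation
        self.estimate += self.alpha * err
        return self.estimate, self.error_level
```

Fed a stable signal, the estimate approaches the signal and the error level decays; fed turbulence, the error level rises, giving a crude, measurable "dynamic range" in the spirit of the definition above.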

“Nature thrives on the failure of the second law of thermodynamics and the breakdown of many other laws. Causality always converges to Chaos. Intelligence, it seems, conceals and propagates inside error. Error, noise, fluctuation, turbulence can be generalized and optimized by temporal bounds, giving intelligence a true dynamic range, which is measurable, enhanceable and perceptible. Such an approach can define intelligence by illustrating how Nature might be extracting order from disorder, relentlessly.”


[1] The Book of Why, Judea Pearl

[2] Roger Schank - AI is dead

[3] “Garbage in, garbage out” in deep learning

[4] AlphaFold

[5] Vicarious

[6] Elon Musk

[7] Herbert Simon

[8] Singularity

[9] Walt Whitman

[10] MIT Technology Review

[11] Intelligent Web 4.0

[12] The Intelligent Man’s Guide to Science, Isaac Asimov, 1962

[13] Deep Thinking, Garry Kasparov

[14] Law of Large Numbers

[15] Author’s definition of Nature in context of error.
