Schrödinger’s Artificial General Intelligence (AGI)

Christmas #JabeOnAI Special

Here is a Christmas article – or Hanukkah, Solstice, Yule, Holidays, choose your cultural reference point – but the celebration at the darkest part of the year in the northern hemisphere, looking ahead to light and new beginnings, and celebrating what we have in friendships, family and home (and thinking of those lacking in these areas).

Love to you all from JabeOnAI.com.

Double Espresso

Schrödinger’s cat is a good analogy because it brings to mind the weirdness of the base reality in which we live, and guides us to understand that ‘common sense’ intuitions can lead us astray.


AGI is a case in point. Our common-sense view of the world leads us to believe that we as humans in the world are akin to information processing systems, and that the latest iteration of computing machines is reaching problem-solving capabilities that mirror our own and will soon surpass us.


This view is incorrect.


That said, just as in a very profound sense AGI is not possible, there is also a practical day-to-day definition by which it may already be true.


If we take the view of how well algorithms can ‘play’ at problem solving in closed worlds, and the extent to which reinforcement learning can optimise approaches within such world representations, then the capability of upcoming systems such as Q* will be profound in our day-to-day world.
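Nothing public confirms how Q* actually works, so the sketch below is an illustrative assumption, not that system. But “optimising within a closed-world representation” has a textbook shape. Here is a minimal tabular Q-learning toy in Python; the corridor, the two actions and every parameter are invented for the example:

```python
# A minimal sketch of reinforcement learning in a "closed world":
# a 1-D corridor where the agent learns to walk to a goal cell.
# All states, actions and parameters here are illustrative assumptions.
import random

N_STATES = 6            # corridor cells 0..5; the goal is cell 5
ACTIONS = [-1, +1]      # step left or step right -- the whole action space
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount, exploration

# Q-table: the agent's entire "world" is this finite dictionary.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Closed-world dynamics: move, clamp to the corridor, reward at goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

for _ in range(2000):                         # training episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy choice between exploring and exploiting
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        # Bellman update: nudge Q toward reward plus discounted future value
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# learned policy: +1 (walk right) from every non-goal cell
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})
```

The point of the toy is the closed world itself: six states and two actions exhaust everything the agent can ever ‘know’. Within that frame the optimisation is relentless; outside it, the algorithm is silent.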


Where does this leave us?


It is possible to hold conflicting ideas in your mind at the same time. Now, in a very real sense we as organic, evolved organisms are part of the world in a way that artifacts are not. An artifact, in a real sense, needs an observer to interpret it in a way that an organism does not. A plant will absorb sunlight and bear fruit that ripens to shed its seeds just fine without someone observing it. A machine learning algorithm animating a robotic mechanism without an observer holds a very different metaphysical ontology. As Terry Winograd put it in his book “Understanding Computers and Cognition” (written with Fernando Flores) …


“Theoretically, one could describe the operation of a digital computer purely in terms of electrical impulses travelling through a complex network of electrical elements without treating these impulses as symbols for anything.”


Electric wires carrying current can be interpreted through successive levels of abstraction, towards increasing levels of meaning. These interpretations carry representations that are emergent from our social interactions.
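Winograd’s point can be made concrete in a few lines of Python. In this sketch (the byte values are arbitrary assumptions), one fixed pattern of bits yields four different ‘meanings’, each supplied entirely by the interpretation the observer chooses; none of them live in the wires:

```python
# One fixed pattern of "electrical impulses", here frozen as four bytes.
import struct

raw = bytes([0x42, 0x28, 0x00, 0x00])

print(raw.hex())                     # as a raw voltage pattern: '42280000'
print(struct.unpack(">I", raw)[0])   # as an unsigned integer: 1109917696
print(struct.unpack(">f", raw)[0])   # as an IEEE-754 float:   42.0
print(raw[:2].decode("ascii"))       # as ASCII text:          'B('
```

The bytes never change; only the frame of interpretation does, and each frame (hexadecimal notation, binary integers, IEEE-754, ASCII) is a convention that humans agreed on first.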

A mushroom needs no interpretation; what is the self-driving car doing?

Arabica Cappuccino


We experience the world first; we are in the world. An utterance, or a poem, comes from an experience first, and is uttered for the purpose of connecting and coordinating with another being in the world. We can know things we cannot say; once we say something, someone else can know it. In a sense, representation is about rising above the surface of the sea of daily existence, like a flying fish breaching the waves to surf the air before returning to the ocean.


Algorithms exist in this uttered world, coalesced for the purpose of collaboration, operating in a cognitive niche, rather than purely for the purpose of existing. This does matter.


However, in our world of “cognitive niches”, these artifacts can intercede. These utterances made material. These poems of liquid sunshine. They have a form of agency.


So, it comes down to a matter of semantics. What do we mean when we utter the phrase “AGI”, Artificial General Intelligence? For some it is a magical phrase full of existential dread. For others it is a business opportunity to generate rents and compete in the financial markets.


The ontological feline is both alive and dead simultaneously; it is only when you cash out the semantics that the quantum wave collapses into one or the other. Is AGI a form of conscious intelligence come to life, with its own subjective experience and existential needs? No, it isn’t; it falls dead to the ground once this semantic light is shone on the matter. As Thomas Nagel might have said in his 1974 article “What Is It Like to Be a Bat?”: there is something it is like to be a bat … but not something it is like to be a Q* algorithm instantiated in the cloud …
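To make the cat metaphor precise (this is the standard notation for the thought experiment, not a claim about how any AGI system works): before the semantics are cashed out, the state is an equal superposition, and the act of interpretation selects one branch.

```latex
% Schroedinger's cat in standard notation: an equal superposition,
% collapsed only by the act of observation/interpretation.
\[
  |\psi\rangle = \frac{1}{\sqrt{2}}\bigl( |\mathrm{alive}\rangle + |\mathrm{dead}\rangle \bigr),
  \qquad
  P(\mathrm{alive}) = \left| \frac{1}{\sqrt{2}} \right|^{2} = \frac{1}{2}
\]
```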


On the other hand: is AGI a means of bringing forth a tool that can solve problems for which no solution has previously been found, at amazing speed, in any domain we can capture in some form of description or other? Certainly, yes. Will that yield new knowledge once we have interpreted it and acted on it? Yes, it will. Finally, will it be beyond our comprehension? I doubt it, because it is based on representations that spring from us as humans. It may take us time to figure out what is being said. So be it.


I see most of these considerations covered in the pre-print “Levels of AGI: Operationalizing Progress on the Path to AGI” (https://arxiv.org/abs/2311.02462) – but not this issue of the distinction between being in the world, as dynamical systems, and operating on the level of representations emergent from the world …

Case Study 1: The Turing Test. – As many have said, this is more a test of human gullibility … Meh ;-)

Case Study 2: Strong AI – Systems Possessing Consciousness. – This approaches the question I am highlighting but does not touch it … consciousness is one thing; being “in the world”, as Heidegger might describe it, is another.

Case Study 3: Analogies to the Human Brain. – Sure, it doesn’t have to do things in the same way as the brain (and it doesn’t), but this misses the point of what doing the same things is … we are not limited to problem solving in describable domains; rather, what we do can inform both new representations of the world and how we frame them. Once framed, problem solving can then engage with such representations.

Case Study 4: Human-Level Performance on Cognitive Tasks. – This focuses on non-physical tasks, whereas if you look at the ecological-enactive perspective nothing can be truly considered non-physical.

Case Study 5: Ability to Learn Tasks. – We should not confuse function optimisation and heuristic game playing with human learning; beware anthropomorphising. A nuance maybe, but an important one.

Case Study 6: Economically Valuable Work. – I refer you to Oscar Wilde: “A cynic is a man who knows the price of everything and the value of nothing.”

Case Study 7: Flexible and General – The “Coffee Test” and Related Challenges (https://www.fastcompany.com/1568187/wozniak-could-computer-make-cup-coffee). This question of embodiment is spot on (though robotic embodiment is not enough …). You could digress: what is a cup, what is a coffee … I like the mischief rule example … the frame problem is relevant here … check out the great article “How to Do Things with Contexts: Is There Anything in the Oven?” by Samuel Bray, Notre Dame Law School: https://reason.com/volokh/2021/07/28/how-to-do-things-with-contexts/

Case Study 8: Artificial Capable Intelligence. – Uh huh: “performing a complex, multi-step task that humans value.” I refer you again to Oscar Wilde.

Case Study 9: SOTA LLMs as Generalists. – Not sure this really warrants a serious response. Honestly: augmented AI perhaps, but not LLMs out of the box. I refer you to the Turing Test, Case Study 1.

I would add … Case Study 10: Cognitive Niche as a Lens for Representation – the “Alfred Korzybski Map Test”. That is … can it look up from the “map” of its working representations, breathe deeply of the fresh breeze of this new dawn, shiver with the benefit of being directly part of an unbroken living chain of beating-heart animals going back millions of years, set aside the map and stride forth to start the day …


And the conclusions drawn …

1. Focus on Capabilities, not Processes. – It is easy to misunderstand what a capability actually is.

2. Focus on Generality and Performance. … Uh huh

3. Focus on Cognitive and Metacognitive Tasks. – Metacognition is powerful, but it usually operates within the context of problem solving in a defined domain (to be overly simplistic). Shannon’s view, sure … the frame problem again.

4. Focus on Potential, not Deployment. … Uh huh

5. Focus on Ecological Validity. – This is where the issue resides, but it seems hidden, its significance not fully identified.

6. Focus on the Path to AGI, not a Single Endpoint. … Uh huh

I would add –

(7). Focus on Cognitive Niches and a Kantian view of Objects. – In summary, from Andrew Smart, Beyond Zero and One:

“Kant argues that we have no way to know the true nature of the world because our perception imposes a priori knowledge on our conscious experience of it”

– and this intercedes in how representations are created, upon which AI is built … so it is built more on a foundation of shifting sand than the firm bedrock that many assume. The way to stop the sand falling between your fingers is to look at it through the lens of the cognitive niche.

And the ontology of AGI – again it glosses over the question of “General” … general in the sense of problem solving in a closed domain is useful and powerful; but in terms of being embodied, in the sense of Michael Polanyi’s 1966 phrase “we can know more than we can tell”, achieving that tacit knowledge is almost by definition not possible. Even taking on the work of the great Rodney Brooks.


Back again to Schrödinger and his cat …


Espresso Martini

So, in summary … I don’t see the question posed by Thomas Nagel, or even by Anil Seth (in his excellent book “Being You”) with his “beast machine” metaphor, being addressed by that great document, or captured in its ontology …


I write this because throwing light on this can dispel fear, and fear is not something we want to welcome through our door. Bring the light.


Beware what people are doing. You can have folks who willingly become soldiers or criminals, and who are cruel in their thoughts and actions. We don’t need to invent demons or fear of robots to see that against which we set our intentions and fight. People are enough and will always be the source of good and evil. We are enough. Artificial General Intelligence is a tool that is now in our hands. Learn to wield it for good.


<I will add more supporting references to this essay in due course.>


More on this, obviously, in my forthcoming book: https://poetryofliquidsunshine.com

- I will be recruiting reviewers shortly … you can read my draft introduction on the teaser page for the book.


And you can get last minute Christmas gifts here: ;-) : https://jabeonai.com/shop/


Other reading options are available:


https://jabeonai.com/schrodingers-artificial-general-intelligence-agi/

https://jabeonai.substack.com/publish/post/139952848

https://medium.com/@jabe.wilson/schr?dingers-artificial-general-intelligence-agi-48b95cd3cf01





Christophe Cop

Making your company Data Driven

11 months ago

I don't see why AGI is in principle impossible … It exists in humans, and there is no law of physics that states that what can be done on a fatty substrate cannot be achieved on a silicon substrate. Also, "a priori knowledge"? That is a shortcut past the entire emergence and evolution of information processing and memory … which is pre-computationally present in DNA (and epigenetics), and very soon the nurture part kicks in, in the co-development of knowledge. Ex nihilo a priori knowledge is physically impossible (it violates the law of conservation of energy).

Jabe Wilson

Founder & Consultant - JabeWilsonConsulting.com & Founder - HeyTechBro.com Foundation & Chief Ego Officer - JabeOnAI.com

11 months ago

Apologies to Squeeze for the image of the cat from their 1978 album cover … https://open.spotify.com/album/6BXnJVcUbSdC6E82xouYK5?si=p_pjKuslTNaCnVoAPzrHuw

Jabe Wilson

Founder & Consultant - JabeWilsonConsulting.com & Founder - HeyTechBro.com Foundation & Chief Ego Officer - JabeOnAI.com

11 months ago

This one is for you, Tim Hoctor. Merry Christmas, buddy! xxx

要查看或添加评论,请登录

社区洞察

其他会员也浏览了