What AI can learn from the Day of the Dead

There is no such thing as a sphere. A sphere can be described in mathematics.


The equation for a sphere is (x − a)² + (y − b)² + (z − c)² = r², where (a, b, c) represents the centre of the sphere, r represents the radius, and x, y, and z are the coordinates of points on the sphere's surface.
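The equation translates directly into a small check. Here is a minimal sketch in Python (the function name and the tolerance are my own illustrative choices):

```python
import math

def on_sphere(point, center, radius, tol=1e-9):
    """Check whether a 3D point satisfies the sphere equation
    (x - a)^2 + (y - b)^2 + (z - c)^2 = r^2, within a tolerance."""
    x, y, z = point
    a, b, c = center
    lhs = (x - a) ** 2 + (y - b) ** 2 + (z - c) ** 2
    return math.isclose(lhs, radius ** 2, rel_tol=0.0, abs_tol=tol)

# A unit sphere centred at the origin: (1, 0, 0) lies on it, (1, 1, 1) does not.
print(on_sphere((1, 0, 0), (0, 0, 0), 1.0))  # True
print(on_sphere((1, 1, 1), (0, 0, 0), 1.0))  # False
```

Note that the code, like the equation, is a representation: no pumpkin will ever satisfy it exactly, which is rather the point.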


You will find a sphere in the world view of idealism, in Plato. Within the world itself, you won't find such things. There are pumpkins, stars, balls; but no Universals. This relates to something I have spoken of before: the map is not the territory.


So, what does this mean? Well, you can say there are two lenses through which you can see the world. One is the rational, idealist view. The other is one where you try to understand what representations actually are and how they have emerged. There is a sense in which we as humans have no direct access to the 'real' world, what you might call the 'substrate'. We only have indirect access through our senses; senses which evolved for a reason, a purpose. At Halloween they say we get closer to the 'spirit world'; that is a nice oblique metaphor for this inaccessible 'real world'.


It gets even more challenging. From a certain point of view, there is no simple account of what it means for humans to observe the world. You might look up the notion of the 'Cartesian theatre' (Daniel Dennett's term). There is no 'I' to observe the cinema, no one sitting watching the screen. This gets to what David Chalmers calls the hard problem of consciousness.

Personally, I am of the view that what we call qualia, the quality of how we experience the world, just is the way it is. It is simply the way that these dynamical systems that we are apprehend the world. There are, however, other dynamical systems within us, as Daniel Dennett might say: dynamical systems within an ecosystem of dynamical systems, including a system dedicated to articulating, and you might say observing, the internal mental world and turning it into representations for the purpose of collaboration. That is driven by the evolutionary advantage of building, and existing within, the 'cognitive niche' (as Steven Pinker described it).


It seems to me that, in order to apprehend the world in terms of representations, we need to understand their nature. We should think of the emergence of representations as being driven by collaboration between these 'ecosystems of dynamical systems': that is, thinking of the mind as a dynamical system, or more correctly that the brains of people are dynamical systems, and that the web of interactions people undertake within society forms a wider dynamical system.


This becomes important when you are thinking of how to engineer AI systems, because AI systems are built on representations of the world. If you take what I would describe as a 'simple-minded' view of the world and representation, a 'simple-minded ontological view', then you are led to expect that all of the representations you encounter will be consistent, because they have a common reference point: the supposedly obvious 'real world' that we experience straightforwardly. That is an illusion. The real-world experience that we all have is a complex construction; in a sense, the ultimate hallucination, to reclaim that word.
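To make that concrete, here is a toy illustration (the sources and figures are entirely invented): two representations of the 'same' thing, built for different purposes, which a naive merge, one that assumes a single objective fact behind both, cannot reconcile.

```python
# Two hypothetical sources describe the 'same' city for different purposes.
census = {"city": "Springfield", "population": 167_000}   # counts residents
transit = {"city": "Springfield", "population": 410_000}  # counts daily travellers

def naive_merge(a, b):
    """Merge two records assuming both describe one objective fact.
    Raises when the 'simple-minded ontological view' breaks down."""
    merged = dict(a)
    for key, value in b.items():
        if key in merged and merged[key] != value:
            raise ValueError(f"conflict on {key!r}: {merged[key]} vs {value}")
        merged[key] = value
    return merged

try:
    naive_merge(census, transit)
except ValueError as e:
    print(e)  # prints the population conflict
```

Neither figure is wrong; each is faithful to the purpose for which it was constructed. The engineering lesson is that reconciliation needs the purpose, not just the values.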


If you look at the recent work of Andy Clark and others, you can see that there is a 'model' constructed, let's say an emergent model, and what we do as sentient humans is compare our brains' predictions with the 'sense data' coming through our senses, and attend to the delta, the error. Sense data gives no direct access to the world. Pressure waves are converted into electrical pulses that travel from our ears along nerves into the central nervous system; these are mediated, constructed patterns. The same goes for light falling on rods and cones. The common-sense notion is that sight works like a pin-hole camera, with light reflected off objects in the world projected into the mind. Yet there is no internal cinema in the mind; that picture leads to an infinite regress. There is no individual sitting inside our mind watching an internal cinema screen, with another pin-hole camera looking at that screen, and another observer within that observer, and another and another.
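The predictive-processing idea can be caricatured in a few lines of code: an internal estimate is nudged by a fraction of the error between prediction and incoming sense data. This is a deliberately minimal sketch; the scalar 'sense data' and the learning rate are my own illustrative assumptions, not anything from Clark's actual models.

```python
def predictive_update(estimate, sense_data, learning_rate=0.1):
    """One step of a toy predictive loop: move the internal model
    a fraction of the way towards what the senses report."""
    error = sense_data - estimate      # the delta between prediction and input
    return estimate + learning_rate * error

# The internal estimate converges towards the (mediated) signal over time;
# the model never touches the 'world', only the error against the signal.
estimate = 0.0
for _ in range(100):
    estimate = predictive_update(estimate, sense_data=10.0)
print(round(estimate, 2))  # 10.0
```

The point of the caricature is that nothing in the loop is the world itself: the model only ever sees the error against an already-mediated signal.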


We do not, we genuinely do not, know the ultimate nature of, let's call it, the 'scaffold' within which we exist. What we do is construct, from the representations that come through our individual senses; senses that have evolved for evolutionary purposes of survival and replication, for the sake of advantage. Humans have evolved to collaborate, and from that collaboration our communication, language, and representation have emerged. Those representations are constructed. Maybe that is another angle on the idea of infinite regress: constructs built on constructs. Recursion, going back to the notions in the book Gödel, Escher, Bach, which inspired me to study Artificial Intelligence all those years ago.


So, what does this mean, if you are looking for an engineering principle? Well, what it means is that you have to understand the materials you are working with: the bricks and mortar. Those materials are, fundamentally, these socially constructed representations, a sort of infinite regress from our mediated senses, built for the purposes of collaboration. That gets pretty sophisticated, and I would argue that the pinnacle of this representation and communication to represent the world is science.


And within science we have communities, communities of practice that have come together, and journals which capture some of what those communities know. In essence, to engineer effective Artificial Intelligence we must recognise that all flavours of AI are based on representations: whether they be logic-based systems, or vectors based on the patterns within the symbol systems of language, or data from various sensors and senses, be it magnetic resonance, radiation, pressure waves, you name it. Those representations from instruments are captured and digitised, and need to be interpreted with models and hypotheses.


But it is a category error to think that this all comes down to an objective view. Some of my earliest reflections on this come from the book The Embodied Mind, published in 1991 by Francisco Varela (and others); Varela sadly passed away at the age I am now, mid-50s. One of his co-authors has since published a great body of work, and there is a forthcoming book from MIT Press, which I have on pre-order and am very much looking forward to reading, that discusses this very subject: https://mitpress.mit.edu/9780262048804/the-blind-spot/


To quote:


“It's tempting to think that science gives us a God's-eye view of reality. But we neglect the place of human experience at our peril. In The Blind Spot, astrophysicist Adam Frank, theoretical physicist Marcelo Gleiser, and philosopher Evan Thompson call for a revolutionary scientific worldview, where science includes—rather than ignores or tries not to see—humanity's lived experience as an inescapable part of our search for objective truth. The authors present science not as discovering an absolute reality but rather as a highly refined, constantly evolving form of human experience. They urge practitioners to reframe how science works for the sake of our future in the face of the planetary climate crisis and increasing science denialism.

Since the dawn of the Enlightenment, humanity has looked to science to tell us who we are, where we come from, and where we're going, but we've gotten stuck thinking we can know the universe from outside our position in it. When we try to understand reality only through external physical things imagined from this outside position, we lose sight of the necessity of experience. This is the Blind Spot, which the authors show lies behind our scientific conundrums about time and the origin of the universe, quantum physics, life, AI and the mind, consciousness, and Earth as a planetary system. The authors propose an alternative vision: scientific knowledge is a self-correcting narrative made from the world and our experience of it evolving together. To finally “see” the Blind Spot is to awaken from a delusion of absolute knowledge and to see how reality and experience intertwine.”


Reconceptualising science not as an objective window on the world, which, as I have said, is a misapprehension of representation and its metaphysical status, but understanding it in context. My mission with #JabeOnAI is to push to understand this context, in order to make sense of what you might call a patchwork of knowledge domains and a patchwork of different modes of representation: vectors, metadata, patterns in language and language systems. The foundation, in this instance, can be seen not so much in an objective 'real world' but rather in this social construct.


What matters is understanding the purpose for which a representation has been created: the hypothesis it is there to test; the evolutionary driver, or rationale, for the senses and what they are there to detect, in which context and for what purpose; or the instruments we create, whose representations we use to build models, which you can then test using the scientific method, ultimately testing these theories with new forms of sensors and measurements. But all of this happens within the social construct of science. In a sense, you cannot step outside the world that humans inhabit into a platonic, God-like objectivity.


Circling back, on this All Hallows' Eve, to the Day of the Dead, which celebrates the dead: those who have passed on before us, on whose knowledge and wisdom our own knowledge and wisdom is based. The shoulders of giants, as they say.


So, I will close by saying: there is no such thing in the world as the sphere; that is our representation, made to make sense of the world. You will never find one in what you might wish to call the 'real' world. You will find apples, and moons, and the Halloween pumpkin, the Jack O'Lantern and the Day of the Dead. And we should celebrate that. Celebrate them, those who have gone before, the ways in which they communicated, and the communities they created. Through understanding that, we can build AI systems that can help us.


Our mission is to understand this world we are in. This magical world. You could say: to do magic, to do science. And to do good in this world.


So, I will say, Happy Halloween. Happy Samhain. Happy Day of the Dead. And let’s look forward to building and engineering effective AI based on this understanding.


(I was going to post this later this evening, but, itching to post it, I decided to post it during my lunch break, to get it out early on Halloween :-)


https://jabeonai.com/what-ai-can-learn-from-the-day-of-the-dead/

Jabe Wilson

Founder & Consultant - JabeWilsonConsulting.com & Founder - HeyTechBro.com Foundation & Chief Ego Officer - JabeOnAI.com


Happy #dayofthedead celebration of Francisco Varela https://mitpress.mit.edu/9780262529365/the-embodied-mind/

Giovanni Nisato

Collaborative innovation management consultant, project manager, community of experts facilitator


“We do not perceive the world we see. We ‘see’ the world we perceive.” Serene Samhain to you Jabe. Hope you keep writing this series next year (or the day after tomorrow) :)
