JabeOnAI Newsletter Summer Edition


Allow me to start with a quote within a quote, from a review of Cormac McCarthy’s bleak Road in The Economist:

"This great novel is an act of hope because it is a warning, a calmly urgent reminder of what we stand to lose. ‘You can read me a story, the boy said. Can’t you Papa? Yes, he said. I can.’"

Let me say that this summer newsletter holds an anti-apocalypse message (not the AI apocalypse, but the opportunity of these tools as a salvation from our own failings of the past). I invite you to read the review of The Road from The Economist at this link (https://www.economist.com/1843/2013/05/17/cormac-mccarthys-bleak-road) and reflect on what we have to lose. As I look around me, I think the only way we can avoid Cormac McCarthy’s Road, or winning William Gibson’s bleakly prophetic “Jackpot”, is through a combination of brave human endeavour and the use of AI tools to advance science to solve the problems that our forebears have bequeathed us.


The risk is human politics and selfishness. Not the fairytales of computer intelligence taking over. Trust me. I have studied this long and hard enough to feel confident in the advice I am giving. But to achieve this, people must gain understanding and that is my mission.


On a lighter note ....


I have been re-reading these books over the summer so you don’t have to ;-) although I would strongly encourage you to …


Summer Reading


Many of the problems with Artificial Intelligence stem from its philosophical underpinnings, and the conclusions that folks draw from them about the nature of representation (see the note on the frame problem to read more on that). A key theme of my work in JabeOnAI is highlighting areas of thinking and research in neuroscience and cognitive science (and philosophy of mind) that help to address some of these issues. This is timely as we go through another cycle of hype and disillusionment (see the latest Gartner 2023 Hype Cycle placing Generative AI at the peak of inflated expectations: https://www.gartner.com/en/articles/what-s-new-in-artificial-intelligence-from-the-2023-gartner-hype-cycle). So, here is a wrap of my Summer posts. As I might say (tongue in cheek) … I hope you have enjoyed the (AI)Summer as we head into (AI)Autumn … (see https://jabeonai.com/california-dreamin/).


Summer Reading from JabeOnAI



As I said back in July, I would be sharing summer reading. Here is a wrap of my Summer posts – I hope you have enjoyed the (AI)Summer as we head into the (AI)Autumn … ;-)


Also, as friends who know me well will attest, my phrase is “I choose to be an optimist”. So let us dive into this edition of the JabeOnAI newsletter.


Contents

In this edition:

  • A Manifesto: How to thrive in an age of ubiquitous AI.
    ◦ Where I talk about two key trends of the last few decades: 4E Cognition and Artificial Neural Networks.

  • A framework for Understanding Ourselves and Artificial Intelligence.
    ◦ In this post I elaborate on the 4E cognition theme in more detail.

  • AI is not hallucinating.
    ◦ In this post I highlight the machine learning theme from a counter-cultural angle, and why hallucination is a feature, not a bug.

  • How the kids can survive and thrive in this AI world.
    ◦ An invocation of the spirit of being responsible and ethical, and perhaps even political.

  • How to escape the frame problem.
    ◦ Going deep into the issue of handling context when engineering AI, and how a way forward can be found.

  • Imagine no information.
    ◦ A little thought-provoking piece: radical pushback on the nature of representation (heresy in the high church of cognitive science …).
    ◦ Quoting from Winograd and Flores, mentioned earlier: “Theoretically, one could describe the operation of a digital computer purely in terms of electrical impulses travelling through a complex network of electrical elements without treating these impulses as symbols for anything.” … I will complete the quote another time and extend the discussion on computers and levels of representation. https://www.abebooks.co.uk/9780201112979/Understanding-Computers-Cognition-New-Foundation-0201112973/plp

  • Conceptual refreshment.
    ◦ Something we all need …


Finally, to answer a common question: Jabe, why the idiosyncratic artwork and the counter-cultural references? In part, this is branding, to make my message stand out from the whirling mass of comment on AI we have today; but there is also a serious point … unusual perspectives can be important … and finally, autobiographically, because it is me and who I am.


More soon, here on Jabe On AI


A Manifesto: How to thrive in an age of ubiquitous AI.




Over the last 30 years, arguably the most influential trend in cognitive science has been the rise of 4E cognition as a theoretical underpinning (at a later date I will explain the Lorenz attractor intertwined with the cartoon Artificial Neural Network in the teaser image). On the other side (not so much based on Cognitive Science, but loosely described as AI), most progress has been made in Machine Learning, which is mostly engineering and heuristics. These two themes have diverged since the late 80s, but both have been influential, and I believe the two can profitably be combined.

Don’t get me wrong, these current AI systems are great, but they are also seriously misunderstood, and their failings are starting to cause doubts. I say AI systems are best understood as an art form, not as a form of true intelligence. They are, I say, “the Poetry of Liquid Sunshine”. That is to say: the software code we write, the features we engineer, the ML architectures we design, the data we curate; what we create, compile, and run is akin to poetry. And, at least currently, these systems are implemented with electrical current (the analogy being to liquid sunshine flowing through the circuits). There is a serious point to this lyrical framing, which is that we are starting to see the current optimism around Generative AI fade, and the challenges our society faces are, I think, too serious to let that happen (we need AI systems to help address the climate emergency, amongst other things).

I feel this is worthy of a Manifesto. We need to understand how to engineer these systems; we need a deeper understanding of what we ourselves are as conscious human beings; and, finally, we need a way forward to being happy, productive, and secure with ubiquitous AI technology all around us.

These three elements, I feel, form the basis of a manifesto for what I am seeking to do with JabeOnAI; and I see a strong connection to what I call practical Cognitive Science as a way to guide us on this journey. So, to repeat, the three themes are:

  1. Creating effective AI systems (we need to understand how to engineer these systems). As a quick review, current AI systems face a bunch of challenges, such as:

       • The realisation that LLMs don’t reason: https://arxiv.org/abs/2205.11502
       • An inability to reliably interface with external tools: https://arxiv.org/abs/2308.05713
       • Instability over time (making them difficult to engineer into larger systems): https://arxiv.org/pdf/2307.09009.pdf
       • The implications of adversarial attacks: https://arxiv.org/abs/2307.15043

And my talk at BioIT a few years back, which I intend to elaborate on in more detail:

https://www.elsevier.com/connect/why-data-science-is-an-art-and-how-to-support-the-people-who-do-it

  2. Understanding ourselves (to have a deeper understanding of what we ourselves are as conscious human beings).

A couple of examples of applying neuroscience and cognitive science insights to understanding ourselves that are worth checking out are:

  3. Living with ubiquitous AI (a way forward to being happy, productive, and secure with ubiquitous AI technology all around us).

There seems to be a growing industry of advice for young people about how to prepare for the world of ubiquitous AI we find ourselves in, and its near-future implications; but I think we can do a better job than what I have seen so far. A recent example can be seen here: ‘Be flexible, imaginative and brave’: experts give career advice for an AI world: https://www.theguardian.com/technology/2023/aug/29/experts-give-career-advice-ai-world-artificial-intelligence-work?CMP=Share_iOSApp_Other

More soon, from JabeOnAI …


A framework for Understanding Ourselves and Artificial Intelligence




A framework for Understanding Ourselves and Artificial Intelligence. Or, as I say later: understanding ourselves; understanding how to engineer useful AI systems; and getting a view on how we can best live in a world with these AI systems ubiquitously around us.

Some personal context here

I want to share a perspective: when I studied Artificial Intelligence, I took for granted, or rather was open to, the basic ideas which were to become the basis of the field of 4E Cognition, due to a number of factors in my background to that point (which I will elaborate on separately). But you can get a great taste of what my experience at Sussex University, at the School of Cognitive Studies, was like, because it is captured well by this anthropological study (https://anthrobase.com/Txt/R/Risan_L_05.htm). I digress.

Another point to make is that the current approach to AI had come out of an AI winter just a few months before I took up my studies at Sussex in 1989 (see the groundbreaking book by Rumelhart and McClelland: https://mitpress.mit.edu/9780262680530/parallel-distributed-processing/). I rocked up there at 18 years of age and just took it for granted that we were programming artificial neural networks; sure, that’s just what you do at University. I had no real conception at the time of just how unusual and new this was.

A counter-intuitive start …

Let’s set the scene with a couple of counter-intuitive quotes:

“How does the biological wetware of the brain give rise to our experience: the sight of emerald green, the taste of cinnamon, the smell of wet soil? What if I told you that the world around you, with its rich colours, textures, sounds, and scents is an illusion, a show put on for you by your brain? If you could perceive reality as it really is, you would be shocked by its colourless, odourless, tasteless silence. Outside your brain, there is just energy and matter. Over millions of years of evolution, the human brain has become adept at turning this energy and matter into a rich sensory experience of being in the world. How?”

The Brain: The Story of You, by David Eagleman.

… and to be honest, ‘energy’ and ‘matter’ are just ideas about what this substrate we exist in might be. We only have second-hand, mediated access to anything, so the ideas of ‘energy’ and ‘matter’ are really only guesses at what is “out there”, and indeed “within us”. As those who are fond of telling us that the ‘atoms’ that make up our bodies are billions of years old would acknowledge, we are made up of the “out there”, and really there is no boundary between what is us and what is the outside …

Another more poetic quote:

“Light, shadows and colours do not exist in the world around us”.

Presentation speech by Professor C. G. Bernard, Member of the Nobel Committee for Physiology or Medicine. Quoted in “Sensational: A New Story of our Senses”, by Ashley Ward.

Engineering AI systems

To paraphrase the prevailing view around AI right now:

“So, why should I care about philosophy of mind? Or this esoteric cognitive science? I just want to build some great AI systems. Just use dictionaries, triangulate it all together, and then get the biggest possible neural network (“a big artificial brain”) to process all of that data, just like ChatGPT or GPT-4. So why, why isn’t that going to work?”

So, the friction is this: you find you can’t combine all of this data. Those people who hold the particular perspective that science and mathematics simply define base reality come unstuck, and they don’t really know why. They expend huge amounts of effort trying to create master dictionaries or ontologies; they don’t understand why the world is so messy; they think it just needs a little bit more tidying up. And that is a kind of category error, a kind of deep misunderstanding of our relationship to the world.

So, why might dictionaries not map? Because people are lazy? That seems to be the assumption of some. Well, if you dig and delve into philosophy of mind and cognitive science, you come across what I alluded to early on in these blog posts: the map is not the territory. You might think, well, sure, it is just a simplification; but the reality is slightly more nuanced than that. In some respects there is, if you want to put it as a polemic, no base reality as such onto which all these things are constructed; there is no firm foundation, in reality.

Now, this is the point that most people say, that is crazy, what are you talking about?

Well, this is why the area of 4E cognition is so important in understanding the challenge we face. A lot of talk in this area centres on the notion of the ‘brain in a box (the skull)’. The brain, the human brain that you and everyone you know have, has no direct access to the world outside, such as the taste of a glass of red wine, or its colour; all of these things are constructions. It gets weirder than that, because your sense of self is also a construction.

The way in which representations are created is shaped, evolutionarily, by the need of humans to collaborate in large groups to hunt and feed themselves. Representations are shaped into a language, and to a great extent the sense of self is used for communication about your motivations, as to why you are doing something, within what is called a community of practice, which is a fancy way of saying a bunch of people trying to do a specific thing together. In order to do that thing together, they need to communicate, in language.

Now, language is used to give a sense of what individuals are experiencing uniquely as qualia (so called), but put into a common reference point. This common reference point is completely constructed, and the reason it seems such a firm foundation to us as humans is because that is how we live our lives: everything we do is embedded in a context of objects as common reference points, through our experience. We approach these as infants and map them onto our experiences of the world, from sensory input partly given out by other humans; we capture the sounds and pressure waves, converted to electrical signals that our brains interpret, and bring these together as common representations. But these representations are very much associated with our embedded and embodied nature, in our physical bodies, embedded in the cultural context of how we grow; they are enacted, and the way we comprehend the world is dynamic. You learn these things: you learn what a cup is by picking it up and trying to drink from it. These representations (i.e. spoken words as pressure waves, and later as visual patterns) are dynamic processes mapping onto our lived experiences and our observations of other humans interacting in similar ways, uttering sounds we come to associate with those common shared experiences.

And the complex thinking, if you want to call it that, which we do as we grow, is extended by objects in our environment, symbol systems, tools, all of these things.

So, what I am trying to do is touch on why representations need to be understood in the context of tasks being done, and of communities of practice. With our global culture, certain things are held in common and seem universal; and in science (a system I am proud to be a part of), scientific communities represent communities of practice. The field of cross-disciplinary semantics is trying to address this, and there is ontology mapping; but it comes down to this: people have an intuition, based on day-to-day experience, that the representations around us are grounded in what you might call the ‘real world’ of cups and clouds, aeroplanes and sticks. That is, to a very great extent, an illusion. When you truly dig down, ‘base reality’ is constructed. Sure, there must ultimately be a substrate with which we are interacting, but the only way we can access it, through senses or indeed scientific instruments and measurements, is mediated by the tools we construct to measure. And if you look at areas such as quantum physics, what you start to find is that our day-to-day intuitions of common-sense reality just do not match, in any way, what our more advanced instruments are finding.

So this is a roundabout way of saying: from an engineering perspective, it is much better to take the radical assumption that there is no base reality, as an engineering principle (and perhaps indeed as a day-to-day perspective). The only foundations we have, for semantics and for data generated from instruments, are constructed ones. And the only way to understand a constructed foundation is to look at the context in which those measurements, those understandings, those ontologies are constructed. They do not map, because purposes are different: the ontology for one area is very different to another, and ultimately they may just never map until you bridge them with a whole bunch of other disciplines (intersecting tasks). So, if you are into cooking, what is the bridge to quantum physics, in terms perhaps of understanding the physics of how an egg binds a cake? You have to find a path; it doesn’t necessarily follow that things will map, and it is a lot of work.
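To make that engineering point concrete, here is a minimal sketch of why naive label-matching between two ontologies built for different purposes goes wrong. All the terms and categories below are invented for illustration: the shared labels carry different meanings because each ontology answers a different question, and each has terms the other simply has no slot for.

```python
# Two toy "ontologies" for the same kitchen ingredients, built by
# different communities of practice. All terms are invented examples.
cooking = {
    "egg": "binder",           # what an egg *does* in a cake
    "flour": "structure",
    "sugar": "sweetener",
}
food_chemistry = {
    "egg": "protein colloid",  # what an egg *is* to a chemist
    "flour": "starch and gluten",
    "lecithin": "emulsifier",
}

# A naive merge keys on the shared labels...
shared = sorted(cooking.keys() & food_chemistry.keys())
print("shared labels:", shared)

# ...but the shared labels disagree on meaning, because each
# ontology was built for a different task context.
for label in shared:
    print(label, "->", cooking[label], "vs", food_chemistry[label])

# And each side has terms the other has no slot for at all.
print("only in cooking:", sorted(cooking.keys() - food_chemistry.keys()))
print("only in chemistry:", sorted(food_chemistry.keys() - cooking.keys()))
```

The point of the toy is not the ingredients: it is that the mismatch is structural, not a tidiness problem, so "just clean up the dictionaries" cannot fix it.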

So getting back to the position: I just want to build AI systems on common sense – no nonsense base reality – you are gonna fail. Let’s put it plainly, you are just gonna fail.

You are not sure where and when, but I have seen over the years many people with huge frustrations building semantic systems, howls of rage: why is my agile team failing?

And basically, this is the reason. And to overcome that from an engineering point of view, you have to get your hands dirty with philosophy of mind, and 4E cognition, or at least the heuristic approaches that are based on them.

And I contend the same is probably true of finding a way to live in this new world of ubiquitous AI computing systems in our day to day lives.


What has this got to do with daily life?

Taking this on from another perspective: less the engineering view, and more about where our common-sense view of “the world” comes from.

So the picture you might want to start from is the concept of autopoiesis (https://en.wikipedia.org/wiki/Autopoiesis ).

We each “reach an accommodation” with the outside and inside worlds, the outside world also consisting of other humans and existing culture. We come to associate sense responses with bodily actions, reflected in sound waves or visual symbols that we come to associate with these.

The argument is that we do the same with our interoception, and that there is a process of introspection, evolved for cooperation and collaboration; but that should not be confused with true knowledge. You can think of it as an organ: not “the brain”, but some processes within it focused on collaboration.

Taken together, our senses, our sense of self, and the labels we attach to experiences make up our sense of the world. Let’s call it the content of our consciousness (other animals are surely conscious, but have other content).

This has been described as a hallucination. We see it as base reality, common sense; but really it is a fabrication, and we have no real certainty that what we experience is the same as what someone else experiences. We can be sure that other creatures’ qualia ARE different.

Surely it is just base reality? Well, your sense of what it is depends on what you measure. Surely there is what you might call a substrate; but we surely don’t know what it is. And our sense of it (excuse the pun) depends on what we measure, and for what purpose.

If there is a gist to this book (I am writing), it is that this matters, and can be useful in understanding ourselves, in engineering useful AI systems, and in living as best we can in a world with these systems ubiquitously around us.

Most of this book will be focused on that last point, as opposed to my day job, which focuses to a great extent on the question of engineering AI systems.

You can turn this lens of understanding on our creations. And there is a long tradition of this way of thinking in Buddhism (and in its Westernised strains as they have interacted with phenomenology).

More on all this soon … from Jabe On AI


AI is not Hallucinating





There has been a great deal of talk about AI systems hallucinating. It is part of an anthropomorphism narrative that obscures and distorts what could be a valuable debate about opportunities for all of us to do good things in the world with advances in technology.

My personal fear is that, just as in the 1990s, when the emergence of the internet was seen as advancing great possibilities, things will soon descend into predatory business models, and that delightful phrase you can search up: the “enshittification” of the internet (thanks to Cory Doctorow for that great insight).

I would advance another idea here. Every century takes a few years to find its character. The 20th Century probably didn’t get started until the roaring twenties had been soured by the banking crash, leading to the rise of Nazism and industrialised war. It seems to me that the 21st Century started on the 10th July 2023, after the warmest week in recent history; with war raging, the social death of the pandemic still echoing, and angst about AI now being fully able to eat many folks’ livelihoods. Welcome, finally, to the new century. We are not passengers, or an audience, but participants. As Muhammad Ali once said, “don’t count the days, make the days count”!

This article is another step towards the book I am writing, which I hope will give people some conceptual tools to make their days count in this early dawn of the true 21st Century. Happy Friday.

Coming up

#AI as actionable insights for yourself and your business under #JabeOnAiDoubleEspresso; #CognitiveScience deliberations and thought-provoking further reading under #JabeOnAiArabicaCappuccino; and valuable alternative perspectives on #artificialintelligence and learnings from counter-cultural sources under #JabeOnAiEspressoMartini.

You know you need it … so much dross being talked about #AI each and every day now!

Discerning reader, you are blessed with good fortune … please ensure you put it to good use!


Conceptual Refreshment


Double Espresso


Key points:


  • Hallucination is an anthropomorphism.
  • Machine learning is giving a probability of a pattern being a useful response to a query. Getting it wrong is a feature, not a bug; and certainly nothing like human hallucination is going on.
  • Ironically, the concept of hallucination does give incredibly useful insights of how human consciousness, perception and reasoning works – more on this later.
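The second point above can be sketched in a few lines. This is a toy, not any real model: the prompt, tokens, and scores below are invented for illustration. A generative decoder turns scores into a probability distribution and samples from it; drawing a plausible-but-wrong continuation is the sampler doing exactly what it was designed to do.

```python
import math
import random

def softmax(logits):
    # Convert raw scores into a probability distribution over tokens.
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

# Hypothetical next-token scores after a prompt like
# "The capital of Australia is" (invented numbers).
logits = {"Canberra": 4.0, "Sydney": 2.5, "Melbourne": 1.0}
probs = softmax(logits)

# Sample 1000 continuations. The sampler sometimes emits "Sydney":
# a plausible-but-wrong answer is a valid draw from the distribution,
# not a malfunction, and certainly not a hallucination.
random.seed(0)
draws = random.choices(list(probs), weights=probs.values(), k=1000)
print({tok: draws.count(tok) for tok in probs})
```

The design choice worth noticing is that nothing in the mechanism distinguishes "right" from "wrong" tokens; it only knows probabilities, which is why "hallucination" is a misleading label for ordinary sampling behaviour.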


Arabica Cappuccino


A discursive journey through some of the ideas and debate in this area.

I want to “meditate a little” on the anthropomorphisation going on right now.

From a couple of angles:


  1. From the perspective of how wrong it is to describe machine learning systems as in any way ‘thinking’, having ‘ideas’, or ‘understanding’; it is wrong, and it is done with a certain agenda in mind (which some critics have described as gaslighting):



  • To protect investment
  • To hoodwink
  • And just plain lazy thinking
  • But these systems really are not thinking; they are tools that optimise functions describing patterns in data of one kind or another.


2. The other perspective is that many folks don’t have a good understanding of who or what we are. The use of the term hallucination is interesting from that perspective, because it holds clues to just what we are, and to the strong negative biases around thinking in certain ways about what and who we are.


  • Let’s unpack that a bit … hallucination has a bad reputation for obvious reasons: in most cases, if it is not your priests or shamans doing the hallucinating, then folks are not going to be the most productive in society;
  • But it is a wound worth picking at a bit, because it turns out that our well evolved sense of the world we consciously experience is not all it might seem;
  • If you have ever been interested in actually reading any of the literature on consciousness, you will have stumbled upon the domains of phenomenology and Westernised Buddhism (or even the original forms). What have they to say? Well, one of the areas they bring into focus is the challenging notion of the self that is experiencing the “outside” world;
  • If you are sceptical of some of the ideas thrown up in this literature, I have bad news for you: they are well supported by the latest studies in neuroscience.
  • So what is this ‘bad news’??
  • One part may be that, if you read the work of Daniel Dennett, there is the idea, roughly speaking, that there is a part of the brain (or brain-body, to be more correct) responsible for generating actions, and another responsible for interpreting actions (and explaining them to others through language, and indeed back to ourselves);
  • What is so bad about that, you ask? Well, it turns out that these self-interpretations are very unreliable and offer no superior insight compared to an outside observer; and studies of certain brain injuries show that different interpretations tend to be generated by different parts of the brain (left and right);
  • So it suggests that our insights about ourselves are something of a fabrication … some say equivalent to a hallucination: a plausible fabrication. Other evidence suggests a bias for explanations that are efficient to manufacture, i.e. the simpler the model the better, which further suggests they are often wrong … anyone trying to explain their behaviour late at night to a spouse in a failing relationship will be familiar with how floundering these self-descriptions can turn out to be;
  • Thanks Jabe …
  • There is more; what of the outside world? The hand I see so clearly before me …
  • Erm, you might want to stop reading here.
  • It goes like this: the brain has no direct access to anything; everything is mediated. All it receives are chemical-electrical signals down nerve pathways from a plethora of organs; and those organs don’t have ‘direct’ access to anything either, so much as you could say the organs detect pressure waves, or electromagnetic radiation; and experience is all constructed … that dreaded word … a hallucination is created.
  • There is no such thing as the colour red; nor the taste of Coca-Cola, the smell of jasmine, the warmth of the sun, the sweet sound of the birds singing; those things just do not exist.
  • Honestly, believe me, I have read loads on this … and come to the difficult belief that folks who have spent even longer than me thinking about this, and writing about it, are not wrong … there is much literature and debate you are welcome to dive into, but I am giving you a well-considered precis, anyhow I digress …?
  • What do you mean there is no such thing as the taste of Budweiser, I just had one!!! Well, there are chemical-electrical signals with a pattern that repeats, and your brain constructs a signature ‘experience’ for you; and you can ‘describe’ this to someone else using language. Philosophers call the ‘quality’ of experiences ‘qualia’, and the great thought experiment you can play is: do you experience what we both call red as the same experience, the same qualia? OK, over a couple of beers or three (Budweiser … the European one … Budweiser Budvar), you are convinced you do. Well, get this: we have three types of colour-sensitive cone (red, green, and blue, from which all colours on a TV set are constructed), but birds have four … so they definitely don’t experience red the same way as you; nor do bees, because they see patterns on flowers we can’t see; and octopuses can see the polarisation of light, which we can’t see …
  • So what? Well, the argument here is that all experience is … well, a hallucination …
  • Have you ever had a dream? I hope so, and experienced a room, a shop, a dog, a pony, a lemon, a beer glass … all seemed quite real. Well, we tend to call them hallucinations when they don’t usefully align with external objects in the world we can interact with; but whilst sleeping, that is OK;
  • Whilst awake, less so. But let me take you on a counter-cultural diversion, if you have never strayed far from the path. There are these things folks ingest (mushrooms, mouldy bread and other such things) that induce perceptions indistinguishable from ‘real, common or garden perceptions’, but of things that are not there; or, if you are a neuroscientist, things your brain is generating sense experiences of, fully expecting that there is a correlate, but … there isn’t;
  • Hmm, inconvenient … the trouble is, there is other literature that questions whether there is a ‘real world out there’ at all; and further literature that questions if there is a world in here, that is, if there is a ‘you’. That, my friend, is called Buddhism (Western or otherwise); and dear me … there is quite a bit of Western philosophy, and indeed neuroscience, that provides backup when it comes to a philosophical fist fight on the subject … damn … Jabe, help me out here, this is getting weird, and I just came here to understand a little more about predictive algorithms, not question my sanity and world view. Sincere apologies …
  • But you can see why I get a bit irritated when the ‘Tech Bros’ tell me that their technological toys are hallucinating, when they are behaving exactly as they should … these are features, not bugs … and these same people would disparage the notion that everyday perception is a hallucination; the frightening thing is … they are wrong on both counts.
  • Fear not, #JabeOnAI is here to help you get to the end of the rainbow; click your heels twice …


Is there any value to all this? Well, ultimately, yes: if you want to get ahead in this world, then truth is always helpful, ultimately.

Where this gets interesting is in understanding where what we say to each other comes from, and which bits you can trust.

Ultimately, if you have a bunch of folks all trying to achieve something, and they have developed language which appears to reflect regularities in their sense experiences of this ‘mythical’ real world, then that is just about as good as it gets, unless you really are a solipsist … and these folks are what those writing the literature call communities of practice. Now, if you were to do something fancy, like take all this chatter from a community of practice and encode it as vectors in a vector space, you might just find the vectors only align well if these are aligned communities of practice: baking pizzas, say, rather than measuring quarks in Switzerland; you say tomato and I say tomayto … and all that.
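That vector idea can be sketched with toy "usage profiles" (every word, context dimension, and count below is invented for illustration): give each community a vector counting the contexts in which it uses the same word, and compare with cosine similarity. Usage aligns within a community of practice and diverges across them.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy co-occurrence counts for the word "dough" along invented
# context dimensions: (knead, oven, money, invest).
pizza_shop_a = [9, 8, 1, 0]   # "dough" as bread dough
pizza_shop_b = [8, 9, 0, 1]   # another pizzeria: aligned usage
trading_desk = [0, 1, 9, 8]   # "dough" as slang for money

print(cosine(pizza_shop_a, pizza_shop_b))  # high: same community of practice
print(cosine(pizza_shop_a, trading_desk))  # low: different practice, same label
```

Real embedding models do something far richer, of course; the sketch only shows why shared labels alone don't guarantee shared meaning across communities.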

Why would that be useful? Well, if you want to build a predictive model, that alignment is exactly what you are relying on.

And the rub is that there really isn’t a common, fully objective ground truth you can dig down to … and it all gets a bit post-modern … but hey, you did want to get all ontological and ask about consciousness, and meaning, and artificial intelligence. Did you really think this was going to be a pedestrian and straightforward journey? Sorry to disappoint.

The funny thing is I have seen decades of this cognitive dissonance, where folks are surprised when their ontologies don’t match or map. I am always amused, and take the perspective: well, from my viewpoint I would be more surprised if they did – and I often get blank, uncomprehending looks … but that is another story.
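For the technically curious, the vector-space intuition above can be sketched in a few lines of Python. This is a toy illustration of my own (a crude bag-of-words encoding and cosine similarity), not any particular embedding model, and the “chatter” strings are invented:

```python
import math
from collections import Counter

def text_to_vector(text):
    """Crude bag-of-words encoding: word -> count."""
    return Counter(text.lower().split())

def cosine_similarity(v1, v2):
    """Cosine of the angle between two sparse count vectors."""
    dot = sum(v1[w] * v2[w] for w in set(v1) & set(v2))
    norm1 = math.sqrt(sum(c * c for c in v1.values()))
    norm2 = math.sqrt(sum(c * c for c in v2.values()))
    if norm1 == 0 or norm2 == 0:
        return 0.0
    return dot / (norm1 * norm2)

# Invented "chatter" from two communities of practice
pizza_a = text_to_vector("knead the dough proof the dough stretch the base add tomato and mozzarella")
pizza_b = text_to_vector("stretch the dough add tomato sauce and mozzarella then bake the base")
quarks = text_to_vector("calibrate the detector measure the quark decay channel at the collider")

# Aligned communities share vocabulary, so their vectors align well ...
print(cosine_similarity(pizza_a, pizza_b))
# ... while pizza talk and quark talk barely overlap
print(cosine_similarity(pizza_a, quarks))
```

Real systems use learned embeddings rather than raw word counts, but the moral is the same: the vectors only line up when the communities’ language reflects shared practice.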


Espresso Martini


Exploring some counter-cultural references, in particular the 1950s and 1960s scientific and cultural experiments with LSD, and adjacent areas such as dance culture, which also relate to how communities of practice generate culture, language and ways of thinking and interacting with the world.

I want to help bring something to Generation Z that I had. Something I mentioned in relation to the yellow heaven soup kitchen.

I had been going to the Shoom club for a while, and it must have been in March 1988 or thereabouts – I had missed a few weeks getting obsessed about exams. I was smoking a Marlboro Red (in a club, remember that) and swigging on a can of full-fat Coca-Cola. And a track came on, by Todd Terry. You can “search it up”: it starts with a siren, like a rush (‘A Day in the Life’), and all the folks around me kinda moved in unison. It became known as windmill dancing (or ‘big box, little box’) and was iconic of acid house.

Seeing something cultural first hand, for the first time, that later becomes ubiquitous through media is quite an uplifting experience. I thought, what is this! OMG. A shiver went through me – this wasn’t just a great club night. This was a thing. SOMETHING WAS HAPPENING ALL AROUND ME. Ineffable at the time. But something new. Something ours. Not belonging to Thatcher’s yuppies, or the 1960s boomers, or the punk rockers. I was 17 years old, and this belonged to me and my friends. It was the birth of what became known as ‘Rave Music Culture’. The revolution was not being televised; we were living it, live and direct. It was ours.

And that moment – of ownership of possibilities – has lived on with me every moment ever since. Culture. Human connection. Being part of a community of practice: visceral, raw, counter-cultural, not approved by the authorities. We believed. We knew we could change the world. That never went away. I would like to pass that on …

So what is your point, Mr Generation X bore?

My point is: let’s avoid what Guy Debord called the society of the spectacle – second-hand, mediated. Let’s celebrate our humanity. The possibility. Celebrate the potential that Timothy Leary celebrated in the PC back in 1994; what hypertext could be; what these tools, subservient to us, can be. Let’s fight to save our planet, and let’s have some fun!

To misquote Eminem: I am sick of you little tech bros with all your scheming. You just annoy me, and I have been sent here to destroy you!


How the kids can survive and thrive in this AI world




How can our kids survive in this world we now see as saturated with ubiquitous AI?

We are under assault from a bunch of evil magicians, and I am not talking about the world of Harry Potter. I see them as practitioners of Surveillance Information Technology Hegemony – see what I did there? I call them S.I.T.H. lords (or just Tech Bros if you like); read more here: SurveillanceInformationTechnologyHegemony.com

Really, this is all about predatory business models; they make exaggerated claims that amount to magic, and take Arthur C. Clarke’s name in vain. Search up his 1962 book “Profiles of the Future: An Inquiry into the Limits of the Possible”, with his famous Three Laws, of which the third is the best known and most widely cited: “Any sufficiently advanced technology is indistinguishable from magic.”

But this isn’t magic; it is a conjuring trick of distraction, or more correctly, misdirection:

https://en.wikipedia.org/wiki/Misdirection_(magic)

And I quote: “In theatrical magic, misdirection is a form of deception in which the performer draws audience attention to one thing to distract it from another.”

Better yet, how can they not just survive but thrive in this world of ubiquitous AI?

In order to achieve the misdirection, you get folks to ask the wrong question. The right question, to quote Matt Johnson, is: “If you can’t change yourself … change the world”. Are you with me in changing the world?


“Lonely Planet” Matt Johnson, as “The The”


Planet Earth is slowing down


Overseas, underground

Wherever you look around

Lord, take me by the hand

lead me through these desert sands

To the shores of a promised land.


You make me start when you look into my heart

And see me for who I really am.


If you can’t change the world. Change yourself.

If you can’t change the world. Change yourself.


I didn’t care if the sun didn’t shine

& the rain didn’t fall from the sky

I just cared about myself

From this world to the next

And from the next back to this.

By our actions we are bound.

We’re running out of love

running out of hate

running out of space for the human race.

Planet Earth is slowing down.


You make me cry when you look into my eyes.

And see me for who I really am.


If you can’t change the world. Change yourself.

If you can’t change the world. Change yourself.

And if you can’t change yourself…change the world.


I’m in love with the planet I’m standing on

I can’t stop

I can’t stop thinking of

All the people I’ve ever loved

All the people I have lost

All the people I’ll never know

All the feelings I’ve never shown.

The world’s too big. And life’s too short.

To be alone…To be alone.


Let’s not be alone; you are not alone, we are in this together. Come and join me: jabeonai.com

I am writing a book, and thinking of self-publishing a rough draft in time for Christmas – The Scrapbook of Liquid Sunshine (a rough draft for the Poetry of Liquid Sunshine) … scrapbookofliquidsunshine.com

More on the broader challenge, a club which AI seems to be joining:

https://www.theguardian.com/books/2019/oct/04/shoshana-zuboff-surveillance-capitalism-assault-human-automomy-digital-privacy


Escaping from the AI framing problem




One of the problems you face when engineering AI systems is determining what information is relevant for the task in hand, whether your system is an Artificial Neural Network (what most folks simply call Machine Learning these days) or a logic-based AI system such as a Knowledge Graph. In a Neural Network, the challenge is to define which features are relevant; in the case of reinforcement learning, you still need to determine what the inputs are, such as which sensors will be used; and in the case of a Knowledge Graph, you need to determine which ontology (or dictionary) will be used. As it turns out, this is a far from trivial problem. It was first identified in the 1950s, when logic-based AI systems were receiving most attention, and it is also a problem for what were called ‘connectionist’ systems, or Artificial Neural Networks. It is called the Frame (or Framing) Problem: in essence, how do you put a frame around what is relevant, leaving out what is not? And it turns out to be a deep epistemological problem. You can read more about this here: https://plato.stanford.edu/entries/frame-problem/


Let us take an example to show how this works in practice – and I absolutely love this example, from a legal context this time.

How to Do Things with Contexts: “Is There Anything in the Oven?” by Samuel Bray, Notre Dame Law School: https://reason.com/volokh/2021/07/28/how-to-do-things-with-contexts/ He describes a conversation with a five-year-old child in which the simple question “is there anything in the oven?” will be answered in opposite ways depending on the context. I encourage you to read the article. Also worth a read: he goes on to make the connection to work on the interpretation of legal statutes in his longer article on ‘the mischief rule’: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3452037
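As a playful sketch of the engineering point: the same state of the world yields opposite answers depending on the frame of relevance you adopt. The frames and their relevance rules below are my own invention for illustration, not Bray’s:

```python
# Toy model of the oven question: one world state, two answers.
FIXTURES = {"empty baking tray", "oven rack"}

def anything_in_the_oven(contents, frame):
    """Answer 'is there anything in the oven?' relative to a frame of relevance."""
    if frame == "cook":
        # For a cook, trays and racks count as 'nothing'.
        relevant = [item for item in contents if item not in FIXTURES]
    elif frame == "hide_and_seek":
        # For a child checking hiding places, any object counts.
        relevant = list(contents)
    else:
        raise ValueError(f"unknown frame: {frame}")
    return len(relevant) > 0

oven = ["empty baking tray", "oven rack"]
print(anything_in_the_oven(oven, "cook"))           # False
print(anything_in_the_oven(oven, "hide_and_seek"))  # True
```

The engineering point is that the relevance predicate is not in the data: it has to be chosen, and that choice is the frame.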


If you look at the frame problem today, as described in the Stanford summary (a rather extended quote below), you will see that a way forward is proposed by the late, great Hubert Dreyfus, and cashed out in practical terms by Michael Wheeler:


“Although it can be argued that it arises even in a connectionist setting (Haselager & Van Rappard 1998; Samuels 2010), the frame problem inherits much of its philosophical significance from the classical assumption of the explanatory value of computation over representations, an assumption that has been under vigorous attack for some time (Clark 1997; Wheeler 2005). Despite this, many philosophers of mind, in the company of Fodor and Pylyshyn, still subscribe to the view that human mental processes consist chiefly of inferences over a set of propositions, and that those inferences are carried out by some form of computation. To such philosophers, the epistemological frame problem and its computational counterpart remain a genuine threat.”


Now many folks have said that you just don’t need all this philosophy:


“I have heard expressed many versions of the propositions . . . that philosophy is a matter of mere thinking whereas technology is a matter of real doing, and that philosophy consequently can be understood only as deficient.”


Philip E. Agre, Computation and Human Experience (Cambridge: Cambridge University Press, 1997), 239.


But as Dreyfus (2008) has pointed out, those very same classical AI technologists rest their work on a great number of philosophical foundations that they just take as ‘common sense’, but which in fact amount to taking a position. He describes his time in the 1960s, in the heyday of classical AI, in discussion with folks from the MIT Artificial Intelligence Laboratory (such as Marvin Minsky) and the RAND Corporation (Allen Newell and Herbert Simon), where folks were saying: we don’t need philosophy, we can do in a few years what you philosophers failed to do in thousands of years, and solve the mysteries of cognition and intelligence. Hubert Dreyfus then lists the philosophical foundations of their work, which I summarise here:


  • De Corpore, Hobbes – the view that reasoning is computation – https://en.wikipedia.org/wiki/De_Corpore

  • René Descartes – the “evil demon” and his mental representations – https://www.philosophybasics.com/philosophers_descartes.html

  • Leibniz’s systematic character of all knowledge and plans for a universal symbolism, a Characteristica Universalis – https://en.wikipedia.org/wiki/Characteristica_universalis

  • Kant’s view that understanding must provide the concepts, which are rules for identifying what is common or universal in different representations – https://en.wikipedia.org/wiki/Immanuel_Kant

  • Frege’s formalization of such rules – https://plato.stanford.edu/entries/frege-theorem/

  • And finally, Russell’s postulation of logical atoms as the building blocks of reality – https://plato.stanford.edu/entries/logical-atomism/


There is a set of alternative philosophical positions:


Returning to Stanford University’s summary of the frame problem once more, I quote:


“Dreyfus claims that this “extreme version of the frame problem” is no less a consequence of the Cartesian assumptions of classical AI and cognitive science than its less demanding relatives (Dreyfus 2008, 361). He advances the view that a suitably Heideggerian account of mind is the basis for dissolving the frame problem here too, and that our “background familiarity with how things in the world behave” is sufficient, in such cases, to allow us to “step back and figure out what is relevant and how”. Dreyfus doesn't explain how, given the holistic, open-ended, context-sensitive character of relevance, this figuring-out is achieved. But Wheeler, from a similarly Heideggerian position, claims that the way to address the “inter-context” frame problem, as he calls it, is with a dynamical system in which “the causal contribution of each systemic component partially determines, and is partially determined by, the causal contributions of large numbers of other systemic components” (Wheeler 2008, 341).”
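Wheeler’s phrase about mutual causal determination can be illustrated with a toy coupled dynamical system, where each component’s next state depends on every other component’s current state. The update rule and coupling weight here are purely illustrative, my own sketch rather than Wheeler’s model:

```python
import math

def step(state, coupling=0.05):
    """One synchronous update: every component influences every other one."""
    n = len(state)
    return [
        math.tanh(state[i] + coupling * sum(state[j] for j in range(n) if j != i))
        for i in range(n)
    ]

state = [0.1, -0.2, 0.3]
for _ in range(10):
    state = step(state)

# After a few steps, each component's value reflects the history of
# all the others: no component's trajectory is determined in isolation.
print(state)
```

The point of the sketch is structural: in such a system you cannot carve off one component and ask what it computes on its own, which is exactly why Wheeler offers it as a response to the “inter-context” frame problem.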


See the following for further references:


  • Maurice Merleau-Ponty: https://plato.stanford.edu/entries/merleau-ponty/

  • Martin Heidegger: https://plato.stanford.edu/entries/heidegger/


Also, work in biology and neuroscience, to name just two:


  • The biology of Humberto Maturana: https://en.wikipedia.org/wiki/Humberto_Maturana

  • The neuroscience (neurodynamics) of Walter Freeman: https://www.sciencedirect.com/science/article/pii/S0079612306650280


If you really want to dig into this issue, I would strongly recommend you purchase and read Michael Wheeler’s 2005 book: https://mitpress.mit.edu/9780262731829/reconstructing-the-cognitive-world/

Not just because it presents a clear way forward for escaping the frame problem in cognitive science, but also because he was my flatmate in the early 1990s, and we shared a great number of late-night philosophical discussions, which will form the basis of ongoing posts on #JabeOnAI as well as a foundational part of my forthcoming books: the Poetry of Liquid Sunshine (and its first rough draft, the Scrapbook of Liquid Sunshine).


Imagine no information




Imagine, to misquote John Lennon: no information. And no computation too! It’s easy if you try … (So how can I build an AI?)

Let me start Monday with a little bit of gentle heresy. You won’t get this from the usual fan-boy and fan-girl posts on AI, or even from the detractors. But that, discerning reader, is why you subscribe to #JabeOnAI.

(Imagine) There is no such thing as information. Or computation. Like the engineering concept of aerodynamic lift, each is a convenient concept, but has no objective existence as a foundational aspect of the world; it merely summarises other causal patterns.

Which is a radical departure, as some hold computation to be foundational to existence; and those who don’t go quite so far have a rather lazy way of considering the ontological status of information and computation, one that allows a whole bunch of issues to slip through the cracks. Let’s not be so ‘slack’.

“Imagine No Information” – apologies to the late, great John Lennon (https://www.johnlennon.com)

Imagine there’s no information

It’s easy if you try

No A.I. below us

Above us only sky

Imagine all the people

Living for today… Aha-ah…


Imagine there’s no computation

It isn’t hard to do

Nothing to kill or die for

And no data, too

Imagine all the people

Living life in peace…


You…

You may say I’m a dreamer

But I’m not the only one

I hope someday you’ll join us

And the world will be as one


Imagine no algorithms

I wonder if you can

No need for greed or hunger

A brother(sister)hood of (wo)man

Imagine all the people

Sharing all the world…


You…

You may say I’m a dreamer

But I’m not the only one

I hope someday you’ll join us

And the world will live as one (or zero)


Beware the “#humblebrag”: this is a placeholder. I have been super busy with the day job, and some cool things are coming, but more on that separately; I will write more on this topic as a full article in due course. In the meantime, if you want some inspiration, you can “search up” Andrew Smart’s book “Beyond Zero and One”, which has a great take on this subject. https://www.orbooks.com/catalog/beyond-zero-and-one-by-andrew-smart/


Quoting from Winograd and Flores, mentioned earlier: “Theoretically, one could describe the operation of a digital computer purely in terms of electrical impulses travelling through a complex network of electrical elements without treating these impulses as symbols for anything.” … I will complete the quote another time and extend the discussion on computers and levels of representation. https://www.abebooks.co.uk/9780201112979/Understanding-Computers-Cognition-New-Foundation-0201112973/plp



Conceptual Refreshment … from #JabeOnAI





You know you need it … so much dross being talked about #AI each and every day now! Discerning reader, you are blessed with good fortune … please ensure you put it to good use!


Finally, to answer a common question: Jabe, why the idiosyncratic artwork and the counter-cultural references? In part, this is branding, to make my message stand out from the whirling mass of comment on AI we have today; but there is also a serious point – unusual perspectives can be important; and finally, autobiographically, because it is me and who I am.


More soon, here on Jabe On AI





