Dennett’s Dead End (An Antidote to Toxic Rationality)
Warning: this is a long, philosophical post on the topic of cognition, computers, consciousness, creatures and care (and learning :o) in response to Daniel Dennett's book 'From Bacteria to Bach and Back: The Evolution of Minds'.
Popular vs Profound
As a young philosopher I liked Dennett et al. very much – pecking pigeon computers, Einstein's brain in a book – ideas that had popular appeal. We felt like we were grappling with the big questions.
I was also a fan of Adam and the Ants, a pop band at the time, and I remember arguing with my dad that Adam and the Ants would end up being bigger than the Beatles.
I was wrong about both – and for pretty much the same reason: I didn’t understand the difference between popular and profound.
Dennett has been riffing on the same theme his entire life. This isn't a criticism – most philosophers do something like that – but what started out as a simple, lively idea is now a vast sprawling thicket of a book (From Bacteria to Bach and Back). And the simple idea is wrong.
Dennett’s central idea is a familiar one: what if the human mind were a computer? These days everyone likes to think the human mind is a computer, and so his books have popular appeal. Back in Victorian times steam engines were popular and people compared the human mind to a steam engine. Long before that, we were like gods.
The point of the computer comparison is fundamentally to deepen our legacy of Cartesian dualism. Descartes argued for the distinction between the mind and body, reason and emotion – the ‘res cogitans’ that thinks, and the flesh that feels. In saying that the mind is a computer, Dennett is able to draw on the hardware/software distinction with which we are familiar and claim that the mind is something like an ‘algorithm’ - which if not actually floating free may be implemented on any variety of hardware. Imagine that. Living forever. Free of our corporeal form. Almost like the soul. Certainly the stuff of popular movies. Over and over again Dennett asserts this assumption: “The brain…is a kind of computer” (p.111).
The oddity is that Dennett protests vigorously and repeatedly that he is not a Cartesian and that he favours an evolutionary perspective. But his approach is not to deny the mind/body duality but to covertly re-affirm it by showing how one may arise from the other – which in effect he does by repeatedly taking his thumb to the page and smearing the two back and forth until it is unclear where one begins and the other ends. ‘Do bacteria have a smidgeon of comprehension?’ he asks.
Although he draws heavily on evolutionary arguments, I do not believe Dennett is fundamentally Darwinian. Darwin’s most provocative claim was not that we are descended from apes – but that there are only differences in degree, not in nature, between us and other creatures. The latter is a far more radical claim: it is one thing to look at a picture of yourself alongside an ape and think ‘I am descended from this’ – quite another to look at the two pictures and think ‘there are no fundamental differences between the two of us’. This is a far cry from Dennett’s claim that we are “the only species on Earth with proper minds” (p.171).
So Dennett is deeply Cartesian, profoundly anti-Darwinian. He is going to tell us that we are different. So how does he accomplish this?
Well, on page 1 he simply states it: 'Our minds are different'. The philosophical equivalent of the sucker-punch. Hit the reader with the assumptions up front, then argue backwards. What other assumptions does he introduce on page 1? 'Minds', he writes, 'evolved thinking tools' such as spoken words, arithmetic and map-making for the purposes of 'manipulating information'.
You might buy these assertions if you thought the mind were a computer, but they are not true: our words, for example, evolved to serve our more primitive drives – the so-called 'four Fs' (feeding, fighting, fleeing, reproduction) – and are used, for example, to woo, to yell, to warn, to scream.
In short, we use words to express how we feel and – yes – those feelings are most certainly connected to our evolutionary imperatives. Dogs bark, birds sing, and people tell stories. Although you could argue it at a stretch, it would be odd to describe birdsong as a 'thinking tool' or as 'manipulating information'. The case looks stronger when we come to arithmetic or map-making – but let's be honest: these have only been around a few thousand years and are likely derivative of language's more essential functions. Map-making doesn't play much of a role in my life, frankly. By casually lumping 'words' in with 'arithmetic' and calling them 'thinking tools' Dennett is smuggling his assumptions in at the outset – a philosophical Trojan horse. One can safely assume that Darwin wouldn't agree with the claim 'our minds are different'.
You Are Not A Computer
Looking at the body of his work, it is clear that Dennett wishes we were more like computers: his book 'Intuition Pumps and Other Tools for Thinking' sets out a whole array of logical operators that humans should be using to think more rationally, but which they will never actually use because, as Kahneman et al. have described, that is simply not how people think. Nietzsche repeatedly warns us of this tendency of the philosopher – to project a world in their own image, presenting the world as it should be, as if it is. At one point (p.301) Dennett actually says of Intuition Pumps: 'when you read what I write, you download a new app to your necktop'. Really? And how did that work out? You might think I am being a little harsh on Dennett who, as far as one can tell, is a decent chap – but it is precisely this kind of rationalist claptrap that has made possible a global regime of torture, encompassing billions of people, in which they are expected to behave like computers, 'inscribing' knowledge and algorithms into their brains as if they were automatons.
There is a curiously recurrent acknowledgement of this dualism in the way that Dennett writes: his work is exaggeratedly emotive – he uses phrases such as 'pathetic bleats' (p.19) and 'cheering, hissing spectators'. At other times he presents himself as the 'calm, objective account' (p.24), the voice of reason, the great mediator. It is almost as though Dennett recognises that reason is beyond the grasp of the masses (you and I, the readers) – and that in order to sway people he must descend from the giddy heights of reason and indulge in rhetoric and persuasion. It is a striking example of the duality inherent within his approach – almost a tacit admission that it is not reason but emotion that comprises cognition – merely skimmed with a façade of rationality.
The philosopher’s disease is the obsession with reasons. Despite ‘reason’ being a very recent evolutionary adaptation – and demonstrably a tiny fraction of the cognitive account in even the best of us modern humans – Plato, Descartes, Dennett all attach central importance to it, using it to separate one kind of thing from another. It is like saying ‘look, humans are the only creatures that wear clothes!’. By way of example Dennett distinguishes Darwinian, Skinnerian, Popperian and Gregorian creatures. Popperian creatures – such as cats and dogs – may model hypothetical situations without consciously being aware of this process. Gregorian creatures on the other hand use ‘thinking tools’ and are ‘reason representors’ – able to consider the hypothetical models they employ.
But there is something distinctly amiss with these distinctions: language – one of Dennett's 'thinking tools' – emerges while humans are still Popperian creatures. Children typically speak their first words at around age one, when it seems extremely unlikely they are 'reason representors' (or 'wondering about what they are wondering about'). Only later do we develop into 'Gregorian' creatures – if at all! Recent research into decision-making suggests that conscious reason may play very little role in our cognition, with decisions taken several seconds before we are consciously aware of them. In a more familiar vein, when you ask people what they look for in a potential mate, their answers turn out not to be terribly predictive. For sure, there are sounds coming out of our mouths when we speak – but generally these are more likely to be 'rationalisations' than 'reasons' (later on I will call them 'stories'). In summary, language – words – did not develop as a 'thinking tool': perhaps words are later repurposed to that end, but first and foremost they are expressive tools. Or perhaps Dennett is the only truly Gregorian mind.
Human beings are not artefacts that store information (such as books, CDs, computers) but creatures that experience and encode reactions. Nowhere is this clearer than when Dennett describes the 'umwelt' – a borrowed term used to mean 'all the things that matter to our being'. He points out that the umwelt of one creature may differ from that of another – for example, a squirrel may notice a human approaching but have no reaction to the Apple Watch they are wearing, whilst you and I might remark on it. 'Apple Watch' is part of our umwelt, but not the squirrel's.
Dennett comes perilously close to making sense with this concept of umwelt, but fails to notice the significance of the descriptor 'matters' (presumably since it is not part of his philosophical umwelt) – it is our reactions to things that determine our umwelt – our emotional reactions. Not our concepts, not our words, not our thinking tools. Had Dennett not stopped short of defining 'matters', he would have concluded that 'matters' describes what is affectively significant. Unlike 'thoughts', reactions are things that scale nicely across species: a squirrel has a distinct reaction to the approaching human, but not to the watch they are wearing. In like fashion, you and I may take the same train journey and, whilst you have a reaction to the architecture you see, I do not. My umwelt is defined by the things I react to – the things that matter to me – just as is the squirrel's. At no point does 'reaction' need to become a separate type of thing called 'thoughts' in the way that Dennett would like (and we know why he would like this – in order to build a bridge to his 'mind as algorithm' idea). Sure, there is some substance to the distinctly human 'thinking about thinking' phenomenon – but if we recast this as 'reacting to our reactions' we don't encounter all the dualistic huffing and puffing that Dennett experiences.
The Creature That Tells Stories
I don't disagree with Dennett that there is something odd about humans – but it is not that we have reasons whilst other creatures don't. It is that whilst all creatures have reactions (affective responses to certain experiences), we have a greater ability to imagine affective states and react to those – we can react to reactions, in other words. We can say 'imagine if you forgot to wear trousers to work!' and feel an echo of the emotional reactions we might have. This is a powerful ability – but it is not unique. As for our 'reasons' – it is interesting to speculate: a human can string sounds together to inform another human about an emotional experience they had. We call these strings of sounds 'stories'. It seems likely other mammals (such as whales) can tell stories. Possibly bees do. Since we can also react to internal reactions to hypothetical experiences, we can tell a story about these – which we call a 'reason'. Reasons are an extension of our storytelling ability. This seems quite plausible in the case of young children, who can – on demand – make up a story about almost anything ('why does that tree look sad?') – stories that sound very much like reasons. Primitive cultures abound with stories that look very much like reasons in terms of the functional role they play.
Time and time again, Dennett mistakes 'reaction' for 'reason'. For example, he illustrates his argument for the distinctiveness of human reason with the child who asks 'why shouldn't we leave the door open?', to which the answer is 'because we don't want strangers taking our things'. But the child is not asking for a rational 'why' – we do not respond with a diatribe about property rights – they are asking us to link one reaction to another. Dennett has misunderstood what is happening here. This is not a rational process: the child is learning to associate one reaction with another. For the child, the 'door open' and 'door closed' states elicit the same reaction. The parent might say 'for heaven's sake – shut the door!'. Now the child might accept this emotional significance at face value ('I closed the door so mum won't yell at me') – or the parent might explain 'otherwise we will be burgled!' Being burgled is something the child does have a reaction to – they have seen through stories and TV programmes how distressing this can be. They are able to link one (hypothetical) reaction to another.
The significance of this is entirely lost on our Cartesian thinkers, whose 'umwelt' (thanks to Plato) pretty much excludes emotional states – despite these being the fundamental fabric of cognition. They cannot grasp that words do not 'represent' objects – instead words correspond to the feelings we have in response to certain experiences. “Chair” does not represent a class of objects in the world; instead it describes things that make us feel 'chair'. All our words are feeling words. You would have thought we would have figured this out by now. The hopelessness of our attempts to get computers to understand words should have provided a clue. Children do not learn about chairs from dictionaries, but from having their parents bark 'get down from there!' or implore 'please sit down!' repeatedly. They don't 'look up' the correct word to use; words just form automatically according to how one feels – we literally speak without thinking the vast majority of the time (because we speak with feeling). But thanks to Plato, Descartes, Wittgenstein and Dennett we have projected a rational narrative onto an irrational world and struggled to see what is really going on as a consequence (and built a bunch of machines that, to our surprise, function nothing like us). Rather ironically, poets, painters, dancers and dreamers have remained far closer to the truth – they can tell a story about the storytelling creature – the creature that sometimes sings like a bird or barks like a dog. The Cartesians can only tell the story about the monkey that could be like a computer if only it tried harder. The pigeon that hypothetically pecks in code.
As Heraclitus so aptly put it ‘Dogs, too, bark at things they don’t recognise.’
A dog sees a squirrel and experiences a distinct ‘prey creature’ reaction which it expresses as a bark. A human sees a seating object and experiences a distinct reaction which it expresses as ‘chair’. Our collective failure to understand that these two processes are fundamentally the same has led us into all manner of supernatural explanatory dead-ends. Our ability to have feelings towards our ‘chair feeling’ is indeed remarkable – but I suspect dogs may dream of squirrels.
Mid-way through the book (p.154) Dennett addresses this central question: 'what are the differences between brains and computers?' His response is breathtakingly cursory: 'three differences are most frequently cited but are not, in my estimation, very important'. He dismisses these differences in a few short paragraphs; in essence his response is: 'anything your brain can do, a computer can do better.' How extraordinary. Is this avoidance? A blind spot? More likely Dennett has persuaded himself of this repeatedly over the course of decades, and (correctly) assumes that since it is the prevailing sentiment it needs little in the way of justification. But since this is the heart of the matter let me be very clear: Dennett does not consider the central, most important way in which brains and computers differ. It is not on the list. I can tell you what it is, and it will seem entirely strange to you (since it is not the popular view), and almost certainly entirely unacceptable to Dennett. But I can assure you it is true:
Brains process experience affectively, computers process information digitally.
The difference between brains and computers has little to do with parallel vs von Neumann processing (although affective responses are invariably massively distributed). People and other creatures with nervous systems react to the world around them, and it is these reactions that determine what they store and the composition of their umwelt. To correct Wittgenstein: feeling, not language, is the limit of my world – my umwelt. What we have no reaction to, we don't notice. I can certainly notice something shocking for which I have no words. Since a computer feels nothing, it cannot even begin to function as does a creature with a central nervous system.
What basis do I have for this claim? Just look at the difference between the way in which humans and computers process information: human memory is unreliable, in fact more reconstruction than memory. Our memories favour the bizarre, the shocking, the delightful and unexpected. We tend to remember what matters – to us, personally. We tell stories. Most of everyday conversation is stories. Stories are always about emotions and reactions. We use metaphor – which means we compare things in terms of how they make us feel. Our concepts and words are always fuzzy (not at all like computers’). We go to the movies, we listen to music, we love and we feel intensely about right and wrong. We have a sense of purpose. We are thoroughly affective beings – because we process all experience affectively – it is our ‘machine code’ so to speak.
We built an entire bureaucracy around this silly idea of making humans store facts like a computer (or a book) – education. And the verdict is clear: it has been a monumental disaster. Humans find it terribly hard to memorise meaningless information – we have to scare them into doing it and even then they forget most of it shortly thereafter. Computers store meaningless information with ease. We don’t have to ‘pop quiz’ our PC.
Now I suspect that simpler creatures have simpler feelings – but I can assure you that at no point is it necessary to introduce a new category of thing – ‘thoughts’ – distinct from feelings. No, allow me to be clear on this point: ‘thoughts’ are just a sophisticated type of feeling.
One of Dennett's central themes is 'competence without comprehension': humans are different, he argues, because we are 'reason representors'. When we look at a nest of ants, for example, it seems clear that there are good reasons why ants do what they do – but it seems unlikely that the ants themselves are aware of these. “We, in contrast, don't just do things for reasons; we often have reasons for what we do” (p.219). But do we? Or do we just make them up?
I was struck by a story told by a colleague – Steve Wheeler – in which he described travelling to Australia to give a presentation and being so jet-lagged that he had no recollection of giving the presentation at all, which he presumably accomplished on 'auto-pilot'. We all have this experience sometimes, of doing things on 'auto-pilot' and of subsequently having the uneasy sensation that somehow comprehension is a 'nice to have' – elevator music, if you like. Once again, I agree with Dennett that there is something unusual about humans – but I doubt that it is comprehension. Do we really understand why we are doing things? Sure, I can give you a reason why I chose the career path that I did, why I support Arsenal, why a certain person is my 'soul-mate', a reason why I chose one food brand over another. But do I really comprehend? Or did I just make up a story on the spot? I think it is the latter. (Dennett seems to acknowledge this possibility on page 288). Humans are the creatures that can make up a story – and this is certainly a powerful ability. We can share these stories, they can influence our behaviour, but I doubt that they come anywhere near comprehension, except in an abstract philosophical sense. Stripped of all the philosophical baggage, the competence-comprehension distinction is merely the distinction between a reaction and a reaction to a reaction. I can have a feeling that I call 'myself' and a feeling about that feeling that I call 'self-awareness'. But if I can tell you a story about how the universe came into being courtesy of an elephant and a gang of co-operative turtles, shall we say I comprehend the reason for the universe's existence? Still, I agree with Dennett that having a story about things in our heads is a little unusual – we'll come back to this in a minute.
Why We Love Memes
The second part of Dennett’s book is devoted to ‘cultural evolution’ – in particular the concept of memes. I think the idea that ideas are themselves subject to some kind of evolutionary process is a good one and I don’t disagree with this way of thinking about memes. In fact, I think it would be better to consider memes something like a parasitical species that merely happened to find viable hosts in a certain species of monkey. Dennett comes close to this at times, but once more the ‘infectiousness’ of the rationalist paradigm prevents him from seeing clearly.
Dennett sees memes as a “kind of way of behaving (roughly) that can be copied, transmitted, remembered, taught, shunned, denounced, brandished, ridiculed, parodied, censored, hallowed” (p.206). It's a shame that Dennett doesn't notice that his own definition is profoundly affective in nature – because in the very next paragraph he bounces on to say 'they are semantic information'.
They are not. Memes, like words, are sentiments.
Philosophers have long been confused by words which, they assumed, referred to objects and relations between objects in the world. In the Tractatus, Wittgenstein writes:
“The existence of an internal relation between possible states of affairs expresses itself in language by an internal relation between the propositions presenting them.”
But this is a fatal error, and once you have made it, it is all downhill from there (as Wittgenstein discovered to his cost). Words refer to feelings, and feelings correspond to experiences. Our word ‘chair’ is an expression of the feeling ‘chair’ which we learn to differentiate from other feelings through experience. A parent hands a child a cookie and makes the sound ‘cookie’ which the child learns to associate with that feeling. The bootstrapping is possible because we are physiologically similar creatures that experience similar feelings in similar situations – these feelings are refined and differentiated over time as we observe and react to the feelings of others.
Notice how quickly this ‘saying what you feel’ mechanism becomes conversation: if you walk into a stranger’s living room, point at a chair and exclaim ‘Chair!’ then you are a bit of a dumbass, but if you do the same and exclaim ‘I love this chair!’ then we are well on the way to regular brunch dates.
This is really important to grasp: if we were not physical creatures, wired in a certain way, we could not develop language. We need bodies to develop language, in other words, since words are expressions of sensations. Even two humans may attach different significance to the same words, due to their idiosyncratic reactions – consider the words 'Brussels sprouts', for example. The embodiment of meaning is essential to minds; we cannot separate reason and emotion, mind and body. On the other hand we can separate hardware and software, because computers are nothing like us.
If we start with a different set of assumptions it is really quite easy to come up with a system that has the kind of complexity that is vexing Dennett: humans, like other creatures, store their experiences as a sort of emotional imprint – what we call 'memories'. This is really quite an efficient system since only what matters gets memorised; the origins of Dennett's umwelt, in other words. Humans (and other creatures) can use these affective imprints in interesting ways – to associate them with noises (or words), to predict the future, and even to string them together as a series of sounds that tell a story. A story is composed of noises strung together to correspond to a series of experiences of affective significance. Humans can take an extra step: we can string together a hypothetical set of experiences, conjured from memory, and imagine how we and others would react. Now we have reasons: someone can ask 'why do you wear pants to work?' and we can picture the horrified faces of colleagues as we arrive pant-free at the office, our ensuing embarrassment and subsequent dismissal, and we can give the questioner a 'reason'.
One of the curious things about people is that ‘everybody is right’: everybody can come up with a plausible-sounding reason for doing and believing the things they do, even when these things are contradictory. This is what you would expect of the kind of system I have described, where affect (‘System 1’ if you prefer) drives our behaviours, and stories are conjured to post-rationalise them.
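As a purely illustrative sketch – nothing Dennett or anyone else has specified, and with every name (affective_response, choose, post_rationalise) hypothetical – here is such a system in toy form, in Python: affect selects the behaviour first, and the 'reason' is a story conjured afterwards.

import random

# Toy model: the emotional imprint left by past experience drives the choice.
def affective_response(option, memories):
    return memories.get(option, 0.0)  # default: indifference

def choose(options, memories):
    # 'System 1': pick whatever we feel most strongly drawn to.
    return max(options, key=lambda o: affective_response(o, memories))

def post_rationalise(choice):
    # The 'reason' is generated after the fact and need not reflect the real cause.
    stories = [
        f"I chose {choice} because it is clearly the sensible option.",
        f"I have always preferred {choice}; it suits my lifestyle.",
        f"{choice} just makes sense when you think about it.",
    ]
    return random.choice(stories)

memories = {"brand A": 0.7, "brand B": 0.2}  # affective imprints
decision = choose(["brand A", "brand B"], memories)
print(decision, "->", post_rationalise(decision))

Notice that the 'reason' printed at the end carries no information about the weights that actually drove the decision – which is exactly the point.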
And so back to memes: memes are a kind of ‘infectious sentiment’. If you look at internet memes this is abundantly clear. An internet meme is not a rational idea, nor even a behaviour – it might be a short clip of a person falling down the stairs accompanied by the title ‘Monday morning, here I come!’. We should not need to point out that this is purely sentiment. Humans especially like this linking of different sentiments (‘I wandered lonely as a cloud’) and especially the trick of linking in an unexpected or exaggerated way (a common feature of humour). This is why computers can’t do the metaphor trick or laugh at jokes – without processing things affectively, you can’t feel the similarities. In this example, we recognise the feeling of being unprepared and disorganised on a Monday morning and the exaggerated comparison with falling down the stairs makes us smile. We save the meme, we share it with friends.
Of course I am aware that this is not the 'type' of meme that Dawkins or Dennett had in mind – but I am cautioning against the philosopher's tendency to project their own image. If we take one of Dennett's own examples, we can readily apply our explanation: Dennett describes wearing a baseball hat backwards as a kind of 'meme' that spreads. Why would we do this?
I can assure you it is not because we were persuaded through rational argument (and Dennett doesn’t suggest this). Instead, we see someone we think is ‘cool’ wearing a baseball hat backwards (say Brad Pitt) and we see a few other cool people do the same, and pretty quickly we feel like this is the cool thing to do and it makes us feel cool when we do it ourselves. The substance of memes – as with all cognition – is sentiment. Dawkins himself includes ‘tunes’ in his definition of memes – so perhaps there is some awareness of this since tunes are exactly the kinds of things that convey sentiment whilst excluding semantics.
This redefinition helps Dennett with some of the challenges he encounters over words – a word is (essentially) a vocalisation of a sentiment, and the sentiments themselves may vary almost infinitely whilst the word remains the same. The word ‘Catholic’ for example varies enormously in affective significance between someone who has spent a lifetime as a devout Catholic and someone who has merely heard of it as a religion.
The concept of 'sameness' in language is also worth closer inspection. Later in the book (p.271) Dennett considers the possibility that we recognise that two things are, in some respects, similar. This 'noticing of similarities' can be quite delightful – and is often the basis of jokes or poetry, where we indicate a similarity between two things that we hadn't noticed as similar before – for example Wordsworth's 'I wandered lonely as a cloud'. But notice that this ability is entirely denied to computers; in fact this problem has been the bane of AI: computers need to know in exactly what way two things are similar. They also need to know exactly what makes a bird a bird. And the truth is, we have never been able to say (we don't even agree amongst ourselves). We have never been able to say, because we feel the similarities. Wordsworth says 'I wandered lonely as a cloud' and we intuitively get that our feeling of floating around, adrift, is somehow captured by the image of a solitary cloud in the sky. We compare things affectively, not semantically.
This also means that we can dispense with all the silly arguments for a universal grammar – or even just a set of rules that govern what we say. We simply speak in the way that feels right. 'Feels right' is a kind of fuzzy logic – an iterative approximation to what we hear around us. Compare this to ethics, where we also struggle to precisely define what is right and wrong – because this, too, is ultimately about what feels right. What feels right will depend on any number of things – your upbringing, your culture, the look of the person you are dealing with, what you ate for lunch. So ethics is going to be a problem for computers. Imagine writing a line of code for a universal machine ethics:
do {what feels right}
until (end)
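To labour the joke in Python (the names feels_right and act_ethically are hypothetical, and this is a sketch of the problem, not a proposal): the loop is trivial to write; the predicate is the part nobody knows how to implement.

def feels_right(action, context):
    # This is the part we cannot specify: 'what feels right' depends on upbringing,
    # culture, mood, even what you ate for lunch (see above).
    raise NotImplementedError("affect has no agreed-upon specification")

def act_ethically(candidate_actions, context):
    # do {what feels right} until (end)
    for action in candidate_actions:
        if feels_right(action, context):
            return action
    raise RuntimeError("nothing felt right")

Any call to act_ethically fails at the first question it has to answer – which is the problem in a nutshell.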
Consciousness Revisited
At the end, Dennett returns to the topic of consciousness. I don’t believe either of us can fully explain consciousness, but I think the account presented here is more promising than Dennett’s for the following reasons:
1) It resolves the question of which creatures can think and which cannot. Since 'thought' is merely a category of sentiment, all creatures capable of feeling – at a minimum, creatures capable of experiencing pain – are capable of thought. We can draw an arbitrary distinction in the use of the word 'thought' if we like – saying that it is a word that applies only to human sentiments, for example – but at least we won't fall into the trap of thinking that thinking is something mystical, or something only we can do.
2) It helps us to understand why differences in consciousness might arise: cognition is derived from our physical experience of the world over time. Humans share a way of feeling/thinking about the world because they are biologically similar, raised in similar ways, in similar environments. Even amongst humans, the evidence shows that a radically different upbringing may significantly alter the way an individual feels about the world (their 'umwelt'). So it is safe to assume (as per our intuitions) that the greater the physical differences, the greater the differences there will be in 'what it feels like' to be that creature (and hence the more nonsensical it becomes to ask 'what it feels like').
3) It provides a (non-Cartesian) framework for understanding human consciousness: whilst all creatures have reactions, humans have reactions to reactions. This is what we normally mean by consciousness – rather than, say, simply being aware of stuff. We can have a sentiment that corresponds to ‘self’ (just as a dog will have a sentiment that corresponds to ‘squirrel’), but we can also have a feeling towards our ‘self’. We can, for example, imagine ourselves in various situations, and how that might feel. Some other animals are capable of self-recognition, so it is safe to assume that some form of (self-)consciousness is present in other creatures to some degree.
Hopefully this helps close the kind of gap that Dennett opens up when he says things like we use “thinking tools to design our own future acts. No other animal does that.” (p.341) I think that all this really means is that we can imagine ourselves in hypothetical situations, and use the projected sentiments to guide our behaviour. This does seem like the sort of thing other creatures can do, even if only at a more primitive level.
What tends to distract people is the sheer magnitude of the differences in what we – humans – are doing and what our closest relatives, evolutionary-speaking, are doing. We are flying around in planes and searching for WiFi. They are climbing trees and running from predators. It’s tempting to think that because of this superficial discontinuity, there must be some fundamental discontinuity in our heads. But we should remember that stripped of our culture, we would be little better than our cousins. In living memory, some humans were living in conditions little better than our cousins’ – small groups of hunter-gatherers, crafting simple tools. This doesn’t suggest that there is something ‘special’ about us that naturally lends itself to cultural advancement (and indeed until recently the rate of cultural advancement has been very slow) – rather that there is a point beyond which human brains are capable of supporting culture which may or may not take off, much as dry brushwood may or may not catch fire, depending on conditions. It is culture that is special, not us in other words.
In Conclusion
It is time to put an end to this philosophers' nonsense about reason: let me be clear – an ant is a feeling thing, a bird is a feeling thing, a dog is a feeling thing, a human is a feeling thing. A computer is not a feeling thing. And now you may notice that the real question steps out of the shadows: how does the non-feeling thing become a feeling thing? This – ultimately – is the artificial intelligence question. We will never have a computer that thinks like we do until we have a computer that feels like we do. How to create such a thing? I don't know – but I suspect it has something to do with the origins of pleasure and pain (as does the neuroscientist Antonio Damasio).
Like many a philosopher before him, Dennett dreams of being divine. But what is truly great about Dennett is his humanity – his passion, his humour – not his logic. That alone makes his errors forgivable and his writing worthwhile. Adam and the Ants may not have turned out to be bigger than the Beatles, but they still had some great tunes.
Post Script.
I might point out that this entire article is no more than a complex set of related sentiments – as evidenced by the fact that you are more likely to be swayed by your feelings towards me or Daniel Dennett ('but he is a famous philosopher at an important university!') than by any abstract logical considerations that pertain to either of our positions.
Nor am I expecting a response from Daniel Dennett. Not because he is a celebrity and I am a nobody (although that is true) but because he is at a dead end, arrived at by a long path he took as a young man. There is now almost no chance that he can retrace his steps; the level of emotional commitment is too deep. So I am writing this for you, dear reader, hanging a warning sign over this treacherous mine-shaft: 'Do not enter!' – inviting as it may seem, this path leads nowhere.
I can see another path but, alas, it will be for someone else to explore it fully.
Comments

Educational media designer and learning pathway strategist:
Hi Nick, I enjoyed this immensely! I love seeing educators engage the philosophical study of mind. I did my PhD in philosophy of education and studied the impact of evolution on theory of mind. Because Dennett was so popular, I started my PhD thinking I would study him. But his rationalism was indeed toxic. I began exploring philosophy of biology and learned there are other interpretations of the purpose and meaning of evolution that provide a much richer story as to how people think and learn. I was especially taken with the extended evolutionary synthesis, through writers like Lewontin, Oyama, and Jablonka. Have you ever read into this area? I ended up connecting modern evolutionary theory with that of Charles Sanders Peirce (one of John Dewey's teachers) and I'm so happy I did. Humankind, indeed all of nature, is more than just a machine and I genuinely feel I have a greater appreciation for it by having refused to contain it in such a small explanatory box. Dead end, indeed.
I shall add my own meme: TL;DR, although actually... I did. All of it :-) Very thought-provoking. After reading this, in my head I've summarised that computers will never be creatures until they've had to react to survive. Does instinct as a concept deserve a mention here? It sort of follows on from your idea about what people care about. Animals react to threats and opportunities, as those are things that they 'care' about. Pursuit of food or reproduction as an opportunity, death or injury as a threat. Computers can be made to appear to be reacting in this way, but they aren't. I often wonder if we've really moved as far past animatronics as we think we have.
Trainee Clinical Psychologist:
Amazing stuff! I'm really hoping you're going to publish more books in the future. The point about why the child should close the door really illustrates your points. We're only ever mentalising/imagining affective reactions to behavioural cause/effect outcomes – i.e. how would it feel to have a burglar/murderer in the house? How would it feel if that burglar/murderer was successful, and the multitude of other potential scenarios arising from this? The way we use 'rational' really means behaviour that moves us towards the states that we want and away from the ones we don't.
Dad, Husband, Passionate for People Development:
Thanks for sharing this Nick. I'm still trying to understand/make sense of my feelings towards your argument. One thing that I don't understand is how words like 'the' and 'a' are feeling words, and how grammar evolves from sentiment. Are these like ancient versions of the 'baseball cap' you describe – used to seem 'educated/wealthy' (read: cool in the 1200s), and therefore adopted through fashion and passed on in that manner? Also, to build on your point 2: that explains how different languages place emphasis on different feelings in the way the words are arranged together. If we were computers, wouldn't there be one system that was most effective for communication that we'd eventually adopt?