The Colour of AI
Blog post originally published at https://www.sandraleatongray.wordpress.com on 8th January 2018.
This week, Professor Rose Luckin wrote an insightful blog post about the value of 'slow' human intelligence (as opposed to 'fast' machine intelligence) on the UCL IOE Blog, which is well worth reading if you are interested.
At a time when Artificial Intelligence (AI) is being routinely hailed as the new revolution, as well as potentially the new hell, it's important we reflect on why this has come about, and what it means for human society.
The primary reason for all the fuss is that many people (most of whom should know better) are unwittingly anthropomorphising AI, and when we criticise them for doing so, we need to start using that word very specifically. Treating AI as some kind of indicator of how humans think is a problem because AI tells us *nothing* about how the human brain works – it only tells us how AI works, as a mathematically-based system of learning outsourced to a machine. It is better seen as a glorified system of punch cards, admittedly one that calculates at considerable speed and with increasing technical panache.
Naive acceptance of AI - or the assumption that it will take over the planet and we will all have to flee to outer space - is invariably rooted in the idea that knowledge and intuition are in the final analysis interchangeable, and bounded by physical phenomena, rendering them comprehensible on those terms. Yet AI's path to knowledge is governed by existing human prejudice - we develop algorithms to tell AI systems what to privilege and what to ignore, and these algorithms are always flawed (because human beings are always flawed, and always will be). Some researchers are onto this, for example Ansgar Koene, Elvira Perez Vallejos and colleagues working on the UnBias project (Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy) at Oxford and Nottingham Universities (more here: https://unbias.wp.horizon.ac.uk), and Christopher Grimm's team at Brown University, who work on what is known as 'Deep AI', including identifying which areas of selective attention exist in AI systems (their latest paper on this issue is at https://arxiv.org/abs/1706.00536, if you feel inclined to follow it up further).
Consequently, when people aren't coming up with Hollywood-style doom scenarios, AI is seen as a panacea for many things, including unlocking the secrets of humankind and its future, ensuring human development is fully optimised, and facilitating a bright new meritocratic dawn. The problem is that AI can only tweak human experience at the edges, make things a bit more efficient, or help us navigate society in a slightly more productive way. And that's all it will ever be able to do. The reason is that human intelligence has infinite variables and takes infinite forms; it cannot be reduced to a binary phenomenon, or measured in logic gates, in some kind of gargantuan flow chart (if only we had a metaphorical piece of paper or screen large enough to lay it all out). Indeed, to attempt such a reduction represents the height of intellectual and philosophical laziness and, dare I say, hubris.
So when someone tells you that in the future AI will be able to tell you about a disease you are harbouring before you experience symptoms, take your child rapidly through a mathematics curriculum, or find you the perfect life partner, this may well be true, and it's fine to acknowledge it, but we must always ask ourselves, "But will another AI system then block me from getting health insurance? Will my child feel daunted by being rattled through a screen-based curriculum when their developmental interests might legitimately be served in another way? Is online dating reducing us to trading emotional commodities rather than encouraging the very altruism that is central to the human condition?" If we ask these questions of ourselves, then we are truly paying attention to what matters.
It's unfashionable to think like this, of course, and it is certainly not where the money lies. By allowing ourselves to think of the human brain as little more than a highly complex computer system, with its attendant technical metaphors, we have made it apparently acceptable for tech poster boys such as Zuckerberg, Gates, Musk and their ilk to enjoy an increasingly privileged voice, imposing their ways of thinking and behaving on the rest of us, regardless of the consequences.
- Zuckerberg et al are indirectly guilty of rendering society dysfunctional and inattentive, by allowing their companies to develop powerful tools to captivate and distract, with little regard for the wider societal implications of doing so, particularly in relation to children and young people.
- Gates et al are indirectly guilty of aggressively harnessing infrastructure and systems for the financial benefit of highly dominant companies, and then engaging in philanthropic wealth redistribution exercises of their own choosing via foundations, rather than applying truly democratic principles to their work.
- Musk et al, even though they claim to be addressing potential shortcomings of AI, are indirectly guilty of feeding harsh discontent about the human condition, and encouraging intolerance of more spiritual means of human flourishing right here on earth.
All of the above are routinely engaging in what I would term narrow-spectrum analysis, and promoting this as the desired medium, ultimately for personal profit and gain (however it is dressed up). Narrow-spectrum analysis focuses tightly on a limited number of ultimately self-interested, self-absorbed endeavours and concerns, while disrupting the status quo ('disrupting' being another fashionable modus operandi). We never really talk about the clean-up effort that follows disruption, or its cost to groups of individuals. We need to start doing so. Tolerance of narrow-spectrum analysis needs to be challenged much more powerfully. Presenting a clever argument does not give someone the right to sidestep their responsibilities to human relationships (or to insist we take them seriously when they endlessly assert that we will soon have to evacuate nine billion people to Mars, or that we will all become the pets of robots, or that in a decade or two we will all become cyborgs in a singularity explosion, or whatever).
A final thought, and an explanation of the title of this blog post and its homage to Terry Pratchett's book The Colour of Magic. If you are ever sitting in a meeting at work or listening to an interview, and suspect someone is being intellectually lazy in their use of the term AI, replace it with the word 'magic' and listen to what their arguments sound like then. The anthropomorphism and narrow-spectrum analysis will suddenly become all too apparent. Therein lies the next revolution, where we wake up and start working out what it means to be truly human. This means seeing ourselves as much, much more than simply a wet form of computer.