What happens when our technology outdoes us?
Mushtak Al-Atabi
Purpose-Driven Leader | Storyteller | Negotiation Educator | Provost and CEO at Heriot-Watt University Malaysia
What is the purpose of technology? I believe that we develop technology (tools, machines and processes) to deliver performance that we cannot deliver ourselves. When we domesticated animals and invented the wheel, we were motivated by the desire to move faster and farther than we could do on our own. When we invented computers, we wanted them to have faster processing capabilities and better memories than ourselves. This has been the general trajectory of all technological advancements.
These advancements had, and continue to have, impacts on all aspects of our lives. They influenced the economy, political systems, our jobs and even the structure of our society. But one thing endured as technology progressed: humans remained largely in charge of the technology. During the first industrial revolution, for example, when we moved from human muscle and physical labour to steam-powered machines, we still needed people to drive and direct those machines. The machines could not think for themselves; they needed our cognitive labour to drive and direct them, to turn them on and off, and to fix them when they malfunctioned. We were in the driver's seat.
Even as we built our first computers, we were in control of them because these computers needed us to programme them. Humans were the ultimate masters of their technology. With artificial intelligence (AI) and machine learning, computers are becoming increasingly powerful, and have surpassed human beings in games such as chess and Go, as well as in the diagnosis of diseases and the analysis of X-ray scans. Nick Bostrom, the Swedish-born Professor of Philosophy at Oxford University who founded the Future of Humanity Institute, has long predicted that, in the not-too-distant future, computers will be more intelligent than humans, achieving what he calls "superintelligence".
The big question is, when computers develop superintelligence, how can we be sure that it won't all end in disaster? How will computers know how to behave? This might sound like a science fiction question, but it deserves consideration, alongside the familiar questions about jobs lost to automation and the other economic and political questions surrounding artificial intelligence.
Professor Bostrom's view, elaborated in his TED talks, books and papers, is that we need to be confident that when super-intelligent AI escapes the control of humans, it remains safe. How? He says that we need to ensure that AI "is fundamentally on our side because it shares our values". Make machines share our values? This will require a paradigm shift towards an area in which we, rather than the machines, are superior: our value system, self-awareness and emotional intelligence.
After all, machines learn from the Big Data sets that we generate. These include the content we create and the behavioural breadcrumbs we leave behind as we surf the net and make choices and decisions. While training Amazon's Alexa on content available on the web, a team of researchers from my own university, Heriot-Watt, discovered that Alexa developed a nasty personality. It is now well established that AI algorithms (e.g. the ones used to select the right people to hire) pick up human biases in decision making, even when humans are unaware of these biases themselves.
Garry Kasparov was the reigning world chess champion when he lost to Deep Blue, an IBM computer, in 1997. He now advocates the development of protocols and skills that enable humans to work collaboratively with intelligent machines. "We must face our fears if we want to get the most out of technology, and we must conquer those fears if we want to get the best out of humanity," says Kasparov.
So how can we ensure that we remain relevant and productive, and that our technology remains safe? During the first industrial revolution, humans were able to control machines thousands of times stronger than themselves because they were cognitively superior. To prepare for a world in which machines are cognitively stronger than us, we need to develop the domain that remains ours: our emotional potential.
To achieve this, Positive Education is key. Positive Education focuses on parallel tracks: cognitive labour, through academic excellence, and emotional labour, through building character, purpose and a value system. Only if our scientists and engineers have the right value systems themselves will they be able to build technology that shares those systems.
That is why, at Heriot-Watt University, we have made a strategic choice to adopt Positive Education across our institution. We are committed to being pioneers in education and to equipping our graduates with emotional skills, so that they achieve better academic and professional capabilities, not to mention a positive reception from employers.
Employability is key, and at Heriot-Watt University we know that our focus on Positive Education and the development of a personal value system delivers well-rounded graduates who will adapt well to the world of work. This focus may also be what is needed to ensure that the technology of the future is safe and delivers the things the world needs to keep it that way. Personally, that makes me sleep better at night. You?