The Future of Thought: A focus on human intelligence as we invest in artificial intelligence.
As CES wraps up, one theme that emerged is that attention and investment in artificial intelligence are reaching a fever pitch, from self-driving cars and business-decision software to the AI-powered cat litter box. But what about human intelligence? Our tech and our industries are going to be disrupted by the ever-accelerating technology around us, but what if our thinking is disrupted, too? It's clear from the various research projects looking at the future of work that we humans will need to focus on different skills to remain relevant in the workforce, but what if there are more fundamental changes afoot? What if we need to be thinking about how humans will think in the future - not just from a productivity or economic perspective, but holistically? After all, what are you going to do with your excess time and cognitive resources when your car drives itself and the litter box cleans up automatically?
This morning, my former colleague Debra Stack posted a Yuval Harari quote:
"For every dollar and every minute we invest in improving AI, we would be wise to invest a dollar and a minute in exploring and developing human consciousness"
This topic has been the focus of my attention and personal research over the last five-plus years, and I have to say that not only are we not at 1:1 on Harari's ratio, it feels like we are at something like 100:1, or worse, especially when it comes to the ratio of attention and investment in the future of AI compared to that in the future of human consciousness.
A couple of months ago, Katina Michael, of ASU and IEEE, was kind enough to invite me to speak at ASU's DC campus for a Consortium for Science, Policy & Outcomes workshop called 'Data Alive'. The forum was academic and multi-disciplinary. We had the benefit of a large DC snowstorm the night before, so most of the audience couldn't make it into the city, but it was a good set of discussions about the rapidly changing world of data around us today.
Here's a link to a video of my talk, as well as Eusebio Scornavacca's. Mine is the first 30 minutes of this two-hour clip, followed by Professor Scornavacca's, and then we do a joint Q&A. I'd love your feedback.
In that talk I try to articulate my concerns and research focus within this crazy tech world we live in. By day, I create and promulgate advanced new technology, but by night (or, more accurately, the wee hours of 4am - 8am), I research and write about the cumulative effects of technology on humans, specifically its effects on how we think, now and in the future.
I've tried before to present views on my research in my two TED talks (2017 & 2018), but this forum was more academic, and the format allowed a slightly deeper dive into my thinking on the topic.
Question 1: Who do you know who is working on the human side of the equation? I'd love to find more people working on this, and to discover that the ratio is more balanced than I currently perceive. I highly recommend Carl Pabo's work at humanity2050.org for those who are interested in building out a similar map.
Question 2: In the meantime, for those who are interested, I'd love feedback on the message in my ASU talk, especially on how I can improve it. One computer scientist who watched it said I brought tears to his eyes at the end - hopefully those weren't tears of pain ;)
My central point is that the technology whirlwind around us today is not driven by something inherent in technology itself, nor by unethical technology companies (with all due respect to the great work being done by Tristan Harris and others), nor by inadequate government regulation, but by us - the people who engage with and consume the technology.
To understand what we need to do about the future of human cognition in the technology-filled world of tomorrow, we first need to understand why we interact with technology the way we do today. After all, our currencies of money and attention are spent on technology, and the scale of technology's impact is in the trillions of dollars annually, globally, disrupting not only our industries but our social structures and institutions. Technology is also, in my opinion, disrupting how we think, which means it is fundamentally rewriting what it means to be us. Given the monumental impact underway in our time, we must set out to understand why we consume technology. I'm not Chicken Little here - change will happen, and that's OK. But why not gain a sense of perspective on the problem, and perhaps a sense of control, at least over our own individual decisions, rather than just throwing up our hands and saying it is too complex to comprehend? (I'll give you one reason: incentives. Those of us working on tech and AI have a financial incentive, and the work is socially "sponsored", understood, and accepted; those who do it are looked on positively. My experience on the human side of the equation is that there are no such incentives. There is no money to be had here for commercial folks like me. Academics are not likely to gain tenure in this work - it is too interdisciplinary. For everyone, it is too big and too hard to address with only a trivial amount of attention. It requires the kind of Deep Work that Cal Newport talks about, and our society does not reward that. Don't let that stop you. Just sayin', and keeping it real.)
My work has led me to believe that these interactions are driven by evolutionarily adaptive traits that shaped our cognition in the past. As E.O. Wilson says,
"Humanity today is like a waking dreamer, caught between the fantasies of sleep and the chaos of the real world. The mind seeks but cannot find the precise place and hour. We have created a Star Wars civilization, with Stone Age emotions, medieval institutions, and godlike technology. We thrash about. We are terribly confused by the mere fact of our existence, and a danger to ourselves and to the rest of life.”
Yuval Harari takes a similar approach, spending 30 pages of his book Sapiens going back 50,000 years to highlight how our Stone Age ancestors' history informs our time. But neither Harari nor Wilson goes back far enough, I believe. I'm drafting two books right now (and they are available for review and feedback, if anyone is interested, or has trouble sleeping at night). The first, The History of Thought - Book 1, goes back 63 million years and looks at the co-evolution of 'technology' and cognition among pre-human species. My co-author, Chet Sherwood, a leading evolutionary neuroscientist, recently wrote a Scientific American article highlighting why humans are special, but he'd be the first to say that much of what drives humans today evolved long before we were humans. I'd argue that those changes were adaptive to past environments, but are now unconscious biases that may be maladaptive in our current world. How powerful those forces are today is not as well understood as I think it needs to be for us to make effective prescriptions.
Tim Taylor, the editor-in-chief of the influential Journal of World Prehistory, is my co-author for The History of Thought - Book 2, which goes back 6-7 million years, looking at the co-evolution of technology and cognition in the human species from then until the present. In that book, we look at the advent of fire, language, writing, and all of the more traditional "technologies" that have emerged in recorded human history.
These two books are the legwork, the 'compulsories before the free skate' (you may have to Google that figure skating analogy), forming the research foundation for the third book - The Future of Thought.
Here's the thing - I don't claim to be an expert on the past, present, or future. I'm just asking the questions and trying to dig out the answers. That leads me to track down different experts and people interested in the question. As AI explodes exponentially, it WILL have effects on us, individually and collectively (see my 2018 TEDx talk for my thoughts on that expansion vis-à-vis humans). We will have to team smarter with technology to remain relevant, but our ability to interact with other humans will be more important than ever, too. Your NQ (networking quotient), especially your ability to reach outside your own discipline and collaborate with thought leaders in other domains, may become more important than your IQ. IQ can be replicated by technology, but NQ may be harder for computers to replicate, at least in the near term.
A good portion of these future AI/HI (human intelligence) dynamics is predictable, and much of it is not. The fact that some of it is unknowable, and that the big picture is beyond the scope of our individual expertise, is NOT an excuse not to work on this problem. I'm no more of an expert on this than anyone else, nor someone shouting 'the sky is falling'; I'm raising a siren call for people to rally to these questions and work together - first to understand the situation better, and then to work toward solutions that leave us better off than we would be if we ignored the questions.
I'd love any feedback on this topic. It is much too important a topic for me to be afraid of getting my feelings hurt when you tell me my talks are too long, unwieldy, disjointed, or ineffective.
Question 3: How do we crystallize this issue and raise some discussions? How do we apply our respective and collective NQ to it? We don't need a million followers, or the kind of exposure Tristan Harris or Yuval Harari have gotten. By my guesstimate, there are maybe 2,000 people out there with the interest and discretionary brain cells to work on this. Let's find those people and get them talking. I recognize that not everyone will agree with, or want to spend time on, the historical/evolutionary foundation I created. That's fine - it was one way to approach the problem and gain perspective on why we interact with technology the way we do. Besides, that work is now largely done. The pressing question now is how we are going to apply those insights today. And what other approaches out there will help inform the question, beyond or better than the one with which I started?
My deepest gratitude to all those who've already leaned in on this incredibly complex issue and given feedback, support, and encouragement. Let's not let up. Is there anything more pressing, interesting, and potentially impactful that we could be doing than trying to address the question of how technology will impact us in the coming years? And if there are more interesting demands on our attention than that, what will happen if no one steps up to work on these questions, and they go unanswered?
I feel we'll just continue, as E.O. Wilson said, to thrash about and be confused.
What do you think - those 4 people who took the time to read this TL;DR article?
Destination Coffees | KM Digital | Greater Glory Coffee | MeetingResult
Short story - I learned about Dr. Robert Carkhuff in the late 1980s via the printing business, and was exposed to one book in particular, The Age of New Capitalism. Your efforts, Pat, conjured up thoughts on human creativity and preparing for the coming information age (this was 1988 or so). In any event, I did a quick search on Dr. Carkhuff and found what I share here: https://www.carkhuffgenerativitylibrary.com - this may be useful to your research, and worth a browse for those interested in human interactivity and more...
Pat - great topic and discussion. Thanks for continuing to bring awareness and thought here. We both have young kids. You asked for ways to improve the discussion. Perhaps introduce your kids and their ages, and try to predict what life for them will be like when they reach your venerable age? I find it is a very difficult thought experiment.
Hi-Performance Advisor/Sports & Business leveraging game changing Technologies
Unbelievable and thought-provoking on our future to avoid brain atrophy... you are ahead of our time, my friend.
Senior advisor in change management and organizational transformation
Great article and great talk as usual, Pat! Since the last time we spoke, I have started a group concerned about the future of work in light of technology evolution; we call ourselves the Undeletables... Our goal is to help people prepare, adapt, and thrive in the 4th industrial revolution. We held a first conference in L.A. in November and will probably do the next one in San Francisco in spring 2019.
John C. Havens; Asslam Umar Ali; Dr. Anas Aloudat; Carolyn McGregor AM; Joseph Carvalko; Eusebio Scornavacca; Roba Abbas; Jai Galliott; Gary Marchant; Andrew Maynard; David Guston; Clark Miller; Gary Grossman; Heather Ross