The Jetsons or The Terminator: How Generation X considers technology
A few years ago at a business conference I found myself bouncing in my seat, eager to ask a question. The event was full of high-powered professional and executive women from all over the country. We were hearing a number of presentations on technology and the future, from leaders in the tech industry as well as politicians. The message was that artificial intelligence was coming--not for the blue-collar worker, but for us. The marketing message, delivered repeatedly, was that we'd better "get on board." The idea was that artificial intelligence was taking over and directing society, that this was inevitable, and that we educated folks should be good handmaidens and help out the new colonial master. The presentation came complete with lights and special effects, as if they were trying to recreate a scene from "The Matrix." It felt dystopian, yet we were being asked to throw ourselves at it.
Today, reading about the debate between technology pioneers Elon Musk and Mark Zuckerberg over the potential impact of artificial intelligence on humanity, I was reminded of this conference and the questions I finally got to ask as it ended. With knots in my stomach I picked up the microphone. I knew that much of the conference was funded by companies investing in artificial intelligence, and that asking what I had to ask would strike some people as politically incorrect. Conflicted, raising my hand intermittently during the question-and-answer session, I was shocked when the microphone was placed before me while my hand was down. It was a sign. My voice shaking, I asked, "Why would I ever be for technology that was against humans?" I continued, "Shouldn't technology be our tool--not our master?" I ended, "Are we looking to build a future that looks like The Jetsons--or The Terminator?" Drop the mic--or was it snatched? I don't remember.
There was a moment of silence as the political celebrity sitting closest to me turned and studied me. For a moment he had an expression on his face that reminded me of the hourglass or spinning circle on a computer screen while input is being processed. Quickly, he responded, "That's a good point."
Although I am far from a Luddite, and part of me resists generalizations about target-market categories, when it comes to technology I find myself a dyed-in-the-wool Gen Xer. Truth be told, we are the best when it comes to technology. We learned how to think and solve problems in an analog world. We were the developers and beta testers of the technology revolution of the 1990s and early 21st century. Now in our prime, we are productive in a world highly influenced by information technology. And when things break down, we still know how to get a task done. Technology is our tool. We are not its tool. The tail does not wag the dog. Millennials are often described as being "better" with technology, but to a Gen Xer they might more likely seem dependent on it.
Gen Xers were also the latchkey kids. We grew up understanding responsibility. We reached puberty after sexual liberation...and the AIDS epidemic--so we grew up understanding consequences. When I hear Zuckerberg, a millennial, comment that we should not consider the possible negative impact of artificial intelligence, I have to wonder whether his perspective is informed by having grown up in a time when people were groomed to accept digital options without question. He also works in an industry, marketing, that is less regulated and carries lower-stakes consequences. Musk, on the other hand--still a technology giant in a category of his own--works in highly regulated industries with higher-stakes consequences. As a Gen Xer, he grew up in a time when children were more exposed to responsibility and impact. It should be no surprise that he might consider option Jetson or option Terminator as possible outcomes. As a Gen Xer in the highly regulated industry of medicine, I get these concerns, too.
What do you think? Is it irresponsible not to consider both the risks and benefits of how and why we develop and implement artificial intelligence in our society?