"WARNING, Will Robinson!" & Other Musings About A.I.
Bob Britton
The "Revenue Maestro" - SMB & Midmarket Top-Line Revenue Improvement • Management Consulting • Sales/Marketing/CS Strategy • Sales/CS Training & Coaching • Leadership & Team Development • Corporate Storytelling • CX • AI
Me: “I love you, Siri.”
Siri: “You are the wind beneath my wings.”
Anyone else had that “conversation” with your iPhone? We’re amused, and even a little amazed, at how “she” responds to us. We ask Alexa to turn down the lights in the living room. We say “Keanu Reeves” to our remote control and the DVR immediately offers up John Wick, Point Break, and Speed. Looks like Gene Roddenberry got a few things right when he wrote Star Trek.
Closer to our business world, we have systems that will harvest data from the Internet and offer up a list of prospects to call. We have BI (business intelligence) software which grabs data, presents a hoard of information and statistics on a given company, and offers up comparisons with other companies. There are programs which will create and send out emails, and even call somebody on the phone, have a “conversation” with them, and set the appointment based upon the responses of the human on the other end of the telephone line. Our “smart” phones are now letting us know when it’s time to leave to make it to our next appointment.
And we call this, incorrectly, AI, or artificial intelligence.
RPA vs. AI
It seems most of what is being touted as AI isn't. We need to understand some of the differences between AI and RPA, or robotic process automation. The lines are becoming blurrier each day, thanks to the conflation of the two terms by those who don't know the difference and/or are incorrectly using the term AI in their marketing to capture your attention.
First, RPA. A robot is a machine, sometimes ascribed the qualities of a human, which carries out repetitive tasks. Modern usage of the term “robot” has expanded beyond physical machines to include computer programs and “bots” which harvest information from the Web. We need to rethink our Isaac Asimov anthropomorphic vision of what constitutes a robot, and not think of them as only things we can physically touch, whether it's a robot that looks like a person, a dog, or the shiny red thing you see flailing about on the assembly lines.
Generally, we view robots as being “dumb”, meaning they only do what they’re programmed to do. We view robots as being useful to automate the tasks a human would normally do themselves. Speaking into a microphone to turn on lights or change TV stations is RPA; the microphone substitutes our using a keyboard or a remote, increasing our speed and efficiency, though not our effectiveness, since the results would be the same whether we turned on the lights or Alexa did. Robots free up a human’s time and reduce the errors introduced by humans into repeatable processes.
The programming behind RPA tasks can be simple or extremely complex, depending upon how many tasks are chained together. The more complex they are, the more they mimic what we mistake for AI. RPA can also execute its complex programs so quickly that it further adds to what we perceive as intelligence; how long does it take your car’s self-diagnostics to check its own tire pressure, oil level, and battery charge, for example, compared to you having to check all of those on your own? Both we and marketers are now calling our cars “intelligent” because of such things. But again, the results are the same, whether the “bot” in the car does it or a human does it; efficiency is improved because it saves time and is more accurate, but effectiveness doesn’t improve since the end result is the same.
Next is AI. AI is viewed even more anthropomorphically than robots – we liken it only to a human brain, and incorrectly expect it to behave as such. We probably even conjure up some physical representation of AI, and maybe picture a box with all sorts of wires attached. In reality, AI is just line after line of computer code. One of the key differences between RPA computer code and AI code is heuristics, or self-learning; whereas RPA is an algorithmic program which, once written by a programmer, never changes, AI is a program which rewrites itself (sort of), based upon available data as well as its own results. RPA results are consistently the same, and AI results may (and likely will) vary.
Do you have auto-correct turned on when you’re texting? If so, you’re seeing a tiny glimmer of how AI works, the results of which can be “interesting”. You may also notice it gaining accuracy over time as it “learns” the words you use more frequently. Those who use voice recognition software (Dragon, for example, or your phone’s speech-to-text function) are also familiar with AI, since it adapts to your voice patterns. Many (most/all?) voice recognition programs have a reset button, so you can “retrain” the program to become more precise in understanding your unique voice, tonal inflections, and accent. If the program starts interpreting your Bostonian “bars” as the sound a sheep makes, then you erase the learning and start from scratch again. I’m not sure if there’s a requirement in AI design that there be a reset or “unlearn” function, but to me it sounds like a good idea. The point is, if you simply accept the auto-correct (or speech-to-text) interpretation and press “Send Text”, you’re potentially altering the effect you have on the receiver of the text, vs. if you had taken the time to type it in correctly without auto-correct turned on. In the case of auto-correct, the change in effect could be funny, or it could be negative, or perhaps more positive if it suggests a better word than what you had originally typed – but the result is different from what you originally intended.
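For the programmers in the audience, the “learning” and “unlearn” behavior described above can be sketched in a few lines. This is a toy illustration of the idea, not how any real phone’s auto-correct actually works; the class and its dictionary are hypothetical:

```python
from collections import Counter

class ToyAutoCorrect:
    """Toy 'self-learning' auto-correct: unlike a fixed RPA-style rule,
    its suggestions change as it observes the user's own word choices."""

    def __init__(self, dictionary):
        self.dictionary = set(dictionary)
        self.usage = Counter()  # learned state: your personal word frequencies

    def observe(self, word):
        """Learn from a word the user actually typed and sent."""
        if word in self.dictionary:
            self.usage[word] += 1

    def suggest(self, prefix):
        """Suggest dictionary words for a prefix, most-used first,
        falling back to alphabetical order for unseen words."""
        candidates = [w for w in self.dictionary if w.startswith(prefix)]
        return sorted(candidates, key=lambda w: (-self.usage[w], w))

    def reset(self):
        """The 'unlearn' button: erase everything it has adapted to."""
        self.usage.clear()


ac = ToyAutoCorrect(["bars", "baas", "barn"])
for _ in range(3):
    ac.observe("bars")           # you keep typing "bars"...
print(ac.suggest("ba")[0])       # -> 'bars' (learned preference)
ac.reset()
print(ac.suggest("ba")[0])       # -> 'baas' (back to the default ordering)
```

The contrast with RPA is in that `usage` counter: an RPA script would rank suggestions the same way forever, while this program’s output drifts as its data changes – and `reset()` is the “start from scratch” button described above.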
To recap, increasing efficiency through robotics is RPA, while a change in effectiveness, for better or worse, is AI.
So, What’s the Catch?
The catch is our expectations of AI, and our incorrect assumptions that AI can be a one-for-one substitute for another human being. My intent here is not to discuss the morality or ethics of AI; that’s another topic, plus AI is here already, and probably here to stay. We do, however, need to understand what it is we’re dealing with.
First, AI is evolving, rapidly. There is a tremendous amount of research and work being done to emulate the human brain in lines of binary code. But we need to understand that AI, despite its seemingly human interactions with us, is just that, lines of code, an ever-changing series of 1’s and 0’s. The genesis of AI code, the reason it does what it does, is not based upon billions of years of evolution; AI is born out of human beings tapping on keyboards. AI is an attempt to recreate something which we do not adequately understand – the human brain. AI is a program, a robot, a tool. It doesn’t have DNA. Guiding principles such as morality, ethics, politics, and intuition must be programmed into it. It is not subject to random and evolutionary variations unless they’re programmed in. There’s nothing natural about AI. It’s not something you can touch, and you shouldn’t be having a conversation with it as though it were human. It is something to be controlled, something which must ultimately serve to better humans, not something which itself is to be served.
Ultimately, business is about decision making between humans – whether to sell, whether to buy, whether we can, whether we should, who we can and should interact with, and why... AI decision making is different from human decision making. AI makes decisions based upon logic and probability. Humans, comparatively, may get to their decision points using logic and probability, but the human act of deciding is emotional[i], and chock full of cognitive bias. Those decisions, once made, are then supported based upon available data, information, and knowledge, and are subject to confirmation bias which skews the value placed on all data. Human decision making is messy, to say the least. AI, in contrast, doesn’t make intuitive leaps; it powers through mountains of data and makes its decisions accordingly. Comparatively, humans don’t, and in fact can’t, look at the same quantity of data as AI; we instead arrive at our decisions based upon things which are relevant to us at the time. It’s the difference between the way a computer and a human will play chess: Computers use a brute force approach, calculating all probable outcomes some given number of moves away, selecting the moves with the highest probability of success; humans, on the other hand, aren’t just playing to the probability of the move, they’re also playing the irrational human across the board from them – which is how people could sometimes beat earlier chess computers, not because they were “smarter” than the computer, but because the computer didn’t predict some random, irrational move by the human. (Today’s top engines, it should be said, are effectively unbeatable.) In business, our goal is to meet the needs and wants of other humans, with profit being both a by-product and very accurate barometer of our efforts.
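The brute-force approach described above is, at its core, minimax search: enumerate every move to some depth, score the resulting positions, and assume both sides play their best line. Here is a minimal sketch over a hypothetical toy game tree (labels and scores are invented for illustration; a real chess engine adds move generation, pruning, and a far richer evaluation):

```python
def minimax(state, depth, maximizing, moves, score):
    """Brute-force game search: try every move `depth` plies ahead.
    `moves(state)` lists the positions reachable in one move;
    `score(state)` evaluates a position (higher = better for us)."""
    options = moves(state)
    if depth == 0 or not options:
        return score(state), None
    best_move = None
    if maximizing:
        best = float("-inf")
        for m in options:
            val, _ = minimax(m, depth - 1, False, moves, score)
            if val > best:
                best, best_move = val, m
    else:
        best = float("inf")
        for m in options:
            val, _ = minimax(m, depth - 1, True, moves, score)
            if val < best:
                best, best_move = val, m
    return best, best_move


# Hypothetical two-ply game tree with hand-picked leaf scores.
tree = {"start": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
vals = {"a1": 3, "a2": 5, "b1": 1, "b2": 9}
value, move = minimax("start", 2, True,
                      moves=lambda s: tree.get(s, []),
                      score=lambda s: vals.get(s, 0))
print(value, move)  # -> 3 a
```

Notice the engine picks line “a” (guaranteed score 3) and walks right past the tempting 9 in line “b”, because it assumes the opponent will rationally steer toward the 1. The irrational human across the board, by contrast, might not.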
Humans, too, think about probability, but don’t always go with the solution which offers the greatest chance of success. Why? Because of the dynamic of risk vs. reward. Entrepreneurs take what some consider to be an irrationally high degree of risk, rather than playing it safe. AI advocates would argue that AI can be programmed to accept lower-probability solutions like an entrepreneur might, but what’s missing from AI is human intuition – knowing or sensing something from instinctive feeling rather than conscious reasoning.
To bring this back to business decisions, we are flawed in assuming those we deal with are rational players. There’s a certain amount of irrationality in every person, and most definitely a more limited capacity to absorb and process data than the computers of today. So, suppose we do release the Kraken of AI upon the top of our sales funnels, filling them with ten times or more the number of prospects we could generate manually in the same amount of time, prospects which are supposedly filtered and better-qualified. What will the seller do then? They’ll most likely take the “advice” of the AI engine and begin engaging with the prospects having the highest probability of purchasing. Sounds perfect. Except…
Except that if the decisions our clients made were rational, there’d be no reason for the customer not to sign on the dotted line after we’ve laid out all the reasons it made sense to do so. Given the (declining) success rate of quota attainment of late, clearly there are some X-factors, human factors, irrational and emotional factors at play here causing our clients to choose another vendor over us, such as the previously mentioned confirmation bias. Our customers don’t always buy simply because the numbers make sense. To put it more simply, the customer may need what we have to offer, but wants something else. I suspect this might cause AI to blow a fuse. In the immortal words of B9, the robot from Lost In Space, “Does not compute!”
Let’s flip the coin over and look at this from the customer’s side. If we in sales are chasing down AI, how long before the customers begin using the same technology? Our customers have been evolving faster than us for the past 20 years, so there’s no reason not to expect they’ll be ahead of us in the AI race very soon, if they’re not already. They’ll use their AI to filter out our AI and will use the data presented by their AI to get to the point where they need to make – an emotional decision. Then begins the irrational, biased rejection of the perfectly rational reasons our AI presented to them. Where are we then? Back at square one?
And allow me to present one more scenario, if I may. At what point does the seller’s AI simply start interacting with the buyer’s AI, making all decisions based upon probability, cutting out the slow-poke humans who have limited bandwidth and a penchant for irrational emotion? Isn’t AI-to-AI commerce in B2B simply RPA? I’m not trying to paint some Terminator-Skynet apocalyptic scenario here, although it’s not a far stretch of the imagination at this point. What I am saying is this, philosophically: Our ability as humans to make intuitive, irrational leaps is both what propels us forward, while at the same time throttles us back a bit. People are a two-steps-forward, one-step-back species. AI has the potential to be a thousand steps forward with limited, if any, steps backward in some alternate concept of reality. And consideration of the impact of AI on the human race, by the way, is hardly new. Back in 1942, Isaac Asimov introduced his Three Laws of Robotics, later collected in his watershed work I, Robot:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
I’m wondering how many designers take Asimov’s prescient Three Laws to heart when they create AI for the purposes of sales; are they considering humanity’s benefit and/or preventing humans from harm caused by these AI engines, or do they simply try to improve top- and bottom-line performance?
The Unintended(?) Consequences of RPA and AI
RPA and AI have enormous potential to impact business processes, hopefully for the better. From medicine, to education, to space exploration, the positive impacts have been and will continue to be tremendous and wonderful. However, we must keep in mind that if the intent of AI is to increase top- or bottom-line performance for a company using a competitive paradigm, then AI will execute its programming and make those performance increases dispassionately, ruthlessly, predictably. There’s an obvious and rapid shift toward automation within businesses, driven by the complexity of the world around us and the need to wrangle and make sense of the almost limitless amount of information available to us. RPA is intended to make us more efficient, thereby reducing errors and freeing up time. AI is intended to make us more effective, so we are better able to meet the wants and needs of other people. Which leads to the early 19th-century argument of the Luddites, who feared technology would displace workers. Well, yes, that happened, and continues to happen. In fact, RPA is often priced and sold according to the equivalent number of people it replaces. Once an employee has free time, utilization drops, they become prohibitively expensive, and they then need to find other, meaningful work. But, the top- and bottom-line performance of the company is improved... We can only hope those implementing RPA and AI will keep this in mind as they move forward.
A Final Thought
One of my favorite places on earth is an art museum. Whether it’s the Tate Modern, MOMA, the Vatican, the Guggenheim, or some little gallery in Newport overlooking the Atlantic Ocean, you’ll find something created from the imagination of another human being which catches your eye, causes you to linger on it for a couple of extra seconds, gets your imagination going, tells a story – resonates with you. Sometimes we can’t even put our finger on it – we just like it. What would resonate with AI? What would AI ‘like’? Given Munch’s The Scream, Dali’s The Persistence of Memory, Picasso’s Le Rêve, Rousseau’s The Sleeping Gypsy, Mondrian’s Composition II in Red, Blue, and Yellow, or Jackson Pollock’s No. 5, 1948, which would AI prefer? More importantly, as it relates to our topic of AI in sales, what would cause AI to purchase one of these? Honestly, which one or two of these classic works of art do you suppose AI would lay down millions of dollars for – and, most importantly, why?
[i] Damasio, Antonio R. (1994). Descartes' Error: Emotion, Reason, and the Human Brain. New York: G.P. Putnam.