A BAKER’S-DOZEN THINGS TO KNOW ABOUT THE IMPACT OF AI

1.    The term Artificial Intelligence (AI) was coined in a 1955 proposal for a summer research project at Dartmouth College (held in 1956), in which computer scientists proposed a study to advance the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. Charmingly optimistic (or naïve), they suggested a 2-month, 10-man study would suffice to accomplish this feat!

2.    Sixty-plus years later, we are finally seeing real advances in AI, but these are raising worries about its implications. Computer scientist Ray Kurzweil has even proposed that, given the trends in computing power, within another two decades we will witness the event known as “The Singularity,” when computers will exceed the intelligence of humans and, presumably, be able to take control and become our overlords. Similar concerns have been expressed by other luminaries such as Elon Musk and the late Stephen Hawking. This claim is not new. Many decades ago, experts argued that Artificial Intelligence would be possible once we could build a computer the size of a ten-story building. Yet we now have more computing power in our smartphones than all the NASA computers of the sixties combined, and we still aren't seeing Space Odyssey’s HAL bossing us around.

3.    The fact is, today we are not much closer to the goals put forth by the Dartmouth conference. This is not due to a lack of computing resources, despite all the hype, but because no one really knows how it is that we humans think!  

You have probably heard of the Turing Test, which basically states that if you were to find yourself conducting a meaningful conversation with a computer, all the while believing you were talking to a human, then the computer can be said to be intelligent. Still, this test does not explain just what exactly intelligence is. We can borrow the dictionary definition: “[intelligence is] the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria”. I believe the most relevant feature of this definition is that intelligence is the ability to model reality via abstraction (“the quality of dealing with ideas rather than events”).

I don’t mean to offend those who can rightly point out specific examples of seemingly intelligent animal behavior, but under this definition animals can be smart, yet not intelligent. Yes, animals have been seen manipulating their environment, and even doing things that give the appearance of planning, but they have not yet been shown to think in abstract terms.

Also, at the risk of offending robot lovers, by this definition AI will not be taking over the world anytime soon. The best we can expect in the foreseeable future are Artificially Smart Systems (I’ll spare you the acronym): computers able to perform extreme automation. And that is nothing to be cavalier about.

Really. You should be more worried about HS (Human Stupidity) and HG (Human Greed) than AI. Human Greed’s most likely impact will be the continued proliferation of automation without corresponding political or social adjustments to make it work for all those who will lose their jobs as a result. As far as movie portrayals are concerned, you should be more frightened of a “Wolf of Wall Street” scenario than of one resembling “Terminator”.

The next points will explain why . . .

4.    Initial advances with “AI” have not been the result of machines emulating human thinking. Take IBM’s Deep Blue, which beat Garry Kasparov at chess. Its programming utilized techniques that in no way resemble how we humans think. So, impressive as that feat might have been, no one could claim Deep Blue was proof AI had finally arrived. Approaches that try to program the way humans solve problems are confounded by the fact that we acquire and utilize knowledge in a manner that is often inaccurate, biased, or even intuitive. This knowledge is also dependent on contextual, subjective, and often inexpressible decision-making rules.

Others think that copying the way our brain works is the way to go about it. This belief has motivated multi-million-dollar projects such as the European “Human Brain” project[1] and the American-sponsored Brain Activity Map Project (BRAIN). These initiatives are attempting to map the neurons in the human brain to hopefully figure out how this three pounds of goo works. Good luck with that. It is estimated that our brains have about a hundred billion neurons, yielding on average 100 trillion neural connections. The complexity this number of connections can generate is mind-boggling. While brain mapping initiatives are worth pursuing on a purely scientific research basis, one must question whether trying to reproduce intelligence by mapping the neurons in the human brain doesn’t have the markings of a fool’s errand. After all, this approach is reminiscent of the times when men tried to build flying machines based on the flapping of a bird’s wings.

5.    On the other hand, the last ten years have seen advances in pattern recognition algorithms, such as Google’s AlphaGo, which defeated a world-champion Go player by implementing “Deep Learning” neural networks that appear to emulate the way human brains work. This approach yields effective AI applications, provided these applications focus on the recognition of patterns, such as Go board arrangements, faces, voices, and street views from self-driving cars.

Machine learning is the one AI development that has many people so excited (and others so worried). But before we call on the services of Sarah Connor to save us from the Terminator’s Skynet, we need to put everything into perspective. While pattern recognition is a necessary attribute of intelligence, it’s worth remembering that pattern recognition is something even the lowliest cockroach can do.
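To make the “neural” part of deep learning concrete, here is a minimal sketch, with made-up data, of a single artificial neuron trained with the classic perceptron rule to tell two patterns apart. Systems like AlphaGo stack millions of such units in many layers; this toy only illustrates the basic mechanism of adjusting weights on errors:

```python
def train_neuron(samples, epochs=20, lr=0.1):
    """Perceptron rule: nudge the weights whenever the neuron misclassifies."""
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            pred = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - pred                  # 0 when correct, +/-1 when wrong
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b

# Hypothetical patterns: (brightness, roundness) -> 1 if "face-like", else 0.
samples = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.1, 0.2), 0), ((0.2, 0.1), 0)]
w0, w1, b = train_neuron(samples)

def classify(x0, x1):
    return 1 if w0 * x0 + w1 * x1 + b > 0 else 0

print(classify(0.85, 0.85), classify(0.15, 0.15))  # → 1 0
```

The “learning” here is nothing more than repeated small corrections, which is also why such systems need lots of labeled data.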

6.    Much of what makes learning possible is the capability to correctly classify the various elements of reality. We might not even be aware of this, but much of what we do instinctively as we go about our daily lives deals with classifying things and then fitting them into patterns (this is the reason why we tend to see faces or identifiable shapes in clouds). For our purposes, we can say that machine learning as it exists today is mostly about classification[2].

7.    At its core, machine learning classification occurs in two forms. The first is Supervised Learning, where you give the algorithm a set of labeled training data that essentially serves as examples (“Fans with red shirts are more likely to be rooting for Manchester United; fans with white shirts most likely support Real Madrid”). The second is Unsupervised Learning, where we essentially allow the algorithm to run loose and create potential groups or clusters based on auto-generated hypotheses that can then be statistically tested for accuracy (“Every time Manchester United scores, a larger percentage of fans wearing red shirts celebrate; likewise, every time Real Madrid scores, many more fans with white shirts cheer”).
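The shirt-color scenario above can be sketched in a few lines of Python. The nearest-mean classifier and the tiny 1-D k-means loop are toy stand-ins for real machine-learning algorithms, and “redness” is a made-up feature between 0.0 (white shirt) and 1.0 (red shirt):

```python
def supervised_classify(training, new_point):
    """Supervised: labeled examples teach us the typical 'redness' per team."""
    means = {}
    for label in {lbl for _, lbl in training}:
        vals = [x for x, lbl in training if lbl == label]
        means[label] = sum(vals) / len(vals)
    # Assign the label whose mean redness is closest to the new observation.
    return min(means, key=lambda lbl: abs(means[lbl] - new_point))

def unsupervised_cluster(points, iterations=10):
    """Unsupervised: discover two groups with no labels (1-D k-means)."""
    c1, c2 = min(points), max(points)            # crude initial centroids
    for _ in range(iterations):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    return sorted([c1, c2])

training = [(0.9, "Manchester United"), (0.8, "Manchester United"),
            (0.1, "Real Madrid"), (0.2, "Real Madrid")]
print(supervised_classify(training, 0.85))       # a very red shirt
print(unsupervised_cluster([0.9, 0.8, 0.1, 0.2]))
```

Note that the unsupervised version recovers the same two groups without ever being told the team names; attaching meaning to the clusters is still left to us.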

Supervised Learning is about training; Unsupervised Learning is about discovery. Even then, since real-life classification often encounters ‘fuzzy’ scenarios, and most popular machine learning algorithms are statistical in nature, results can sometimes be wrong. Biased training data can also make computers fail, and using data samples incorrectly may produce misleading results. There are cases of machine learning processes in which the computer made racially biased decisions because the sample data carried such implicit biases. To err is not only human; it is beginning to look as though accepting inaccuracy and fallibility from machines might be a necessary trade-off on the road to AI.
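How a biased sample leads to a biased model can be shown with a deliberately trivial sketch: a “classifier” that simply learns the most common outcome in skewed historical data will confidently repeat that skew on every future case. The data here is entirely hypothetical:

```python
from collections import Counter

def train_majority(labels):
    """'Learn' nothing but the single most common label in the training data."""
    return Counter(labels).most_common(1)[0][0]

# A skewed historical sample: 95% of past decisions were "approve".
biased_sample = ["approve"] * 95 + ["deny"] * 5
model = train_majority(biased_sample)
print(model)  # → approve (predicted for every future case, regardless of merit)
```

Real algorithms are far more sophisticated, but the failure mode is the same in kind: whatever bias is baked into the sample comes back out as a prediction.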

8.    Because of this ‘fuzziness’, many argue that machine learning can actually be capable of “creativity”. Again, if we go by the definition of “creativity” as the use of the imagination or original ideas, especially in the production of an artistic work, then we must concede that some AI applications are, in fact, creative. For example, work done by scientist-musician David Cope has produced beautiful computer-generated compositions that closely resemble the styles of Bach and Beethoven, among others[3]. However, if one goes by the alternative view that creativity is the expression of feelings from the experience of living, conveyed through aesthetically engaging output, then these machine learning experiments should be viewed as akin to paint-by-numbers exercises. As far as I’m concerned, if we are prepared to accept Jackson Pollock’s paint splashes as creative, I see no reason why we can’t say machine learning algorithms are as well. But let the debate rage on!

9.    In any case, the main nugget behind the reason machine learning has become much more practical and effective of late is this: the availability of huge amounts of data that can be classified. There are now an estimated one trillion pages on the Web. It is estimated that 2.5 Exabytes of data are generated every day world-wide. You would need to purchase two and a half billion 1 GB thumb drives at Staples to store this amount of data. Put another way, 2.5 Exabytes represents 500 billion U.S. photocopies, 610 billion e-mails, or 7.5 quadrillion minutes of phone conversations. In fact, as figured out by those who have taken the time to do these calculations, all the words ever spoken by mankind amount to ‘only’ 5 Exabytes[4]. Every two days we are generating as much data as all the words that have ever been spoken by the entire human race!
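The arithmetic behind these comparisons is easy to check (using decimal units, where 1 EB = 10^18 bytes and 1 GB = 10^9 bytes):

```python
EXABYTE = 10**18
GIGABYTE = 10**9

daily_data = 2.5 * EXABYTE            # estimated data generated per day
drives_needed = daily_data / GIGABYTE  # 1 GB thumb drives to hold one day's data
all_spoken_words = 5 * EXABYTE         # estimate of all words ever spoken

print(f"{drives_needed:.1e} one-GB drives per day")  # → 2.5e+09
print(f"Two days of data = {2 * daily_data / all_spoken_words:.0%}"
      " of all words ever spoken")                   # → 100%
```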

Something’s got to give with so much data. The era of machine learning is fundamentally the era of Big Data.

10. No doubt, machine learning can make computers appear smart. For example, Google Translate uses machine learning techniques to render the sentence “I liked that man very much, but then I noticed the ring around his finger, so I knew he was not a good choice for me” into an impressively accurate Spanish translation[5]. But let’s now ask whether Google Translate would be able to answer these questions about the sentence: “Why did the ring cause the speaker to assume that?”, “What does this say about the belief systems of the speaker?”, “What is the speaker looking for?” Google could, no doubt, train its machine learning algorithms to somehow produce reasonable answers to these questions, but that training would relate only to the sentence pattern provided by this specific example. It would stumble when asked about a sentence with a similar pattern but a different meaning. Something like this: “I liked that shirt very much, but then I noticed the ring around its collar, so I knew it was not a good choice for me.”

11. In other words, for all their successes, machine learning systems do not really understand the world any better than your dog Fido understands the “fetch the newspaper” command when he goes to grab the paper. What is needed for true AI to occur is for the machine to be able to understand the world. The 20th-century philosopher Ludwig Wittgenstein suggested that the road to understanding goes through language (I am paraphrasing; he said this in a far more sophisticated manner, as philosophers are wont to do)[6].

Ironically, the commercial success of machine learning has taken the focus away from the more traditional AI ‘Symbolist’ approach. Symbolist systems focus on language understanding, using predicate logic to try to capture the meaning of language. Obviously, some natural language processing takes place in both Siri and Alexa, but the level of understanding they appear to have is based on clever back-end search tricks, not on actual understanding of language. Significant work continues in Natural Language Processing, primarily in academia, but progress remains slow. This might or might not be a good thing, since I personally believe that true AI (also known as “Strong AI”), the kind that could give us the Terminator world, isn’t going to happen unless we crack the natural-language nut.
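For flavor, here is a minimal sketch of the symbolist idea: hand-written facts and rules in a crude predicate form, with new knowledge derived by forward-chaining inference rather than by statistics. The facts and rules are invented for illustration and echo the ring example from point 10:

```python
# Facts are (predicate, subject) pairs; rules map a premise predicate to a
# conclusion predicate. All content here is a hypothetical toy knowledge base.
facts = {("wears_ring", "man")}
rules = [
    ("wears_ring", "married"),      # if X wears a ring, infer X is married
    ("married", "unavailable"),     # if X is married, infer X is unavailable
]

def infer(facts, rules):
    """Forward chaining: keep applying rules until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, subj in list(derived):
                if pred == premise and (conclusion, subj) not in derived:
                    derived.add((conclusion, subj))
                    changed = True
    return derived

print(sorted(infer(facts, rules)))
# → [('married', 'man'), ('unavailable', 'man'), ('wears_ring', 'man')]
```

Unlike a statistical model, the chain of reasoning is explicit and inspectable; the catch, as decades of symbolist research showed, is that hand-writing enough rules to cover the real world does not scale.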

12. Even if machine learning never achieves AI-level understanding, this does not mean it can’t have negative effects on society. The broad term “automation” refers to computer technologies that might or might not be based on AI, but that leverage advances in computing to the point of performing jobs previously done by humans. And indeed, machine learning is making automation much more practical and effective. Examples are Robotic Process Automation ‘bots’ capable of handling basic language requests and then utilizing available back-end information to formulate programmed responses à la Siri; in other words, the kind of thing that could easily replace your typical DMV front-desk person, but sans the attitude.

Even if modern automation applications do not meet the AI criteria defined at Dartmouth, their results may still appear magical. Most importantly, their impact can be very real. It is estimated that, with the automation of knowledge work, advanced robotics, and autonomous vehicles, the economic impact of automation could range from $7.1 trillion to $13.1 trillion. This represents 10% to 20% of the world’s total economic output!

McKinsey’s research suggests that as much as 45% of the activities individuals are paid to perform could be automated by adapting currently demonstrated automation technologies. In the United States alone, these job activities represent about $2 trillion in annual wages, or 10% of U.S. GDP. A study by the Executive Office of President Obama on AI, Automation, and the Economy indicated that the scale of threatened jobs over the next two decades ranges from 9% to 47%[7].

Now, this is something we really need to be worried about. Especially if you work for a living.

13. Lastly, let’s talk about consciousness. Even if we manage to build a truly intelligent machine, we will still face the question of whether this machine is conscious. The issue is that we do not even know what consciousness really is. Whether we take a religious, a philosophical, or the materialistic view that consciousness is strictly ‘the juice the brain excretes’, we are either ten Nobel Prizes away, or in need of an immersive dive into an endless stream of new-age meditative sessions with white-bearded gurus, to figure that one out.

Of one thing I’m certain: mechanistic approaches won’t work. Take, for instance, the field of genetics. Not long ago it was believed that once the human genome was successfully sequenced, science would be able to precisely pinpoint the specific workings of any organism. This mechanistic view held that we would be able to map genes to specific proteins and proteins to specific functions.

What has been discovered instead is that most of our genes do not encode proteins, and it is unclear whether our genetic code alone determines who we are. Most surprisingly, it was found that human beings have far fewer genes than either onions or worms!

Who knows, it might well be that consciousness, and even our brand of intelligence, will never be recreated by traditional computers. Whether the feat will be feasible with future quantum computers is another matter. In my view, the most promising course of research regarding consciousness comes from outlier theories by reputable scientists, such as the Penrose-Hameroff model of consciousness[8]. This model posits consciousness as a quantum-physics phenomenon, one not amenable to traditional computation methods.

In my mind, this makes sense. Once you enter the realm of quantum physics, you’re dealing with a host of other weird phenomena. And that would be just fine because, after all, what can be weirder than consciousness?

Footnotes:

[1] See this link for a status on this project: https://www.bbc.com/news/science-environment-28193790

[2] “Regression” or the ability to predict from data is another machine learning feature.

[3] https://artsites.ucsc.edu/faculty/cope/mp3page.htm or check his Bach-like chorale on YouTube: https://www.youtube.com/watch?feature=player_detailpage&v=PczDLl92vlc

[4] “How much information?”—Hal Varian and Peter Lyman. https://www2.sims.berkeley.edu/research/projects/how-much-info/print.html

[5] “Me gustó mucho ese hombre, pero luego noté el anillo alrededor de su dedo, así que supe que no era una buena opción para mí.”

[6] “4.01 A proposition is a picture of reality. 4.001 The totality of propositions is language. 4.11 The totality of true propositions is the whole of natural science.” From “Tractatus Logico-Philosophicus” by Ludwig Wittgenstein.

[7] The original research was posted in whitehouse.gov but has been archived by the new administration to this address: https://obamawhitehouse.archives.gov/blog/2016/12/20/artificial-intelligence-automation-and-economy.

[8] https://en.wikipedia.org/wiki/Orchestrated_objective_reduction


