AI Eats the World
Md. Aftab Uddin, SFC, CSM®, CSPO®, A-CSPO®
Senior Business Analyst @ BRAC | Agile Technology Project Manager | Agile Practitioner | Tech Business Analyst | Aspiring PhD Researcher in Data Science & IT Project Management
Artificial intelligence (AI) has enabled an immense leap forward in business practice. On the way to the holistic algorithmic enterprise, AI is also increasingly addressing administrative, dispositive and planning processes in marketing, sales and management.
AI and the Fourth Industrial Revolution
If big data is the new oil, analytics is the combustion engine (Gartner 2015). Data only benefits a business if it is used accordingly and capitalized on. Analytics and AI increasingly enable the smart use of data and the associated automation and optimization of functions and processes to gain efficiency and competitive advantages.
AI is not another industrial revolution. This is a new step on the path of the universe. The last time we had a step of that significance was 3.5 billion years ago with the invention of life.
In recent years, AI has enabled an immense leap forward in business practice. Whilst Industry 4.0 focuses in particular on optimizing and automating production and logistics processes, AI increasingly also addresses administrative, dispositive and planning processes in marketing, sales and management on the path towards the holistic algorithmic enterprise.
AI is increasingly asserting itself as a possible driver of the massive disruption of business models and the entry into fundamentally new markets. There are already many cross-sectoral use cases that demonstrate the innovation and design potential of this core technology of the twenty-first century. Decision-makers across industrial nations and sectors agree. Yet a holistic evaluation and process model is still lacking if the many postulated potentials are actually to be exploited. This book proposes an appropriate design and optimization approach.
Equally, there is immense potential for change and design in our society. Former US President Obama declared the training of data scientists a priority of the US education system in his keynote address on big data. In Germany, too, the first data science degree programs have been established to train young talent. In spite of that, the "war for talent" still rages, as the pool of qualified staff remains very limited while demand stays high in the long term.
Furthermore, digital data and algorithms facilitate entirely new business processes and models. The methods applied range from simple hands-on analytics with small data through to advanced analytics with big data, such as AI.
At present, there are a great many informatics-related explanations of AI by experts, and an equally wide range of popular-science publications and public discussions. What is missing is a bridge from AI technology and methodology to clear business scenarios and added value. IBM is currently touring from company to company with Watson, but beyond the teaser level, the question of the concrete business application remains open. This book bridges the gap between AI technology and methodology and the business use and business case for various industries. On the basis of a business AI reference model, various application scenarios and best practices are presented and discussed.
After the great technological evolutionary steps of the Internet, mobile and the Internet of Things, big data and AI are now shaping up to be the greatest evolutionary step yet. Just as the industrial revolution freed us from the limitations of physical work, these innovations enable us to overcome intellectual and creative limitations. We are thus in one of the most thrilling phases of humanity, in which digital innovations fundamentally change the economy and society.
AI Development: Hyper, Hyper…
If we take a look at business articles of the past 20 years, we notice that every year there is talk of "constantly increasing dynamisation" or "shorter innovation and product cycles", similar to the washing powder that washes whiter every year. It is thus understandable that, given the much-quoted speed of digitization, a certain immunity to the subject has crept in for some people. Yet Fig. 1.1 illustrates that we are in fact exposed to an unprecedented dynamic: on the historic time axis, the rapid speed of "digital hyper innovation", with its concurrently increasing effect on companies, markets and society, becomes clear. This becomes particularly evident with the subject of AI. The much-quoted example of the AI system AlphaGo, which defeated the Korean world champion in Go (the world's oldest board game) at the beginning of 2016, is an impressive example of the rapid speed of development, especially when we look at the further developments and successes in 2017.
The game began in 1996, when the AI system "Deep Blue" by IBM won its first game against the reigning world chess champion, Garry Kasparov, before winning a full match against him in 1997. Although celebrated in public as one of the breakthroughs in AI, the enthusiasm among AI experts remained contained.
Fig. 1.1 The speed of digital hyper innovation
After all, in the spirit of machine learning, the system had quite mechanically, and in fact not very intelligently, discovered success patterns in thousands of chess games and then simply applied them in real time faster than any human ever could. Instead, the experts challenged AI systems to beat the world champion in the board game Go. This would then have earned the attribute "intelligent", as Go is far more complex than chess and, in addition, demands a high degree of creativity and intuition. Well-known experts predicted a development period of about 100 years for this new milestone in AI. Yet as early as March 2016, the company DeepMind (now a part of Google) succeeded in defeating the reigning Go world champion with AI. At the beginning of 2017, the company brought out a new version of AlphaGo called Master, which not only beat 60 highly experienced Go players but also defeated the first version of the system that had been so highly celebrated only one year prior. And there is more: in October 2017 came Zero as the latest version, which defeated not only the original AlphaGo but also Master. The exciting aspect of Zero is that, on the one hand, it got by with a significantly leaner IT infrastructure and, on the other hand, in contrast to its predecessors, it was not fed any dedicated experience from previously played human games. The system learned how to learn, and it did so with entirely new moves that humans had not discovered in thousands of years of play. This proactive, increasingly autonomous behaviour is what makes AI so interesting for business. For a country that sees itself as a digital leader, this "digital hyper innovation" should be regarded as a source of inspiration for business and society and put to use, instead of being stereotypically dismissed as a danger and a job killer.
The example of digital hyper innovation shows vividly what a nonlinear trend means and what developments we can look forward to, or should be prepared for, in 2018. To emphasize this exponentiality once again with the board game metaphor: if we take the famous rice grain experiment of the Indian king Sheram as an analogy, which is frequently used to explain how badly we underestimate exponential development, the rice grain of technological development has only just arrived at the sixth square of the chessboard.
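To make the arithmetic behind the metaphor concrete, here is a minimal Python sketch (an illustration added alongside this text, not part of the original legend's retelling): doubling the grains from one square to the next, the sixth square holds only 32 grains, while the full 64-square board accumulates 2^64 − 1, roughly 1.8 × 10^19 grains.

```python
# Chessboard rice-grain legend: 1 grain on square 1, doubling on each
# following square. Purely illustrative of how exponential growth explodes.

def grains_on(square: int) -> int:
    """Grains placed on a single square (squares numbered from 1)."""
    return 2 ** (square - 1)

def grains_up_to(square: int) -> int:
    """Total grains from square 1 up to and including the given square."""
    return 2 ** square - 1  # geometric series: 1 + 2 + ... + 2^(n-1)

print(grains_on(6))      # 32 -- where the metaphor places us today
print(grains_up_to(6))   # 63 grains accumulated so far
print(grains_up_to(64))  # 18,446,744,073,709,551,615 grains on the full board
```

The jump from square 6 to square 64 is exactly what makes exponential development so easy to underestimate.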
I want to share a quick look at some of the most critical events in AI since its beginnings.
1943
Warren McCulloch and Walter Pitts publish "A Logical Calculus of the Ideas Immanent in Nervous Activity." The paper proposes the first mathematical model for building a neural network.
1949
In his book The Organization of Behavior: A Neuropsychological Theory, Donald Hebb offers the theory that neural pathways are created from experiences and that connections between neurons become stronger the more frequently they’re used. Hebbian learning continues to be an essential model in AI.
1950
Alan Turing publishes "Computing Machinery and Intelligence," proposing what is now known as the Turing Test, a method for determining whether a machine is intelligent.
Harvard undergraduates Marvin Minsky and Dean Edmonds build SNARC, the first neural network computer.
Claude Shannon publishes the paper "Programming a Computer for Playing Chess."
Isaac Asimov publishes the "Three Laws of Robotics."
1952
Arthur Samuel develops a self-learning program to play checkers.
1954
The Georgetown-IBM machine translation experiment automatically translates 60 carefully selected Russian sentences into English.
1956
The phrase artificial intelligence is coined at the "Dartmouth Summer Research Project on Artificial Intelligence." Led by John McCarthy, the conference, which defined the scope and goals of AI, is widely considered to be the birth of artificial intelligence as we know it today.
Allen Newell and Herbert Simon demonstrate Logic Theorist (LT), the first reasoning program.
1958
John McCarthy develops the AI programming language Lisp and publishes the paper "Programs with Common Sense." The paper proposes the hypothetical Advice Taker, a complete AI system with the ability to learn from experience as effectively as humans do.
1959
Allen Newell, Herbert Simon, and J.C. Shaw develop the General Problem Solver (GPS), a program designed to imitate human problem-solving.
Herbert Gelernter develops the Geometry Theorem Prover program.
Arthur Samuel coins the term machine learning while at IBM.
John McCarthy and Marvin Minsky found the MIT Artificial Intelligence Project.
1963
John McCarthy starts the AI Lab at Stanford.
1966
The Automatic Language Processing Advisory Committee (ALPAC) report by the U.S. government details the lack of progress in machine translation research, a major Cold War initiative with the promise of automatic and instantaneous translation of Russian. The ALPAC report leads to the cancellation of all government-funded MT projects.
1969
The first successful expert systems, DENDRAL, a program for identifying the molecular structure of chemical compounds, and MYCIN, designed to diagnose blood infections, are developed at Stanford.
1972
The logic programming language PROLOG is created.
1973
The "Lighthill Report," detailing the disappointments in AI research, is released by the British government and leads to severe cuts in funding for artificial intelligence projects.
1974–1980
Frustration with the progress of AI development leads to major DARPA cutbacks in academic grants. Combined with the earlier ALPAC report and the previous year's "Lighthill Report," artificial intelligence funding dries up and research stalls. This period is known as the "First AI Winter."
1980
Digital Equipment Corporation develops R1 (also known as XCON), the first successful commercial expert system. Designed to configure orders for new computer systems, R1 kicks off an investment boom in expert systems that will last for much of the decade, effectively ending the first "AI Winter."
1982
Japan’s Ministry of International Trade and Industry launches the ambitious Fifth Generation Computer Systems project. The goal of FGCS is to develop supercomputer-like performance and a platform for AI development.
1983
In response to Japan's FGCS, the U.S. government launches the Strategic Computing Initiative to provide DARPA-funded research in advanced computing and artificial intelligence.
1985
Companies are spending more than a billion dollars a year on expert systems, and an entire industry known as the Lisp machine market springs up to support them. Companies like Symbolics and Lisp Machines Inc. build specialized computers to run the AI programming language Lisp.
1987–1993
As computing technology improves, cheaper alternatives emerge, and the Lisp machine market collapses in 1987, ushering in the "Second AI Winter." During this period, expert systems prove too expensive to maintain and update, and eventually fall out of favor.
Japan terminates the FGCS project in 1992, citing failure in meeting the ambitious goals outlined a decade earlier.
DARPA ends the Strategic Computing Initiative in 1993 after spending nearly $1 billion and falling far short of expectations.
1991
U.S. forces deploy DART, an automated logistics planning and scheduling tool, during the Gulf War.
1997
IBM's Deep Blue beats world chess champion Garry Kasparov.
2005
STANLEY, a self-driving car, wins the DARPA Grand Challenge.
The U.S. military begins investing in autonomous robots like Boston Dynamics' "Big Dog" and iRobot's "PackBot."
2008
Google makes breakthroughs in speech recognition and introduces the feature in its iPhone app.
2011
IBM’s Watson trounces the competition on Jeopardy!
2012
Andrew Ng, founder of the Google Brain Deep Learning project, feeds 10 million YouTube videos as a training set to a neural network using deep learning algorithms. The neural network learns to recognize a cat without being told what a cat is, ushering in a breakthrough era for neural networks and deep learning funding.
2014
Google makes the first self-driving car to pass a state driving test.
2016
Google DeepMind’s AlphaGo defeats world champion Go player Lee Sedol. The complexity of the ancient Chinese game was seen as a significant hurdle to clear in AI.
2017
In October 2017, Sophia, a social humanoid robot developed by the Hong Kong-based company Hanson Robotics, becomes the first robot to receive citizenship of any country. She is also named the United Nations Development Programme's first-ever Innovation Champion, the first United Nations title given to a non-human.
2018
Jair Ribeiro leaves IBM to become a Senior AI Business Analyst at the AI & ML Center of Excellence at Volvo Group (maybe one day it will be written in the history books).
2019
Yoshua Bengio, Geoffrey Hinton, and Yann LeCun, the godfathers of modern AI, win the Turing Award for their work developing the AI subfield of deep learning.
2020
What breakthrough in 2020 do you think will make it onto this list?