Artificial Intelligence – A Walk Through the Ages
The Beginnings
“Nobody phrases it this way, but I think that Artificial Intelligence is almost a humanities discipline. It is really an attempt to understand human intelligence and human cognition.” – Sebastian Thrun, computer scientist, entrepreneur, and educator.
Artificial Intelligence (AI), as the name suggests, is intelligence exhibited by artificial entities (machines/software). Marvin Minsky, one of the pioneers of AI, defined it as the science of making machines perform tasks that would require intelligence if performed by humans. By this definition, if we created a robot that could make TikTok videos, it would not qualify!
The notion of AI has existed in the public imagination since ancient Greece, in mythology, art, and fiction. However, the origins of AI research go back to the 1950s, when Alan Turing, a British polymath, suggested the possibility of building machines that use available information and reason in order to solve problems and make decisions in the same way that humans do [1].
A few years later, the first proof-of-concept AI software emerged: the Logic Theorist, written by Allen Newell, Herbert A. Simon, and Cliff Shaw [2]. It was designed to mimic the problem-solving skills of human mathematicians. The Logic Theorist made its debut at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI) in 1956, which was organised by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. This workshop is considered by many to be the founding event of AI as a research discipline, and McCarthy is credited with introducing the term “Artificial Intelligence” in the project proposal [3].
The Boom
The two decades following the Dartmouth workshop saw AI flourish – computer programs were performing feats that seemed impossible at the time. A notable example is ELIZA, a chatbot created at the Massachusetts Institute of Technology (MIT) by Joseph Weizenbaum between 1964 and 1967 [4]. Even though ELIZA simply generated canned responses using pattern matching and substitution, it was able to engage in conversations that seemed very realistic.
ELIZA and other influential successes of the time led to intense optimism in the research community. Many prominent AI researchers made predictions which, in hindsight, were over-ambitious and outrageous. For example, in 1970, Minsky was quoted as saying, “In from three to eight years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point the machine will begin to educate itself with fantastic speed. In a few months it will be at genius level and a few months after that its powers will be incalculable.” [5]. Such optimism was not unusual at the time. The beginnings of computer vision, an important branch of AI, were also dripping with unrealistic expectations. In 1966, the Summer Vision Project, set at MIT by Seymour Papert, was described as “...an attempt to use our summer workers effectively in the construction of a significant part of a visual system.” [6].
This level of optimism and advocacy among the leading AI researchers led to funding of AI research at many institutions. For example, in 1963, the Defense Advanced Research Projects Agency (DARPA, then known simply as ARPA) granted $2.2 million to MIT to fund Project MAC [7]. DARPA continued to support the project until the 1970s, providing $3 million a year.
The Ebb and Flow of AI
The hype bubble created during the formative years of AI burst in the 1970s. This was in no small part due to the exaggerated expectations created by the high optimism of the AI researchers of the time. It became clear that real-world problems are much more complicated than previously thought. More importantly, the limited processing power and memory of the computers of the era meant that the AI programs created during that time were pretty much “toys”. Funding agencies like the British government, DARPA, and the National Research Council (NRC) stopped funding AI programs following various reports critiquing the failure of AI research. In addition, there were philosophical, ethical, and technical concerns within the research community itself that also rocked the boat. For example, the 1969 book Perceptrons by Minsky and Papert showed that single-layer perceptrons (a form of neural network introduced in 1958 by Frank Rosenblatt, which had generated much excitement and press coverage) were incapable of learning non-linearly separable functions such as XOR (exclusive OR). This called into question the suitability of these models for learning complicated real-world tasks when they could not even learn such a basic function. The book is widely considered to have contributed to an almost immediate end to research involving perceptrons.
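To see concretely what Minsky and Papert demonstrated, here is a minimal sketch (in Python with NumPy) of a Rosenblatt-style single-layer perceptron trained on two Boolean functions. The training loop, learning rate, and epoch count are illustrative choices rather than a reproduction of any historical system; the point is simply that the same learning rule that masters the linearly separable AND function can never reach full accuracy on XOR.

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    """Single-layer perceptron with a step activation and the classic update rule."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            update = lr * (target - pred)   # zero when the prediction is correct
            w += update * xi
            b += update
    return w, b

def accuracy(X, y, w, b):
    preds = (X @ w + b > 0).astype(int)
    return (preds == y).mean()

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])   # linearly separable: one line splits the classes
y_xor = np.array([0, 1, 1, 0])   # not linearly separable: no single line works

for name, y in [("AND", y_and), ("XOR", y_xor)]:
    w, b = train_perceptron(X, y)
    print(f"{name}: accuracy = {accuracy(X, y, w, b):.2f}")
# AND converges to 1.00; XOR never can, whatever the weights,
# which is exactly the limitation Perceptrons pointed out.
```

Adding a hidden layer, as in the multi-layer networks revived in the 1980s, removes this limitation.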
It was not until the 1980s that interest in AI was reignited. In the earlier era of the AI boom, the focus of research had been on heuristic computational methods, with attempts to develop very general-purpose problem solvers. In the 1980s, however, the focus shifted to knowledge-based approaches for solving problems in a specific domain. This led to the development of “expert systems”, which emulated the decision-making ability of a human expert by reasoning through bodies of knowledge and rules derived from human experts. Expert systems were widely used in industry; for example, R1 [8] was used at Digital Equipment Corporation to assist in the ordering of its VAX computers by automatically selecting components based on the customer’s requirements. The 1980s also saw a revival of academic interest in artificial neural networks due to algorithmic developments. During this period, money started being poured back into AI across the world: the Fifth Generation Computer Systems project in Japan, the Alvey Programme in the UK, and the Strategic Computing Initiative in the US were some of the programs kicked off during the 1980s.
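As a flavour of how the rule-based reasoning in expert systems worked, here is a deliberately tiny, hypothetical sketch of a forward-chaining rule engine in Python. The facts, rules, and component names are invented for illustration and bear no relation to R1’s actual knowledge base, which contained thousands of hand-crafted rules.

```python
# Toy forward-chaining rule engine in the spirit of 1980s expert systems.
# Facts describe a (hypothetical) computer order; rules add components
# until no rule has anything left to contribute.

facts = {"order": "workstation", "memory_gb": 4, "power_supplies": 0}
components = []

rules = [
    # (rule name, condition over the facts, action that updates facts/components)
    ("needs-more-memory",
     lambda f: f["order"] == "workstation" and f["memory_gb"] < 8,
     lambda f: (components.append("memory expansion board"),
                f.update(memory_gb=8))),
    ("needs-power-supply",
     lambda f: f["power_supplies"] < 1,
     lambda f: (components.append("power supply"),
                f.update(power_supplies=1))),
]

fired = True
while fired:                      # forward-chain until no rule fires
    fired = False
    for name, condition, action in rules:
        if condition(facts):
            action(facts)
            fired = True

print("Selected components:", components)
# -> Selected components: ['memory expansion board', 'power supply']
```

The brittleness of this style is also visible here: every situation the system should handle must be anticipated as an explicit rule, which is part of why such systems became expensive to maintain, as discussed next.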
But history repeats itself. Another “AI winter” arrived in the late 1980s. This time too, unrealistic expectations were one of the reasons for the downward trend. In addition, the initially successful expert systems, which ran on highly specialised purpose-built machines, became expensive and difficult to maintain. Such specialised AI hardware fell out of favour in the market as general-purpose desktop computers became faster, more powerful, and more affordable. AI programs started disappearing from the budgets of funding agencies, and by 1993, over 300 AI companies had vanished [9].
AI is Currently Riding the Wave
Despite AI’s collapse in the commercial world in the early 1990s, the field continued to make advances and took off through the 2000s. This was mainly due to increases in computing power and storage. Other contributing factors were the shift of focus from general to specific problems, and the use of machine learning (ML) techniques. Perhaps the most crucial factor behind AI’s current success has been the availability of relevant, high-quality datasets. Figure 1 shows some of the breakthroughs in AI between 1994 and 2015, along with the datasets and algorithms used for each [10]. On average, a breakthrough happened about 18 years after the underlying algorithm was first proposed, but only about 3 years after the key dataset became available.
Indeed, rapid developments in the areas of computer vision (e.g., image recognition), and natural language processing (e.g., speech recognition) in the past decade have only been possible due to the availability of large datasets.
The past few years have seen tremendous progress in “generative AI”, which is capable of generating text, images, music, and more. Notable examples are ChatGPT (which generates text based on natural language prompts) and DALL-E (which generates images from natural language descriptions). The models that power these kinds of AI systems have been trained on enormous amounts of data.
AI in the Manufacturing Scene
Today, we see AI-based systems in nearly every aspect of our daily lives. A few examples are route prediction in Google Maps [11], personalised recommendations in Netflix [12], search by people/things/places in Google Photos [13], and writing assistance in Grammarly [14].
AI also has many applications across different industries, and even within a single industry the applications are diverse. Figure 2 shows some applications of AI across the value chain of a typical life sciences organisation [15].
Throughout history, humans have strived to come up with faster, more efficient, and more cost-effective ways of producing commodities. The industrial revolutions of the 18th and 19th centuries saw the introduction of machines, the mechanised factory system, and assembly-line techniques, which paved the way for mass production of goods. The electrification of factories in the early 20th century further increased their output. In the 1950s, robots made their way into factories, kicking off a new era of automation.
The adoption of AI has been relatively slow in the manufacturing sector. However, according to a survey conducted in China by Deloitte in 2019, 93% of companies believe AI will be an important technology to drive growth in manufacturing [16]. Indeed, many large manufacturers have already started using AI in some form. A couple of years ago, John Deere started leveraging vision-based AI for automatic defect detection in its automated welding processes [17]. Pfizer began incorporating AI-based predictive maintenance capabilities into its continuous clinical manufacturing processes [18]. Another interesting example comes from the bicycle industry: component manufacturer SRAM embraced generative design (a form of AI that leverages cloud computing and ML to accelerate the design-to-make process) to rethink the bicycle crankarm [19].
Despite the optimism in the manufacturing community towards AI adoption, there are some challenges. Firstly, there is a shortage of specialised AI expertise within manufacturing companies. Secondly, there is a scarcity of relevant data for building reliable AI systems, since the manufacturing sector is spread across multiple industries, each with data localised to its own domain. Therefore, when trying to incorporate AI, companies need to focus more on data and less on the algorithmic models.
Final Thoughts
AI as a field is extremely broad, and the term is often used interchangeably with subfields like ML. This is because ML has been responsible for AI’s current wave of success. But not all AI is ML. AI is an umbrella term that does not refer to any specific method or value proposition, and because of this vagueness it has become a buzzword. In order to attract funding, many startups falsely advertise product features as AI when they are simply basic automation features. This is so prevalent that regulatory and enforcement agencies have started taking a more stringent look at businesses that market using the term [20]. Therefore, we must be cautious about terminology so that we don’t oversell and overpromise.
Every decline in AI over the years has been caused by inflated expectations brought on by overselling the capabilities of AI. Time and again, it has been shown that AI is not a silver bullet. Ultimately, AI is just one of the tools available. Given the complexity and challenges involved in implementing AI-based solutions, they should only be considered when they have been proven to perform better than simpler/more tractable solutions. In short, focus on the problem, not the solution.
References
1. Turing, A.M. (1950) ‘I.—Computing Machinery and Intelligence’, Mind, LIX(236), pp. 433–460. doi:10.1093/mind/lix.236.433. Available at: https://academic.oup.com/mind/article/LIX/236/433/986238
2. Newell, A. and Simon, H. (1956) ‘The logic theory machine – A complex information processing system’, IEEE Transactions on Information Theory, 2(3), pp. 61–79. doi:10.1109/tit.1956.1056797. Available at: https://shelf1.library.cmu.edu/IMLS/BACKUP/MindModels.pre_Oct1/logictheorymachine.pdf
3. McCarthy, J., Minsky, M.L., Rochester, N. and Shannon, C.E. (1955) ‘A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence’, republished in AI Magazine, 27(4), p. 12. doi:10.1609/aimag.v27i4.1904. Available at: https://ojs.aaai.org/aimagazine/index.php/aimagazine/article/view/1904
4. Weizenbaum, J. (1966) ‘ELIZA—a computer program for the study of natural language communication between man and machine’, Communications of the ACM, 9(1), pp. 36–45. doi:10.1145/365153.365168. Available at: https://dl.acm.org/doi/pdf/10.1145/365153.365168
5. Darrach, B. (1970) ‘Meet Shaky, the first electronic person’, Life, 20 November, p. 58. Scanned copy available at: https://books.google.ie/books?id=2FMEAAAAMBAJ&lpg=PP1&pg=PA58#v=twopage&q&f=false
6. Papert, S. (1966) ‘The Summer Vision Project’. Scanned copy available at: https://people.csail.mit.edu/brooks/idocs/AIM-100.pdf
7. Lee, J.A.N., Fano, R.M., Scherr, A.L., Corbato, F.J. and Vyssotsky, V.A. (1992) ‘Project MAC (Time-Sharing Computing Project)’, IEEE Annals of the History of Computing, 14(2), pp. 9–13. doi:10.1109/85.150016. Available at: https://ieeexplore.ieee.org/document/150016
8. McDermott, J. (1980) ‘R1: an expert in the computer systems domain’, Proceedings of the First AAAI Conference on Artificial Intelligence, pp. 269–271. doi:10.5555/2876590.2876666. Available at: https://dl.acm.org/doi/10.5555/2876590.2876666
9. Newquist, H.P. (1994) The Brain Makers: Genius, Ego, and Greed in the Quest for Machines That Think. 1st edn. Indianapolis, Indiana: Sams Publ.
10. (2016) Datasets over algorithms, Space Machine. Available at: https://www.spacemachine.net/views/2016/3/datasets-over-algorithms
11. Lau, J. (2020) Google Maps 101: How AI helps predict traffic and determine routes, Google Blog. Available at: https://blog.google/products/maps/google-maps-101-how-ai-helps-predict-traffic-and-determine-routes/
12. Recommendations, Netflix Research. Available at: https://research.netflix.com/research-area/recommendations
13. Search by people, things & places in your photos, Google Photos Help. Available at: https://support.google.com/photos/answer/6128838
14. How does Grammarly work? | Grammarly Spotlight, Grammarly Blog. Available at: https://www.grammarly.com/blog/how-does-grammarly-work/
15. Kudumala, A., Ressler, D. and Miranda, W. (2020) Scaling up AI across the life sciences value chain, Deloitte Insights. Available at: https://www2.deloitte.com/us/en/insights/industry/life-sciences/ai-and-pharma.html
16. (2020) Deloitte Survey on AI Adoption in Manufacturing, Deloitte China. Available at: https://www2.deloitte.com/cn/en/pages/consumer-industrial-products/articles/ai-manufacturing-application-survey.html
17. (2021) At John Deere, ‘hard iron meets artificial intelligence’, Intel Newsroom. Available at: https://www.intel.com/content/www/us/en/newsroom/news/john-deere-hard-iron-meets-artificial-intelligence.html
18. (2021) AWS Helps Pfizer Accelerate Drug Development and Clinical Manufacturing, Pfizer News. Available at: https://www.pfizer.com/news/press-release/press-release-detail/aws-helps-pfizer-accelerate-drug-development-and-clinical
19. Vinoski, J. (2021) SRAM and Autodesk reimagine the bicycle crankarm, Forbes. Available at: https://www.forbes.com/sites/jimvinoski/2021/05/19/sram-and-autodesk-reimagine-the-bicycle-crankarm/?sh=389e01261088
20. Atleson, M. (2023) Keep your AI claims in check, FTC Business Blog. Available at: https://www.ftc.gov/business-guidance/blog/2023/02/keep-your-ai-claims-check
About the Author:
Girish Mallya is a Senior Software Developer with over 12 years’ experience developing image analysis solutions. He started as a Marie Curie Researcher at UCD, developing image analysis tools for the discovery and validation of prognostic biomarkers for cancer. He then worked on commercial products such as the pathology image analysis portfolio of Leica Biosystems, where he was an image analysis engineer for over 4 years. He also has experience with C++, Python, and machine learning for image analysis.
Contact us today to learn how InnoGlobal Technology can transform your manufacturing operations. [email protected]