Artificial Intelligence Definition and Scope
Ferhat SARIKAYA
MSc. AI and Adaptive Systems — AI Researcher, MLOps Engineer, Big Data Architect
Artificial intelligence (AI) has captured minds across a range of disciplines in science and technology. But what exactly is AI, and why do different fields define it differently? In what follows, we will look at the multifaceted nature of AI, its definitions across multiple domains, and the ongoing debates about its limitations and its potential.
Definitions of AI Across Multiple Disciplines
1. Computer Science:
In computer science, AI is defined as the study and construction of intelligent agents: systems that perceive their environment and take actions to achieve their goals. John McCarthy, often called the father of AI, defined artificial intelligence as "the science and engineering of making intelligent machines" (McCarthy et al., 1955).
Example: A chess-playing program that analyses millions of possible moves and picks the best one.
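To make the search idea concrete, here is a minimal, hedged sketch of minimax, the classic algorithm behind chess engines. To stay self-contained it uses a tiny "take 1 or 2 stones" game instead of chess; the TakeStonesGame class and its scoring are illustrative assumptions, not a real engine.

```python
# A minimal minimax sketch. The tiny "take 1 or 2 stones" game stands in for
# chess so the example runs as-is; it is illustrative only, not a real engine.

class TakeStonesGame:
    """Players alternate removing 1 or 2 stones; whoever takes the last stone wins."""

    def legal_moves(self, stones):
        return [m for m in (1, 2) if m <= stones]

    def apply(self, stones, move):
        return stones - move

    def is_terminal(self, stones):
        return stones == 0

    def evaluate(self, stones, maximizing):
        # Terminal position: whoever just moved took the last stone and wins.
        return -1 if maximizing else 1


def minimax(game, state, maximizing):
    """Score a position by searching the game tree to its end."""
    if game.is_terminal(state):
        return game.evaluate(state, maximizing)
    scores = [minimax(game, game.apply(state, m), not maximizing)
              for m in game.legal_moves(state)]
    return max(scores) if maximizing else min(scores)


def best_move(game, state):
    """Pick the move whose resulting position scores best for us."""
    return max(game.legal_moves(state),
               key=lambda m: minimax(game, game.apply(state, m), maximizing=False))


print(best_move(TakeStonesGame(), 4))  # -> 1 (leaves the opponent a losing position)
```

A real chess engine adds a depth limit, a heuristic evaluation of non-terminal positions, and pruning, but the decision rule is the same.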
2. Philosophy:
Philosophers take a different tack. They define AI as the attempt to build machines whose behavior resembles human intelligence. The philosopher John Searle coined the distinction between "strong AI" and "weak AI" in his discussion of whether machines can truly think (Searle, 1980).
Example: The famous "Chinese Room" thought experiment, which asks whether a machine that appears to understand Chinese actually understands anything at all.
3. Psychology:
Psychologists generally define AI in terms of cognitive processes. Cognitive scientist Marvin Minsky, co-founder of the MIT AI Lab, described AI as "the science of making machines do things that would require intelligence if done by men" (Minsky, 1968).
Example: An AI system that identifies emotions from facial expressions, much as humans read faces.
4. Linguistics:
In linguistics, artificial intelligence is usually viewed through the lens of natural language processing. Noam Chomsky's work on generative grammar has strongly influenced how AI approaches language (Chomsky, 2002).
Example: Google Translate, an AI-powered translation service that parses text and translates between more than a hundred languages.
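As a hedged illustration of this kind of system, the sketch below runs an open-source translation model via the Hugging Face transformers library (not Google Translate itself); the model checkpoint named here is only an example, and its weights are downloaded on first use.

```python
# A small machine-translation example using an open-source model; the specific
# checkpoint is an assumption chosen purely for illustration.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")
result = translator("Language is at the heart of intelligence.")
print(result[0]["translation_text"])  # prints the German translation
```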
5. Engineering:
Engineers typically define AI in practical terms. Stuart Russell and Peter Norvig, in their seminal textbook "Artificial Intelligence: A Modern Approach," define AI as the study of agents that receive percepts from the environment and perform actions (Russell & Norvig, 2021).
Example: Self-driving cars that navigate complex roads, respond to traffic signals, and avoid obstacles.
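A minimal sketch of that agent abstraction, assuming a thermostat-style set of percepts and actions rather than a real vehicle, might look like this:

```python
# A simple reflex agent in the Russell & Norvig sense: it maps a percept from
# the environment to an action via condition-action rules. The thermostat-style
# percepts and actions are illustrative assumptions.

def simple_reflex_agent(percept):
    temperature = percept["temperature_c"]
    if temperature < 18:
        return "heat_on"
    if temperature > 24:
        return "cool_on"
    return "idle"

# One step of the perceive-act loop.
print(simple_reflex_agent({"temperature_c": 16}))  # -> "heat_on"
```

A self-driving car is the same loop at vastly greater scale: camera and lidar percepts in, steering and braking actions out.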
6. Neuroscience:
Neuroscientists often define AI in relation to the human brain. Researchers such as Geoffrey Hinton have pioneered "deep learning" techniques inspired by the structure of biological neural networks (Hinton et al., 2006).
Example: Convolutional Neural Networks (CNNs), widely used for image recognition, which process visual information in a manner loosely analogous to the human visual cortex.
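For a concrete (and heavily simplified) picture, here is a minimal CNN sketch in PyTorch; the layer sizes are arbitrary assumptions and the network is untrained.

```python
# A tiny convolutional network of the kind used for image recognition.
# Layer sizes are illustrative; this model is untrained.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local visual filters
            nn.ReLU(),
            nn.MaxPool2d(2),                              # downsample, keep strongest responses
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One forward pass on a fake 32x32 RGB image batch.
logits = TinyCNN()(torch.randn(1, 3, 32, 32))
print(logits.shape)  # torch.Size([1, 10])
```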
7. Economics:
In economics, AI is usually defined by its capacity to automate cognitive tasks and inform economic decisions. Economist Ajay Agrawal has examined how AI is making prediction cheaper across many sectors (Agrawal et al., 2018).
Example: AI systems that analyse market trends and make stock trading decisions in milliseconds.
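A toy sketch of "prediction as the product", assuming made-up daily returns rather than real market data, might look like the following; it is not a trading strategy.

```python
# Fit a simple model to predict whether tomorrow's return is positive from the
# last two days' returns. Data and features are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, size=500)             # fake daily returns
X = np.column_stack([returns[:-2], returns[1:-1]])  # last two days as features
y = (returns[2:] > 0).astype(int)                   # did the next day go up?

model = LogisticRegression().fit(X[:400], y[:400])
print("held-out accuracy:", model.score(X[400:], y[400:]))
```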
Debates About the Potential and Limitations of AI
1. Narrow AI vs. General AI:
Most AI systems today are narrow, or weak, AI. Whether Artificial General Intelligence (AGI) will ever be achieved is still hotly debated: futurist and Google engineer Ray Kurzweil predicts AGI by 2029, while Rodney Brooks, former director of MIT's Computer Science and AI Laboratory, is far more skeptical (Kurzweil, 2005; Brooks, 2017).
2. The Ethics of AI:
As AI systems grow more sophisticated, the ethical questions multiply. Kate Crawford's work on AI ethics, for example, highlights bias in AI decision making and concerns about privacy (Crawford, 2022).
Example: AI-based hiring tools that can inadvertently discriminate against certain groups of applicants.
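One simple, hedged way to surface such bias is to compare selection rates across applicant groups (the "four-fifths rule" heuristic); the data below is invented purely for illustration.

```python
# Compare hiring rates per group to check for disparate impact.
# The decisions table is made-up example data, not real applicants.
import pandas as pd

decisions = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "hired": [1,   1,   0,   1,   0,   0,   1,   0],
})

rates = decisions.groupby("group")["hired"].mean()
print(rates)
print("disparate impact ratio:", rates.min() / rates.max())  # below 0.8 is a common red flag
```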
3. AI and Consciousness:
Can AI ever be truly conscious? The philosophical debate rages on, and these questions relate closely to David Chalmers's work on the "hard problem of consciousness" (Chalmers, 1995).
4. The Singularity:
Some theorists propose the concept of a technological singularity, a point at which machine intelligence outstrips human intelligence. Nick Bostrom's Superintelligence addresses the risks and benefits of such a scenario (Bostrom, 2014).
5. AI and Creativity:
There is also great debate about whether AI can be creative at all. Margaret Boden's research on computational creativity provides valuable insight into this question (Boden, 2004).
Example: DALL-E-style AI systems that generate images from text descriptions.
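As a hedged sketch, an open-source diffusion model can be driven the same way; the checkpoint name below is only an example, and running it requires the diffusers package, a GPU, and a multi-gigabyte download.

```python
# Text-to-image generation with an open-source diffusion model, in the spirit
# of DALL-E-style systems. The checkpoint is an example, not an endorsement.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("an astronaut painting a sunset, in watercolor").images[0]
image.save("astronaut.png")
```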
6. The Black Box Problem:
Many advanced AI systems operate as "black boxes": we cannot easily see how they arrive at their decisions. Cynthia Rudin's (2019) work on interpretable AI tackles exactly this problem.
Example: Deep learning models for medical diagnosis that cannot always explain the reasoning behind their predictions.
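A minimal sketch of the alternative Rudin argues for is an inherently interpretable model whose rules can be read directly; the breast cancer dataset below merely stands in for a medical task, and this is illustrative, not a clinical tool.

```python
# Train a shallow decision tree so the decision rules can be printed and read,
# in contrast to an opaque deep network. Illustrative only, not a clinical tool.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# Unlike a deep network, the fitted model is a handful of human-readable rules.
print(export_text(tree, feature_names=list(data.feature_names)))
```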
7. AI and Human Augmentation:
Some researchers believe AI's future lies in augmenting human intelligence rather than replacing it. This field, known as intelligence augmentation (IA) or cognitive augmentation, seeks to improve human cognitive functioning with the help of AI. Shneiderman (2020) discusses possibilities, from brain-computer interfaces to other supporting technologies, for using AI to support human intelligence, and the work being done toward that aim.
Conclusion
The definition and scope of AI are evolving rapidly. AI is already impressively good at some things, and we are only beginning to discover what it is not good at. Going forward, we must balance enthusiasm for AI development with caution, pursue interdisciplinary collaboration, and keep the conversation about what these systems are actually doing open rather than failing through silence.
AI is forcing us to reevaluate intelligence, consciousness, and our place in an increasingly digital world. As we push the limits of what AI can achieve, we may find that the biggest discoveries are not about machines at all, but about ourselves.
Let this be the first of many such motivating stories you tell to your little computer friends around your house!
Boldly go where no human or AI has gone before!
Ferhat Sarikaya
References:
[1] Agrawal, A., Gans, J., & Goldfarb, A. (2018). Prediction Machines: The Simple Economics of Artificial Intelligence. Harvard Business Review Press.
[2] Boden, M. A. (2004). The Creative Mind: Myths and Mechanisms. Routledge.
[3] Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
[4] Brooks, R. (2017, October 6). The seven deadly sins of AI predictions. MIT Technology Review. https://www.technologyreview.com/2017/10/06/241837/the-seven-deadly-sins-of-ai-predictions/
[5] Chalmers, D. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200–219.
[6] Chomsky, N. (2002). Syntactic structures. Walter de Gruyter.
[7] Crawford, K. (2022). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.
[8] Hinton, G. E., Osindero, S., & Teh, Y. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18(7), 1527–1554. https://doi.org/10.1162/neco.2006.18.7.1527
[9] Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. Viking. https://paisdospuntocero.files.wordpress.com/2018/04/book-kurzweil-singularity-is-near-1.pdf
[10] McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (1955). A proposal for the Dartmouth Summer Research Project on Artificial Intelligence. https://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf
[11] Minsky, M. L. (1968). Semantic Information Processing. MIT Press.
[12] Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215. https://doi.org/10.1038/s42256-019-0048-x
[13] Russell, S., & Norvig, P. (2021). Artificial Intelligence: A Modern Approach (4th ed., Global Edition). Pearson.
[14] Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–424. https://doi.org/10.1017/s0140525x00005756
[15] Shneiderman, B. (2020). Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy. International Journal of Human-Computer Interaction, 36(6), 495–504. https://doi.org/10.1080/10447318.2020.1741118