A beginner's review of Artificial Intelligence
Anand R Nair
Assistant Professor at TIFAC-CORE in Cybersecurity, Amrita Vishwa Vidyapeetham | Doctoral Researcher at Dept of CSE | Member at inSIG, ISOC and ICANN
The current generation of human beings is plagued by a tendency to overhype and fear-monger every nuance of life, which distorts the popular perception of an idea or a concept into something surreal. A quintessential example is the overhyping of artificial intelligence and the urge to anthropomorphize it and its progeny.
“Will Artificial Intelligence vanquish humanity and usher in an age of the Homo deus?” This is a central idea in popular fiction, as well as a question that looms in the minds of people enticed by technological advancements in the realm of machines and computers.
These apprehensions are significantly augmented by the lack of awareness and proper discourse about the ontology, functioning, and heuristics of such advanced computers. Hence, it is imperative to separate the background noise from genuine information about the domain, to foster an informed and educated society that harbours few misconceptions or misgivings about research and technological breakthroughs in this field.
Moreover, inflated expectations of the technology lead to “AI winters”, periods in which research hits roadblocks. Funding for research in the area subsequently gets pulled, resulting in a technological impasse.
Over time, a gap has slowly crept into the daily communication of human beings, wherein machines are increasingly considered agents capable of functioning without an agency. For instance, we accept the common usage “The flight will arrive shortly!” without considering the nuance that the flight cannot arrive on its own; it has to be flown in by a human being. This distinction becomes blurrier still when it comes to the functioning of computers and their level of autonomy. Most people are unaware of what computers are and are not capable of — at least, not as clearly as they are familiar with the abilities and limitations of common machines such as cars or microwave ovens.
Artificial Intelligence (AI) can be broadly classified into Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Super Intelligence (ASI). ANI is the ability of an intelligent system to solve one particular problem smartly. Most real-world AI implementations today, such as Google Maps and voice assistants, are mere assemblages of multiple ANI functions. AGI is a theoretical system which, when operational, should be intelligent enough to solve almost any dynamic problem pitted against it. ChatGPT, with its generative capabilities, is arguably the closest, albeit still distant, relative of AGI today. The dreaded ASI remains a mythical figment of imagination that does not seem even remotely tangible as per state-of-the-art research. The manifestation of such intelligent systems is at least dozens of Nobel-worthy innovations away: humanity still needs significant breakthroughs in computational mathematics, neuroscience, and cognitive science to spawn such intelligent non-living beings.
Human intelligence is the faculty that confers on an individual the ability to learn, form concepts, memorise, understand, realise, apply logic, and reason in order to complete various cognitive processes and operations. Human consciousness is presumed to be distinct from animal consciousness owing to its faculty of imagination. For the scope of the current discussion, the pursuit of artificial intelligence can be framed around three main strands: logic, statistics, and the brain. The proponents of AI who considered logic to be the cornerstone of intelligence sought to encode all knowledge in the universe and wire it into a system. The paradigm of learning brought a shift in approach: systems are now expected to learn the logic rather than have the logic programmed into them. This learning is realised using statistical measures.
The connectionist school of study strives to mirror the neural connections found in the human brain, implementing statistical learning models that emulate the brain's functioning and information processing. This led to the design of fundamental models such as the perceptron, which mimics a neuron, and to the subsequent rise of artificial neural networks and deep learning. The connectionist approach uses data as the fuel for its statistical learning models, and the school has been gaining currency in recent times. ChatGPT, reportedly built on approximately 1.7 trillion parameters, and other similar large language models are products of the connectionist school.
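The connectionist idea above can be illustrated with a minimal sketch of a perceptron. Nothing here is taken from a real system; it is a toy that learns the logical AND function from its truth table, adjusting weights from data rather than having the rule programmed in.

```python
# A minimal perceptron: one artificial neuron that learns weights from data.

def step(x):
    """Threshold activation: fire (1) if the weighted sum is non-negative."""
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = step(w[0] * x1 + w[1] * x2 + b)
            err = target - pred
            # Perceptron learning rule: nudge weights toward the target.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The truth table of AND serves as the training data.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predictions = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data]
print(predictions)  # → [0, 0, 0, 1]
```

The same learning rule, stacked in layers and trained on vastly more data, is the lineage that leads to modern deep networks.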
The symbolist school of study strives to achieve reasoning by explicitly encoding the semantics of domain knowledge into a system in the form of symbols, rules, and formal logic. Such systems are then expected to process information from data pertaining to that domain. For instance, the grammar of a natural language can be wired into a system so that the system can process queries in that language. Here, each encoded rule is considered a distinct piece of information. The symbolist approach derives intelligence by developing the cognition to distinguish between encoded pieces of information using logic. Such systems rely on symbolic logic and often require human expertise to encode knowledge as rules and symbols. Classical AI, or Good Old-Fashioned AI, systems based on the symbolist school strove for reasoning and cognition rather than learning.
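The symbolist approach can be sketched with a toy rule-based system. The animal-classification rules below are an invented illustration, not drawn from any real expert system; a simple forward-chaining loop applies the hand-encoded rules until no new facts emerge, which is the essence of classical rule-based reasoning.

```python
# A toy symbolist system: knowledge hand-encoded as if-then rules over
# symbols, processed by a forward-chaining inference engine.

# Each rule: (set of condition symbols, conclusion symbol).
rules = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "can_fly"}, "can_migrate"),
    ({"has_fur"}, "is_mammal"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions hold, until a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # A rule fires when all its conditions are established facts.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"has_feathers", "can_fly"}, rules)
print(sorted(derived))  # → ['can_fly', 'can_migrate', 'has_feathers', 'is_bird']
```

Note that all the "intelligence" here lives in the human-authored rules: the engine itself learns nothing, which is precisely the contrast with the connectionist school.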
It is felt that the trustworthiness of AI is directly proportional to the explainability of the concepts learnt by a particular model. Does the data being fed to the model actually contain information relevant to the classification problem? Is the model learning relevant knowledge from that data? Whether an ML model actually classifies data based on relevant learning or knowledge remains a million-dollar question. The performance evaluation of such models can be both quantitative and qualitative. While most research focuses on quantitative evaluation using a barrage of statistical measurements, the qualitative evaluation of learning and knowledge is equally important.
There needs to be an accurate evaluation of the abilities of AI and clarity about its potential.