Artificial intelligence
Vanshika Munshi
Senior Consultant-Client Relationship & Delivery Management at HuQuo
Artificial intelligence (AI) is the intelligence of machines or software, as opposed to the intelligence of human beings or animals. AI applications include advanced web search engines (e.g., Google Search), recommendation systems (used by YouTube, Amazon, and Netflix), understanding human speech (such as Siri and Alexa), self-driving cars (e.g., Waymo), generative or creative tools (ChatGPT and AI art), and competing at the highest level in strategic games (such as chess and Go).[1]
Artificial intelligence was founded as an academic discipline in 1956.[2] The field went through multiple cycles of optimism[3][4] followed by disappointment and loss of funding,[5][6] but after 2012, when deep learning surpassed all previous AI techniques,[7] there was a vast increase in funding and interest.
The various sub-fields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics.[a] General intelligence (the ability to solve an arbitrary problem) is among the field's long-term goals.[8] To solve these problems, AI researchers have adapted and integrated a wide range of problem-solving techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, probability, and economics.[b] AI also draws upon psychology, linguistics, philosophy, neuroscience, and many other fields.[9]
Goals
The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention and cover the scope of AI research.[a]
Reasoning, problem-solving
Early researchers developed algorithms that imitated the step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[10] By the late 1980s and 1990s, methods were developed for dealing with uncertain or incomplete information, employing concepts from probability and economics.[11]
Many of these algorithms are insufficient for solving large reasoning problems because they experience a "combinatorial explosion": they become exponentially slower as the problems grow larger.[12] Even humans rarely use the step-by-step deduction that early AI research could model; they solve most of their problems using fast, intuitive judgments.[13] Accurate and efficient reasoning is an unsolved problem.
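To make the blow-up concrete, consider a brute-force planner that must check every ordering of n items (as in a naive route-planning problem): the number of candidates grows factorially. A minimal Python sketch, with sizes chosen purely for illustration:

import math

# The number of candidate orderings a brute-force search would have to
# examine grows as n! -- the "combinatorial explosion".
for n in (5, 10, 15, 20):
    print(f"n={n:2d}: {math.factorial(n):,} candidate orderings")

Even at n=20 there are over 2 quintillion orderings, which is why exhaustive step-by-step search does not scale.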
Knowledge representation
Knowledge representation and knowledge engineering[14] allow AI programs to answer questions intelligently and make deductions about real-world facts. Formal knowledge representations are used in content-based indexing and retrieval,[15] scene interpretation,[16] clinical decision support,[17] knowledge discovery (mining "interesting" and actionable inferences from large databases),[18] and other areas.[19]
A knowledge base is a body of knowledge represented in a form that can be used by a program. An ontology is the set of objects, relations, concepts, and properties used by a particular domain of knowledge.[20] The most general ontologies are called upper ontologies, which attempt to provide a foundation for all other knowledge and act as mediators between domain ontologies, which cover specific knowledge about a particular domain (field of interest or area of concern).
Knowledge bases need to represent things such as: objects, properties, categories, and relations between objects;[21] situations, events, states, and time;[22] causes and effects;[23] knowledge about knowledge (what we know about what other people know);[24] default reasoning (things that humans assume are true until they are told differently and will remain true even when other facts are changing);[25] and many other aspects and domains of knowledge.
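A minimal sketch of the idea, representing facts as subject-relation-object triples with a toy query that inherits properties along "is_a" links; the facts and helper names are illustrative assumptions, not a real knowledge-representation system:

# Toy knowledge base: facts stored as (subject, relation, object) triples.
facts = {
    ("Tweety", "is_a", "bird"),
    ("bird", "is_a", "animal"),
    ("bird", "can", "fly"),
}

def query(subject, relation, kb):
    # Collect direct facts, then follow "is_a" links so that
    # properties of a category are inherited by its members.
    results = {o for (s, r, o) in kb if s == subject and r == relation}
    for (s, r, parent) in kb:
        if s == subject and r == "is_a":
            results |= query(parent, relation, kb)
    return results

print(query("Tweety", "can", facts))   # {'fly'} -- inherited from 'bird'
print(query("Tweety", "is_a", facts))  # {'bird', 'animal'}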
Among the most difficult problems in knowledge representation are the breadth of commonsense knowledge (the set of atomic facts that the average person knows is enormous),[26] the difficulty of knowledge acquisition, and the sub-symbolic form of most commonsense knowledge (much of what people know is not represented as "facts" or "statements" that they could express verbally).[13]
Planning and decision making
An "agent" is anything that takes actions in the world. A rational agent has goals or preferences and takes actions to make them happen.[c] [27] In automated planning , the agent has a specific goal.[28] In automated decision making , the agent has preferences – there are some situations it would prefer to be in, and some situations it is trying to avoid. The decision making agent assigns a number to each situation (called the "utility ") that measures how much the agent prefers it. For each possible action, it can calculate the "expected utility ": the utility of all possible outcomes of the action, weighted by the probability that the outcome will occur. It can then choose the action with the maximum expected utility.[29]
In classical planning, the agent knows exactly what the effect of any action will be.[30] In most real-world problems, however, the agent may not be certain about the situation it is in (the situation is "unknown" or "unobservable") and may not know for certain what will happen after each possible action (the outcome is not "deterministic"). It must choose an action by making a probabilistic guess and then reassess the situation to see if the action worked.[31] In some problems, the agent's preferences may be uncertain, especially if other agents or humans are involved. These preferences can be learned (e.g., with inverse reinforcement learning), or the agent can seek information to refine them.[32] Information value theory can be used to weigh the value of exploratory or experimental actions.[33] The space of possible future actions and situations is typically intractably large, so agents must take actions and evaluate situations while being uncertain about the final outcome.
A Markov decision process has a transition model that describes the probability that a particular action will change the state in a particular way, and a reward function that supplies the utility of each state and the cost of each action. A policy associates a decision with each possible state. The policy could be calculated (e.g., by iteration), be heuristic, or it can be learned.[34]
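A minimal value-iteration sketch for a tiny two-state MDP; the states, transition probabilities, rewards, and discount factor are all invented for illustration:

# Tiny MDP: transition model P(s'|s,a) and reward function R(s,a).
states = ["s0", "s1"]
actions = ["stay", "move"]
P = {  # P[(s, a)] = list of (next_state, probability)
    ("s0", "stay"): [("s0", 1.0)],
    ("s0", "move"): [("s1", 0.8), ("s0", 0.2)],
    ("s1", "stay"): [("s1", 1.0)],
    ("s1", "move"): [("s0", 1.0)],
}
R = {("s0", "stay"): 0.0, ("s0", "move"): 0.0,
     ("s1", "stay"): 1.0, ("s1", "move"): 0.0}
gamma = 0.9  # discount factor for future rewards

# Value iteration: repeatedly apply the Bellman update until values settle.
V = {s: 0.0 for s in states}
for _ in range(100):
    V = {s: max(R[(s, a)] + gamma * sum(p * V[s2] for s2, p in P[(s, a)])
                for a in actions)
         for s in states}

# The policy picks, in each state, the action with the best one-step lookahead.
policy = {s: max(actions, key=lambda a: R[(s, a)] +
                 gamma * sum(p * V[s2] for s2, p in P[(s, a)]))
          for s in states}
print(V, policy)  # s0 -> move (toward reward), s1 -> stay (collect reward)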
Game theory describes rational behavior of multiple interacting agents, and is used in AI programs that make decisions that involve other agents.[35]
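As a toy illustration, pure-strategy Nash equilibria of a small two-player game can be found by enumeration; the Prisoner's Dilemma-style payoffs below are invented for the example:

# payoff[(row_action, col_action)] = (row_player_payoff, col_player_payoff)
payoff = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
acts = ["cooperate", "defect"]

def is_nash(r, c):
    # Equilibrium: neither player gains by unilaterally switching actions.
    row_ok = all(payoff[(r, c)][0] >= payoff[(r2, c)][0] for r2 in acts)
    col_ok = all(payoff[(r, c)][1] >= payoff[(r, c2)][1] for c2 in acts)
    return row_ok and col_ok

print([(r, c) for r in acts for c in acts if is_nash(r, c)])
# [('defect', 'defect')] -- mutual defection is the only equilibrium here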
Learning
Machine learning is the study of programs that can improve their performance on a given task automatically.[36] It has been a part of AI from the beginning.[d]
There are several kinds of machine learning. Unsupervised learning analyzes a stream of data and finds patterns and makes predictions without any other guidance.[39] Supervised learning requires a human to label the input data first, and comes in two main varieties: classification (where the program must learn to predict what category the input belongs in) and regression (where the program must deduce a numeric function based on numeric input).[40] In reinforcement learning, the agent is rewarded for good responses and punished for bad ones; the agent learns to choose responses that are classified as "good".[41] Transfer learning is when the knowledge gained from one problem is applied to a new problem.[42] Deep learning uses artificial neural networks for all of these types of learning.
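A minimal supervised-learning sketch: a 1-nearest-neighbour classifier in plain Python, trained on a tiny made-up labelled dataset (real systems would use far more data and a learned model):

import math

# Labelled training data: (feature_vector, class_label). Values are invented.
train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
         ((4.0, 4.2), "B"), ((3.8, 4.0), "B")]

def classify(x):
    # 1-nearest-neighbour: predict the label of the closest training point.
    return min(train, key=lambda pair: math.dist(x, pair[0]))[1]

print(classify((1.1, 0.9)))  # "A"
print(classify((4.1, 3.9)))  # "B"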
Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization.[43]
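For intuition on sample complexity, a standard textbook result says that for a finite hypothesis class H, a learner that outputs a hypothesis consistent with the training data needs roughly m >= (1/epsilon)(ln|H| + ln(1/delta)) examples to be "probably approximately correct" (error at most epsilon with probability at least 1 - delta). The sketch below just evaluates that bound for illustrative numbers:

import math

def pac_samples(hypotheses, epsilon, delta):
    # Classic PAC bound for a finite hypothesis class H (realizable case):
    # m >= (1/epsilon) * (ln|H| + ln(1/delta))
    return math.ceil((math.log(hypotheses) + math.log(1 / delta)) / epsilon)

print(pac_samples(hypotheses=10**6, epsilon=0.05, delta=0.01))  # 369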
Natural language processing
Natural language processing (NLP)[44] allows programs to read, write, and communicate in human languages such as English. Specific problems include speech recognition, speech synthesis, machine translation, information extraction, information retrieval, and question answering.[45]
Early work, based on Noam Chomsky's generative grammar and semantic networks, had difficulty with word-sense disambiguation[e] unless restricted to small domains called "micro-worlds" (due to the common sense knowledge problem[26]).
Modern deep learning techniques for NLP include word embedding (representing words as vectors that encode their meaning),[46] transformers (a deep learning architecture that uses an attention mechanism to find patterns in text),[47] and others.[48] In 2019, generative pre-trained transformer (or "GPT") language models began to generate coherent text,[49][50] and by 2023 these models were able to achieve human-level scores on the bar exam, the SAT, the GRE, and many other real-world tests.[51]
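A minimal sketch of the embedding idea: words become vectors, and cosine similarity measures how related they are. The 3-dimensional vectors below are toy values; real embeddings have hundreds of dimensions learned from large text corpora, and transformers are not shown here:

import math

# Toy word vectors; real embeddings are learned from data, not hand-written.
vec = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.8, 0.9, 0.1],
    "apple": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    # Cosine similarity: near 1.0 means same direction (related words).
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

print(cosine(vec["king"], vec["queen"]))  # high: related words
print(cosine(vec["king"], vec["apple"]))  # low: unrelated words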
Perception
Machine perception is the ability to use input from sensors (such as cameras, microphones, wireless signals, active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world. Computer vision is the ability to analyze visual input.[52] The field includes speech recognition,[53] image classification,[54] facial recognition, object recognition,[55] and robotic perception.[56]
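A minimal computer-vision sketch: detecting a vertical edge in a tiny grayscale "image" by differencing neighbouring pixels. The pixel values are invented, and real vision systems use learned convolutional filters rather than this hand-written one:

# Toy 4x4 grayscale image: a dark left half and a bright right half.
img = [[0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 9, 9]]

# Horizontal gradient: large values mark vertical edges in the image.
edges = [[abs(row[x + 1] - row[x]) for x in range(len(row) - 1)]
         for row in img]
for row in edges:
    print(row)  # the column of 9s shows where the edge is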