Artificial Intelligence

Artificial intelligence (AI) is the ability of machines to perform tasks that are typically associated with human intelligence, such as learning and problem-solving. AI applications include advanced web search engines (e.g., Google Search), recommendation systems (used by YouTube, Amazon, and Netflix), speech understanding (as in Siri and Alexa), and self-driving cars [1].

What is intelligence?

All but the simplest human behavior is attributed to intelligence, while even the most complicated insect behavior is usually not taken as an indication of intelligence. What is the difference? Consider the behavior of the digger wasp, Sphex ichneumoneus. When the female wasp returns to her burrow with food, she first deposits it on the threshold, checks for intruders inside her burrow, and only then, if the coast is clear, carries her food inside. The real nature of the wasp’s instinctual behavior is revealed if the food is moved a few inches away from the entrance to her burrow while she is inside; on emerging, she will repeat the whole procedure as often as the food is displaced. Intelligence—conspicuously absent in the case of Sphex—must include the ability to adapt to new circumstances. [2]

AI Use Cases in Business

AI is currently being applied to a range of functions both in the lab and in commercial and consumer settings, including the following technologies:

  • Artificial Neural Networks are computational models inspired by the structure and functioning of the human brain. They consist of interconnected nodes (neurons) that process and transmit information, enabling the network to learn patterns and relationships from data through training (see the sketch after this list).
  • Deep learning is an iterative approach to artificial intelligence that stacks machine learning algorithms in a hierarchy of increasing complexity and abstraction. It is the most sophisticated AI architecture in use today.
  • Speech Recognition allows an intelligent system to convert human speech into text or code.
  • Natural Language Generation enables machines to produce human-readable text, supporting conversational interaction between humans and computers.
  • Computer Vision allows a machine to scan an image and use comparative analysis to identify objects in the image.
  • Expert systems were one of the early AI technologies developed in the 1970s and 1980s. These systems aim to capture the knowledge and decision-making processes of human experts in specific domains and use that knowledge to provide recommendations or make decisions. While expert systems might not be as widely discussed as more recent AI technologies like deep learning and neural networks, they still have practical applications in healthcare, finance, and engineering.
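
To make the neural network and deep learning bullets concrete, here is a minimal training sketch, assuming a toy XOR dataset, a single hidden layer, and hand-tuned hyperparameters (all illustrative choices, not a production recipe). It uses plain NumPy to show the core idea: stacked layers of weighted connections whose weights are adjusted from data by backpropagating the prediction error.

```python
import numpy as np

# Toy dataset: XOR, a pattern no single linear layer can capture,
# which is why a hidden layer (the "depth") is needed.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input  -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate (illustrative)
for _ in range(5000):
    # Forward pass: each layer transforms its input and passes it on.
    h = sigmoid(X @ W1 + b1)       # hidden "neurons"
    out = sigmoid(h @ W2 + b2)     # network prediction

    # Backward pass: propagate the error back through the layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```

Real deep learning systems stack many more layers and learn millions of weights, but the loop is the same: forward pass, measure error, push corrections back through the hierarchy.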

The Four Types of AI

AI can be divided into four categories based on the type and complexity of the tasks a system is able to perform. They are:

  1. Reactive machines
  2. Limited memory
  3. Theory of mind
  4. Self-awareness


Reactive Machines

A reactive machine follows the most basic of AI principles and, as its name implies, is capable of only using its intelligence to perceive and react to the world in front of it. A reactive machine cannot store memories and, as a result, cannot rely on past experiences to inform decision-making in real time.

Perceiving the world directly means that reactive machines are designed to complete only a limited number of specialized duties. Intentionally narrowing a reactive machine’s worldview has its benefits, however: this type of AI will be more trustworthy and reliable, and it will react the same way to the same stimuli every time, as the sketch below illustrates.
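
In code, a reactive machine reduces to a stateless function from the current percept to an action. The sketch below is a hypothetical thermostat policy (the thresholds are invented for illustration; no real system is being modeled): because nothing is stored between calls, the same stimulus always produces the same response.

```python
# A reactive agent: a stateless mapping from the current percept to
# an action. The temperature thresholds are illustrative assumptions.
def reactive_agent(temperature_c: float) -> str:
    """Choose an action from the current reading alone; no stored state."""
    if temperature_c < 18.0:
        return "heat"
    if temperature_c > 24.0:
        return "cool"
    return "idle"

# No memory means perfect repeatability: identical input, identical output.
assert reactive_agent(15.0) == reactive_agent(15.0) == "heat"
print(reactive_agent(26.5))  # -> "cool"
```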

Reactive Machine Examples

  • Deep Blue was only capable of identifying the pieces on a chess board, knowing how each piece moves based on the rules of chess, acknowledging each piece’s present position, and determining what the most logical move would be at that moment. The computer was not pursuing future potential moves by its opponent or trying to put its own pieces in better positions. Every turn was viewed as its own reality, separate from any other movement that had been made beforehand.
  • Google’s AlphaGo is also incapable of evaluating future moves but relies on its own neural network to evaluate developments in the present game, giving it an edge over Deep Blue in a more complex game. AlphaGo also bested world-class competitors in the game, defeating champion Go player Lee Sedol in 2016.

Limited Memory

With limited memory, AI has the ability to store previous data and predictions when gathering information and weighing potential decisions—essentially looking into the past for clues on what may come next. Limited-memory AI is more complex and presents greater possibilities than reactive machines.

Limited-memory AI is created when a team continuously trains a model to analyze and utilize new data, or when an AI environment is built so models can be automatically trained and renewed.

When utilizing limited-memory AI in ML, six steps must be followed (sketched in code after the list):

  1. Establish training data
  2. Create the machine learning model
  3. Ensure the model can make predictions
  4. Ensure the model can receive human or environmental feedback
  5. Store human and environmental feedback as data
  6. Reiterate the steps above as a cycle
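
A minimal sketch of that six-step cycle, assuming a scikit-learn model that supports incremental updates; the toy numbers and the get_feedback stand-in for human or environmental feedback are hypothetical:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Step 1: establish training data (toy values: inputs below 0.5 are
# class 0, inputs at or above 0.5 are class 1).
X_train = np.array([[0.1], [0.4], [0.6], [0.9]])
y_train = np.array([0, 0, 1, 1])

# Step 2: create the machine learning model (one that can keep
# learning incrementally from new feedback).
model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_train, y_train, classes=[0, 1])

def get_feedback(x):
    """Stand-in for human/environmental feedback (step 4): pretend an
    oracle reports the true label for the input."""
    return int(x[0] >= 0.5)

stored_feedback = []  # step 5: feedback is kept as new training data
stream = np.array([[0.2], [0.7], [0.5], [0.3], [0.8]])
for _ in range(20):                             # step 6: reiterate as a cycle
    for x in stream:
        pred = model.predict([x])[0]            # step 3: make a prediction
        label = get_feedback(x)                 # step 4: receive feedback
        stored_feedback.append((x[0], pred, label))
        model.partial_fit([x], [label])         # learn from the stored signal

print(model.predict([[0.8], [0.2]]))  # expected: [1 0]
```

The point of the loop is the memory: predictions improve because feedback from earlier rounds is stored and folded back into the model.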

Theory of Mind

The theory of mind is just that—theoretical. We have not yet achieved the technological and scientific capabilities necessary to reach this next level of AI.

The concept is based on the psychological premise that other living things have thoughts and emotions that affect their behavior. In terms of AI machines, this would mean that AI could comprehend how humans, animals, and other machines feel and make decisions through self-reflection and determination, and then utilize that information to make decisions of its own. Essentially, machines would have to be able to grasp and process the concept of “mind,” the fluctuations of emotions in decision-making, and a litany of other psychological concepts in real time, creating a two-way relationship between people and AI.

Self-Awareness

Once theory of mind can be established, sometime well into the future of AI, the final step will be for AI to become self-aware. This kind of AI would possess human-level consciousness and understand its own existence in the world, as well as the presence and emotional state of others. It would be able to understand what others may need based not just on what they communicate but also on how they communicate it.

Self-awareness in AI relies on human researchers both understanding the premise of consciousness and then learning how to replicate it so it can be built into machines.

History of AI

Intelligent robots and artificial beings first appeared in ancient Greek myths. Aristotle’s development of the syllogism, with its use of deductive reasoning, was a key moment in humanity’s quest to understand its own intelligence. While the roots are long and deep, the history of AI as we think of it today spans less than a century. The following is a quick look at some of the most important events in AI.

1940s

1942: Isaac Asimov published the Three Laws of Robotics, an idea commonly found in science fiction about how artificial intelligence should not bring harm to humans.

1943: Warren McCulloch and Walter Pitts published the paper “A Logical Calculus of the Ideas Immanent in Nervous Activity,” which proposed the first mathematical model for building a neural network.

1949: In his book The Organization of Behavior: A Neuropsychological Theory, Donald Hebb proposed the theory that neural pathways are created from experiences and that connections between neurons become stronger the more frequently they are used. Hebbian learning continues to be an important model in AI.

1950s

1950: Alan Turing published the paper “Computing Machinery and Intelligence,” proposing what is now known as the Turing Test, a method for determining if a machine is intelligent.

1950: Harvard undergraduates Marvin Minsky and Dean Edmonds built SNARC, the first neural network computer.

1950: Claude Shannon published the paper “Programming a Computer for Playing Chess.”

1952: Arthur Samuel developed a self-learning program to play checkers.

1954: The Georgetown-IBM machine translation experiment automatically translated 60 carefully selected Russian sentences into English.

1956: The phrase “artificial intelligence” was coined at the Dartmouth Summer Research Project on Artificial Intelligence.

1956: Allen Newell and Herbert Simon demonstrated Logic Theorist (LT), the first reasoning program.

1958: John McCarthy developed the AI programming language Lisp and published “Programs with Common Sense,” a paper proposing the hypothetical Advice Taker, a complete AI system with the ability to learn from experience as effectively as humans.

1959: Allen Newell, Herbert Simon, and J.C. Shaw developed the General Problem Solver (GPS), a program designed to imitate human problem-solving.

1959: Herbert Gelernter developed the Geometry Theorem Prover program.

1959: Arthur Samuel coined the term “machine learning” while at IBM.

1959: John McCarthy and Marvin Minsky founded the MIT Artificial Intelligence Project.

1960s

1963: John McCarthy started the AI Lab at Stanford.

1966: The U.S. government’s Automatic Language Processing Advisory Committee (ALPAC) report detailed the lack of progress in machine translation research, a major Cold War initiative that had promised automatic and instantaneous translation of Russian. The ALPAC report led to the cancellation of all government-funded MT projects.

1969: The first successful expert systems, DENDRAL and MYCIN, were created at Stanford.

1970s

1972: The logic programming language Prolog was created.

1973: The Lighthill Report, detailing the disappointments in AI research, was released by the British government and led to severe cuts in funding for AI projects.

1974-1980: Frustration with the progress of AI development led to major DARPA cutbacks in academic grants. Combined with the earlier ALPAC report and the 1973 Lighthill Report, these cuts dried up AI funding and stalled research. This period became known as the “First AI Winter.”

1980s

1980: Digital Equipment Corporation developed R1 (also known as XCON), the first successful commercial expert system. Designed to configure orders for new computer systems, R1 kicked off an investment boom in expert systems that would last for much of the decade, effectively ending the first AI Winter.

1982: Japan’s Ministry of International Trade and Industry launched the ambitious Fifth Generation Computer Systems (FGCS) project. The goal of FGCS was to develop supercomputer-like performance and a platform for AI development.

1983: In response to Japan’s FGCS, the U.S. government launched the Strategic Computing Initiative to provide DARPA-funded research in advanced computing and AI.

1985: Companies were spending more than a billion dollars a year on expert systems, and an entire industry known as the Lisp machine market sprang up to support them. Companies like Symbolics and Lisp Machines Inc. built specialized computers to run Lisp, the preferred AI programming language.

1987-1993: As computing technology improved, cheaper alternatives emerged and the Lisp machine market collapsed in 1987, ushering in the “Second AI Winter.” During this period, expert systems proved too expensive to maintain and update, eventually falling out of favor.

1990s

1991: U.S. forces deployed DART, an automated logistics planning and scheduling tool, during the Gulf War.

1992: Japan terminated the FGCS project, citing failure to meet the ambitious goals outlined a decade earlier.

1993: DARPA ended the Strategic Computing Initiative after spending nearly $1 billion and falling far short of expectations.

1997: IBM’s Deep Blue beat world chess champion Garry Kasparov.

2000s

2005: Stanley, a self-driving car, won the DARPA Grand Challenge.

2005: The U.S. military began investing in autonomous robots like Boston Dynamics’ “Big Dog” and iRobot’s “PackBot.”

2008: Google made breakthroughs in speech recognition and introduced the feature in its iPhone app.

2010s

2011: IBM’s Watson handily defeated the competition on Jeopardy!

2011: Apple released Siri, an AI-powered virtual assistant, through its iOS operating system.

2012: Andrew Ng, founder of the Google Brain Deep Learning project, used deep learning algorithms to train a neural network on 10 million YouTube videos. The network learned to recognize a cat without being told what a cat is, ushering in a breakthrough era for neural networks and deep learning funding.

2014: Google made the first self-driving car to pass a state driving test.

2014: Amazon released Alexa, a virtual home smart device.

2016: Google DeepMind’s AlphaGo defeated world champion Go player Lee Sedol. The complexity of the ancient Chinese game was seen as a major hurdle to overcome in AI.

2016: Hanson Robotics created the first “robot citizen,” a humanoid robot named Sophia, capable of facial recognition, verbal communication, and facial expression.

2018: Google released the natural language processing engine BERT, reducing barriers to translation and understanding in ML applications.

2018: Waymo launched its Waymo One service, allowing users throughout the Phoenix metropolitan area to request a pickup from one of the company’s self-driving vehicles.

2020s

2020: Baidu released its LinearFold AI algorithm to scientific and medical teams working to develop a vaccine during the early stages of the SARS-CoV-2 pandemic. The algorithm could predict the RNA secondary structure of the virus in just 27 seconds, 120 times faster than other methods.

2020: OpenAI released GPT-3, a natural language processing model able to produce text modeled after the way people speak and write.

2021: OpenAI built on GPT-3 to develop DALL-E, which can create images from text prompts.

2022: The National Institute of Standards and Technology released the first draft of its AI Risk Management Framework, voluntary U.S. guidance “to better manage risks to individuals, organizations, and society associated with artificial intelligence.”

2022: DeepMind unveiled Gato, an AI system trained to perform hundreds of tasks, including playing Atari, captioning images, and using a robotic arm to stack blocks.

2022: OpenAI launched ChatGPT, a chatbot powered by a large language model that gained more than 100 million users in just a few months.

2023: Microsoft launched an AI-powered version of Bing, its search engine, built on the same technology that powers ChatGPT.

2023: Google announced Bard, a competing conversational AI.

2023: OpenAI launched GPT-4, its most sophisticated language model yet.
