AI terminology you must know and understand

Modern information technology and the rise of machines powered by artificial intelligence (AI) have already strongly influenced the world. Computers, algorithms and software simplify ordinary tasks, and it is hard to imagine how, in the near future, most of our lives could be managed without them.

By this point it should be clear that we need AI algorithms to support us in finding our way through, and filtering, the growing amount of test products (data, test cases, etc.).

The following explains, in general terms, the new aspects of information technology that are relevant in today’s digital world:

Artificial intelligence

Artificial intelligence (AI) is a sub-field of computer science aimed at the development of computers capable of performing tasks that are normally done by people, in particular tasks associated with people acting intelligently.

An AI system, built through coding, business rules and, increasingly, self-learning capabilities, is able to supplement human cognition and activities and to interact with humans naturally, but it also understands its environment, solves human problems and performs human tasks.

The term artificial intelligence was coined in 1956 at a conference at Dartmouth College. The mid-1950s ushered in an era of optimism. Many of that era’s leading scientific minds attended the Dartmouth conference and contributed to the early advancement of the technology. Despite the early optimism, achieving artificially intelligent systems proved to be a challenge. Waves of enthusiasm were followed by troughs of disillusionment throughout the 1950s, 60s, 70s and 80s.

The first test for AI was described by Alan Turing and has become known as the Turing test:

“Can a computer communicate well enough with a human to convince the human that it (the computer), too, is human?”

AI is not required to learn; it could use pre-programmed rules to handle all possible outcomes. However, for systems with more than basic complexity, this has proved to be a task too large and too complex to handle (it has been tried and has failed multiple times since the 1960s). In that sense, we distinguish between artificial general intelligence and artificial narrow intelligence.
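
To make the contrast concrete, here is a minimal sketch of the rule-based approach, in Python; the rules, the inputs and the reply texts are invented purely for illustration. Every outcome has to be anticipated and coded by hand, which is why the approach breaks down once complexity grows.

```python
# A tiny rule-based "AI": every outcome must be written by hand in advance.
# The rules and inputs here are invented for illustration only.
RULES = {
    "hello": "Hi there!",
    "what is your name": "I am a pre-programmed assistant.",
    "bye": "Goodbye!",
}

def rule_based_reply(message: str) -> str:
    # Only inputs anticipated by the programmer get a sensible answer.
    return RULES.get(message.strip().lower(), "Sorry, I have no rule for that.")

print(rule_based_reply("Hello"))                 # covered by a rule
print(rule_based_reply("How do I brew coffee?")) # not covered: the rule set falls short
```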

Artificial general intelligence

Artificial general intelligence (or AGI) is an intelligence that can execute all the tasks that a human could execute. The most important aspect of AGI is that it can execute different complex tasks in sequence. The coffee test proposed by Wozniak should not be a problem for an AGI:

“A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.”

When full AGI will arrive and what it will look like is still unclear at this moment.

Artificial narrow intelligence

All AI we use nowadays is categorized as artificial narrow intelligence (or ANI). This AI is focused on one task and tries to execute it as well as possible. Examples are self-driving cars, natural language processing done by a chatbot, or facial recognition. The biggest breakthrough for ANI has been neural networks. Neural networks mimic biological processes: the paths that are laid down in the brains of animals serve as the basis for this technology. With neural networks, it is possible to build much more complex systems for our AI solutions.
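
As a rough illustration of the idea of weighted connections that are adjusted during training, the sketch below implements a very small neural network in plain Python/NumPy and trains it on the classic XOR problem; the layer sizes, the data, the learning rate and the number of iterations are assumptions made only for this example.

```python
# A very small neural network: two inputs, one hidden layer, one output,
# trained on the XOR problem. All sizes and hyperparameters are illustrative.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy dataset: XOR, a classic task that a single linear rule cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))   # input  -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))   # hidden -> output

lr = 0.5
for _ in range(10000):
    # Forward pass: push the inputs through the weighted connections.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: propagate the prediction error and adjust the weights.
    output_delta = (output - y) * output * (1 - output)
    hidden_delta = (output_delta @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ output_delta
    b2 -= lr * output_delta.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ hidden_delta
    b1 -= lr * hidden_delta.sum(axis=0, keepdims=True)

print(np.round(output, 2))  # should end up close to [[0], [1], [1], [0]]
```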

The biggest advantage is the capability to learn from the information it is fed (the most-used example here is learning to recognize a specific object by feeding the system large numbers of pictures and telling it when the object is in the picture). With reinforcement learning, it is possible to add a reward function. ANI then evolves into a much smarter system and grows towards an AGI solution.
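
A minimal sketch of what a reward function looks like in reinforcement learning is shown below, using tabular Q-learning on a made-up five-cell corridor; the environment, the reward of +1 for reaching the goal and all parameter values are illustrative assumptions, not taken from the source.

```python
# Minimal tabular Q-learning on a toy "corridor" environment. The agent
# starts in the middle and is rewarded only for reaching the rightmost cell.
import random

N_STATES = 5          # cells 0..4; reaching cell 4 gives the reward
ACTIONS = [-1, +1]    # move left or right
alpha, gamma, eps = 0.1, 0.9, 0.2

# Q-table: expected future reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 2                              # start in the middle
    while state != N_STATES - 1:           # until the goal is reached
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < eps:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])

        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0

        # Q-learning update: nudge the estimate towards reward + discounted future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy in every cell should be "move right" (+1).
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```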

Machine learning

Machine learning is one of the ways to achieve artificial intelligence. It comprises different algorithms, each with its own strengths and weaknesses. The latest major breakthroughs in the field of AI are based on machine learning, or more specifically on “deep learning”, which uses an artificial neural network. Other popular algorithms are Bayesian networks, decision trees, k-means clustering and support-vector machines. These algorithms are often grouped into three categories: supervised learning, unsupervised learning and reinforcement learning.
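
As a small illustration of supervised learning with one of the algorithms named above, the sketch below trains a decision tree on the iris dataset bundled with scikit-learn; the choice of library, dataset and train/test split is an assumption made for the example, not something the article prescribes.

```python
# A small supervised-learning example: a decision tree trained on the iris
# flower dataset that ships with scikit-learn. Library and split are illustrative.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Supervised learning: the model is shown labelled examples and derives rules from them.
model = DecisionTreeClassifier(max_depth=3, random_state=42)
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print("Accuracy on unseen flowers:", accuracy_score(y_test, predictions))
```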

Machine intelligence

Machine intelligence (MI) is a unifying term for what others call machine learning (ML) and artificial intelligence (AI). When it is called AI, too many people are distracted by whether certain companies are “true AI”; when it is called ML, many feel this does not do justice to the more “AI-esque” aspects, such as the various flavors of deep learning. So, machine intelligence is a term that combines “artificial intelligence”, “machine learning” and other related terms.

Cognitive IT

The word cognitive means “knowing and perceiving”. Cognitive information technology is not just rule-based but is able to react based on perception and knowledge.

Robotics

What is a robot? It’s a machine that gathers information about its environment through input from sensors and, based on this input, changes its behavior. Combined with machine learning and machine intelligence, the robot’s reactions become more adequate over time. The use of the Internet of Things (IoT), big data analytics and cloud technology makes a robot versatile.
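
The sense-think-act loop described above can be sketched in a few lines; the simulated distance sensor, the thresholds and the actions below are all invented for illustration (a real robot would read hardware sensors and drive motors instead).

```python
# A minimal sense-think-act loop, the control pattern described above.
# The sensor values and thresholds are invented for illustration.
import random

def read_distance_sensor():
    # Stand-in for a real sensor: a random distance to the nearest obstacle, in cm.
    return random.uniform(5, 100)

def decide(distance_cm):
    # The behavior changes based on the sensed input.
    if distance_cm < 20:
        return "turn"      # obstacle close: change direction
    return "forward"       # path clear: keep driving

for step in range(5):
    distance = read_distance_sensor()   # sense
    action = decide(distance)           # think
    print(f"step {step}: distance={distance:.0f}cm -> {action}")  # act (here: just print)
```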

Robots come in many different shapes and forms; it’s not just the metallic man. A robot may equally be a smart algorithm on social media (for example a chatbot or a digital agent), an autonomous vacuum cleaner, or a self-driving car.

---------------

Sources: Tom van de Ven, Rik Marselis and Humayun Shaukat, “Testing in the Digital Age” (book); Wikipedia.
