Can we call it intelligence?
What does "#artificialintelligence" do?
Post No. 8
Artificial intelligence will continue to evolve. There is a tremendous amount of hype all around. Much of it is justified since what is happening in the field is utterly amazing.
Like anything new, it is important to study and understand these developments and their evolution in order to better separate truth from hype.
In my posts, I will always attempt to strike a balance and point you toward a few articles or videos that will hopefully spark curiosity and provide food for thought.
In a recent New York Times article, Steve Lohr asks, “Why isn’t new technology making us more productive?” The premise of the article is that cloud computing and artificial intelligence were expected to fuel a tremendous increase in productivity, yet such productivity boosts remain elusive. Lohr goes on to explain that with past innovations, such as the automobile, the electric motor, the PC, and the Internet, productivity gains didn’t materialize until a decade or more after the technology became prevalent.
So, the answer is - it takes time.
To understand the evolution of AI, let me provide a quick overview based on my own learning as well as the MIT course “xPRO DESIGNING AND BUILDING AI PRODUCTS AND SERVICES.”
Consider whether there is a difference between tiny Indian baffler crickets in the Radiolab Frailmales episode (https://radiolab.org/episodes/frailmales) and the following video, describing a Neural Network to simulate evolution and natural selection - https://youtu.be/N3tRFayqVtk.
AI encompasses both machine learning and deep learning. Exactly where a system begins to show signs of intelligence is a bit murky and depends on our definition of intelligence. There are four types of artificial intelligence: reactive machines, limited memory, theory of mind, and self-awareness.
Reactive machines: The most basic types of AI systems are purely reactive and don’t have the ability to either form memories or use past experiences to inform current decisions. An example of this is Deep Blue, IBM’s chess-playing supercomputer. Deep Blue can identify the pieces on a chess board and knows how each one moves. It can make predictions about what moves might come next for it and its opponent and can choose the most optimal moves from among the possibilities. But it doesn’t have any concept of the past or any memory of what has happened before. This type of intelligence involves the computer perceiving the world directly and acting on what it sees. It doesn’t rely on an internal concept of the world.
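As a toy illustration (my own, not from the course or the Deep Blue example), a purely reactive machine can be sketched as a stateless function: the action depends only on the current percept, and nothing is remembered between calls.

```python
# A purely reactive agent: its action is a function of the current
# percept alone, with no stored memory between calls. The thermostat
# setting and the 0.5-degree band are made-up illustrative values.
def reactive_thermostat(current_temp, setpoint=21.0, band=0.5):
    if current_temp < setpoint - band:
        return "heat"
    if current_temp > setpoint + band:
        return "cool"
    return "off"
```

However many times you call it, the agent behaves identically for the same input, which is exactly what makes a reactive machine predictable but unable to learn.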
Limited memory: This type contains machines that can investigate the past. Self-driving cars do some of this already. For example, they observe other cars’ speed and direction. That can’t be done in just one moment but rather requires identifying specific objects and monitoring them over time. These observations are added to the self-driving cars’ preprogrammed representations of the world, which also include lane markings, traffic lights, and other important elements, such as curves in the road. They’re included when the car decides when to change lanes, to avoid cutting off another driver or being hit by a nearby car. But these simple pieces of information about the past are only transient. They aren’t saved as part of the car’s library of experience it can learn from, the way human drivers compile experience over years behind the wheel.
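The transient-memory idea can be sketched in a few lines (a made-up illustration, not how any real self-driving stack works): the tracker keeps only a short sliding window of observations of another car, so older data is discarded rather than accumulated as experience.

```python
from collections import deque

# "Limited memory" sketch: keep only the last few observations of
# another car's position; deque(maxlen=...) silently drops older ones,
# so nothing becomes permanent experience. All numbers are illustrative.
class CarTracker:
    def __init__(self, window=3):
        self.recent = deque(maxlen=window)  # (time, position) pairs

    def observe(self, t, position):
        self.recent.append((t, position))

    def estimated_speed(self):
        # Estimate speed from the oldest and newest retained observations
        if len(self.recent) < 2:
            return None
        (t0, p0), (t1, p1) = self.recent[0], self.recent[-1]
        return (p1 - p0) / (t1 - t0)
```

After four observations with a window of three, the first one is already gone: the machine can reason about the recent past, but never builds a library of it.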
Theory of mind: Machines in this class form representations not only about the world but also about other agents or entities in the world. In psychology, this is called “theory of mind” – the understanding that people, creatures, and objects in the world can have thoughts and emotions that affect their own behavior. This is crucial to how we humans form societies because it allows us to have social interactions. Without understanding each other’s motives and intentions, and without considering what somebody else knows either about me or the environment, working together is at best difficult and at worst impossible.
Self-awareness: The final step of AI development is to build systems that can form representations about themselves. This is, in a sense, an extension of the “theory of mind” possessed by type III artificial intelligences. Consciousness is also called “self-awareness” for a reason – “I want that item” is a very different statement from “I know I want that item.” Conscious beings are aware of themselves, know about their internal states, and can predict the feelings of others. We assume someone honking behind us in traffic is angry or impatient, because that’s how we feel when we honk at others. Without a theory of mind, we could not make those sorts of inferences. While we are probably far from creating machines that are self-aware, we should focus our efforts toward understanding memory, learning, and the ability to base decisions on past experiences. This is an important step toward understanding human intelligence on its own, and it is crucial if we want to design or evolve machines that are more than exceptional at classifying what they see in front of them.
Turning from types of intelligence to techniques: a machine learning (ML) algorithm is a procedure that runs on data and is used to build a production-ready machine learning model. ML algorithms can be grouped into three main categories: supervised, unsupervised, and reinforcement learning.
The most common machine learning algorithms include:
The Naïve Bayes classifier algorithm is based on the popular Bayes theorem of probability and is best used when you have a moderate or large training dataset. Its applications include sentiment analysis, document categorization, and email spam filtering.
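To make the spam-filtering application concrete, here is a minimal sketch of a Naïve Bayes classifier with Laplace smoothing; the tiny training set and word lists are made up for illustration.

```python
import math
from collections import Counter

# Toy training data (made up): lists of words with a spam/ham label
train = [
    (["win", "money", "now"], "spam"),
    (["free", "money", "offer"], "spam"),
    (["meeting", "tomorrow", "agenda"], "ham"),
    (["project", "status", "meeting"], "ham"),
]

def fit(train):
    word_counts = {label: Counter() for _, label in train}
    label_counts = Counter()
    vocab = set()
    for words, label in train:
        label_counts[label] += 1
        word_counts[label].update(words)
        vocab.update(words)
    return word_counts, label_counts, vocab

def predict(words, word_counts, label_counts, vocab):
    total = sum(label_counts.values())
    best, best_score = None, float("-inf")
    for label in label_counts:
        # log P(label) + sum of log P(word | label), Laplace-smoothed
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in words:
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

word_counts, label_counts, vocab = fit(train)
```

Working in log-probabilities avoids numeric underflow, and the +1 smoothing keeps unseen words from zeroing out a class entirely.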
The K-means clustering algorithm is a widely used unsupervised algorithm for cluster analysis. Search engines such as Yahoo and Google use it to cluster web pages by similarity and identify the “relevance rate” of search results.
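A bare-bones version of the algorithm fits in a short function: repeatedly assign each point to its nearest center, then move each center to the mean of its cluster. The 2-D points below are made-up blobs, not real web-page data.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on 2-D points: assign each point to the nearest
    center, then move each center to the mean of its cluster."""
    random.seed(seed)
    centers = list(random.sample(points, k))
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: (p[0] - centers[j][0])**2
                                                  + (p[1] - centers[j][1])**2)
            clusters[nearest].append(p)
        for j, members in enumerate(clusters):
            if members:  # keep the old center if a cluster goes empty
                centers[j] = (sum(p[0] for p in members) / len(members),
                              sum(p[1] for p in members) / len(members))
    return centers, clusters

# Two well-separated blobs (made-up coordinates)
points = [(1, 1), (1.5, 2), (1, 2), (8, 8), (9, 8), (8, 9)]
centers, clusters = kmeans(points, 2)
```

With well-separated data like this, the two centers settle into the two blobs after a few iterations regardless of which points are drawn as the initial centers.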
The support vector machine (SVM) learning algorithm is a supervised ML algorithm for classification or regression problems. The training dataset teaches the SVM about the classes, so that it can assign new data to a class by finding a line (hyperplane) that separates the training data. SVM is commonly used by financial institutions for stock market forecasting.
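One simple way to find such a separating hyperplane is stochastic sub-gradient descent on the hinge loss (a Pegasos-style linear SVM). The sketch below uses made-up 2-D points and hyperparameters, not financial data.

```python
# Linear SVM trained by stochastic sub-gradient descent on the hinge
# loss; data and hyperparameters are made up for illustration.
def train_svm(X, y, lr=0.01, lam=0.01, epochs=200):
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:  # point inside the margin: hinge loss is active
                w = [wj + lr * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:           # outside the margin: only the regularizer acts
                w = [wj - lr * lam * wj for wj in w]
    return w, b

X = [[2, 2], [3, 3], [-2, -2], [-3, -1]]
y = [1, 1, -1, -1]
w, b = train_svm(X, y)

def predict(x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1
```

The regularization term `lam` is what pushes the learned hyperplane toward a large margin rather than any separating line.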
The linear regression machine learning algorithm models the relationship between two variables and how a change in one affects the other. Linear regression is used for estimating real, continuous values; the most common examples are housing price predictions, sales predictions, weather predictions, and employee salary estimations.
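For one predictor, the least-squares fit has a closed form, shown below with made-up housing numbers (these sizes and prices are purely illustrative).

```python
# Ordinary least squares for a single predictor:
#   price = slope * size + intercept
# Sizes and prices are made-up illustrative numbers.
sizes  = [50, 70, 90, 110]     # floor area
prices = [150, 200, 250, 300]  # price in thousands

n = len(sizes)
mean_x = sum(sizes) / n
mean_y = sum(prices) / n
# slope = covariance(x, y) / variance(x)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, prices))
         / sum((x - mean_x)**2 for x in sizes))
intercept = mean_y - slope * mean_x

def predict_price(size):
    return slope * size + intercept
```

On this toy data the relationship is exactly linear, so the fitted line reproduces the training points perfectly; real data would leave residual error.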
The logistic regression machine learning algorithm is used for classification tasks: it estimates the probability of falling into a specific level of the categorical dependent variable based on the given predictor variables. It is applied in epidemiology to identify risk factors for diseases and plan preventive measures, as well as for risk management in credit scoring systems.
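A minimal version with one predictor can be fit by stochastic gradient descent on the log-loss; the data and the "at risk" framing below are invented for illustration.

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# One made-up predictor (e.g. an exposure level); label 1 = "at risk"
X = [0, 1, 2, 4, 5, 6]
y = [0, 0, 0, 1, 1, 1]

# Fit w, b by stochastic gradient descent on the log-loss
w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    for xi, yi in zip(X, y):
        p = sigmoid(w * xi + b)     # predicted probability of label 1
        w -= lr * (p - yi) * xi     # gradient of log-loss w.r.t. w
        b -= lr * (p - yi)
```

The model's output is a probability, which is exactly why it suits risk estimation: a credit system can act on "70% likely to default" rather than a hard yes/no.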
The decision tree machine learning algorithm is a graphical representation that uses branching to lay out all possible outcomes of a decision under certain conditions. Decision trees are useful in data exploration and implicitly perform feature selection. In finance, banks use decision trees to classify loan applicants; in medicine, they are used to identify at-risk patients and disease trends.
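To show why trees are so readable, here is a hand-built tree mirroring how a bank might route a loan application; every feature name and threshold is made up, not from real lending criteria.

```python
# A hand-built decision tree with made-up thresholds, mirroring how a
# bank might classify loan applicants via a few branching conditions.
def classify_applicant(income, credit_score, has_prior_default):
    if has_prior_default:
        return "reject"
    if credit_score >= 700:
        return "approve"
    # Moderate credit: fall back to income
    return "approve" if income >= 60000 else "reject"
```

A learned tree has the same shape; training algorithms simply choose which feature to split on, and at what threshold, at each branch.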
Artificial neural network (ANN) algorithms consist of interconnected “neurons” and can exploit non-linearity in a distributed manner. They adapt their free parameters to changes in the surrounding environment, and they learn from their mistakes and make better decisions through backpropagation. Many bomb detectors at U.S. airports use artificial neural networks to analyze airborne trace elements and identify the presence of explosive chemicals. Google uses artificial neural networks for speech recognition, image recognition, and other pattern recognition (e.g., handwriting recognition) applications.
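Backpropagation can be shown end-to-end on the classic XOR problem, which no single linear unit can solve. The 2-3-1 architecture, learning rate, and seed below are arbitrary choices for illustration, and a net this small can occasionally settle in a poor local minimum.

```python
import math, random

random.seed(7)

def sig(z):
    return 1 / (1 + math.exp(-z))

# Tiny 2-3-1 network trained by backpropagation on XOR (toy example)
H = 3
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b_h = [0.0] * H
w_o = [random.uniform(-1, 1) for _ in range(H)]
b_o = 0.0
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(x):
    h = [sig(wj[0]*x[0] + wj[1]*x[1] + bj) for wj, bj in zip(w_h, b_h)]
    o = sig(sum(wj * hj for wj, hj in zip(w_o, h)) + b_o)
    return h, o

def total_error():
    return sum((forward(x)[1] - t)**2 for x, t in data)

err_before = total_error()
lr = 0.5
for _ in range(5000):
    for x, t in data:
        h, o = forward(x)
        d_o = (o - t) * o * (1 - o)                  # output-layer delta
        for j in range(H):
            d_h = d_o * w_o[j] * h[j] * (1 - h[j])   # hidden-layer delta
            w_o[j] -= lr * d_o * h[j]
            b_h[j] -= lr * d_h
            w_h[j][0] -= lr * d_h * x[0]
            w_h[j][1] -= lr * d_h * x[1]
        b_o -= lr * d_o
```

The two deltas are the "learning from mistakes" in the prose: the output error is propagated backward through the weights so every layer knows its share of the blame.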
K-nearest neighbors (KNN) predicts the label of a new data point from the labels of its k closest training points, using a distance-based measure to find them. It can be used for classification (by majority vote among the neighbors) as well as for predicting continuous values, as in regression (by averaging them).
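The classification variant is short enough to write out in full; the labeled 2-D points below are made up for illustration.

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """train is a list of ((x, y), label) pairs; classify query by a
    majority vote among its k nearest neighbors (squared Euclidean)."""
    by_distance = sorted(
        train,
        key=lambda item: (item[0][0] - query[0])**2
                         + (item[0][1] - query[1])**2,
    )
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Two made-up groups of labeled points
train = [((1, 1), "A"), ((1, 2), "A"), ((2, 1), "A"),
         ((8, 8), "B"), ((9, 8), "B"), ((8, 9), "B")]
```

Swapping the majority vote for an average of the neighbors' values turns the same procedure into KNN regression.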
So, back to the premise I laid out at the top of the article: we are now seeing the early stages of the accelerated evolution of artificial intelligence. The great productivity gains are yet to come, and they will arrive quite rapidly, even if we experience some form of recession along the way.
The advancement of AI simply cannot be stopped now.