Software Testing in an AI-driven world – Part 1 – Understanding AI/ML
We live in a world that is advancing rapidly, where engineers are making massive strides in the field of artificial intelligence. There is no denying that AI is going to have a massive impact on how we interact with computing systems in the future and what we will be able to do with them. So, it’s important that we start preparing ourselves now for a future where we not only need to test that AI systems are working correctly, but also need to learn how best to utilize them to improve testing and software quality.
Over the next few articles, I plan to share with you some insights around artificial intelligence: how we can prepare ourselves to test it better, so that it meets the needs of our changing world, and areas where we can take advantage of it to improve software development and delivery in the future.
To start off though, as with most things, before we can learn how to work with a technology, it’s important to understand it. Too often I see people talking about artificial intelligence or machine learning without fully understanding its different aspects or how it actually works, rushing to learn these new technologies without taking the time to understand what the different branches of AI actually mean, and whether and how they might even apply to them. This latter part is important, because we gain the best understanding of how to work with and implement a technology when we can apply it in the correct context.
So hopefully the information below will start you off on your AI journey and provide you with a better understanding of AI systems as we currently understand them and how they are implemented in modern-day solutions.
AI is just code
Firstly, it’s worth remembering that artificial intelligence is still essentially coded algorithms. Yes, we are training computers to learn and perform certain operations, but this learning is still structured code. We tell a system what and how to learn, what rules to apply to its learning and, most importantly, what the fundamental objectives of its learning are. This means that AI systems need testing just like any other piece of code.
Now, we might think of AI as a new concept in computer science and software development, but in fact Alan Turing, one of the founding figures of computing, published a paper in 1950 on the subject of Computing Machinery and Intelligence, setting out goals for how we could get machines to replicate human behaviour. So, even in the infancy of programming and computing, the idea of AI was already there, and essentially the foundation hasn’t changed – it’s just that we finally have computers that can do this sort of processing, with enough data to make many of the algorithms we are conceiving possible.
These ideas help frame what AI is. However, there is far more to it than that, and the world of AI actually consists of multiple branches, which I want to briefly explain for you:
Rule-Based Systems
A rule-based system (e.g., a production system or expert system) uses rules as its knowledge representation. These rules are coded into the system in the form of logic statements. The main idea of a rule-based system is to capture the knowledge of a human expert in a specialized domain and embody it within a computer system. It’s essentially trying to automate the more routine or trivial human decisions.
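To make this a little more concrete, here is a minimal sketch of a rule-based system in Python. The loan-approval domain, the rules and the thresholds are all invented purely for illustration – a real expert system would encode far more rules, often in a dedicated rules engine.

```python
# A minimal rule-based system: each rule is a condition plus a conclusion.
# The domain (loan approval) and all thresholds are invented for illustration.

RULES = [
    (lambda f: f["credit_score"] < 500, "reject: credit score too low"),
    (lambda f: f["income"] < 20000, "reject: income below minimum"),
    (lambda f: f["credit_score"] >= 700 and f["income"] >= 50000, "approve"),
]

def evaluate(facts):
    """Fire the first rule whose condition matches the given facts."""
    for condition, conclusion in RULES:
        if condition(facts):
            return conclusion
    return "refer to a human expert"  # no rule fired

print(evaluate({"credit_score": 720, "income": 60000}))  # approve
print(evaluate({"credit_score": 450, "income": 60000}))  # reject: credit score too low
```

Notice that every decision is traceable back to an explicit, human-authored rule – which is also what makes systems like this comparatively straightforward to test.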
Machine Learning | Learning from experience
Machine learning, or ML, is an application of AI that provides computer systems with the ability to automatically learn and improve from experience without being explicitly programmed. ML focuses on the development of algorithms that can analyze data and make predictions. Beyond being used to predict which Netflix movies you might like or the best route for your Uber, machine learning is being applied in the healthcare, pharma, and life sciences industries to aid disease diagnosis, interpret medical images, and accelerate drug development.
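To give a feel for what “learning from experience” looks like in code, here is a minimal sketch using scikit-learn. The data points and their meaning are entirely made up for illustration – the point is simply that the model is fitted to examples rather than explicitly programmed with rules.

```python
# A minimal machine learning sketch using scikit-learn.
# The training data is invented: each row is [hours_watched, action_scenes],
# and the label says whether a (hypothetical) viewer liked the film.
from sklearn.tree import DecisionTreeClassifier

X = [[1, 2], [9, 8], [2, 1], [8, 9], [7, 7], [1, 1]]  # feature rows
y = [0, 1, 0, 1, 1, 0]                                # 1 = liked, 0 = disliked

model = DecisionTreeClassifier()
model.fit(X, y)                   # "learn from experience": fit to examples
print(model.predict([[8, 8]]))    # predict for an unseen example -> [1]
```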
Deep Learning | Self-educating machines
Deep learning is a subset of machine learning that employs artificial neural networks that learn by processing data. Artificial neural networks mimic the biological neural networks in the human brain.
Multiple layers of artificial neural networks work together to determine a single output from many inputs, for example, identifying the image of a face from a mosaic of tiles. The machines learn through positive and negative reinforcement of the tasks they carry out, which requires constant processing and reinforcement to progress.
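As a rough illustration of multiple layers determining a single output, here is a toy forward pass through a two-layer network in plain NumPy. The weights are random, so the output is meaningless – the sketch only shows the structure of data flowing through stacked layers to a single output.

```python
# Toy forward pass through a two-layer neural network in NumPy.
# Weights are random, so the output means nothing; the point is the
# structure: many inputs -> hidden layer -> single output.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

x = rng.random(64)                    # 64 inputs, e.g. pixel intensities
W1 = rng.standard_normal((16, 64))    # layer 1: 64 inputs -> 16 neurons
W2 = rng.standard_normal((1, 16))     # layer 2: 16 neurons -> single output

hidden = relu(W1 @ x)                 # each layer transforms the previous one
output = sigmoid(W2 @ hidden)         # squashed to a 0..1 "confidence"
print(output.item())
```

In a real deep learning system, training would repeatedly adjust W1 and W2 based on how wrong the output was – the “constant processing and reinforcement” described above.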
Another form of deep learning is speech recognition, which enables the voice assistants in our phones to understand questions like, “Hey Siri, how does artificial intelligence work?”
Neural Network | Making associations
Neural networks enable deep learning. As mentioned, neural networks are computer systems modelled after neural connections in the human brain. The artificial equivalent of a human neuron is a perceptron. Just like bundles of neurons create neural networks in the brain, stacks of perceptrons create artificial neural networks in computer systems.
Neural networks learn by processing training examples. The best examples come in the form of large data sets, like, say, a set of 1,000 cat photos. By processing the many images (inputs) the machine is able to produce a single output, answering the question, “Is the image a cat or not?”
This process analyzes data many times to find associations and give meaning to previously undefined data. Through different learning models, like positive reinforcement, the machine is taught it has successfully identified the object.
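To ground the idea, below is a minimal perceptron trained with the classic perceptron learning rule. The task (learning the logical AND function) is a toy example chosen purely for illustration, but the weight adjustments show the positive and negative reinforcement described above.

```python
# A single perceptron trained with the classic perceptron learning rule.
# The task (logical AND) is a toy stand-in for "positive and negative
# reinforcement": weights are nudged whenever the prediction is wrong.
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]  # AND truth table

w = [0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate

for _ in range(20):  # repeatedly process the training examples
    for (x1, x2), target in zip(inputs, targets):
        prediction = 1 if (w[0] * x1 + w[1] * x2 + bias) > 0 else 0
        error = target - prediction   # +1, 0, or -1
        w[0] += lr * error * x1       # adjust weights toward
        w[1] += lr * error * x2       # the correct answer
        bias += lr * error

for (x1, x2) in inputs:
    print((x1, x2), 1 if (w[0] * x1 + w[1] * x2 + bias) > 0 else 0)
```

A single perceptron can only learn very simple decisions like this one; stacking many of them into layers is what gives artificial neural networks their power.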
Cognitive Computing | Making inferences from context
Cognitive computing is another essential component of AI. Its purpose is to imitate and improve the interaction between humans and machines. Cognitive computing seeks to recreate the human thought process in a computer model, in this case, by understanding human language and the meaning of images.
Together, cognitive computing and artificial intelligence strive to endow machines with human-like behaviours and information processing abilities.
Natural Language Processing (NLP) | Understanding the language
Natural Language Processing, or NLP, allows computers to interpret, recognize, and produce human language and speech. The ultimate goal of NLP is to enable seamless interaction with the machines we use every day by teaching systems to understand human language in context and produce logical responses.
Real-world examples of NLP include Skype Translator, which interprets speech across multiple languages in real time to facilitate communication.
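As a toy illustration of mapping language to a response, the sketch below matches keywords in a question to a canned intent. Real NLP systems rely on statistical and neural language models with far richer context – the intents and keyword lists here are invented for illustration only.

```python
# A toy "NLP" sketch: map a user's sentence to an intent via keywords.
# Real systems use statistical/neural language models; the intents and
# keyword lists here are invented for illustration only.
import string

INTENTS = {
    "weather": {"weather", "rain", "sunny", "forecast"},
    "greeting": {"hi", "hello", "hey"},
}

def respond(utterance):
    # Normalise: lowercase and strip punctuation before tokenising.
    cleaned = utterance.lower().translate(str.maketrans("", "", string.punctuation))
    tokens = set(cleaned.split())
    for intent, keywords in INTENTS.items():
        if tokens & keywords:  # any keyword overlap
            return f"Matched intent: {intent}"
    return "Sorry, I didn't understand that."

print(respond("Will it rain tomorrow?"))  # Matched intent: weather
print(respond("Hello!"))                  # Matched intent: greeting
```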
Computer Vision | Understanding images
Computer vision is a technique that implements deep learning and pattern identification to interpret the content of an image, including the graphs, tables, and pictures within PDF documents, as well as other text and video. Computer vision is an integral field of AI, enabling computers to identify, process, and interpret visual data. This differs from the cat example I shared earlier in that it is not merely trying to make an image association, but an interpretation of it. Recognition and understanding are two very different things – and these types of systems will require very different testing approaches.
Applications of this technology have already begun to revolutionize industries like research & development and healthcare, where computer vision and machine learning are being used to evaluate patients’ x-ray scans and diagnose patients faster.
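As a sketch of what this looks like in practice, the snippet below classifies an image with a pretrained torchvision model (this assumes a recent torchvision, 0.13 or later, plus PyTorch and Pillow installed). The image path is hypothetical, and a general-purpose model like this would of course need domain-specific training before it could interpret anything like a medical scan.

```python
# Sketch: classify an image with a pretrained model (requires torch,
# torchvision, and Pillow). The image path is hypothetical.
from PIL import Image
import torch
from torchvision import models

weights = models.ResNet18_Weights.DEFAULT   # pretrained ImageNet weights
model = models.resnet18(weights=weights)
model.eval()

preprocess = weights.transforms()           # the resizing/normalisation this model expects
image = Image.open("example.jpg")           # hypothetical input image
batch = preprocess(image).unsqueeze(0)      # add a batch dimension

with torch.no_grad():
    logits = model(batch)
label = weights.meta["categories"][logits.argmax().item()]
print(label)  # the model's best guess at what the image shows
```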
This is only a short description of the most popular branches of AI we have today and how they work. With the field advancing so quickly, these branches will ultimately grow as we discover new methods and algorithms for computer decision-making. Each of these different methods will require different software design and testing approaches, though we can still pull together some basic criteria that apply to all of them, which I will do in my next article.