Moving AI a step forward
Artificial intelligence (AI) practitioners are reaping the rewards of finely tuned image recognition, thanks to the volume of image data readily available on the Internet. With relatively little effort, it is possible to train a machine to identify cats, or almost any other object, using pattern matching with a high degree of confidence.
Such pattern matching has many application areas, such as in oncology, autonomous driving, chatbots, voice recognition in smart speakers and any time it is necessary to look for patterns in large datasets.
For instance, in January, Intel published an article describing how medical technologies such as computed tomography, magnetic resonance imaging (MRI) and ultrasound provide deep learning algorithms with a source of learning data. With this data, deep learning models can be used to measure tumor growth over time in cancer patients on medication.
But some decisions cannot simply be made by matching against known patterns. This is where mathematical simulations based on physical models come in.
To understand this, let's just say that with sufficient information about the current situation (context), a well-made physics-based model enables us to understand complex processes and predict future events. Such models have already been applied all across our modern society for vastly different processes, such as predicting the orbits of massive space rockets or the behaviour of nano-sized objects which are at the heart of modern electronics.
However, if there is no direct knowledge available about the behaviour of a system, it is not possible to formulate any mathematical model to describe it in order to make accurate predictions.
This is where machine learning can help by effectively matching an unknown problem with a pattern that has already been learnt, drawing on massive datasets.
Machine learning can be used to learn any underlying pattern between the information about the system (the input variables) and the outcome that the AI needs to predict (the output variables). But machine learning has yet to evolve to a stage where it can confidently predict complex physics.
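As a minimal sketch of this input-to-output mapping, the snippet below fits an ordinary least-squares model to synthetic data. The toy "system" and its variable names are purely illustrative, not drawn from any real application: the point is only that the learner recovers the relationship between input and output variables from data alone, with no physics built in.

```python
import numpy as np

# Toy "system": the output depends linearly on two input variables, plus noise.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 2))                         # input variables
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0, 0.1, 200)   # output variable

# Learn the input -> output mapping with least squares (no physics knowledge used).
X_aug = np.column_stack([X, np.ones(len(X))])  # add an intercept column
coef, *_ = np.linalg.lstsq(X_aug, y, rcond=None)

print(np.round(coef[:2], 2))  # recovered weights approximate the true 3.0 and -1.5
```

This works well precisely because the underlying pattern is simple; for complex physics, as the paper below argues, such purely data-driven fits are not yet enough.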
In an interesting paper entitled Deep learning for physical processes: incorporating prior scientific knowledge, submitted to Cornell University Library in November 2017, researchers Emmanuel de Bézenac, Arthur Pajot and Patrick Gallinari showed how machine learning based on deep learning methods cannot easily be applied to a problem such as predicting sea surface temperature. Despite considerable successes in a variety of application domains, the machine learning field is not yet ready to handle the level of complexity required by such problems.
Commodity AI
It is true that nowadays AI platforms are being commoditised, so businesses should use the technology provided by the major AI providers. However, these commoditised AI platforms cannot be used effectively in atypical application areas, where an AI cannot simply rely on its ability to pattern-match.
Instead, domain knowledge is required: matching the inputs of a physical system with the desired outcome – the output variables – is something few people are actually doing in live production scenarios.
One of the most prevalent uses of AI is targeted selling, for instance. There, the cost of being wrong is almost zero, while the value of being right is high, so the AI can be wrong much of the time and still do well. But when used to assess the need for repairs on an oil or gas pipeline, the AI cannot afford to be wrong often.
Moving Forward
It is harder to find practitioners of AI than the people who build AI tools. Commoditised toolsets can actually solve large problems, but AI people tend to be more interested in building the tools than solving application problems.
In my experience, a general-purpose AI specialist tends to lack the domain expertise to solve application-specific problems. At conferences, if the presenter asks: "How many people have AI projects?", all the hands go up. When the presenter asks: "How many people are doing pilots?", perhaps a little more than half the audience put their hands up. But almost none are actually in production.
The reason businesses are not yet seeing value in AI is because the people building the AI systems are not sophisticated enough to engineer in domain expertise. It is not true that with machine learning you just pump in some data and it works: a lot more work is required. The more domain knowledge built into the AI, the more valuable it becomes.
On its own, AI will never be able to give the user the correct answer to the question of what to do next. The only way to do that is through modelling and simulation.
Instead, AI can be used to preprocess the data, helping humans to identify the areas of the domain data that are most likely to be important or sensitive. This is a perfect case for machine learning, because domain knowledge and domain data can be coupled together to feed a machine learning tool. The more data is pulled in, the more the AI can learn from humans identifying problems and features.
This is effectively using AI to augment a human, where the expert trains the machine, so it can identify more complex patterns.
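A hedged sketch of that preprocessing step is below. The sensor readings and the flagging threshold are hypothetical; the idea is simply that an automated pass flags the most suspicious data points, so the human expert only labels those, and the labels in turn become training data.

```python
import numpy as np

# Hypothetical sensor readings from a monitored system; values are synthetic.
rng = np.random.default_rng(2)
readings = rng.normal(50.0, 2.0, 500)
readings[[40, 220, 410]] = [75.0, 20.0, 68.0]  # injected anomalies

# Preprocessing pass: flag readings far from the bulk of the data,
# so the expert only inspects the most suspicious regions.
z = np.abs(readings - np.median(readings)) / readings.std()
flagged = np.flatnonzero(z > 4.0)
print(flagged)  # indices handed to the human expert for review and labelling
```

The threshold and the robust centre (median) are design choices; in a real deployment the expert's verdicts on the flagged points would feed back into a supervised model.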
We believe that deep learning can be combined with physical modelling data. Knowledge and techniques accumulated for modelling physical processes in certain domains could be useful as a guideline to design efficient learning systems.
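As a toy illustration of this combination, the sketch below fits a flexible data-driven model to noisy observations while penalising violations of a known governing equation (exponential decay, dy/dt = -k·y, with an illustrative k). This is a deliberately simplified stand-in for the deep-learning approach of de Bézenac et al., not their method: the shared idea is a loss that mixes data mismatch with a physics residual.

```python
import numpy as np

# Noisy samples of a decaying quantity whose physics we partly know:
# the governing equation is dy/dt = -k * y, with k = 1.3 assumed known.
rng = np.random.default_rng(1)
k = 1.3
t = np.linspace(0.0, 2.0, 40)
y_obs = np.exp(-k * t) + rng.normal(0.0, 0.05, t.size)

# Flexible data-driven model: cubic polynomial y(t) = c0 + c1*t + c2*t^2 + c3*t^3.
Phi = np.vander(t, 4, increasing=True)                         # basis values
dPhi = np.column_stack([np.zeros_like(t), np.ones_like(t), 2 * t, 3 * t**2])

# Physics-informed fit: minimise ||Phi c - y_obs||^2 + w * ||(dPhi + k*Phi) c||^2,
# i.e. data mismatch plus the residual of the governing equation.
w = 1.0
A = dPhi + k * Phi
c = np.linalg.solve(Phi.T @ Phi + w * A.T @ A, Phi.T @ y_obs)

y_fit = Phi @ c
mse_vs_truth = float(np.mean((y_fit - np.exp(-k * t)) ** 2))
print(round(mse_vs_truth, 4))  # small error against the true, noise-free signal
```

The weight w controls how strongly the physics prior constrains the fit; that trade-off between learned pattern and known process is exactly the design space the paper explores.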