How to scale Predictable AI in logistics?
The use of AI in logistics can be cumbersome to scale because of the need for extensive training and huge data loads. But by prioritising only the information that is needed and ignoring the rest, we can reduce a 10 MB problem to a 0.5-1 MB one, which means vastly reduced cost of training and operation as well as increased scalability.
As most realise by now, AI is far wider than just Generative AI (which is itself built on Machine Learning, which in turn is built on Neural Networks), and encompasses everything from Computer Vision, Image Manipulation, and Robotics to such exotic topics as Swarm Intelligence.
From an observer’s point of view, the common trait amongst all of these AI technologies is that they appear at times to act non-deterministically – that is to say that from time to time they will react unexpectedly to certain stimuli – be it a written text or an image observed.
The underlying technologies are, by the way, largely deterministic; the appearance of seemingly random behaviour is caused by the variance of the inputs, which points to the real challenge of most Machine Learning-based AI:
AI can only react with 100% predictability on previously seen input data.
What happens when environments change?
Now, when we try to deploy a scalable solution across a wide range of potentially dynamic environments, the statement above is sure to spell disaster. The most prominent examples of this are found in the fields of Self-Driving Cars and Autonomous Robotics.
Consider a driving AI that has been trained to operate in California, where the weather is relatively consistent, and then ask that same AI to drive a car in the middle of a winter snowstorm in Finland. The result? Pretty obvious – within 5 minutes it ends up in a ditch.
The solution? Train the car to operate in every climate – simple…?
How many types of traffic cones are there in the world? We need to recognise them all. How about traffic signals? What about different types of clothing for people? Dogs? Cats? Goats?
And here is the problem – achieving 100% predictability just on the input analysis is a task of infinite size, and we haven’t even started discussing how to achieve adequate reasoning and behaviour based on those input stimuli.
Robots have been, and are being, deployed at scale for automation tasks in manufacturing, pharmaceuticals, retail, and logistics – but they suffer from a similar problem when moving to full autonomy, in particular around the input stimuli they encounter when moving around a physical facility.
The problem is that every facility is different in some respects, which means that to achieve full autonomy, specific training in that facility is necessary. Furthermore, every time the physical environment is changed, for example by moving a shelf system or production line, the system needs to be retrained, and whilst recalibrating, the system cannot function with a very high level of predictability.
That means that as a particular autonomous solution scales to a multitude of facilities, this again becomes a problem of infinite complexity, thus requiring an infinite amount of computing power to achieve 100% predictability.
Is there a solution?
Yes, and no.
Achieving 100% predictability on anything based on Machine Learning, with any kind of complex data, is prohibitively expensive. The good news is that in most cases it is not necessary.
Typically, any complex AI is based on a hierarchy of reasoning on top of the input analysis layer. When those layers are made robust towards fluctuations in input analysis results, we can – just like humans do by the way – compensate, and still make reasonable conclusions and thus determine good courses of action.
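To make the idea of a robust reasoning layer concrete, here is a minimal sketch in Python. It is a hypothetical illustration, not Sentispec's actual logic, and the thresholds, labels, and function name are invented: several noisy fill-rate estimates are smoothed and mapped to a coarse action, so a few percentage points of estimation error do not flip the outcome.

```python
# A minimal, hypothetical sketch of a reasoning layer that tolerates
# fluctuations in the perception output. The thresholds and labels are
# invented for illustration; this is not Sentispec's actual logic.
from statistics import median


def decide_action(fill_rate_estimates, dispatch_threshold=0.85, tolerance=0.05):
    """Map noisy fill-rate estimates (0.0-1.0) to a coarse operational decision."""
    # Taking the median of the last few estimates suppresses single-frame outliers.
    estimate = median(fill_rate_estimates)

    # A tolerance band around the threshold prevents small fluctuations
    # from flipping the decision back and forth.
    if estimate >= dispatch_threshold + tolerance:
        return "dispatch trailer"
    if estimate <= dispatch_threshold - tolerance:
        return "keep loading"
    return "hold and re-check"


# The individual estimates below vary by a few percent; the decision does not.
print(decide_action([0.78, 0.81, 0.79]))  # keep loading
print(decide_action([0.93, 0.95, 0.91]))  # dispatch trailer
```

The point is not the specific thresholds, but that the downstream logic only needs a coarse, stable answer from the perception layer.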
Simultaneously, there is a trick to working with complex data, which is to reduce the information contained in the input data set – simply put, we filter out irrelevant data to the point where we can extract very simple conclusions from highly complex data.
Our Sentispec Inspector solution extracts volumetric fill rates of trailers based on a single image of the trailer contents (see below for example). In doing so, we filter out everything outside of the trailer as the first step, because it is irrelevant to our determination of the fill rate inside. This in itself reduces a 10 megabyte (MB) problem to a 5 MB problem. On top of that, we don't need 4K resolution, so we downscale the image by a factor of 10 – which means that instead of dealing with 5 MB of information we are now looking at only 0.5-1 MB.
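As a rough illustration of those two reduction steps (crop to the trailer, then downscale), here is a short Python sketch using OpenCV. The bounding box, file name, and scale factor are invented for the example; in a real system the trailer region would come from a detection step, and this is not Sentispec's actual pipeline.

```python
# A rough sketch of the two data-reduction steps described above, using OpenCV.
# The region of interest and scale factor are hypothetical values for illustration.
import cv2


def reduce_image(path):
    image = cv2.imread(path)              # e.g. a 4K frame, roughly 10 MB of pixels

    # Step 1: keep only the trailer interior (here an assumed, hard-coded
    # bounding box) and discard the irrelevant surroundings.
    x, y, w, h = 1000, 400, 1900, 1300    # hypothetical region of interest
    trailer = image[y:y + h, x:x + w]

    # Step 2: full 4K detail is not needed to judge volumetric fill, so shrink
    # the crop to cut the remaining data by roughly another order of magnitude.
    small = cv2.resize(trailer, None, fx=0.3, fy=0.3, interpolation=cv2.INTER_AREA)

    return small                          # only a fraction of the original data remains


# Usage: feed the reduced image to the fill-rate model instead of the raw frame.
# reduced = reduce_image("trailer_photo.jpg")
```

Scaling both dimensions by 0.3 cuts the pixel count by roughly a factor of ten, which matches the 5 MB to 0.5-1 MB reduction described above.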
As we can do these operations with minimal loss of accuracy, we achieve two things:
1) Vastly reduced cost of training and operation.
2) Increased scalability, as the scope of our training is reduced to only the key factors in the images.
The key to success with scalable AI is really in determining what information is needed, and ignoring the rest – not unlike what the human brain does.