The 3 pain points in engineering you can solve better with AI than with any physical method
Richard Ahlfeld, Ph.D.
Founder & CEO of Monolith | Engineering and Intractable Physics solved with Machine Learning
Engineering is creating ever bigger data sets. Whether you are looking at different flight scenarios, driving scenarios, or usage scenarios, optimising a product for its complex usage conditions is becoming ever harder for engineers. Engineers need to think of absolutely everything that could happen. Let's look at 2 examples.
Let's say you want to build a smart meter to measure gas usage. This device will have to work for about 20 different flow rates, 30 different temperatures, and 10 different gases, and you are probably thinking of building 3 versions of the product for different markets. That means you have to go through 18,000 different scenarios to make it work, each of which means analysing a lot of noisy time series data. Texas Instruments has published a good overview of the design and test challenges engineers face with metering devices.
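For what it's worth, the scenario count is just the product of the test dimensions. A quick sketch, using the illustrative numbers from above:

```python
from itertools import product

# Illustrative test dimensions for the smart-meter example
flow_rates = range(20)    # 20 flow rates
temperatures = range(30)  # 30 temperatures
gases = range(10)         # 10 gases
variants = range(3)       # 3 product versions

scenarios = list(product(flow_rates, temperatures, gases, variants))
print(len(scenarios))  # → 18000
```

Every extra dimension multiplies the count, which is why scenario coverage gets out of hand so quickly.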
The same problem occurs for all dynamic or complex systems once end users get their hands on them. Imagine what car companies have to go through to make their cars work in different climate zones. Currently, the latest rage is figuring out the range of electric vehicles all around the world; range testing of EVs in Norway is a good example.
Monolith has encountered similar problems in many other industries.
I would like to note that the data does not necessarily have to come from real physical tests. The results can come from simulations, too: aircraft simulations, driving simulations, crash test simulations, and so on. Virtual development is getting better every day. For engineers that is not only good news, because simulations make it even easier to create really big data sets. Long story short, here is problem number 1:
Problem 1: You have a lot of data and you need to understand how your system works in any scenario
Let's take driving dynamics as an example, where somebody has dumped a couple of gigabytes of data on you. Looking at the data and plots in Matlab is probably your first thought.
Let me tell you, I spent years doing that myself. The two things I found hard were: Matlab is not built as an interactive data exploration tool, it's quite clunky when it comes to that, and it's also not really well suited to big data. (We started out building Monolith in Matlab but gave up because Matlab's versioning was too much of a pain.) You would have a much easier time if you managed to learn JupyterLab (Python) and started working with PyTorch on an elastic cloud. Sadly, I know PhDs in data science who spent months setting up systems like this to make them work (and they still require a lot of upkeep), and most test engineers I know don't have the time to learn Python. That's basically why we built Monolith. Here are some things you could do in Monolith:
You can quickly have an algorithm check for physical relationships between all variables, and when you click on a field you can immediately see the relationship as a function. Red means there is a relationship; white means there is not.
I probably spent 3 months at NASA doing nothing but plotting and replotting data so I could get the hang of things. Tools like this make it really easy. Dr Will Jennings, one of our data scientists, developed it while researching big data in astrophysics and has now implemented it in Monolith for everybody to enjoy.
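As a rough illustration of what such a relationship scan does under the hood, here is a minimal sketch with pandas on synthetic data (the channel names and formulas are made up for this example): an absolute correlation matrix plays the role of the red/white grid.

```python
import numpy as np
import pandas as pd

# Synthetic test channels: pressure depends on flow, noise depends on nothing
rng = np.random.default_rng(0)
n = 500
flow = rng.uniform(1, 10, n)
temp = rng.uniform(-20, 60, n)
pressure = 2.5 * flow + rng.normal(0, 0.5, n)
noise = rng.normal(0, 1, n)

df = pd.DataFrame({"flow": flow, "temp": temp, "pressure": pressure, "noise": noise})

# |correlation| close to 1 ≈ "red" (relationship), close to 0 ≈ "white" (none)
corr = df.corr().abs()
print(corr.round(2))
```

A real tool would go beyond linear correlation (mutual information, learned models, etc.), but the idea of scanning every variable pair is the same.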
You can also build interactive plots where you can see how specific parameters affect the results; see below for how different parameters impact the z-force on the wheels of a car.
There are many more tools that I could show you, but in this post I want to focus on the problem, not the solution.
Long story short: I wouldn't want to solve any scenario analysis problem without advanced and interactive data analytics, including machine learning models. The NASA use case I spent 3 months on in 2016, I can now solve in an afternoon, and not just me: anybody in the team, even without knowing the physical properties, can do it too, and the same holds for the many large engineering companies already using Monolith for the use cases above. What did Einstein say? Things should be made as simple as possible :)
Let's continue with the classic story of an engineer getting a job to design something brand new, and it turns out the physics is really hard, nobody knows how to do this, and simulation doesn't work. Simulation experts want to charge you a lot, but do not actually have an answer that you can validate. The only solution seems to be testing, but tests are really expensive and the budget will only allow very limited testing, so you need to make it count! Problem 2 is about how you can make it count.
Problem 2: I need to model this but the physics is too hard
Machine learning can really help you here because it allows you to build models based on data. There was a big 'aha' moment early in Monolith's history when we were doing a project with McLaren Automotive, which you can find more information on here. The idea we had back then was really simple: engineers do tests in scenarios that are simple for them to understand and model, like going through a design of experiments or analysing long checklists. AI models do not need that. They can digest random information as it comes in. So when you are testing a car, you do not necessarily need to make a plan of all the tests you need to do. You can just drive around like a crazy person with an AI model listening in, and by the end of the day the AI has understood your car. You might say: well, I have not. So what you can ask the AI model to do is show you how the car would behave in the simplified scenarios that you can understand. The result: you spend about 80% less money on testing.
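A minimal sketch of the idea, assuming scikit-learn and an entirely invented lateral-force response: train a model on randomly scattered "driving" data, then query it in a clean, engineer-friendly scenario (a steady-speed steering sweep).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# "Driving around like a crazy person": random combinations of speed and
# steering angle (the lateral-force formula below is made up for illustration)
speed = rng.uniform(10, 200, 2000)   # km/h
steer = rng.uniform(-30, 30, 2000)   # degrees
lat_force = 0.02 * speed**2 * np.sin(np.radians(steer)) + rng.normal(0, 20, 2000)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(np.column_stack([speed, steer]), lat_force)

# Ask the model about a scenario an engineer understands:
# a steady 100 km/h sweep through steering angles
sweep = np.column_stack([np.full(7, 100.0), np.linspace(-30, 30, 7)])
print(model.predict(sweep).round(0))
```

The model never saw a tidy steering sweep during "testing", yet it can reconstruct one on demand, which is exactly the trick described above.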
If machine learning or AI sounds too much like hype to you, then think of it as really advanced 'system identification', where you use data to identify how a physical system works without modelling its physics using equations. When you learn how to build autopilots (PID controllers) at university (I should know, I did it about 100 times when I taught the course at Imperial College London), you don't ask students to understand an A320 using mass-spring-damper systems (no point); you ask them to measure the system's response and identify a mathematical function that describes it. It's much easier than figuring out the physics and much more realistic because it comes from the real thing. It is also very simple machine learning: you identify a model from data.
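Classic system identification of this kind fits in a few lines. Here is a toy sketch, assuming SciPy, that recovers the natural frequency and damping ratio of a simulated second-order step response (the "measured" data is generated, not real):

```python
import numpy as np
from scipy.optimize import curve_fit

# Unit step response of an underdamped 2nd-order system (mass-spring-damper)
def step_response(t, wn, zeta):
    wd = wn * np.sqrt(1 - zeta**2)   # damped natural frequency
    phi = np.arccos(zeta)
    return 1 - np.exp(-zeta * wn * t) * np.sin(wd * t + phi) / np.sqrt(1 - zeta**2)

# "Measured" data: simulated response (wn = 2 rad/s, zeta = 0.3) plus sensor noise
t = np.linspace(0, 10, 400)
rng = np.random.default_rng(1)
measured = step_response(t, 2.0, 0.3) + rng.normal(0, 0.02, t.size)

# Identify the system parameters directly from the data
(wn, zeta), _ = curve_fit(step_response, t, measured,
                          p0=[1.0, 0.5], bounds=([0.1, 0.01], [10.0, 0.99]))
print(f"wn = {wn:.2f} rad/s, zeta = {zeta:.2f}")
```

No physics derivation needed: the parameters come straight out of the measured response.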
Well, advanced machine learning with random forests, deep neural networks, or sparse Gaussian processes is exactly the same, except that your system doesn't have to be as simple as a 2nd-order polynomial. You can have super complex non-linear time series with hundreds of inputs and outputs, and you can still model it!
It's not exactly straightforward, though. I stumbled across this when I was working at Airbus Space, and it took me my entire master's thesis to build a model that reliably and automatically identified a system. To be fair, I didn't know the most important basics about machine learning then. For example, I did not do train/test splits, so I hopelessly overfitted my models and then did not understand why they worked on my computer but not in the lab. Anyway, all of this is also automated in Monolith, so that you can build models for freakishly hard systems in no time.
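The train/test lesson is easy to demonstrate. In this toy sketch with scikit-learn, an unpruned decision tree scores almost perfectly on the data it was trained on and noticeably worse on held-out data, which is exactly the "works on my computer, not in the lab" symptom:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

# Synthetic "lab" data: a smooth physical trend plus measurement noise
rng = np.random.default_rng(7)
X = rng.uniform(0, 1, (200, 1))
y = np.sin(6 * X[:, 0]) + rng.normal(0, 0.3, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# An unpruned decision tree memorises the training noise...
tree = DecisionTreeRegressor(random_state=0).fit(X_tr, y_tr)
train_r2 = r2_score(y_tr, tree.predict(X_tr))
test_r2 = r2_score(y_te, tree.predict(X_te))

# ...so it looks perfect on data it has seen and worse on data it hasn't
print(f"train R2 = {train_r2:.2f}, test R2 = {test_r2:.2f}")
```

Only the held-out score tells you how the model will behave on new measurements.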
Here are some examples you could use this for:
You are probably seeing a pattern here: systems with a higher degree of randomness (humans, weather) definitely come up a lot. But mechanical systems where the physics is just really hard (turbulence, complex system dynamics, vibrations, ...) can be solved too.
Below is an example from the Space Launch System, where the people at NASA were kind enough to leave me with some surrogate data so I could demonstrate this. Here we have built a large Bayesian neural network that learns how changes in the payload affect the structural vibrations, and it is predicting results for both test and simulation data.
This is one of the things I love about machine learning compared to physical models: your data can be anything. Having a model trained on both FEA and test data can show you where your simulations and your tests disagree.
Long story short: if you are trying to model a complex system and you are finding it hard, you should think about using machine learning. It doesn't have to be a system that you do not understand at all; it can be a system that you understand partially. In aircraft engineering, for example, you can build surrogate models on simulation data for cruise flight, because it is well understood, and ML models for take-off and landing, which are not well understood. Hybrid modelling is definitely the name of the game going forward!
Let's talk about the last, and definitely my personal favourite, problem, because I love automation. One of my friends once said he would never hire an engineer who is not lazy; apparently there is a direct correlation between how lazy you are and how good an engineer you are. While I still do not fully agree with that, I do think it's true that the smartest engineers are good at automation, and right now everybody is trying to achieve hyper-automation of their virtual workflows.
Problem 3: I have a repetitive problem and I want to automate it
Imagine that for every bottle that stands on your dinner table, somebody needs to test whether it will be stable and not fall over when somebody hits the table. As a bottle manufacturer with a portfolio of bottles that all have slightly different manufacturing processes and maybe materials, this quickly creates a long list of repetitive tests to cover all operating conditions, manufacturing conditions, and designs. If you want to figure out which designs or manufacturing processes will not work, you need to do a lot of testing (or simulation). Or you can build an AI model in Monolith that predicts this for new designs and fully automates the process. Why is this better?
Well, for one, people who do not have the data cannot do it. So if you produce the most bottles, you now have a data-driven competitive advantage that is hard to beat, just like Google.
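To make the bottle example concrete, here is a sketch of what such a surrogate could look like, assuming scikit-learn and an invented, purely geometric stability rule standing in for real test data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

# Hypothetical bottle designs: base diameter (cm), height (cm), fill level (%)
n = 1000
base = rng.uniform(4, 10, n)
height = rng.uniform(15, 35, n)
fill = rng.uniform(0, 100, n)

# Invented stability rule standing in for real test results:
# tall, narrow, full bottles tip over more easily
tips_over = (height / base + fill / 50 > 6).astype(int)

X = np.column_stack([base, height, fill])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, tips_over)

# Screen new designs without a physical test
print(clf.predict(np.array([[5.0, 33.0, 90.0]])))   # narrow, tall, nearly full
print(clf.predict(np.array([[9.0, 16.0, 10.0]])))   # wide, short, nearly empty
```

In practice the labels would come from your archive of physical tests or simulations rather than a formula, which is exactly where the data advantage kicks in.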
Another great thing I have seen is that such AI models can beat physical models, to the astonishment of engineers. They can predict complex experiments better than calculations, and at least in Monolith it now takes only a few days to set up a system like this. It used to take us 6 months two years ago, but we have come a long way. You can use this tool to get your products to market faster, to design them better, and eventually to reduce the amount of testing you do by quite a lot. I have seen this reach as much as 95% under optimal conditions (when you can create a great model), but even 10% usually reduces your monthly bill massively.
This is it. I hope you learnt a lot and please leave feedback for me in the comments.
Economist, Author, Business Consultant, Software & EU Research Projects Management, MSc, SPM, Member of Economic Chamber of Greece
3y · Just read this article which I found truly interesting, multi-sided, approaching complicated concepts in a clear, structured way. Philosophically speaking, my sense is that all the spotted chaos of the dynamic analogue world, with its numerous variables interacting with each other, you are trying to order through your company's AI systems and ML tech for optimising engineers' work. Congratulations and keep it up! Greetings from Athens
Modeling & Simulation Manager at OPmobility (hydrogen)
3y · Thanks Richard for sharing your thoughts in this article (and the other ones). I would add a problem 4: I have a reliable simulation which could prevent me from doing physical tests, if only it ran faster. This is a stringent issue particularly during quotations: there is a need to propose a cost estimate for the best possible product without spending a lot of time, and even without making prototypes (too long and expensive). Exploring the possible designs with simulation can be too computationally expensive, so an AI surrogate model trained on a database of past simulation results for similar products is a good solution.