What do the Apollo missions, drones and Onera have in common?
We don’t think much about the computer-controlled systems that routinely help all of us, but they are the heroes behind the scenes. Traction control systems in cars, for example, help save lives every day. People who fly drones rarely think about how the drone even stays up in the air despite winds blowing it in one direction or another. And going back to the ‘self-driving’ vehicles of the 60s - how did computers put humans on the moon? In fact, one of the big differences between the V-2 rockets that Wernher von Braun designed for Hitler, for use against the Allies, and the Saturn V he later built for NASA to carry humans to the moon, was computer guidance. The same basic idea and technology was then used for fly-by-wire systems in fighter planes such as the F-16, where the computers do the actual flying based on inputs from the humans, to the point where the plane cannot even stay in the air if the computer-controlled systems fail [1]. Such systems (mechanical or electronic) popped up all over the place - for example, gun targeting from an airplane during WWII, or torpedoes that had to travel in a straight line to a moving target while being fired from a moving ship.
If you were a developer today tasked with writing code for one such problem - say, keeping a drone hovering in position - one approach you could take is the AI route that has been super popular over the last few years. The framing would be: at any point in time I have a whole bunch of measurements, and at that point I can take any of a whole bunch of actions; depending on the action, I get new measurements, and I want to know which series of actions keeps the drone in place. One way to formulate this is as a Markov Decision Process (MDP) - and then let the drone crash, or at least fly wacky, for a long time while you hope it eventually learns to stay in the air on its own. A more traditional approach would be to build a model that predicts the next position for every action, and then select the action you predict will improve the position the most.
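To make the contrast concrete, here is a minimal sketch of that second, predict-and-select idea. Everything in it - the `predict_next_position` model, the candidate `actions`, the `target` - is an illustrative assumption, not anything the real systems used.

```python
# Hypothetical predict-and-select loop: try every candidate action, ask a learned
# model where the drone would end up, and pick the action that lands closest to
# the target position. All inputs here are illustrative placeholders.

def choose_action(predict_next_position, actions, state, target):
    best_action, best_error = None, float("inf")
    for action in actions:
        predicted = predict_next_position(state, action)               # model's guess of the next position
        error = sum((p - t) ** 2 for p, t in zip(predicted, target))   # squared distance to the target
        if error < best_error:
            best_action, best_error = action, error
    return best_action
```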
As appealing as this may sound, none of the above systems took these approaches, of course. Instead, when designing these algorithms, you come up with the set of features you know you need to control in order to maintain the trajectory you want, determine - analytically or through estimation - how much to alter those controls in real time, and iterate.
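One standard instance of that “measure, correct, iterate” loop is a PID feedback controller. Take the sketch below as exactly that - a hedged illustration for a single axis of a drone’s position; the gains and the sensor and actuator hooks are made-up placeholders, not anything from the systems above.

```python
# A minimal PID feedback loop for a single controlled feature (say, altitude).
# Gains and helper functions are assumptions for illustration only.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        # Combine proportional, integral, and derivative terms of the error.
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Usage sketch: measure, compute a correction, apply it, iterate.
# pid = PID(kp=1.2, ki=0.05, kd=0.3)
# while flying:
#     error = target_altitude - read_altitude()    # measurement of the feature we control
#     apply_thrust(pid.update(error, dt=0.01))     # real-time adjustment, then repeat
```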
If you absolutely want to geek out, you can read about the guidance systems that were used in the Apollo missions [2]. They are not that different from the ‘systems’ that sailors used back in the day to navigate from one part of the earth to another.
At a high-level, all these systems have three components: navigation, guidance and controls.
Navigation tells you where you are at the current point in time. For sailors, this came from looking at the stars and measuring their angle above the horizon with a sextant. In the case of Apollo there was no GPS, of course; in fact, the navigation systems used the stars (among other methods) to figure out where the spacecraft was. In the case of drones, the navigation system includes gyroscope measurements as well as GPS.
Guidance, the second system, determines how to alter course for the next time period. For sailors, once the stars told you your latitude and longitude, and you already knew your destination, you would pull out your map and magnetic compass to determine the direction in which to sail to get there. Apollo, much like sailors before the compass was invented, continued to use the stars as guide points to set direction.
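As a worked example of that guidance step, here is the standard initial great-circle bearing from your current latitude and longitude to a destination. The formula is textbook navigation; the coordinates in the usage comment are simply made up for illustration.

```python
import math

def initial_bearing(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing in degrees from (lat1, lon1) to (lat2, lon2)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360) % 360

# Roughly the heading a sailor would set from Lisbon (38.7, -9.1) to New York (40.7, -74.0):
# print(initial_bearing(38.7, -9.1, 40.7, -74.0))
```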
Controls, the last system, performs the adjustments that actually alter course: asking your crew to trim the sails, moving the control surfaces of the Saturn V or the landing module, or adjusting the speeds of the drone’s rotors.
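Putting the three components together, one tick of such a loop might look like the sketch below. The four callables passed in (sensor reading, state estimation, course planning, actuation) are hypothetical placeholders, not the actual Apollo or drone code.

```python
# One tick of a navigation -> guidance -> controls loop.
# All four callables are hypothetical placeholders supplied by the caller.

def gnc_step(read_sensors, estimate_state, plan_correction, apply_correction, target):
    measurements = read_sensors()                 # raw gyro / GPS / star readings
    state = estimate_state(measurements)          # navigation: where are we right now?
    correction = plan_correction(state, target)   # guidance: how should we alter course?
    apply_correction(correction)                  # controls: sails, control surfaces, rotor speeds
    return state
```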
This guidance-navigation-controls split is a beautiful paradigm for all walks of life, but especially for real-time systems. You can think of the above as streaming machine learning. Traditionally, machine learning is thought of as a process where you have a large amount of training data, you estimate the parameters, and then you put the model into production. But the systems above come from a very different community and take a very different, but practical, approach.
In this guidance-navigation-control paradigm, the greatest advance of Apollo, the F-16 and drones is simply taking humans out of the loop (compared with sailing back in the day). By taking humans out, you have a streaming ‘AI’ process that can perform super-human, super-fast adjustments that were never possible before. Although not viewed as stereotypical AI or machine learning, this was among the first kinds of automation that was not mundane (such as factory automation): it required complex real-time estimation, demanded constant adjustment to changing conditions, and replaced the human mind with one that was super-human.
Now you can ask: how do these systems work outside the world of control systems? How do they relate to pure software systems, and specifically to the problems Onera tackles? We will talk about all of that, and even dive into some of the math behind the Apollo guidance systems and how it can be translated into the math and software inside Onera’s systems.
[1] https://fas.org/man/dod-101/sys/ac/f-16.htm
[2] https://klabs.org/history/history_docs/mit_docs/1697.pdf