AI learned to easily solve the most complex equations that describe the structure of the universe
Partial differential equations are found in a wide variety of physical and mathematical models. They allow one to calculate the states of very complex systems, but solving them has always been a very resource-intensive task. Thanks to a specially created neural network, this process has been significantly accelerated and the power of supercomputers can be redirected to other important scientific tasks.
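For readers who have not touched these equations since university, it may help to see what "solving a PDE" means numerically. The sketch below is a generic textbook scheme (not the method discussed in this article): it integrates the 1D heat equation on a grid with finite differences. Every refinement of the grid multiplies the work, which is why realistic 3D models demand supercomputer-scale resources.

```python
import numpy as np

# Toy example: the 1D heat equation u_t = alpha * u_xx, solved with an
# explicit finite-difference scheme (forward Euler in time, central
# differences in space). Grid size and step count are illustrative.
def solve_heat_1d(n=101, steps=500, alpha=0.01):
    dx = 1.0 / (n - 1)
    dt = 0.4 * dx**2 / alpha           # stable only if dt <= dx**2 / (2 * alpha)
    u = np.zeros(n)
    u[n // 2] = 1.0                    # initial condition: a heat spike in the middle
    for _ in range(steps):
        # update interior points; boundaries are held at zero
        u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

u = solve_heat_1d()
print(u.max())  # the initial spike has spread out and decayed
```

Note the stability constraint in the comment: halving the grid spacing forces a quartering of the time step, so the cost of a naive solver grows very quickly with resolution.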
Most engineering students encounter the equations of mathematical physics, that is, partial differential equations, only once, and often rather painfully. Once their coursework is over, this complex but powerful tool is usually forgotten. Few engineers use these equations regularly, even though they underpin very important scientific and technological applications: modeling airflow in aerodynamics, describing the movement of tectonic plates, calculating planetary positions in astrophysics, and simulating complex atmospheric interactions in meteorology.
Normally, powerful computing systems are used to solve these equations: either dedicated supercomputers or distributed computing networks. For many scientists working at institutions that are not well funded, such calculations have always been an obstacle, since the massive computing resources they require are expensive. Recognizing how important a new tool for these tasks would be, American mathematicians and programmers turned to AI.
A team of scientists from the California Institute of Technology (Caltech) and Purdue University has developed a highly efficient neural-network algorithm for working with partial differential equations. The algorithm achieves a huge speedup in solving these equations, in some cases by several orders of magnitude. For example, on a specific 256x256 computation, their Fourier Neural Operator (FNO) solved the Navier-Stokes partial differential equations in 0.005 seconds, while the most widely used previous method computed the same solution in 2.2 seconds on equivalent hardware. In practical applications such computations are repeated millions of times, so a 440x-faster algorithm saves hours of expensive supercomputer time.
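The reported numbers are easy to sanity-check. The timings below are taken from the article; the million-run count is an illustrative assumption, not a figure from the paper:

```python
# Back-of-the-envelope check of the reported speedup.
fno_time = 0.005          # seconds per solve with FNO (from the article)
classic_time = 2.2        # seconds per solve, conventional solver (from the article)
speedup = classic_time / fno_time
print(speedup)            # 440.0

# Time saved over a million repeated solves (illustrative count):
n_runs = 1_000_000
saved_hours = n_runs * (classic_time - fno_time) / 3600
print(round(saved_hours, 1))
```

A million solves at the old speed would take roughly 600 hours of compute; at FNO speed, well under two.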
These differential equations are found everywhere; more precisely, they can be used to describe almost any dynamic system. An accessible and effective method for solving them can therefore significantly advance diverse areas of science. Moreover, this kind of "artificial intelligence" should reach engineering practice quickly: the scientists published a full description of their work on the arXiv portal, so others in the field can use it almost immediately.
This is not to say that the creators of FNO were the first to solve partial differential equations using neural networks and machine learning. It has been done before, but existing algorithms had to be retrained for each new set of calculations, for example whenever the properties of the modeled fluids changed. The development by the Caltech and Purdue researchers allows the system to be trained only once and then used to calculate a variety of models. The secret of FNO's effectiveness is ingenious and simple at the same time.
The basis of any neural network is function approximation. A neural network does not operate on exact values; it works with approximations that are accurate enough to make a decision or generate a result without resorting to resource-intensive, complicated refinements. In other words, during training a neural network learns simplified formulas whose outputs are accurate enough to be applied in practice.
Neural networks usually operate on data represented in Euclidean space, what is commonly understood as "regular geometry". To simplify the task, the authors of FNO decided not to translate the functions into this common Euclidean representation but instead to "teach" the algorithm to work directly with Fourier transforms, that is, in frequency space. This not only increased the speed of calculations but also reduced the error rate: about 30% fewer errors than in previous algorithms.
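Conceptually, a Fourier layer applies a learned linear transform to the low-frequency modes of its input. The 1D NumPy sketch below uses random placeholder weights standing in for trained parameters; the real implementation works in PyTorch on multi-channel 2D/3D data and adds a pointwise linear path plus a nonlinearity:

```python
import numpy as np

# Minimal sketch of the idea behind an FNO "Fourier layer":
# transform to frequency space, linearly mix only a few low-frequency
# modes, and transform back to physical space.
def fourier_layer(u, weights, n_modes=8):
    u_hat = np.fft.rfft(u)                          # to frequency space
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = weights * u_hat[:n_modes]   # mix low modes only
    return np.fft.irfft(out_hat, n=len(u))          # back to physical space

rng = np.random.default_rng(0)
u = np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False))
w = rng.normal(size=8) + 1j * rng.normal(size=8)    # placeholder "learned" weights
v = fourier_layer(u, w)
print(v.shape)  # (64,)
```

Truncating to a handful of modes is what makes the layer both fast and resolution-independent: the same weights apply whether the input grid has 64 points or 1024.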
Thus, the new algorithm allows researchers and engineers to solve complex partial differential equations significantly faster than with conventional methods and to use the same pre-trained neural network for a wide variety of tasks within an engineering problem, all while producing "clean" results at an acceptable level of accuracy.