STM32Cube.AI v7.2, Now With Support for Deeply Quantized Neural Networks and Why It Matters
#STM32Cube.AI v7.2, released recently, brings support for deeply quantized neural networks, enabling more accurate machine learning applications on existing microcontrollers. Launched in 2019, STM32Cube.AI converts neural networks into optimized code for #STM32 #MCUs. The solution relies on STM32CubeMX, which assists developers in initializing STM32 devices. STM32Cube.AI also uses X-CUBE-AI, a software package containing libraries that convert pre-trained neural networks. Developers can use our Getting Started Guide to run X-CUBE-AI from within STM32CubeMX and try the new feature. The added support for deeply quantized neural networks has already found its way into a people-counting application created with Schneider Electric.
STM32Cube.AI: From Research to Real-World Software
What Is a Neural Network?
In its simplest form, a #neuralnetwork is simply a series of layers. There’s an input layer, an output layer, and one or more hidden layers in between the two. Deep learning refers to a neural network with more than three layers, the word “deep” pointing to the multiple intermediate layers. Each layer contains nodes, and each node is interconnected with one or more nodes in the next layer. In a nutshell, information enters the network through the input layer, travels through the hidden layers, and comes out of one of the output nodes.
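To make the layer-by-layer flow concrete, here is a minimal sketch of a feedforward pass in plain Python. The network shape (2 inputs, 3 hidden nodes, 1 output) and all weight values are illustrative assumptions, not a trained model or anything generated by STM32Cube.AI.

```python
def relu(x):
    # Common hidden-layer activation: pass positives, zero out negatives.
    return x if x > 0.0 else 0.0

def dense(inputs, weights, biases, activation):
    # Each node sums the weighted outputs of the previous layer,
    # adds its bias, then applies the activation function.
    return [activation(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Illustrative weights: 2 input nodes -> 3 hidden nodes -> 1 output node.
hidden_w = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
hidden_b = [0.0, 0.1, -0.1]
out_w = [[1.0, -1.0, 0.5]]
out_b = [0.2]

def predict(inputs):
    # Information flows input layer -> hidden layer -> output layer.
    hidden = dense(inputs, hidden_w, hidden_b, relu)
    return dense(hidden, out_w, out_b, lambda x: x)  # linear output

print(predict([1.0, 2.0]))
```

Tools such as X-CUBE-AI take a pre-trained model of this general structure and generate optimized C code that performs the equivalent computation on an STM32 microcontroller.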