IMAGIMOB AI - Bringing your tinyML models to life on edge devices

Imagimob AI has a lot of interesting and useful functionality, spanning many different parts of building the best edge AI or tinyML models. This newsletter is designed to highlight some practical features of Imagimob AI to help you get the most out of your models and to improve your workflow. It also draws attention to recent material that could be beneficial or interesting to you.


Enjoy the read!



Tip 1 - Metadata


For complex signals, such as multi-dimensional inputs from a radar or a touchpad, it’s important to have a visual reference that puts the data into context and makes it easy to annotate and label.




Using Imagimob AI you can add video files alongside your data files to help you understand your data better. Simply place the video in the same folder as your data file, and it is detected automatically when you use the batch import feature (see the example folder layout below). You can then easily synchronize the video and the data in the tool if they are not already in sync.

Our colleague Angelo Di Marco, Ph.D., Machine Learning Engineer at Imagimob, is helping to make the ML model more accurate by recording hand movements on video alongside the data, which helps to understand the data better.


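For example, a recording folder prepared for batch import could look like the sketch below. The file names are hypothetical and only illustrate the idea: the video sits next to the data file from the same session.

    gesture-session-01/
        data.csv      (the recorded sensor data)
        video.mp4     (the video captured during the same recording session)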

Tip 2 - C Conversion


Imagimob AI supports conversion to C code (ANSI C99) for TensorFlow-based models (see here for the currently supported layers, architectures, and formats).

This means you don’t have to build your model inside Imagimob AI to use the edge conversion feature. You can build models using your preferred tool, then easily convert them to plain C code using Imagimob Studio, and integrate that code into any platform capable of running C code, as sketched below.

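To make the integration concrete, here is a minimal sketch of what calling the generated code from your own application could look like. The header name "model.h" and the init/enqueue/dequeue-style function and macro names are illustrative assumptions, not the exact generated API; refer to the header Imagimob Studio produces for your own model.

/*
 * Minimal integration sketch for a model converted to C with Imagimob Studio.
 * "model.h" and the names model_init, model_enqueue, model_dequeue,
 * MODEL_DATA_IN_COUNT and MODEL_DATA_OUT_COUNT are illustrative placeholders;
 * check the generated header for the actual API of your model.
 */
#include <stdio.h>
#include "model.h"

int main(void)
{
    float input[MODEL_DATA_IN_COUNT]   = { 0 };  /* one frame of sensor data   */
    float output[MODEL_DATA_OUT_COUNT] = { 0 };  /* one score per output class */

    model_init();  /* set up the model's internal state once at start-up */

    /* In a real application this loop would be driven by a sensor interrupt
       or an RTOS task; here the input is simply left zeroed for brevity. */
    for (int frame = 0; frame < 100; ++frame) {
        model_enqueue(input);              /* feed one time step of data */
        if (model_dequeue(output) == 0) {  /* a new prediction is ready  */
            printf("class 0 score: %.3f\n", output[0]);
        }
    }
    return 0;
}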


Recent Highlights


Johan Malm, Ph.D., Product Owner at Imagimob, has recently written a white paper about quantization. The white paper is great for those interested in delving into the maths of quantization, the theory behind it, and its implementation in Imagimob AI. You can read more about it here:
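As a flavour of the kind of arithmetic the paper deals with, the sketch below shows generic 8-bit affine quantization and dequantization. It is only a plain illustration, not Imagimob AI’s specific scheme, and the scale and zero-point values are made up.

/* Generic 8-bit affine quantization: q = round(x / s) + z, and x' = s * (q - z).
   A plain illustration of the arithmetic, not Imagimob AI's specific scheme. */
#include <math.h>
#include <stdint.h>
#include <stdio.h>

static int8_t quantize(float x, float s, int32_t z)
{
    long q = lroundf(x / s) + z;   /* map to the integer grid */
    if (q > 127)  q = 127;         /* clamp to the int8 range */
    if (q < -128) q = -128;
    return (int8_t)q;
}

static float dequantize(int8_t q, float s, int32_t z)
{
    return s * (float)(q - z);     /* recover an approximation of x */
}

int main(void)
{
    float   s = 0.05f;             /* scale, chosen from the tensor's value range (made up here) */
    int32_t z = 0;                 /* zero point (0 for a symmetric scheme) */
    float   x = 1.337f;

    int8_t q = quantize(x, s, z);
    printf("x = %f  ->  q = %d  ->  x' = %f\n", x, q, dequantize(q, s, z));
    return 0;
}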

Our friends at IAR Systems have put together a video showing how you can deploy Imagimob AI models on the ST SensorTile.box using their IAR Embedded Workbench. This can be used as inspiration to learn how to deploy your own models in the same way. You can watch the video here.
