The next big thing is Tiny (ML)!
Marco van Hurne
I build Agentic AI companies | Data Science Strategist @ Beyond the Cloud | Data Strategy Certified | AI Compliance Officer Certified
During our daily online scroll, most ML models we come across have decided that you want to see 50% memes and 50% chihuahuas. To do just that, they rely on huge clusters of computers packed with CPUs, GPUs, and even LPUs (Groq) to deliver these outstanding state-of-the-art AI recommendations to you.
Before we start!
If you like this topic and you want to support me:
Even more computational hardware is used for training. GPT-4 alone reportedly cost tens of millions of dollars in electricity bills to train, never mind the carbon footprint. But running inference, that is, making predictions with these models, is computationally expensive too. A single request costs around 0.09 dollar cents. Do the math.
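To make "do the math" a little more concrete, here is a back-of-the-envelope sketch in Python. The per-request price is the figure quoted above; the daily request volume is purely an assumption for illustration.

```python
# Back-of-the-envelope inference cost, using the ~0.09 dollar cent
# per request quoted above; the daily request volume is an assumption
# for illustration only.
cost_per_request_usd = 0.09 / 100        # 0.09 dollar cents = $0.0009
requests_per_day = 10_000_000            # assumed traffic volume

daily_cost = cost_per_request_usd * requests_per_day
print(f"Daily inference bill: ${daily_cost:,.0f}")   # -> $9,000 per day
print(f"Yearly: ${daily_cost * 365:,.0f}")           # -> roughly $3.3M per year
```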
This makes these types of services pretty costly. Moreover, the computation happens mostly in data centers far away from your phone or computer. So, in addition to costing a lot of energy, it also takes a while for your request to reach the machine learning model and for the answer to come back to you. That delay is called latency.
Enter TinyML
Now enter TinyML: Tiny (= cute) - Machine Learning. TinyML is a subset of machine learning technologies and techniques specifically designed to run on small, low-power devices such as microcontrollers and embedded processors.
These devices are often part of the Internet of Things (IoT) and provide the smart functionality in everyday objects like wearables, household appliances, and industrial sensors. The core objective of TinyML is to bring the capabilities of AI and ML to hardware with severe constraints in computing power, memory, and energy.
The general idea is that instead of getting your recommendations by sending a request halfway around the world until it finally reaches the model, you do the computation on your own device, say a phone, or in some cases on a Raspberry Pi or a small microcontroller.
What is TinyML
TinyML is a domain of machine learning that explores the kinds of models you can run on tiny, low-powered devices like microcontrollers. It is all about low-latency, low-power, and low-bandwidth model inference, the opposite of the devices you are used to. All of you have trained models and run predictions by now and know how long that can sometimes take; now imagine doing it with far less hardware. To illustrate why this is difficult, let's compare the power consumption of some devices.
Standard CPUs consume anywhere from 60 to 120 W, though of course this heavily depends on the chip. If you have ever run machine learning predictions on a single core, you know that is rarely enough; I personally often use 50 cores.
GPUs, just for a rough comparison, draw roughly twice as much power, though of course they compute most matrix operations much faster. And again, when I train or fine-tune the bigger models, I personally need four of them for 24 hours or so.
Microcontrollers and mobile phones are typically the center of TinyML. Microcontrollers consume power on the order of milliwatts or even microwatts. That is more than a thousand times less, which enables them to run on batteries.
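To get a feel for what "a thousand times less" buys you, here is a rough battery-life sketch. The power figures echo the comparison above, and the coin-cell capacity is an approximate assumption.

```python
# Rough battery-life comparison. The power draws echo the figures above;
# the CR2032 coin cell capacity (~0.6 Wh) is an approximate assumption.
battery_wh = 0.6  # typical CR2032 coin cell, roughly 200+ mAh at 3 V

power_draw_w = {
    "desktop CPU": 90.0,        # mid-range of the 60-120 W quoted above
    "GPU": 180.0,               # "roughly twice" the CPU's draw
    "microcontroller": 0.001,   # order of milliwatts
}

for device, watts in power_draw_w.items():
    hours = battery_wh / watts
    print(f"{device:>16}: {hours:10.2f} hours on one coin cell")
# The CPU and GPU drain the cell in well under a minute; the
# microcontroller runs for hundreds of hours.
```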
Subscribe to the TechTonic Shifts newsletter
Think environmental sensors, Industry 4.0, and all those use cases that simply can't send their data to the cloud all the time to do the machine learning for them. Maybe there is no reliable internet connection or electricity, or, if you think about your smartphone, it's annoying when machine learning drains all your battery. The alternative is to send each request to the cloud, which makes you wait every time you use it, and the user experience becomes a nightmare.
Examples of TinyML have been around for years, and it's not just your average strange dude living off the grid on a mountain with a fetish for machine learning.
Catchy phrases such as “OK Google,” “Hey Siri,” and “Alexa” all summon the power of TinyML. The point here is that you want your device to respond immediately, not wait for a request to travel to the cloud and back.
Advantages of TinyML
In summary, TinyML excels when operating in environments where bandwidth and connectivity are limited, power comes from a small battery, responses need to be immediate, and data is better kept on the device.
Now I hope you understand when the term TinyML is used and in which environments these solutions thrive.
Significance of TinyML
The significance of TinyML lies in its ability to perform data processing and inference locally, at the edge of the network, rather than relying on cloud-based services. This local processing capability reduces latency, saves bandwidth, enhances privacy, and allows devices to operate in environments with intermittent or no internet connectivity. Moreover, by minimizing energy consumption, TinyML enables continuous, real-time AI applications on devices powered by small batteries or even energy harvesting technologies, extending operational lifetimes and opening up new possibilities for smart applications.
Applications of TinyML
TinyML has a wide array of applications across various sectors. In healthcare, TinyML-powered wearable devices can monitor vital signs and detect anomalies in real-time, offering personalized health insights and early warnings of potential issues. In agriculture, sensors equipped with TinyML can monitor soil moisture and nutrient levels, optimizing irrigation and fertilization to improve crop yields. In industrial settings, TinyML enables predictive maintenance by analyzing vibrations, temperatures, and other signals to foresee and prevent equipment failures. Moreover, in consumer electronics, it powers features like voice recognition in smartwatches, gesture control in household appliances, and energy-saving modes in smart thermostats.
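To give a flavour of what "analyzing vibrations locally" can look like, here is a minimal sketch of an on-device anomaly check. The sensor readings, the baseline, and the threshold are invented purely for illustration.

```python
import math

# Minimal sketch of on-device predictive maintenance: compute the RMS of a
# short window of accelerometer samples and flag it when it drifts above a
# baseline. All values below are invented for illustration.
def rms(window):
    return math.sqrt(sum(x * x for x in window) / len(window))

BASELINE_RMS = 0.12      # learned from the machine's healthy state (assumed)
ALERT_FACTOR = 2.5       # how far above baseline counts as anomalous (assumed)

window = [0.31, -0.28, 0.35, -0.33, 0.30, -0.29]  # fake accelerometer g-values

if rms(window) > ALERT_FACTOR * BASELINE_RMS:
    print("Vibration anomaly: schedule maintenance")  # runs entirely on-device
```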
Do try it yourself!
If you are interested in learning more about this exciting domain, here is where to start:
Software: The best starting point is TensorFlow Lite, which has been created with everything we discussed so far in mind. Its documentation is excellent, and I would argue it is the most state-of-the-art framework to get started with (a minimal code sketch follows after this list).
Hardware: there are many devices that could potentially benefit from TinyML, but if you are looking for a machine learning project, I would recommend anything from the Arduino family or a Raspberry Pi. You can buy them for as little as 20 dollars.
Literature: the creatively titled book TinyML by Pete Warden, a technical lead for mobile and embedded TensorFlow, and Daniel Situnayake, who leads developer advocacy for TensorFlow Lite at Google. So basically, if they don't know it, it's probably an open research question. Pete Warden also has an interesting blog at petewarden.com about everything related to deep learning.
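To give you a taste of the TensorFlow Lite workflow mentioned above, here is a minimal sketch: convert a small Keras model to a TensorFlow Lite flat buffer, then run inference with the lightweight interpreter, the same pattern you would ship to a phone or a Raspberry Pi. The tiny model itself is just a placeholder.

```python
import numpy as np
import tensorflow as tf

# Placeholder Keras model; in practice this is whatever tiny model you trained.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Train-side: convert to a compact TensorFlow Lite flat buffer.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables weight quantization
tflite_model = converter.convert()

# Device-side: the interpreter only needs the flat buffer above.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

interpreter.set_tensor(inp["index"], np.zeros((1, 4), dtype=np.float32))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))
```

On a microcontroller you would use TensorFlow Lite for Microcontrollers and its C++ API instead, but the convert-then-interpret pattern is the same.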
Machine learning often develops in the most unexpected areas. While the big trend is still "bigger is better," a growing number of advocates are showing us that we can also do it with less. I firmly believe this development will not just make machine learning more accessible, but also teach us to use our bigger models more efficiently.
Well, that's a wrap for today. Tomorrow, I'll have a fresh episode of TechTonic Shifts for you. If you enjoy my writing and want to support my work, feel free to buy me a coffee.
Think a friend would enjoy this too? Share the newsletter and let them join the conversation. LinkedIn appreciates your likes by making my articles available to more readers.
Signing off - Marco