More Isn't Always Better in ML Training
The law of diminishing returns, first articulated by economists in the 18th century, states that in any production process, adding more of one factor of production while holding all others constant will, at some point, yield smaller incremental returns. The principle reaches well beyond economics, showing up in agriculture, finance, and machine learning.
Time for an experiment...
Here’s an example using standard TensorFlow and TensorFlow Datasets that highlights an important concept in machine learning.
Experiment Overview
I ran two sets of training epochs to illustrate a key principle in machine learning:
First Set
5 epochs using 128 neurons.
Second Set
5 epochs using 256 neurons.
The result?
The second set improved accuracy by just 0.0007. Adding even more neurons would yield still smaller gains while increasing the time and cost of training.
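A minimal sketch of how such a comparison might be set up is below. The dataset (MNIST via TensorFlow Datasets), the single hidden Dense layer, and the batch size are assumptions for illustration, not necessarily the exact configuration used in the runs above.

```python
import tensorflow as tf
import tensorflow_datasets as tfds

def load_data():
    # MNIST from TensorFlow Datasets, normalized to [0, 1] (assumed dataset).
    (ds_train, ds_test), _ = tfds.load(
        "mnist", split=["train", "test"], as_supervised=True, with_info=True
    )
    normalize = lambda image, label: (tf.cast(image, tf.float32) / 255.0, label)
    ds_train = ds_train.map(normalize).batch(128).prefetch(tf.data.AUTOTUNE)
    ds_test = ds_test.map(normalize).batch(128).prefetch(tf.data.AUTOTUNE)
    return ds_train, ds_test

def train(units):
    # Same architecture each run; only the hidden-layer width changes.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
        tf.keras.layers.Dense(units, activation="relu"),  # 128 vs. 256 neurons
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    ds_train, ds_test = load_data()
    model.fit(ds_train, epochs=5, verbose=2)
    _, accuracy = model.evaluate(ds_test, verbose=0)
    return accuracy

acc_128 = train(128)
acc_256 = train(256)
print(f"128 neurons: {acc_128:.4f} | 256 neurons: {acc_256:.4f} "
      f"| difference: {acc_256 - acc_128:+.4f}")
```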
Why does this happen? It's the law of diminishing returns at work. Developers often assume that more capacity means better or faster results, but in model training that isn't always the case: past a certain point, each additional neuron buys a smaller and smaller improvement.
Model Training Tips
Why Are Callbacks Useful?
Callbacks are powerful hooks that Keras invokes at specific points during training, such as at the end of each epoch. They let you monitor training, adjust behavior, and save models automatically. For instance, you can stop training early once the model reaches a target accuracy, saving time and resources.
Here is an example Python callback class, built on TensorFlow, that modifies a Keras model's training loop to stop once the model reaches 94% accuracy.
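One possible implementation is sketched below; it assumes the model was compiled with metrics=["accuracy"], so that key appears in the logs dictionary.

```python
import tensorflow as tf

class StopAt94Accuracy(tf.keras.callbacks.Callback):
    """Stops training once the training accuracy reaches 94%."""

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        accuracy = logs.get("accuracy")
        if accuracy is not None and accuracy >= 0.94:
            print(f"\nReached 94% accuracy at epoch {epoch + 1}; stopping training.")
            self.model.stop_training = True

# Usage: pass the callback to model.fit(), e.g.
# model.fit(ds_train, epochs=50, callbacks=[StopAt94Accuracy()])
```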
#MachineLearning #TensorFlow #DataScience #AI #DeepLearning #TechTips #ModelTraining #NeuralNetworks #LawOfDiminishingReturns #keras