Ensemble models: What we can learn about keeping a friend group and eventually becoming better humans
Blessing Oludele
Data scientist | Communicator experienced in Streamlining Tasks and Boosting Efficiency Through Data and AI | Proficient in Python, SQL, and Data Visualization Tools | First-Class Degree in Electrical Engineering
While ensemble learning is considered a heavy technical concept, it carries a significant psychological undertone we can all learn from without drowning in the technicalities.
Quick housekeeping:
Machine learning is the process wherein a computer, when shown several examples in the training phase, comes up with a model (a mathematical equation) on its own to solve problems it hasn't been explicitly programmed to solve.
Ensemble learning, however, is the process wherein several mathematical models are created and then combined to solve those problems.
The models used in ensemble learning do not have to be individually great; they are usually referred to as weak or base learners to signal how little is expected of each one on its own. The magic happens when these weak learners are combined to create an ensemble.
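To make the idea concrete, here is a minimal sketch; scikit-learn and a synthetic dataset are my own assumptions rather than anything the article prescribes. Twenty-five depth-one decision trees, each genuinely weak on its own, are combined by a simple majority vote.

```python
# A minimal sketch, assuming scikit-learn and a synthetic dataset (my choices,
# not the article's): 25 depth-1 decision trees ("stumps"), each trained on a
# different bootstrap sample, are combined by a simple majority vote.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
weak_learners = []
for _ in range(25):
    # Bootstrap: resample the training rows with replacement.
    idx = rng.integers(0, len(X_train), size=len(X_train))
    stump = DecisionTreeClassifier(max_depth=1).fit(X_train[idx], y_train[idx])
    weak_learners.append(stump)

# Each stump alone is a weak learner...
individual_scores = [stump.score(X_test, y_test) for stump in weak_learners]
# ...but a majority vote across all of them is typically stronger.
votes = np.mean([stump.predict(X_test) for stump in weak_learners], axis=0)
ensemble_accuracy = np.mean((votes >= 0.5) == y_test)
print(f"typical weak learner: {np.mean(individual_scores):.2f}, ensemble: {ensemble_accuracy:.2f}")
```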
Several ensemble models like AdaBoost, XGBoost, and Random Forest gained popularity in machine learning competitions because of their surprisingly strong performance.
The rationale behind the ensemble approach is that, with many components making up a whole, the errors of one will usually be cancelled out by another.
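Here is a toy illustration of that cancellation, using nothing but NumPy and made-up numbers (my own example, not the article's): ten simulated "models" each guess a true value with independent noise, and the average of their guesses lands far closer than any single one.

```python
# Toy illustration of the "errors cancel" idea (pure NumPy, made-up numbers):
# ten simulated "models" each guess a true value of 3.0 with independent noise;
# the average of their guesses lands far closer than a typical single guess.
import numpy as np

rng = np.random.default_rng(42)
true_value = 3.0
predictions = true_value + rng.normal(0.0, 1.0, size=(10, 10_000))  # 10 models x 10k cases

single_model_error = np.abs(predictions - true_value).mean()            # typical individual error
ensemble_error = np.abs(predictions.mean(axis=0) - true_value).mean()   # error after averaging
print(f"single model: {single_model_error:.3f}, averaged ensemble: {ensemble_error:.3f}")

# The cancellation only works because the errors are independent, which is
# exactly why the diversity of the individual learners (point 1 below) matters.
```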
Two things are particularly important to achieve this, and they are responsible for the difference between the many variations of ensemble models we have today:
1. How to create diverse individual learners: If there is no diversity in the creation of individual models, the errors of one cannot cancel out another. Truthfully, you don't need several learners who are all saying the same thing.
2. How the individual models are combined: There are many techniques for combining these models, e.g., boosting, bagging, weighted averaging, and stacking. A good combination strategy can lead to very good results in little time.
Main techniques for combining individual models (base learners):
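As a rough sketch of how those families look in practice, here they are as scikit-learn estimators (scikit-learn is my assumption; the article doesn't tie the techniques to any particular library):

```python
# A sketch of the three most common combination families, assuming scikit-learn
# estimators (the article itself doesn't name a library).
from sklearn.ensemble import (
    BaggingClassifier,    # bagging: many models on bootstrap samples, then vote/average
    AdaBoostClassifier,   # boosting: models added one after another, each fixing prior errors
    StackingClassifier,   # stacking: a meta-model learns how to weigh the base models
)
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50)
boosting = AdaBoostClassifier(n_estimators=50)
stacking = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier(max_depth=3)),
                ("logreg", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(),
)
# Each one is then used like any other model: .fit(X_train, y_train), .predict(X_test).
```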
So many technical concepts, but what can this actually teach us?!
For maintaining a friend group: Just as we pay attention to the diversity of individual learners, either by creating bootstrap samples of the data to train on or by using foundationally different algorithms (linear models, decision trees, etc.), we have to pay attention to crafting a diverse friend group. The variation of perspectives is necessary! Our friends don't have to be totally different, like regression models and classification models, but they do need to bring a different perspective to the problem being considered.
On becoming better humans: This comes from the boosting technique! Just as boosting starts with a weak model but continuously learns from its errors, adding an incremental model at each step until it becomes a good model in no time, this "learnability" is a key skill we need to become better humans.
The growth mindset to just start, then review and improve on our errors by leveraging feedback, just as the boosting algorithm does (sketched below), is what we need to be better humans.
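For the curious, here is a stripped-down view of that boosting loop, assuming scikit-learn regression trees and synthetic data (my own sketch, not the article's code): start with a crude guess, then let each new small tree study the remaining errors and nudge the prediction a little in the right direction.

```python
# A stripped-down view of the boosting loop (assuming scikit-learn regression
# trees and synthetic data): start with a crude guess, then let each new small
# tree fit the current errors (residuals) and add a small correction.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=500, n_features=5, noise=5.0, random_state=0)

prediction = np.full_like(y, y.mean())  # weak starting point: predict the average everywhere
learning_rate = 0.1

for step in range(100):
    residuals = y - prediction                            # where are we still wrong?
    corrector = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    prediction += learning_rate * corrector.predict(X)    # small, incremental improvement
    if step % 25 == 0:
        print(f"step {step:3d}: mean absolute error = {np.abs(residuals).mean():.2f}")
```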
I'm signing off here to continue to practice how to introduce diversity to my base learners and how to choose superior techniques of combination, while I leave you to figure out how to bring that diversity into your friend group and develop a growth mindset.