There are many ways to create and combine ensembles, but they can be broadly classified into two categories: bagging and boosting. Bagging, or bootstrap aggregating, creates multiple models from different bootstrap samples of the training data and then averages or votes their predictions. This reduces the variance and overfitting of the individual models and improves the stability and robustness of the ensemble. A common example of bagging is the random forest, which grows a collection of decision trees from random samples of the data and random subsets of the features.

Boosting, on the other hand, creates multiple models sequentially, where each model tries to correct the errors of the previous ones, and the ensemble combines their predictions with weights that reflect their performance. This reduces the bias and underfitting of the individual models and improves the accuracy of the ensemble. A common example of boosting is gradient boosting, which builds a series of weak learners, such as decision stumps, and fits each new learner to the negative gradient of the loss function, i.e., the residual errors left by the ensemble built so far. The sketch below contrasts the two approaches.
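
As a minimal sketch of the contrast, the snippet below trains a bagging ensemble (random forest) and a boosting ensemble (gradient boosting with depth-1 trees, i.e., decision stumps) on a synthetic dataset using scikit-learn. The dataset and hyperparameters are illustrative assumptions, not taken from the text.

```python
# Illustrative comparison of bagging vs. boosting with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic binary classification data (assumed for demonstration only).
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Bagging: each tree is grown on a bootstrap sample and random feature
# subsets, and their votes are averaged, which mainly reduces variance.
bagging = RandomForestClassifier(n_estimators=100, random_state=42)
bagging.fit(X_train, y_train)

# Boosting: each shallow tree is fit sequentially to the residual errors
# (negative gradient of the loss) of the trees before it, which mainly
# reduces bias. max_depth=1 makes each weak learner a decision stump.
boosting = GradientBoostingClassifier(
    n_estimators=100, max_depth=1, learning_rate=0.1, random_state=42
)
boosting.fit(X_train, y_train)

print("Random forest accuracy:    ", accuracy_score(y_test, bagging.predict(X_test)))
print("Gradient boosting accuracy:", accuracy_score(y_test, boosting.predict(X_test)))
```

Which approach works better depends on the data: bagging helps most when the base models are unstable and prone to overfitting, while boosting helps most when the base models are too simple on their own.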