Bias Variance Tradeoff

The bias-variance tradeoff is one of the must-know concepts for every data scientist. In this article we will talk about the bias-variance tradeoff and its importance in machine learning.

A model's prediction error can be decomposed into two main subcomponents: error due to "bias" and error due to "variance" (plus an irreducible noise term).

Let's say we are given X and are trying to predict Y, where the relation between them is Y = f(X) + ε.

We build a model f'(X) as an estimate of f(X). The expected error at a point x can be defined as follows:

Err(x) = E[(Y - f'(x))²]

This error may then be decomposed into bias and variance components.

Err(x) = (E[f'(x)] - f(x))² + E[(f'(x) - E[f'(x)])²] + σ²

Err(x) = Bias² + Variance + Irreducible error

Let's understand bias and variance first.

What is Bias?

Bias is the error arising from the difference between the average prediction of our model and the actual value that we are trying to predict.

What is Variance?

Variance is how much the predictions for a given point vary between different realizations of the model. It can be defined as the model's sensitivity to fluctuations in the data: it captures how much your model changes if you train it on a different training set, i.e. how "over-specialized" the model is to a particular training set (overfitting).

[Figure: graphical illustration of bias and variance. Source: fortmann-roe.com/docs/BiasVariance.html]
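To make these definitions concrete, here is a minimal Python sketch of how bias² and variance could be estimated empirically at a single point. The "true" sine function, the noise level, and the choice of a small decision tree are illustrative assumptions, not part of any standard recipe.

```python
# Estimate bias^2 and variance at one point x0 by training the same model
# on many different simulated training sets drawn from Y = f(X) + eps.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

def f(x):
    return np.sin(2 * np.pi * x)            # assumed "true" function

def make_training_set(n=50, noise=0.3):
    x = rng.uniform(0, 1, n)
    y = f(x) + rng.normal(0, noise, n)      # Y = f(X) + eps
    return x.reshape(-1, 1), y

x0 = np.array([[0.5]])                      # point where we measure the error
preds = []
for _ in range(200):                        # 200 different training sets
    X_train, y_train = make_training_set()
    model = DecisionTreeRegressor(max_depth=3).fit(X_train, y_train)
    preds.append(model.predict(x0)[0])

preds = np.array(preds)
bias_sq = (preds.mean() - f(0.5)) ** 2      # (E[f'(x0)] - f(x0))^2
variance = preds.var()                      # E[(f'(x0) - E[f'(x0)])^2]
print(f"Bias^2 ~ {bias_sq:.4f}, Variance ~ {variance:.4f}")
```

Adding the noise variance (0.3² in this toy setup) to these two numbers gives an estimate of Err(x0) from the decomposition above.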


Okay, now we have looked at bias and variance. But why is there a tradeoff between the two, and why does it matter when choosing an algorithm for a given dataset? Let's try to understand this.

If we train 10 models using the same algorithm on different training sets, we might get different results. Consider the following two scenarios:

In scenario 1, all models give similar outputs, but those outputs are far away from the actual expected output. In scenario 2, the trained models give very different outputs; this shows the model is sensitive to the training dataset and is not capable of generalizing well to unseen data.

Scenario 1 corresponds to high bias, low variance algorithms, and scenario 2 corresponds to high variance, low bias algorithms. High bias, low variance algorithms are consistent but inaccurate on average. High variance, low bias algorithms are accurate on average but inconsistent.
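As a rough illustration of these two scenarios, the sketch below trains two models on 10 different training sets and prints their predictions at one point. The choice of a mean-only DummyRegressor as the "simple" model and an unpruned decision tree as the "flexible" one is an assumption made purely for contrast.

```python
# Compare a very simple model (high bias, low variance) with a very
# flexible one (low bias, high variance) across 10 training sets.
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)

def f(x):
    return np.sin(2 * np.pi * x)

def draw_set(n=50, noise=0.3):
    x = rng.uniform(0, 1, n)
    return x.reshape(-1, 1), f(x) + rng.normal(0, noise, n)

x0 = np.array([[0.25]])                      # true value f(0.25) = 1.0
simple_preds, flexible_preds = [], []
for _ in range(10):                          # 10 different training sets
    X, y = draw_set()
    simple_preds.append(DummyRegressor().fit(X, y).predict(x0)[0])           # always predicts the training mean
    flexible_preds.append(DecisionTreeRegressor().fit(X, y).predict(x0)[0])  # fully grown tree

# Scenario 1: tightly clustered but far from 1.0 (consistent, inaccurate).
# Scenario 2: centred near 1.0 but scattered (accurate on average, inconsistent).
print("simple  :", np.round(simple_preds, 2))
print("flexible:", np.round(flexible_preds, 2))
```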

Low variance algorithms (e.g. regression, naive Bayes) are less complex, with a simple underlying structure. Low bias algorithms (e.g. decision trees) are more complex, with a flexible structure. So, broadly, increasing bias decreases variance and vice versa. What we actually want is a model with both low variance and low bias: looking at the error equation above, to reduce the error we have to reduce both bias and variance. Since the two conflict with each other, we make a tradeoff and look for the sweet spot where the overall error is minimized. This is why there is a tradeoff between bias and variance: an algorithm cannot simultaneously be more complex and less complex.

The bias-variance tradeoff is closely related to the concepts of underfitting and overfitting. When a model cannot do well even on the training data, it is underfitting; in this case both the training and test error will be high. When a model does well on the training data but not on the test data, it is overfitting; in this case the training error will be low but the test error will be high. The best model is one that is neither underfitting nor overfitting.

[Figure: variation of bias and variance with model complexity. Source: https://scott.fortmann-roe.com/docs/BiasVariance.html]
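One way to see this curve on real numbers is to sweep a single complexity knob and watch training and test error. The sketch below uses polynomial degree as an assumed stand-in for model complexity, on an arbitrary simulated dataset.

```python
# Sweep model complexity (polynomial degree) and compare training vs. test
# error to locate the sweet spot between underfitting and overfitting.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)
f = lambda x: np.sin(2 * np.pi * x)

# One fixed training set and one held-out test set.
X_train = rng.uniform(0, 1, 40).reshape(-1, 1)
y_train = f(X_train).ravel() + rng.normal(0, 0.3, 40)
X_test = rng.uniform(0, 1, 200).reshape(-1, 1)
y_test = f(X_test).ravel() + rng.normal(0, 0.3, 200)

for degree in [1, 3, 5, 9, 15]:
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(X_train))
    test_err = mean_squared_error(y_test, model.predict(X_test))
    print(f"degree={degree:2d}  train MSE={train_err:.3f}  test MSE={test_err:.3f}")
```

Low degrees should show high error on both sets (underfitting, high bias), while very high degrees should show training error still falling as test error rises again (overfitting, high variance).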

Now that we know about the bias-variance tradeoff, let's see how we can deal with it.

Referring to the graph above, if we are on the right side of the graph (where test error is high but training error is low), the model is suffering from high variance. We can address high variance by adding more training data, reducing model complexity (i.e. building simpler models), using bagging, etc. If we are on the left side of the graph, the model is not doing well even on the training data and is suffering from high bias. We can address high bias by adding more features, building more complex models, using boosting, etc.
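As a sketch of two of these remedies in scikit-learn (with an arbitrary simulated dataset and untuned hyperparameters, so treat the numbers as indicative only): bagging a flexible, high-variance base model to stabilize it, and boosting shallow, high-bias trees to make them more expressive.

```python
# Bagging to reduce variance, boosting to reduce bias, on a toy dataset.
import numpy as np
from sklearn.ensemble import BaggingRegressor, GradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (300, 1))
y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.3, 300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "deep tree (high variance)": DecisionTreeRegressor(),
    "bagged deep trees": BaggingRegressor(DecisionTreeRegressor(), n_estimators=100),
    "boosted shallow trees": GradientBoostingRegressor(max_depth=2, n_estimators=200),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    mse = mean_squared_error(y_te, model.predict(X_te))
    print(f"{name:28s} test MSE = {mse:.3f}")
```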

Happy learning!
