Machine Learning: key forecast error definitions:


Mean Percentage Error (MPE): like MAPE, it is the average of the percentage errors: for each period, take the actual and forecasted values and calculate the percentage by which the ex-post forecast is off. This error measure uses signed percentages as opposed to absolute ones, so positive and negative percentages can offset each other. Therefore, it can also serve as a measure of bias in a forecast, i.e., whether the forecast typically overestimates or underestimates values overall.
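A minimal sketch of the MPE computation (the function name and sample data below are illustrative, not from the article):

```python
def mpe(actual, forecast):
    """Mean Percentage Error: average of signed percentage errors."""
    # Signed error per period: positive = forecast too low,
    # negative = forecast too high (sign conventions vary by source).
    errors = [(a - f) * 100 / a for a, f in zip(actual, forecast)]
    return sum(errors) / len(errors)

actual   = [100, 200, 100, 200]
forecast = [110, 190,  90, 210]
# Individual errors are -10%, +5%, +10%, -5%; they offset exactly,
# so the MPE is 0 even though every single forecast was wrong:
print(mpe(actual, forecast))  # 0.0
```

The zero result illustrates why MPE measures bias rather than accuracy: a forecast can have an MPE of zero while missing every period.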

Mean Absolute Percentage Error (MAPE): one of the most widely used measures of forecast accuracy. It measures the (absolute) size of each error in percentage terms, then averages all percentages. MAPE and MPE are typically not ideal for low-volume data, as being off by a few units can skew the final percentage results significantly, and also because the formula divides by the actual quantity, so an actual demand of zero makes MAPE undefined (division by zero). MAPE's output can be interpreted directly: if the MAPE is 4%, you can say that you were off by 4% on average.
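The same sketch with absolute values gives MAPE (again, function name and data are illustrative):

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error.

    Raises ZeroDivisionError if any actual value is zero, which is the
    low-volume limitation described above."""
    errors = [abs(a - f) * 100 / a for a, f in zip(actual, forecast)]
    return sum(errors) / len(errors)

actual   = [100, 200, 100, 200]
forecast = [110, 190,  90, 210]
# Same data as the MPE example: MPE was 0, but MAPE shows the real miss.
print(mape(actual, forecast))  # 7.5 -- "off by 7.5% on average"
```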

Mean Square Error (MSE): measures the average squared difference between the forecasted and actual values. This tells you how close you are to getting the most accurate "line of best fit," and the higher the value, the worse the line fits. MSE can be skewed by a single very bad forecast value because all errors are squared. The same effect makes this error measure problematic when there is a lot of noisy data.
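A short sketch showing both the computation and the outlier sensitivity described above (sample data is illustrative):

```python
def mse(actual, forecast):
    """Mean Squared Error: average of squared per-period errors."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

actual   = [10, 12, 14, 16]
forecast = [11, 13, 13, 15]      # every period off by exactly 1 unit
print(mse(actual, forecast))     # 1.0

# One very bad forecast dominates, because errors are squared:
forecast_bad = [11, 13, 13, 6]   # last period is off by 10 units
print(mse(actual, forecast_bad)) # 25.75
```

A single 10-unit miss raised the MSE from 1.0 to 25.75, which is the skew the paragraph above warns about.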

Root of the Mean Square Error (RMSE): this simply takes the square root of the MSE. The errors were originally squared so that negative errors would not cancel out positive ones. Taking the square root afterwards puts the RMSE back in the same units as the values plotted on the vertical axis (the distance between the forecasted value and the actual value), so the results can be interpreted as the absolute distance between the line of best fit and the data points.

Mean Absolute Deviation (MAD): the average absolute difference between each actual value and its forecast; in other words, it measures the size of the error in units. MAD is good for measuring the error of a single item, but when aggregated across multiple items it should be used cautiously, as it may be skewed by high-volume data dominating the numbers and obscuring important information about lower-volume items.
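A sketch of MAD as used in forecasting, i.e., the average absolute forecast error in units (function name and data are illustrative):

```python
def mad(actual, forecast):
    """Mean Absolute Deviation of the forecast errors, in units."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

actual   = [10, 12, 14]
forecast = [ 9, 14, 11]           # misses of 1, 2, and 3 units
print(mad(actual, forecast))      # 2.0 -- off by 2 units on average
```

Because the result is in units rather than percentages, a MAD of 2.0 on an item that sells 10 units means something very different from a MAD of 2.0 on an item that sells 10,000, which is why aggregating it across items needs care.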

Error Total (ET): simply a summation of the differences between the actual values and the forecasted values.
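As a one-liner sketch (illustrative data), note that ET is signed, so like MPE it can hide offsetting errors:

```python
def error_total(actual, forecast):
    """Error Total: signed sum of (actual - forecast) across periods."""
    return sum(a - f for a, f in zip(actual, forecast))

# Period errors are +10, -10, +10; the -10 partially cancels:
print(error_total([100, 200, 150], [90, 210, 140]))  # 10
```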

Mean Absolute Scaled Error (MASE): this is a ratio between the average absolute errors of the given forecasting method and the average absolute errors of something known as a “naïve” forecast, in which you take the last period’s actuals and use them as the next period’s forecast. A MASE of more than 1 suggests that the forecasting method does a worse job of predicting values than if you had just recycled the exact same values of the prior period.
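A simplified sketch of this ratio (published definitions of MASE differ in which periods the naïve baseline is computed over; this version compares both forecasts over the same periods, and the data is illustrative):

```python
def mase(actual, forecast):
    """Mean Absolute Scaled Error (simplified one-step variant)."""
    n = len(actual) - 1  # period 1 has no prior actual for the naive baseline
    # MAE of the candidate forecast over periods 2..n:
    mae = sum(abs(a - f) for a, f in zip(actual[1:], forecast[1:])) / n
    # MAE of the naive forecast (repeat last period's actual):
    naive_mae = sum(abs(a - p) for a, p in zip(actual[1:], actual[:-1])) / n
    return mae / naive_mae

actual   = [100, 110, 105, 115]
forecast = [102, 108, 109, 112]
# A ratio below 1 means the method beats simply recycling last
# period's actuals; above 1 means the naive forecast was better.
print(mase(actual, forecast) < 1)  # True
```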

Weighted Mean Absolute Percentage Error (WMAPE): WMAPE is similar to MAPE, but it prevents lower-volume items from being considered equal to higher-demand items by weighting them differently.
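One common way to implement this weighting, sketched below with illustrative data, is to divide the total absolute error by the total actual volume rather than averaging per-item percentages; each item then contributes in proportion to its volume:

```python
def wmape(actual, forecast):
    """Volume-weighted MAPE: total absolute error over total actuals."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / sum(actual) * 100

def mape(actual, forecast):
    """Plain MAPE for comparison: unweighted average of item percentages."""
    return sum(abs(a - f) * 100 / a for a, f in zip(actual, forecast)) / len(actual)

actual   = [100, 10]              # one high-volume item, one low-volume item
forecast = [110, 15]              # off by 10% and 50% respectively
print(mape(actual, forecast))                 # 30.0 -- tiny item dominates
print(round(wmape(actual, forecast), 1))      # 13.6 -- weighted by volume
```

The low-volume item's 50% miss pulls plain MAPE up to 30%, while WMAPE stays near the high-volume item's 10% miss, which is the behavior the definition above describes.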
