Logistic Regression: Basics, Obscurities, and Its Membership as a Classifier


1) Logistic regression (LR) is a regression. And yes, it's also a classifier, insofar as the predicted log odds is a continuous variable which, when cut at a certain threshold, lets you classify observations into either of two categories. (LR can classify into more than two categories too, but that is beyond the scope of this article.)


2) The formula in the image tells you what the dependent variable (the outcome or target variable, in common ML parlance) is: the log odds, which is a function of the probability of the event, condition or, generally speaking, outcome.

[Image: the logistic regression formula, log(p / (1 - p)) = b0 + b1*X1 + ... + bk*Xk, where p is the probability of the outcome and p / (1 - p) is the odds.]

Source: https://www.reneshbedre.com/blog/logistic-regression.html


3) The probability of this outcome is calculated as per the image above. Software classifiers typically default to assigning a prediction to one category or the other at a 0.5 threshold, but the ROC curve (summarized by the AUROC) exists for a reason: it lets you choose the tradeoff you want between sensitivity (recall) and specificity, much as the precision-recall curve does for precision and recall. The threshold achieving that desired tradeoff, which is often not 0.5, is the one you pick (see the sketch below). Theoretically, this may seem odd: surely you'd predict 'male' if the model says there's more than a 0.5 chance the person is male? Yes, except models are often biased, and that is acceptable if the bias merely shifts everyone's predicted log odds up or down monotonically (it need not even shift everyone by the same amount, so long as the ordering of the predicted log odds is preserved). Bias is tolerable when you're not interested in the log odds or probability of the outcome itself but in the ranking of predictions. Ranks rather than calibrated probabilities are what matter in applications such as deciding whom a salesperson should call first, given a limited amount of time per day.
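To make threshold selection concrete, here is a minimal sketch assuming scikit-learn, with simulated stand-ins for the observed 0/1 outcomes and the model's predicted probabilities (both hypothetical, not from any real model). It uses Youden's J, one of several possible criteria; a cost-weighted or precision/recall-based criterion would be plugged in the same way.

import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# Hypothetical stand-ins: true 0/1 outcomes and predicted probabilities from some model.
y_true = rng.binomial(1, 0.3, size=1000)
y_prob = np.clip(0.3 * y_true + rng.uniform(0, 0.7, size=1000), 0, 1)

fpr, tpr, thresholds = roc_curve(y_true, y_prob)

# Youden's J = sensitivity + specificity - 1; its maximizer is one common "optimal" cut.
best_threshold = thresholds[np.argmax(tpr - fpr)]
print(f"threshold maximizing Youden's J: {best_threshold:.3f}")

# Classify by comparing predicted probabilities against the chosen threshold.
y_pred = (y_prob >= best_threshold).astype(int)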


4) Why is the log odds used as the dependent variable instead of the raw outcomes 1 and 0? This is to ensure that the predicted probabilities fall between 0 and 1, which is not guaranteed if you fit a linear model directly to the 1s and 0s in the training set. However, a crucial scalability tip is that the transformation between the log odds and the 0-1 probability scale is monotone and thus preserves ordinality (see the snippet just below), so the ranks of the predictions come out essentially the same whether you predict on the 0/1 scale or the log-odds scale. Hence, for prediction purposes alone, you can often speed up your logistic regression a lot by using a linear model that predicts the 0/1 outcomes directly.
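As a tiny illustration of the ordinality point (a sketch with made-up numbers, not real model output): the logistic transform from log odds to probabilities is strictly increasing, so it cannot reshuffle ranks.

import numpy as np

# Hypothetical predicted log odds for five people.
log_odds = np.array([-2.0, -0.3, 0.1, 1.5, 3.0])

# The logistic (sigmoid) transform maps log odds into (0, 1)...
probs = 1.0 / (1.0 + np.exp(-log_odds))

# ...and, being strictly increasing, it leaves the ordering untouched.
assert (np.argsort(log_odds) == np.argsort(probs)).all()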

This is not standard Statistics curriculum fare, because Statistics curricula tend to focus on statistical inference (a pity, in my opinion) rather than prediction, even while using models that are perfectly capable of prediction, as shown. Since the inner machinery of logistic regression relies on an iterative algorithm, typically Newton-Raphson, to maximize the likelihood, logistic regression can be quite slow compared with linear regression, and when you're using a generalized linear mixed-effects model with a logit link (just like a logistic regression, but with fixed and random effects), the logit link can really slow you down. Hence, you might want to stick to a linear mixed-effects model predicting the 0/1 outcome if your sole purpose is prediction; a toy comparison of the two approaches follows below.
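Here is a toy sketch of that prediction-only shortcut on simulated data, assuming scikit-learn and scipy. It fits ordinary least squares to the raw 0/1 outcome (the 'linear probability model' Rodney Beard mentions in the comments) alongside a maximum-likelihood logistic regression and compares the ranking of their predictions; the near-perfect rank agreement seen here is typical but not guaranteed in general.

import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(50_000, 5))
true_log_odds = X @ np.array([0.8, -0.5, 0.3, 0.0, 1.2]) - 0.2
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_log_odds)))   # simulated 0/1 outcomes

lpm = LinearRegression().fit(X, y)                   # OLS directly on the 0/1 outcome
logit = LogisticRegression(max_iter=1000).fit(X, y)  # maximum-likelihood logistic regression

rho, _ = spearmanr(lpm.predict(X), logit.predict_proba(X)[:, 1])
print(f"Spearman rank correlation between the two sets of predictions: {rho:.4f}")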

4a) Scalability tip: Spark can speed up a logistic regression by 100x, compared with 15x for SVMs. Scaling logistic regression is often not even considered because people see it as 'traditional regression' rather than 'ML,' the former connoting 'small data' and the latter 'big data.' This is an instance of how knowing the inner machinery of a methodology can give you less obvious scalability insights and advantages.
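For a sense of what a scaled-up run looks like, here is a minimal PySpark sketch. It assumes a Spark cluster is available and that events.parquet (a hypothetical file) holds a binary label column plus numeric feature columns x1, x2, x3; the speedup figures above are my claim from elsewhere, not something this snippet measures.

from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("lr_at_scale").getOrCreate()
df = spark.read.parquet("events.parquet")  # hypothetical input data

# Assemble the raw numeric columns into the single vector column Spark ML expects.
assembler = VectorAssembler(inputCols=["x1", "x2", "x3"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label", maxIter=20)

model = lr.fit(assembler.transform(df))           # fitting is distributed across the cluster
preds = model.transform(assembler.transform(df))  # adds probability and prediction columns
preds.select("label", "probability", "prediction").show(5)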


5) The AUROC is not the only way to assess a logistic regression's predictions. In fact, one of the time-honored methods is the Hosmer-Lemeshow test, which evaluates whether observed event rates match expected event rates within subgroups of predicted risk. Looking at this, rather than at overall accuracy, F1, precision or recall, can quickly show you where your predictions aren't doing well (a sketch of the test follows below). A lift or gains chart can also be useful, both for picking the threshold that achieves your desired tradeoff and for giving calibration insights similar to those of the Hosmer-Lemeshow test.
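Since the Hosmer-Lemeshow statistic is easy to compute by hand, here is a minimal sketch of the decile-of-risk version, assuming only numpy and scipy (the function name and grouping scheme are my own for illustration, not a library API).

import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y_true, y_prob, groups=10):
    """Hosmer-Lemeshow goodness-of-fit statistic and p-value (decile-of-risk version)."""
    order = np.argsort(y_prob)
    y_true = np.asarray(y_true)[order]
    y_prob = np.asarray(y_prob)[order]

    stat = 0.0
    for idx in np.array_split(np.arange(len(y_prob)), groups):
        observed = y_true[idx].sum()           # observed events in this risk group
        expected = y_prob[idx].sum()           # expected events under the model
        n_g, p_bar = len(idx), y_prob[idx].mean()
        stat += (observed - expected) ** 2 / (n_g * p_bar * (1 - p_bar))

    return stat, chi2.sf(stat, groups - 2)     # conventional df = number of groups - 2

As Niccolò Bassani notes in the comments, the result can be sensitive to the choice of the groups argument, so it is worth reporting the grouping alongside the p-value.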


6) Also, don't forget what regressions can do, which is to give you coefficients, standard errors, confidence intervals and p-values. Please view the image above again to see how the coefficients relate to the log odds.


7) A beta coefficient B is interpreted as follows: if you increase X by one unit, exp(B) is the odds ratio, i.e. the factor by which the odds of the outcome are multiplied (the odds at X + 1 divided by the odds at X; again, see the image for the definition of odds). A code sketch for extracting these quantities follows below.
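To pull the coefficients, standard errors, confidence intervals, p-values and odds ratios out in one go, here is a minimal sketch assuming statsmodels, run on simulated stand-in data rather than any real dataset.

import numpy as np
import statsmodels.api as sm

# Simulated stand-ins for a real dataset: two predictors and a 0/1 outcome.
rng = np.random.default_rng(1)
X = rng.normal(size=(2_000, 2))
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * X[:, 0] - 0.7 * X[:, 1]))))

model = sm.Logit(y, sm.add_constant(X)).fit()

print(model.summary())                   # coefficients, standard errors, CIs, p-values
odds_ratios = np.exp(model.params)       # exp(beta): multiplicative change in odds per unit of X
or_conf_int = np.exp(model.conf_int())   # 95% confidence intervals on the odds-ratio scale
print(odds_ratios)
print(or_conf_int)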


Reasons for Labeling Logistic Regression a 'Classifier'

And, if anyone's interested, I have some theories about why logistic regressions are so often taken to be classifiers. First, I have no problem with such 'mistakings' if it's a matter of lexicography rather than of misunderstanding concepts. If one calls it a classifier simply because one knows it models a continuous outcome yet uses it almost exclusively for classification, that's fine. However, not understanding that it's a regression modeling the log odds, and thereby missing out on tuning thresholds to obtain better predictive accuracy, would be a huge pity.

1) Regressions are often mistaken for pure classifiers partly because commercial software/platforms feel compelled to organize models into neat, non-overlapping categories. Most of these platforms have dashboard-like interfaces, and no dashboard ever had overlapping categories (what would that even look like?) or repeated entries across multiple categories. Non-overlapping categories are simply the order of the day across almost all facets of our daily lives. As for why 'classifier' rather than 'regression' is chosen as LR's class, that's probably because commercial software is focused on applications, and the prevailing application of LR is classification.

2) There's almost certainly a marketing dimension to calling LR a classifier rather than a regression, since most commercial software/platforms peddle ML and, very crucially, its cachet. ML is often understood, rightly or wrongly, to be a category made up mostly of classifiers: RandomForest, XGBoost, GBM, SVM, and so on. Besides, classifiers have cachet because some posit that the way we think mimics our brain physiology, which is apparently bifurcated (though some say trifurcated). In everyday language, we have ready words for 'yes' and 'no,' but not a whole continuum of words for the different degrees of 'maybe.' Yesses and no's also convey authority, something seemingly prized in most of our world, whether we wield that authority or not. Even a 'no' (a negative outcome) is apparently more comforting than a 'maybe,' because happiness research has shown that humans are 'happier,' or at least more appeased, knowing a bad outcome with certainty than being kept hanging indefinitely. (This also explains the ubiquitous wrath against recruitment ghosting. Hear this? Hear this?) In short, the neatness of binary classification appeals to us at a subliminal level.

Excluding LR from this arbitrarily elite pantheon of classifiers would also be impractical, because it has real advantages, even if its ability to output easily interpretable effect sizes in the form of coefficients is often overlooked. That overlooking of statistical inference is nowhere clearer than in the need to issue extra commands just to output inference results. Still, even while neglecting inference, the commercial gods that be are savvy enough to recognize that logistic regression is comparatively fast and can do better on smaller datasets, 'smaller' being not that small after all (perhaps even up to 1 million rows). Besides, LR is a household name among experienced practitioners, whom vendors wouldn't want to alienate. Hence, bringing LR into the fold of 'classifiers' makes commercial sense and seems, on the surface, to appeal to all camps. Well, almost all (inside or rather public joke :P)!

Edit: It was recommended that I remind readers that maximum likelihood estimation (MLE) can be used in place of Newton-Raphson; I was then told that this didn't make sense, because Newton-Raphson is itself one method for computing the MLE. The objection is odd to me because I've written elsewhere that 'causality' can be explored using Granger causality or vector autoregression, meaning vector autoregression in general (beyond Granger causality); phrasing of that kind is a linguistic oversight, not a sign that I think the two are non-overlapping. I hope you'll therefore understand my hesitation to respond to one of the comments on this, especially as it contains an essay with lots of red herrings about neural networks, and I fear going down another lengthy rabbit hole like the one that grew out of the 'MLE, not Newton-Raphson' debate in a different comment. It's also strange that no one went after that commenter for saying he should have said 'MLE methods besides Newton-Raphson.' This is my first time attracting a relatively large audience by my standards, with lots of 3rd-degree connections whose communication styles I'm not used to (e.g. long essays full of irrelevant details that make it hard for me to track the point), so I'll refrain from engaging further with styles that bewilder me until I'm ready (though I don't think anyone is ever truly ready, or obliged, to entertain long, mostly irrelevant comments that hint at a desire for pedantic pageantry).

A very in-depth write-up of logistic regression! It is interesting to think that it's often used as a classifier, because it pushes results towards 0 and 1, but it is indeed a regression, similar to linear regression.

Stas Kolenikov

[ Social Science | Survey ] [ Data Scientist | Statistician ] :: 2021 American Statistical Association Fellow

3y

The final edit is confusing. Newton-Raphson is a general class of gradient-based maximization methods for when the first and the second derivatives can be meaningfully computed. It is easy for logistic regression since the explicit form of the function is there, and you can write out the derivatives on the back of an envelope; for some other methods, you need three sheets of A4/letter for the derivatives, or you can't do them at all and have to use numeric derivatives that slow you down by a factor of 20x. For neural nets, you cannot really push it to second derivatives of the connection weights as the thing is so damn complicated, so while technically you can probably write out the derivatives by the chain rule, it won't be super helpful as the likelihood surface is just a hedgehog with tons of local spikes.

MLE is the most meaningful way to obtain the parameter estimates for logistic regression. You can do somewhat different things with, say, the generalized method of moments (GMM) or empirical likelihood, the mainstream econometric methods, but there's theory that says, basically, there is no point in trying these as they would be close to MLE anyway (asymptotic efficiency). Newton-Raphson is a common way to maximize that likelihood numerically, and it works very well because the likelihood is concave, the maximum is guaranteed to be unique, and N-R is arguably the fastest way to get to that top of the hill. Other numeric methods to find the maximum do exist, of course, such as Nelder-Mead, which uses a version of the simplex method and does not need to take derivatives; or the BFGS or BHHH methods, which are Newton-Raphson style but build an approximation to the Hessian matrix differently -- but these would make relatively little sense for logistic regression, where the second derivative can be computed analytically lightning fast (relative to other methods; that's still a fair amount of matrix multiplication).

There's an interesting extension of logistic regression called Firth logistic regression that adds a penalty term shrinking coefficients towards zero (roughly equivalent to adding half an observation at zero values of the variables, split between the 0 and 1 outcomes). It is especially useful when the sample sizes are small and/or there's perfect separation (a situation where you have a linear combination of variables, in the simplest case one single variable, such that all values of that combination/variable above a certain threshold are associated with one outcome and all values below it with the other; in that case, the likelihood is maximized, to infinity, by infinite values of the parameters, which isn't practical). With the Firth correction, the derivatives are not nice anymore, and you have to compute them numerically, and that is 20x slower (and Newton-Raphson with numerical second derivatives may not be as useful as BFGS, which only uses the first derivatives; the actual constant depends on the dimension, because the number of likelihood evaluations needed to compute the second derivatives is quadratic in the number of variables, so it would be worse for a model with 100 variables than for one with 10). See library(logistf) in R; it has not percolated to statsmodels in Python, as far as I can tell.

Niccolò Bassani

Senior Principal Statistician at Worldwide Clinical Trials

3y

Nice one! Just two questions/comments: 1) "Hence, you might want to stick to a linear mixed effects model predicting 0 and 1 if your sole purpose is prediction." I do recall suggesting to a colleague of mine to use a GLMM with the identity link to get predicted probabilities, and then to truncate values outside of the 0-1 range for obvious reasons... but is that really unbiased/appropriate in all situations (I never really 'checked' whether values outside the range occur and how often)? 2) The Hosmer-Lemeshow test, as I recall, has the little drawback that it can be sensitive to the number of subgroups you split your data into, so it can be used to fool your audience (not that I would rely simply on a goodness-of-fit test, but not everyone's as transparent as I am)

Venkat Raman

Co-Founder & CEO at Aryma Labs | Building Marketing ROI Solutions For a Privacy First Era | Statistician |

3y

Well articulated, A.S.H. Wong. I liked how you brought in the commercial angle on 'why LR is branded as classification.'

Rodney Beard

International vagabond and vagrant at sprachspiegel.com, Economist and translator - Fisheries Economics Advisor

3y

This is the linear probability model, not maximum-likelihood-based logistic regression; it might be worth mentioning that.

