Logistic and locally-weighted regression
Roberto Battiti
University of Trento / LION (Learning and Intelligent OptimizatioN), trustable special-purpose AI with measurable goals (no AGI)
Special nonlinear models
Chapter 7 of LIONbook v 4.0, get the PDF from https://intelligent-optimization.org/LIONbook/
In this chapter, we continue along our path from linear to nonlinear models. Before considering the most general and powerful methods, we start with gradual modifications of the linear model (scalar product). First we make it suitable for predicting probabilities (logistic regression); then we make the linear models local, giving more emphasis to the closest examples, in a kind of smoothed version of K nearest neighbors (locally-weighted linear regression).
After this preparatory phase, we will be ready to enter the holy of holies of flexible nonlinear models for arbitrary smooth input-output relationships like Multi-Layer Perceptrons (MLP), Deep Learning, and Support Vector Machines (SVM).
Gist
Linear models are widely used but insufficient in many cases. Two examples of simple modifications have been considered in this chapter. First, there can be reasons why the output needs to have a limited range of possible values. For example, if one needs to predict a probability, the output can range only between zero and one.
Passing a linear combination of inputs through a “squashing” logistic function is a possibility. When the log-likelihood of the training events is maximized, one obtains the widely-used logistic regression.
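As a concrete illustration of these two steps, here is a minimal NumPy sketch (not the book's own code): a linear combination of the inputs is squashed by the logistic function, and the weights are then found by gradient ascent on the average log-likelihood of the training labels. The learning rate and iteration count are arbitrary illustrative choices.

```python
import numpy as np

def sigmoid(z):
    # logistic "squashing" function: maps any real value into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, n_iters=2000):
    """Fit logistic-regression weights by gradient ascent on the
    mean log-likelihood. X: (n, d) inputs, y: (n,) labels in {0, 1}."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iters):
        p = sigmoid(Xb @ w)                 # predicted probabilities
        w += lr * Xb.T @ (y - p) / len(y)   # gradient of mean log-likelihood
    return w

def predict_proba(w, X):
    """Probability of class 1 for each row of X."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    return sigmoid(Xb @ w)
```

Because the output passes through the sigmoid, every prediction automatically falls in the (0, 1) range required for a probability, which a raw linear model cannot guarantee.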
Second, there can be cases when a linear model needs to be localized, by giving more significance to input points that are closer to a given input sample to be predicted. This is the case of locally-weighted regression. This model demands more computation (a new fit for every new input to be evaluated) but provides more flexibility, allowing also nonlinear functions to be modeled. A kernel-width parameter regulates the input area in which examples have a non-negligible effect on the local fit.
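The local fit described above can be sketched as weighted least squares: each training example receives a Gaussian weight that decays with its distance from the query point, and a fresh linear model is solved for every query. This is an illustrative sketch, not the book's code; the Gaussian kernel and the `kernel_width` default are assumptions.

```python
import numpy as np

def locally_weighted_predict(X, y, x_query, kernel_width=0.2):
    """Predict y at x_query with locally-weighted linear regression.
    X: (n, d) training inputs, y: (n,) targets, x_query: (d,) query point.
    kernel_width controls how far examples still influence the local fit."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])      # add a bias column
    d2 = np.sum((X - x_query) ** 2, axis=1)            # squared distances
    w = np.exp(-d2 / (2.0 * kernel_width ** 2))        # Gaussian kernel weights
    W = np.diag(w)
    # Solve the weighted normal equations: (X^T W X) beta = X^T W y
    beta = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ y)
    return np.concatenate(([1.0], x_query)) @ beta
```

Note the trade-off mentioned above: nothing is precomputed, so each prediction requires solving a new linear system, but the stitched-together local lines can track a globally nonlinear function.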