
Adding regularization to a Linear Regression model

Regularization applies a penalty that grows with the magnitude of the parameter values in order to reduce overfitting. When you train a model such as a logistic regression model, you choose the parameters that give the best fit to the data. This means minimizing the error between what the model predicts for your dependent variable, given your data, and what your dependent variable actually is.
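As a minimal, hedged sketch of that fitting step, the snippet below solves an ordinary least-squares problem with NumPy; the data values are made up purely for illustration.

import numpy as np

# Made-up one-feature dataset (values are arbitrary, for illustration only).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.9, 5.1])

# Design matrix [1, x]: we fit y(x) = w0 + w1*x.
X = np.column_stack([np.ones_like(x), x])

# Ordinary least squares: pick w minimizing ||X @ w - y||^2,
# i.e. the error between the predictions and the actual dependent variable.
w, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)
print("w0, w1 =", w)
print("sum of squared errors =", residuals)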

The following practical example shows how to deal with overfitting by means of regularization.

1. Normal model: the approximation function (red) of the trained model stays quite close to the true dependence (green),

y(x) = w0 + w1*x1 + w2*x2 + w3*x3

2. Overfitted model: its approximation function (red) fits the given training dataset (blue dots) almost perfectly, yet fails to correctly predict future data,

y(x) = w0 + w1*x1 + w2*x2 + … + w9*x9

3. The solution to this overfitting/overtraining is to add a penalty term to the MSE/MAE loss function, which yields the regularized loss L. It is this loss that we then optimize (see the code sketch after the L1/L2 list below).

L(w, x) = Q(w, x) + λ|w| → min_w

  • L1 (Lasso regression). The penalty is proportional to the sum of the absolute values of the weights.

    L_{1}=\sum_{i}{(y_{i}-y(x_{i}))}^{2}+\lambda \sum_{j}{|w_{j}|}.
  • L2 (Ridge regression or Tikhonov regularization). The penalty is proportional to the sum of squared weights.

    L_{2}=\sum_{i}{(y_{i}-y(x_{i}))}^{2}+\lambda \sum_{j}{w_{j}}^{2}.
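To see both penalties in action, here is a hedged sketch using scikit-learn. It interprets the x1…x9 terms of the overfitted model above as powers of a single input (one common reading), and uses scikit-learn's alpha parameter in the role of λ; the synthetic data, the noise level, and the alpha values are assumptions chosen only for illustration.

import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(0)

# Synthetic 1-D data: a smooth "true dependence" plus noise (values are arbitrary).
def true_dependence(x):
    return np.sin(2 * np.pi * x).ravel()

x_train = np.linspace(0.0, 1.0, 10).reshape(-1, 1)
x_test = np.linspace(0.0, 1.0, 100).reshape(-1, 1)
y_train = true_dependence(x_train) + rng.normal(0.0, 0.2, size=10)

# Degree-9 polynomial features reproduce the overfitted model y(x) = w0 + w1*x + ... + w9*x^9.
def poly_model(estimator):
    return make_pipeline(PolynomialFeatures(degree=9, include_bias=False),
                         StandardScaler(),
                         estimator)

models = {
    "no penalty (overfits)": poly_model(LinearRegression()),
    "L2 penalty / Ridge":    poly_model(Ridge(alpha=0.01)),                   # lambda * sum(w_j^2)
    "L1 penalty / Lasso":    poly_model(Lasso(alpha=0.01, max_iter=100000)),  # lambda * sum(|w_j|)
}

for name, model in models.items():
    model.fit(x_train, y_train)
    test_mse = np.mean((model.predict(x_test) - true_dependence(x_test)) ** 2)
    print(f"{name}: test MSE = {test_mse:.3f}")

On data like this, the unpenalized degree-9 fit typically shows a noticeably larger test error than the penalized fits, which is exactly the effect regularization is meant to produce; the exact numbers depend on the noise and on the chosen λ.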
