Regularization applies a penalty to large parameter values in order to reduce overfitting. When you train a model such as logistic regression, you choose the parameters that give the best fit to the data. This means minimizing the error between what the model predicts for your dependent variable, given your data, and what your dependent variable actually is.
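The error being minimized is typically the mean squared error between predictions and actual values; a minimal sketch with assumed example data:

```python
import numpy as np

# Assumed toy data: model predictions vs. actual dependent variable.
y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.1, 1.9, 3.2])

# Mean squared error: the quantity training drives down.
mse = np.mean((y_pred - y_true) ** 2)
```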

1. **Normal** model; the approximation function (red) of a trained model is quite close to the true dependence (green):

**y(x) = w₀ + w₁x₁ + w₂x₂ + w₃x₃**

2. **Overfitted** model; its approximation function (red) fits the given training dataset (blue dots) almost perfectly, yet fails to correctly predict future data:

**y(x) = w₀ + w₁x₁ + w₂x₂ + … + w₉x₉**
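The contrast between the two models above can be sketched on assumed toy data: a degree-3 polynomial approximates a smooth curve, while a degree-9 polynomial chases every noisy point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy data: noisy samples from a smooth true dependence.
x = np.linspace(0.0, 1.0, 10)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=x.size)

w_normal = np.polyfit(x, y, deg=3)   # "normal" model: smooth fit
w_overfit = np.polyfit(x, y, deg=9)  # "overfitted": interpolates the points

# The overfitted model's training error is near zero, which is exactly
# its failure mode: it has memorized the noise, not the dependence.
train_err_norm = np.mean((np.polyval(w_normal, x) - y) ** 2)
train_err_over = np.mean((np.polyval(w_overfit, x) - y) ** 2)
```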

3. **The solution to that overfitting**/overtraining is to add a penalty to the MSE/MAE function, yielding the regularized loss **L**, which is what we then optimize:

#### **L(w, x) = Q(w, x) + λ|w| → min (over w)**
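A minimal sketch of this objective, taking Q to be the MSE of a linear model and the penalty to be the L1 norm (the data and weights below are assumed examples):

```python
import numpy as np

def regularized_loss(w, X, y, lam):
    """L(w, x) = Q(w, x) + λ|w|: data-fit term plus weight penalty."""
    Q = np.mean((X @ w - y) ** 2)       # Q: mean squared error
    penalty = lam * np.sum(np.abs(w))   # λ|w|: here, the L1 norm
    return Q + penalty

# Assumed toy data that the weights fit perfectly, so Q = 0 and
# only the penalty term contributes to L.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
w = np.array([1.0, 2.0])
```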

**L1** (lasso regression). The penalty is proportional to the **sum of the absolute values of the weights**.
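A sketch of why the L1 penalty produces sparse weights: for an orthonormal design, each lasso coefficient is the unpenalized least-squares coefficient pulled toward zero by λ and clipped at zero ("soft-thresholding"). The weights below are assumed example values.

```python
import numpy as np

def soft_threshold(z, lam):
    """Lasso solution per coefficient (orthonormal design):
    shrink toward zero by lam; anything smaller becomes exactly 0."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

w_ols = np.array([3.0, -0.2, 0.05, -1.5])  # assumed unpenalized weights
w_lasso = soft_threshold(w_ols, lam=0.5)   # small weights zeroed out
```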

**L2** (ridge regression, or Tikhonov regularization). The penalty is proportional to the **sum of squared weights**.
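Ridge regression has a closed form, w = (XᵀX + λI)⁻¹Xᵀy, which makes the shrinking effect of the L2 penalty easy to see on assumed toy data: raising λ pulls the whole weight vector toward zero (without zeroing individual entries, unlike L1).

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge(X, y, lam):
    """Closed-form ridge weights: (XᵀX + λI)⁻¹ Xᵀ y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Assumed toy data with known true weights [2.0, -1.0, 0.5].
X = rng.normal(size=(50, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=50)

w_small = ridge(X, y, 0.01)   # near-OLS fit
w_large = ridge(X, y, 100.0)  # heavily shrunk weights
```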
