L2 Regularization/Ridge Regularization/Tikhonov Regularization
- Python and Machine Learning for Integrated Circuits -
- An Online Book -



=================================================================================

The regularization term in SVM is given by,

          (1/2)||w||² --------------------------------- [3810a]

This regularization term is a form of L2 regularization, which is also known as "ridge regularization" or "Tikhonov regularization." L2 regularization helps prevent overfitting in machine learning models by adding a penalty term that discourages large values in the weight vector w. In the SVM formulation, the goal is to maximize the margin between classes while keeping the norm of the weight vector (||w||) small.
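
As a rough sketch of how the term in equation [3810a] enters the optimization, the short Python example below evaluates the soft-margin SVM objective (1/2)||w||² + C·Σ max(0, 1 − y(w·x + b)) with NumPy. The toy data, the example weight vector w, and the trade-off constant C are made-up values chosen only for illustration, not the book's own code.

import numpy as np

# Toy two-class data; labels must be +1/-1 for the hinge loss. Values are made up.
X = np.array([[2.0, 1.0],
              [1.5, 2.0],
              [-1.0, -1.5],
              [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])

def svm_objective(w, b, X, y, C=1.0):
    """Soft-margin SVM cost: (1/2)||w||^2 regularization term plus hinge losses."""
    l2_term = 0.5 * np.dot(w, w)                  # the regularization term of equation [3810a]
    margins = y * (X @ w + b)                     # y_i (w . x_i + b) for each sample
    hinge = np.maximum(0.0, 1.0 - margins).sum()  # sum of hinge losses
    return l2_term + C * hinge

w = np.array([0.5, 0.5])   # example weight vector
b = 0.0
print(svm_objective(w, b, X, y, C=1.0))

Increasing C puts more weight on the hinge-loss part relative to the (1/2)||w||² term; decreasing it emphasizes the regularization term, which shrinks w and favors a wider margin.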

Table 3810a shows linear regression with and without L2 regularization.

Table 3810a. Linear regression with and without regularization.

Algorithm: Linear Regression

Without regularization: Set g(z) = z (the identity function). This simplifies the hypothesis to the linear regression hypothesis function,

          h_θ(x) = θᵀx

and the goal is to minimize the ordinary least squares (OLS), or mean squared error (MSE), term below,

          J(θ) = (1/(2m)) Σᵢ (h_θ(x⁽ⁱ⁾) − y⁽ⁱ⁾)²

With regularization: the goal is to minimize the term below,

          J(θ) = (1/(2m)) Σᵢ (h_θ(x⁽ⁱ⁾) − y⁽ⁱ⁾)² + λ||θ||²

The second part is the L2 regularization term, where λ is the regularization parameter and ||θ||² is the squared L2 norm of the parameter vector θ.
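
To make the contrast in Table 3810a concrete, the sketch below fits the same made-up data with the closed-form normal equations, once without a penalty (θ = (XᵀX)⁻¹Xᵀy) and once with the L2 term added (θ = (XᵀX + λI)⁻¹Xᵀy). The data, the value of λ, and the variable names are illustrative assumptions only.

import numpy as np

# Made-up one-feature data set; a column of ones is added for the intercept term.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + 1.0 + rng.normal(scale=0.3, size=x.size)
X = np.column_stack([np.ones_like(x), x])       # design matrix [1, x]

# Ordinary least squares: theta = (X^T X)^(-1) X^T y
theta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Ridge (L2) regularization: theta = (X^T X + lambda*I)^(-1) X^T y
# (for simplicity the intercept is penalized here as well)
lam = 0.5                                       # regularization parameter lambda
theta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

print("OLS coefficients:  ", theta_ols)
print("Ridge coefficients:", theta_ridge)       # shrunk toward zero by the penalty

The ridge solution differs from OLS only by the λI term added to XᵀX, which is exactly the effect of adding λ||θ||² to the cost.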

Figure 3810 shows the comparison between bias without and with regularization.


Figure 3810. Comparison between bias without and with regularization. (code)

Regularization tends to reduce overfitting, which means it primarily reduces variance rather than bias. Although regularization may slightly increase bias in some cases because of the penalty on complex models, its main purpose is to control variance and improve the model's generalization to new data.
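
This variance-reduction effect can be checked numerically: fit a deliberately flexible model to noisy training data with and without the L2 penalty and compare the error on held-out points. The sketch below uses a degree-9 polynomial on synthetic data; the degree, the noise level, and the value of λ are assumptions chosen only to make the effect visible.

import numpy as np

rng = np.random.default_rng(1)

def true_fn(x):
    return np.sin(2.0 * np.pi * x)

# Small noisy training set and a clean held-out test set (synthetic data).
x_train = rng.uniform(0.0, 1.0, 15)
y_train = true_fn(x_train) + rng.normal(scale=0.2, size=x_train.size)
x_test = np.linspace(0.0, 1.0, 100)
y_test = true_fn(x_test)

def design(x, degree=9):
    # Polynomial design matrix [1, x, x^2, ..., x^degree].
    return np.vander(x, degree + 1, increasing=True)

def fit(x, y, lam, degree=9):
    # Solves min ||A theta - y||^2 + lam*||theta||^2 via an augmented least-squares
    # system; lam = 0 reduces to the ordinary (unregularized) least-squares fit.
    A = design(x, degree)
    n = A.shape[1]
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
    y_aug = np.concatenate([y, np.zeros(n)])
    return np.linalg.lstsq(A_aug, y_aug, rcond=None)[0]

def test_mse(theta, degree=9):
    return np.mean((design(x_test, degree) @ theta - y_test) ** 2)

theta_plain = fit(x_train, y_train, lam=0.0)    # no penalty: tends to fit the noise
theta_ridge = fit(x_train, y_train, lam=1e-3)   # with L2 penalty: smoother fit

print("Test MSE without regularization:", test_mse(theta_plain))
print("Test MSE with regularization:   ", test_mse(theta_ridge))

On runs like this the unregularized fit usually shows a noticeably larger test error, i.e. higher variance, while the regularized fit trades a little bias for better generalization.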

=================================================================================