RSquare (R^2) versus RASE (Root Average Squared Error)
- Python for Integrated Circuits -
- An Online Book -
Python for Integrated Circuits                                                                                   http://www.globalsino.com/ICs/        



=================================================================================

The RSquare (R^2) and RASE (Root Average Squared Error) values for the training and validation sets of a predictive model are both used to assess the model's performance. They typically differ because they are computed on different data and answer different questions:

  1. Training Set:

    • RSquare (R^2): This metric measures the proportion of variance in the response variable (Yield, in this case) that is explained by the model. It ranges from 0 to 1, with higher values indicating a better fit. An R^2 of 1 means that the model perfectly explains the variance.
    • RASE: This metric is the square root of the average squared difference between the actual and predicted values for the training data (essentially a root-mean-squared error). Smaller RASE values indicate better predictive accuracy.

    The RSquare for the training set tends to be higher because the model was trained on this data. In essence, the model is designed to fit the training data as closely as possible, which often leads to a higher R^2 and a lower RASE for the training set.

  2. Validation Set:

    • RSquare (R^2): When you apply the trained model to the validation set (data the model has not seen during training), R^2 measures how well the model generalizes to new, unseen data. The R^2 for the validation set is typically lower than that for the training set because the model was not fitted to the validation data and usually does not fit it as closely.
    • RASE: Likewise, the RASE for the validation set quantifies the average prediction error on new data. It is typically higher than the training RASE because predictions on data the model has not seen during training tend to be less accurate.
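The training-versus-validation contrast above can be sketched with scikit-learn on synthetic data. All data and variable names here are illustrative assumptions (a made-up linear "Yield" trend plus noise), not from the book; RASE is computed here as the square root of the mean squared error.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Synthetic "Yield" data: a linear trend plus noise (illustrative only)
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = 2.5 * X[:, 0] + rng.normal(0.0, 1.0, size=200)

# Hold out a validation set the model never sees during training
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = LinearRegression().fit(X_train, y_train)

def rase(y_true, y_pred):
    """Root Average Squared Error: sqrt of the mean squared error."""
    return np.sqrt(mean_squared_error(y_true, y_pred))

for name, Xs, ys in [("Training", X_train, y_train),
                     ("Validation", X_val, y_val)]:
    pred = model.predict(Xs)
    print(f"{name}: R^2 = {r2_score(ys, pred):.3f}, "
          f"RASE = {rase(ys, pred):.3f}")
```

Because the data here are genuinely linear, the two sets give similar numbers; with a model that overfits, the gap between training and validation metrics widens.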

In summary, the training-set metrics are typically better because the model was fitted to that data. The validation-set metrics assess how well the model generalizes to new, unseen data, which is harder and usually yields somewhat worse numbers. Comparing the two helps you detect overfitting (fitting the training data too closely at the expense of generalization) and judge the model's effectiveness at predicting new data.
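The overfitting behavior described above can be demonstrated with a hypothetical example: fitting polynomials of increasing degree to noisy data. The data and degrees are assumptions chosen for illustration; as the degree grows, the training R^2 keeps improving while the validation R^2 deteriorates.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Noisy nonlinear data (illustrative only)
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0]) + rng.normal(0.0, 0.3, size=60)

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.5, random_state=1)

# A high-degree polynomial chases noise in the training set:
# training R^2 rises, validation R^2 falls.
for degree in (1, 3, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    r2_tr = r2_score(y_train, model.predict(X_train))
    r2_va = r2_score(y_val, model.predict(X_val))
    print(f"degree {degree:2d}: train R^2 = {r2_tr:.3f}, "
          f"validation R^2 = {r2_va:.3f}")
```

The widening gap between the two R^2 values at high degree is exactly the overfitting signal that comparing training and validation metrics is meant to expose.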

============================================