Precision, Recall, False Positive Rate,
and False Negative Rate (Miss Rate or False Negative Proportion)
- Python Automation and Machine Learning for ICs -
- An Online Book -



=================================================================================

Precision is defined as the proportion of positive identifications that are correctly classified, while recall is the proportion of actual positives that are correctly identified.

A confusion matrix summarizes a classifier's predictions against the actual labels, and can be generated with code as below:
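A minimal sketch of building such a confusion matrix with scikit-learn is shown here, assuming hypothetical label lists y_true and y_pred rather than data from this book:

from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # hypothetical ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]   # hypothetical model predictions

# For binary 0/1 labels, rows are actual classes and columns are predicted classes:
# [[TN, FP],
#  [FN, TP]]
cm = confusion_matrix(y_true, y_pred)
tn, fp, fn, tp = cm.ravel()
print(cm)
print("TN =", tn, "FP =", fp, "FN =", fn, "TP =", tp)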

        

The precision of the model can be given by,           

 Precision = TP / (TP + FP) ----------------------------------------------- [3600a]

where,

 TP is true positive.

 FP is false positive. 

The precision of a classification model is a measure of its accuracy when it predicts the positive class.  Equation 3600a calculates the ratio of correctly predicted positive instances (True Positives) to the total instances predicted as positive (sum of True Positives and False Positives). In other words, precision tells us the percentage of cases predicted as positive by the model that were actually positive.
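As a quick numerical check of Equation 3600a, the short sketch below plugs in the TP and FP counts that appear later in Figure 3600a:

tp, fp = 8, 4                           # TP and FP counts from Figure 3600a below
precision = tp / (tp + fp)              # Equation 3600a
print(f"Precision = {precision:.1%}")   # prints: Precision = 66.7%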

The recall of the model can be given by,

 Recall = TP / (TP + FN) ----------------------------------------------- [3600b]

where,

FN is false negative.

The recall, also known as sensitivity or true positive rate, is another performance metric for classification models. Equation 3600b calculates the ratio of correctly predicted positive instances (True Positives) to the total actual positive instances (sum of True Positives and False Negatives). In other words, recall tells us the percentage of actual positive instances that were correctly predicted by the model. It's a useful metric when the cost of false negatives is high, and we want to minimize the number of actual positive instances that are missed by the model.
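A matching sketch for Equation 3600b, again using the counts from Figure 3600a below:

tp, fn = 8, 3                           # TP and FN counts from Figure 3600a below
recall = tp / (tp + fn)                 # Equation 3600b
print(f"Recall = {recall:.1%}")         # prints: Recall = 72.7%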

False Positive Rate is given by,

 False Positive Rate = FP / (FP + TN) ----------------------------------------------- [3600c]

Equation 3600c calculates the ratio of false positives to the sum of false positives and true negatives. It measures the proportion of actual negatives that are incorrectly classified as positives.
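The same check for Equation 3600c, using the hypothetical counts of Figure 3600a below:

fp, tn = 4, 9                                               # FP and TN counts from Figure 3600a below
false_positive_rate = fp / (fp + tn)                        # Equation 3600c
print(f"False Positive Rate = {false_positive_rate:.1%}")   # prints: False Positive Rate = 30.8%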

False Negative Rate is given by, 

 False Negative Rate = FN / (FN + TP) ----------------------------------------------- [3600d]

False Negative Rate is also referred to as the Miss Rate or False Negative Proportion. Equation 3600d calculates the ratio of false negatives to the sum of false negatives and true positives. It represents the proportion of instances that are actually positive but were incorrectly predicted as negative by the model. In other words, the False Negative Rate measures how many positive instances were missed, or "falsely omitted", by the model. It is particularly relevant in scenarios where the cost or consequences of missing positive instances (false negatives) are high.
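A sketch of Equation 3600d with the same hypothetical counts; note that the False Negative Rate equals 1 - Recall, since both are computed over the actual positives:

fn, tp = 3, 8                                               # FN and TP counts from Figure 3600a below
false_negative_rate = fn / (fn + tp)                        # Equation 3600d
print(f"False Negative Rate = {false_negative_rate:.1%}")   # prints: False Negative Rate = 27.3%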

Figure 3600a shows the distribution of TN, FN, FP, and TP for an example in which the classification threshold is set in the middle.

Figure 3600a. The distribution of TN, FN, FP and TP with the threshold at the middle. Here, TN = 9, FN = 3, TP = 8, and FP = 4. The TN is in red, while the TP is in green. 

Therefore, we can get,

Precision =  8/(8+4) = 66.7%

Recall = 8/(8+3) = 72.7%

If we increase the threshold as shown in Figure 3600b, then we have more FN but fewer FP.

Figure 3600b. The distribution of TN, FN, FP and TP with a higher threshold. Here, TN = 12, FN = 4, TP = 7, and FP = 1. The TN is in red, while the TP is in green. 

Therefore, we can get,

Precision =  7/(7+1) = 87.5%

Recall = 7/(7+4) = 63.6%

As we can see, the precision increases when the threshold increases, while the recall decreases. In this case, the examples classified as positive are more likely to be correct; however, the model misses more of the actual positive examples.
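The trade-off above can be reproduced with a short sketch. The score values here are hypothetical and were chosen only so that thresholds of 0.5 and 0.7 give the same TN/FN/FP/TP counts as Figures 3600a and 3600b:

import numpy as np

# Hypothetical ground-truth labels (0 = negative, 1 = positive) and model scores.
y_true = np.array([0] * 13 + [1] * 11)
scores = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45,   # 9 low-score negatives
                   0.55, 0.60, 0.65, 0.80,                                  # 4 higher-score negatives
                   0.20, 0.30, 0.40, 0.60,                                  # 4 lower-score positives
                   0.75, 0.80, 0.85, 0.90, 0.92, 0.95, 0.98])               # 7 high-score positives

def precision_recall(y_true, scores, threshold):
    """Threshold the scores into hard predictions and return (precision, recall)."""
    y_pred = (scores >= threshold).astype(int)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return tp / (tp + fp), tp / (tp + fn)

for threshold in (0.5, 0.7):
    p, r = precision_recall(y_true, scores, threshold)
    print(f"threshold = {threshold}: precision = {p:.1%}, recall = {r:.1%}")
# threshold = 0.5: precision = 66.7%, recall = 72.7%
# threshold = 0.7: precision = 87.5%, recall = 63.6%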

=================================================================================