Negative Log Likelihood (NLL)
- Python and Machine Learning for Integrated Circuits -
- An Online Book -
http://www.globalsino.com/ICs/



=================================================================================

Negative log likelihood (NLL) is a commonly used mathematical function in statistics and machine learning, particularly in probabilistic modeling and maximum likelihood estimation. It measures the goodness of fit between a probability distribution (usually a model's predicted distribution) and a set of observed data points. Three related quantities are involved:

  1. Likelihood: In statistics, the likelihood function measures how well a probability distribution or statistical model explains the observed data. Given a set of observed data points (often denoted as x) and a probability distribution or model parameterized by θ, the likelihood L(θ | x) measures the probability of observing the given data under the assumed model.

  2. Log Likelihood: To simplify calculations and avoid numerical underflow/overflow issues, it's common to work with the logarithm of the likelihood function. This is called the log likelihood and is denoted as log L(θ | x).
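The underflow problem is easy to demonstrate. The sketch below (a hypothetical example with made-up probabilities) multiplies many small per-observation probabilities directly, which underflows to zero in double precision, while the sum of logarithms remains well behaved:

```python
import math

# Hypothetical data: 2000 independent observations, each assigned
# probability 0.01 by the model.
probs = [0.01] * 2000

# Direct product of probabilities: 0.01**2000 = 1e-4000, far below the
# smallest representable double, so the result underflows to 0.0.
likelihood = math.prod(probs)
print(likelihood)  # 0.0

# Sum of log probabilities: 2000 * ln(0.01) ≈ -9210.34, perfectly
# representable as a double.
log_likelihood = sum(math.log(p) for p in probs)
print(log_likelihood)  # ≈ -9210.34
```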

  3. Negative Log Likelihood (NLL): To turn the measure of fit into a loss function (something to be minimized), the negative log likelihood is often used. It's simply the negative of the log likelihood: -log L(θ | x).
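The three quantities above can be computed side by side. A minimal sketch, assuming a Bernoulli coin-flip model with an illustrative parameter θ = 0.7 and made-up data (7 heads in 10 flips):

```python
import math

# Hypothetical observed data: 1 = heads, 0 = tails (7 heads, 3 tails).
data = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]
theta = 0.7  # assumed model parameter P(heads)

# 1. Likelihood L(theta | x): product of per-observation probabilities.
likelihood = math.prod(theta if x == 1 else 1 - theta for x in data)

# 2. Log likelihood log L(theta | x): sum of log probabilities.
log_likelihood = sum(math.log(theta if x == 1 else 1 - theta) for x in data)

# 3. Negative log likelihood: -log L(theta | x), a loss to be minimized.
nll = -log_likelihood

print(likelihood)  # ≈ 0.00222 (0.7**7 * 0.3**3)
print(nll)         # ≈ 6.109
```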

The idea behind using the negative log likelihood as a loss function is to find the model parameters (θ) that maximize the likelihood of observing the given data. Maximizing the likelihood is equivalent to minimizing the negative log likelihood.
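This equivalence can be checked directly. The sketch below (same hypothetical coin-flip data as above) finds the θ that minimizes the NLL by a simple grid search; the minimizer coincides with the maximum likelihood estimate, which for Bernoulli data is the sample mean:

```python
import math

# Hypothetical data: 7 heads out of 10 flips, so the analytic MLE is 0.7.
data = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]

def nll(theta):
    """Negative log likelihood of Bernoulli data under parameter theta."""
    return -sum(math.log(theta if x == 1 else 1 - theta) for x in data)

# Grid search over (0, 1): minimizing the NLL is the same as maximizing
# the likelihood, so the minimizer is the MLE.
thetas = [i / 1000 for i in range(1, 1000)]
theta_hat = min(thetas, key=nll)
print(theta_hat)  # 0.7
```

In practice the minimization is done with gradient-based optimizers rather than a grid, but the principle is the same.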

Figure 3863a shows the negative log likelihood (NLL), obtained by negating the log-likelihood values. The NLL is widely used in optimization problems because minimizing it is equivalent to maximizing the likelihood.

Figure 3863a. Negative log likelihood (NLL), illustrating the concave nature of the log-likelihood function of an exponential family distribution. (Python code)

=================================================================================