PythonML
Generative Learning Models
- Python Automation and Machine Learning for ICs -
- An Online Book -

http://www.globalsino.com/ICs/  


=================================================================================

Generative learning models are a class of machine learning models that aim to generate new data that is similar to a given dataset. These models learn to capture the underlying patterns, structures, and statistical properties of the data and then use that knowledge to create new, previously unseen data samples. Generative learning models are used in a wide range of applications, including image generation, text generation, speech synthesis, and more.

Table 3849. Generative learning models.

Type: Explicit Generative Models
         Principle: These models explicitly model the probability distribution of the data, allowing them to generate new samples by sampling from this distribution.
         Models:
                  • Probabilistic models: Gaussian Discriminant Analysis (GDA), Gaussian Mixture Models (GMMs), which model data using probability distributions, Hidden Markov Models (HMMs), and Naive Bayes.
                  • Autoencoders: Variational Autoencoders (VAEs), which can be used for generative purposes by sampling from the latent space, and other autoencoder variants.
         Applications:
                  • Image generation: Creating realistic images that resemble those in a training dataset.
                  • Text generation: Generating human-like text, which is used in chatbots, language translation, and content generation.
                  • Data augmentation: Creating synthetic data to increase the size of a training dataset.
                  • Anomaly detection: Identifying unusual or outlier data points by generating samples and comparing them to the real data.
                  • Style transfer: Altering the style of an input image or text while preserving its content.

Type: Implicit Generative Models
         Principle: These models do not explicitly model the data distribution but instead learn a mapping from a simple distribution, usually random noise, to the data distribution.
         Models:
                  • Generative Adversarial Networks (GANs): GANs consist of two neural networks, a generator and a discriminator, which are trained in a competitive manner. The generator tries to produce data that is indistinguishable from real data, while the discriminator tries to tell real data from fake data.
                  • Normalizing Flows: These models aim to learn a series of invertible transformations that map a simple distribution (e.g., Gaussian) to the target data distribution.
                  • PixelCNN and PixelRNN: These models are used for image generation by modeling the conditional probability of each pixel given the previous pixels in an autoregressive manner.
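
As a concrete illustration of the explicit case in Table 3849 (modeling the data distribution and then sampling from it), the short sketch below fits a Gaussian Mixture Model to toy two-dimensional data and draws new, previously unseen samples from it. It is a minimal sketch assuming scikit-learn's GaussianMixture; the toy data, component count, and sample sizes are illustrative choices, not values from this book.

         import numpy as np
         from sklearn.mixture import GaussianMixture

         # Toy training data: points scattered around two centers (illustrative values)
         rng = np.random.default_rng(0)
         X = np.vstack([rng.normal([0, 0], 0.5, size=(300, 2)),
                        rng.normal([4, 4], 1.0, size=(300, 2))])

         # Explicitly model p(x) as a mixture of Gaussians ...
         gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

         # ... then generate new data by sampling from the learned distribution
         X_new, component_labels = gmm.sample(50)
         print(X_new[:5])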

The joint likelihood for generative learning models can be given by,

          \ell(\theta) = \log \prod_{i=1}^{n} p\big(x^{(i)}, y^{(i)}; \theta\big) -------------------------------- [3849a]

                       = \log \prod_{i=1}^{n} p\big(x^{(i)} \mid y^{(i)}; \theta\big) \, p\big(y^{(i)}; \theta\big) -------------------------------- [3849b]
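
For example, for Gaussian Discriminant Analysis (GDA) in Table 3849 the two factors in [3849b] are typically modeled as

          y^{(i)} \sim \mathrm{Bernoulli}(\phi), \qquad x^{(i)} \mid y^{(i)} = 0 \sim \mathcal{N}(\mu_0, \Sigma), \qquad x^{(i)} \mid y^{(i)} = 1 \sim \mathcal{N}(\mu_1, \Sigma)

and the parameters \phi, \mu_0, \mu_1, and \Sigma are chosen to maximize [3849b], which yields the familiar closed-form estimates (class frequencies, class means, and a pooled covariance matrix).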

Gaussian Mixture Model. Code:

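The original script is provided through a separate file link on the page and is not reproduced here; the following is a minimal sketch written to be consistent with the notes below. It assumes scikit-learn's GaussianMixture and matplotlib; the sample sizes, grid resolution, random seed, and the component used for the posterior probability are illustrative choices rather than the book's original values.

         import numpy as np
         import matplotlib.pyplot as plt
         from sklearn.mixture import GaussianMixture

         # Synthetic data: two multivariate normal clusters (positive and negative examples)
         np.random.seed(0)
         positive = np.random.multivariate_normal([3, 3], [[1, 0.5], [0.5, 1]], 200)
         negative = np.random.multivariate_normal([7, 7], [[1, -0.5], [-0.5, 1]], 200)
         X = np.vstack([positive, negative])

         # Fit a GMM; n_components and covariance_type influence the shape of the boundary
         gmm = GaussianMixture(n_components=2, covariance_type='full', random_state=0)
         gmm.fit(X)

         # Grid covering the data, used to evaluate the model for plotting
         xx, yy = np.meshgrid(np.linspace(X[:, 0].min() - 1, X[:, 0].max() + 1, 300),
                              np.linspace(X[:, 1].min() - 1, X[:, 1].max() + 1, 300))
         grid = np.column_stack([xx.ravel(), yy.ravel()])

         # Iso-value contour lines of the GMM probability density (see note 5)
         contour_levels = 10
         density = np.exp(gmm.score_samples(grid)).reshape(xx.shape)
         plt.contour(xx, yy, density, levels=contour_levels, cmap='viridis')

         # Posterior probability of one component (assumed here to cover the positive cluster)
         decision_boundary = gmm.predict_proba(grid)[:, 0]

         # Decision boundary: the contour where the posterior probability equals 0.5
         plt.contour(xx, yy, decision_boundary.reshape(xx.shape),
                     levels=[0.5], linewidths=2, colors='green')

         plt.scatter(positive[:, 0], positive[:, 1], s=10, label='Positive examples')
         plt.scatter(negative[:, 0], negative[:, 1], s=10, label='Negative examples')
         plt.legend()
         plt.show()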

In the script:

         1. "plt.contour" is used to plot the decision boundary:

                  plt.contour(xx, yy, decision_boundary.reshape(xx.shape), levels=[0.5], linewidths=2, colors='green').

         2. The decision boundary, where the predicted probability equals 0.5 (i.e., where the two classes are equally likely), is shown as a solid green line.

         3. The n_components (the number of Gaussian components in the GMM) and covariance_type (the type of covariance for the components) parameters can be changed to influence the linearity of the decision boundary. Setting n_components to a higher value can result in a more complex, nonlinear decision boundary, while a lower value tends to give a simpler, more linear one. The covariance_type options are "full" (full covariance matrices), "tied" (all components share the same covariance), "diag" (diagonal covariance matrices), and "spherical" (spherical covariance, where the variance is the same in all dimensions). Using "diag" or "spherical" can lead to a more linear decision boundary than "full" or "tied" (see the comparison sketch after these notes).

         4. The decision boundary here is not linear because GMMs, by default, model data using Gaussian components, which can capture complex, non-linear relationships.

         5. The contour lines represent iso-values, which are lines connecting points of equal values. Whether higher or lower contour values are considered "better" depends on the specific context and the data you are visualizing:

                  For Probability Distributions: Here the contour lines represent the probability density of the Gaussian Mixture Model (GMM). Higher contour values correspond to regions where data points are more concentrated, and lower values to regions with lower data density. Higher contour values are therefore often considered "better", because they mark regions to which the GMM assigns higher probability.

                  For Error Surfaces in Optimization: In some optimization contexts, like minimizing loss functions in machine learning, the interpretation is different. In such cases, lower contour values are typically better. These contour lines represent the loss or error function, and you aim to find the parameters that minimize this error. So, you want to reach the lowest contour values in the optimization process.

         6. The contour_levels variable controls the number of contour lines, and thus, the decision boundary lines.

         7. Covariance matrix choice for synthetic data: using the same covariance matrix (sigma) for both the positive and negative classes is a common simplification for illustration, because it creates a clear and visually distinguishable separation between the two classes. In practice, it is common to use different covariance matrices for different classes when dealing with real-world data, where the data distribution of each class may vary.

         8. Two multivariate normal distributions are generated to represent the positive and negative classes (Positive Examples from a multivariate normal distribution with a mean of [3, 3] and a covariance matrix [[1, 0.5], [0.5, 1]], and Negative Examples from a different multivariate normal distribution with a mean of [7, 7] and a different covariance matrix [[1, -0.5], [-0.5, 1]]).

         9. Covariance Matrix Choice: In the script, the same covariance matrix is not used for both classes. Instead, the choice is made to use different covariance matrices to create distinct distributions for the positive and negative classes. This is important because it helps illustrate how the GMM can capture two different distributions with different shapes and orientations.

         Note: If the GMM is specified with covariance_type='spherical', it assumes that all components of the GMM have spherical covariance matrices. In other words, each component is modeled as a circle with the same variance in all dimensions.

                 If no covariance_type is specified, the GMM uses the default option, 'full'. In this case, the GMM allows for more flexible covariance matrices: each component can have its own full covariance matrix, which includes information about the correlation between dimensions.
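
The effect of covariance_type described in note 3 and in the Note above can be checked with a small comparison loop. The sketch below is an added illustration, again assuming scikit-learn's GaussianMixture: it refits the model on the same style of synthetic two-cluster data with each covariance option and prints the Bayesian Information Criterion (BIC) for each fit.

         import numpy as np
         from sklearn.mixture import GaussianMixture

         # Same style of synthetic two-cluster data as in the sketch above
         np.random.seed(0)
         X = np.vstack([np.random.multivariate_normal([3, 3], [[1, 0.5], [0.5, 1]], 200),
                        np.random.multivariate_normal([7, 7], [[1, -0.5], [-0.5, 1]], 200)])

         # Refit with each covariance structure; 'diag' and 'spherical' constrain the
         # components and tend to give smoother, more nearly linear decision boundaries
         # than 'full' or 'tied' (note 3). BIC gives a rough measure of fit quality.
         for cov_type in ['full', 'tied', 'diag', 'spherical']:
             gmm = GaussianMixture(n_components=2, covariance_type=cov_type, random_state=0)
             gmm.fit(X)
             print(cov_type, 'BIC =', round(gmm.bic(X), 1))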

       

        

=================================================================================