ResNet (Residual Network)

=================================================================================

Table 3363a lists some standard machine learning algorithms to choose from.

Table 3363a. Some "standard" machine learning algorithms to choose from.

ML task | Standard algorithms | Description
Image classification | ResNet (originally from Microsoft Research; an implementation open-sourced by Google) | ResNet, which stands for Residual Network, is a type of convolutional neural network (CNN) that introduced the concept of "residual learning" to ease the training of networks that are substantially deeper than those used previously. This architecture has become a foundational model for many computer vision tasks (a minimal residual-block sketch is given below the table).
Text classification | FastText (open-sourced by Facebook Research) | FastText is an algorithm that extends the Word2Vec model to consider subword information, making it especially effective for languages with rich morphology and for handling rare words in large corpora. It is primarily used for text classification, benefiting from its speed and efficiency in training and prediction.
Text summarization | Transformer and BERT (open-sourced by Google) | The Transformer model introduces an architecture that relies solely on attention mechanisms, dispensing with recurrence and convolutions entirely. BERT (Bidirectional Encoder Representations from Transformers) builds on the Transformer by pre-training on a large corpus of text and then fine-tuning for specific tasks. Both are effective for complex language-understanding tasks, including summarization.
Image generation | GANs or conditional GANs | GANs consist of two neural networks, a generator and a discriminator, which compete against each other and thereby improve each other's capabilities. Conditional GANs extend this concept by conditioning the generation process on additional information, such as class labels or data from other modalities, allowing more control over the generated outputs. This approach has been revolutionary in generating realistic images and other types of data (see the conditional GAN sketch below the table).
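
To make the residual-learning idea concrete, the following is a minimal sketch of a ResNet-style "basic block" in Python. It assumes PyTorch is installed; the two-convolution layout, channel count, and input size are illustrative choices in the spirit of the common ResNet-18/34 design, not code from any particular ResNet release.

import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Two 3x3 convolutions with a skip (identity) connection: output = F(x) + x."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x                          # keep the input for the skip connection
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out + identity                  # residual learning: the stacked layers only learn F(x) = H(x) - x
        return self.relu(out)

if __name__ == "__main__":
    block = BasicResidualBlock(channels=64)
    x = torch.randn(1, 64, 56, 56)            # (batch, channels, height, width)
    print(block(x).shape)                      # torch.Size([1, 64, 56, 56])

Because the skip connection passes the input through unchanged, gradients can flow directly to earlier layers, which is what eases the training of very deep networks.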
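
Similarly, the conditional GAN idea of steering generation with a class label can be sketched as a pair of small networks. This is a minimal illustration assuming PyTorch; the latent dimension, number of classes, and MLP layer sizes are arbitrary choices for demonstration, not a production architecture.

import torch
import torch.nn as nn

NUM_CLASSES, LATENT_DIM, IMG_DIM = 10, 100, 28 * 28

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(NUM_CLASSES, NUM_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + NUM_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, IMG_DIM), nn.Tanh(),
        )

    def forward(self, z, labels):
        # Conditioning: concatenate the noise vector with an embedding of the class label.
        return self.net(torch.cat([z, self.label_emb(labels)], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(NUM_CLASSES, NUM_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM + NUM_CLASSES, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, img, labels):
        # The discriminator also sees the label, so it judges "real/fake given this class".
        return self.net(torch.cat([img, self.label_emb(labels)], dim=1))

if __name__ == "__main__":
    G, D = Generator(), Discriminator()
    z = torch.randn(4, LATENT_DIM)
    labels = torch.randint(0, NUM_CLASSES, (4,))
    fake = G(z, labels)                        # samples generated conditioned on the labels
    print(fake.shape, D(fake, labels).shape)   # torch.Size([4, 784]) torch.Size([4, 1])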

         

=================================================================================