 
Energy Consumption in Computation of Machine Learning
- Python and Machine Learning for Integrated Circuits -
- An Online Book -



=================================================================================

Which layers consume the most computation energy in a neural network depends on the network's architecture, the size of the layers, and the task being performed. In general, the most energy-hungry layers are the ones with the most parameters and the most computation-intensive operations. Here are some general observations:

  1. Fully Connected Layers (FC): Fully connected layers, also known as dense layers, typically have the highest number of parameters because each neuron in one layer is connected to every neuron in the previous layer. As a result, they often require a significant amount of computation energy, especially when they have a large number of neurons.

  2. Convolutional Layers (CONV): Convolutional layers can also be computationally intensive, particularly when they have many filters (kernels) and the input data has high spatial resolution. The convolution operation involves a large number of multiply-accumulate (MAC) operations, which can be energy-intensive (a rough parameter/MAC estimate for FC, CONV, and LSTM layers is sketched after this list).

  3. Pooling Layers: Pooling layers, such as max-pooling or average-pooling, are generally less computationally intensive than convolutional and fully connected layers. They downsample feature maps by taking the maximum or average of values in local regions, which is a simpler operation in comparison.

  4. Recurrent Layers (in recurrent neural networks, RNNs): In RNNs, recurrent layers can be computationally expensive, especially when the network processes sequences with many time steps. Because the recurrent operations are repeated at every time step, the computation (and its energy cost) accumulates over the length of the sequence.

  5. LSTM and GRU Layers: Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) layers, which are commonly used in sequence modeling tasks, have a more complex structure than standard recurrent layers and can be more computationally intensive due to their gating mechanisms.

  6. Attention Mechanisms: Attention mechanisms, often used in models such as Transformers, can be highly computation-intensive because every token interacts with every other token (the cost of self-attention grows quadratically with sequence length) and because of the large number of parameters involved.
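
To make the comparison between the layer types above more concrete, the short Python sketch below estimates parameter counts and multiply-accumulate (MAC) counts for a fully connected layer, a convolutional layer, and an LSTM layer. The layer sizes in the example are illustrative assumptions, not values taken from this page, and MAC counts are only a rough proxy for energy, since actual energy also depends on memory traffic and the hardware.

# Back-of-the-envelope sketch (assumed layer sizes, not a measurement tool):
# it counts parameters and multiply-accumulate (MAC) operations per forward pass
# for the layer types discussed above.

def fc_macs(n_in, n_out):
    """Fully connected layer: every output neuron sees every input."""
    params = n_in * n_out + n_out            # weights + biases
    macs = n_in * n_out                      # one MAC per weight
    return params, macs

def conv_macs(h, w, c_in, c_out, k):
    """Convolutional layer, assuming stride 1 and 'same' padding."""
    params = k * k * c_in * c_out + c_out    # kernel weights + biases
    macs = h * w * c_out * (k * k * c_in)    # MACs per output position x number of positions
    return params, macs

def lstm_macs(n_in, n_hidden, time_steps):
    """LSTM layer: four gates, recomputed at every time step."""
    params = 4 * (n_hidden * (n_in + n_hidden) + n_hidden)
    macs_per_step = 4 * n_hidden * (n_in + n_hidden)
    return params, macs_per_step * time_steps

print("FC   4096 -> 1000             :", fc_macs(4096, 1000))
print("CONV 56x56x64, 128 3x3 kernels:", conv_macs(56, 56, 64, 128, 3))
print("LSTM 128 -> 256, 100 steps    :", lstm_macs(128, 256, 100))

Running the sketch shows the convolutional layer dominating the MAC count while the fully connected layer dominates the parameter count, which matches the observations above.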

Table 3826. Example of a hidden layer in a deep learning neural network.

Application example    Feedforward neural network for image classification *
Hidden layer details   • Number of nodes: one hidden layer with 128 nodes; the number of nodes in the hidden layer is a hyperparameter that can be adjusted based on the problem and the complexity of the task.
                       • Activation function: ReLU (Rectified Linear Unit) is commonly used in hidden layers, so ReLU activation is applied to the output of each node in this layer.
Variables/nodes        128 nodes (ReLU activation)
Included layers        Convolutional Layer (CONV)
                       Pooling Layer (e.g., MaxPooling)
                       Convolutional Layer (CONV)
                       Pooling Layer
                       Fully Connected Layer (FC)
Energy usage           99% of the energy is consumed by the Convolutional Layers (CONV) and the Fully Connected Layer (FC)
* Feedforward neural network for image classification: this example classifies handwritten digits from the MNIST dataset, where each image is a 28x28-pixel grayscale image of a digit (0 through 9). A minimal code sketch of this network follows below.
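
Below is a minimal sketch of the network summarized in Table 3826, assuming TensorFlow/Keras is available. The hidden fully connected layer of 128 ReLU nodes and the CONV/pooling/CONV/pooling/FC sequence follow the table; the filter counts (32 and 64) and the 3x3 kernel size are illustrative assumptions, since the table does not specify them.

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),                         # 28x28 grayscale MNIST image
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),     # Convolutional Layer (CONV)
    tf.keras.layers.MaxPooling2D((2, 2)),                      # Pooling Layer (MaxPooling)
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),     # Convolutional Layer (CONV)
    tf.keras.layers.MaxPooling2D((2, 2)),                      # Pooling Layer
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),             # hidden FC layer: 128 nodes, ReLU
    tf.keras.layers.Dense(10, activation="softmax"),           # 10 output classes (digits 0-9)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()   # the per-layer parameter counts hint at where the compute (and energy) goes

# To train on MNIST (downloads the dataset on first use):
# (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
# model.fit(x_train[..., None] / 255.0, y_train, epochs=1, batch_size=64)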

 

=================================================================================