Perceptron Algorithm

=================================================================================

The Perceptron algorithm is a type of supervised learning algorithm used for binary classification tasks. It is one of the earliest forms of artificial neural networks, developed by Frank Rosenblatt in 1957. The Perceptron is a simplified model of a biological neuron and serves as a building block for more complex neural network architectures.

An overview of the Perceptron algorithm:

  1. Input: The algorithm takes a set of input features, each with an associated weight. These weights represent the importance or influence of each feature on the classification decision.

  2. Weighted Sum: It computes the weighted sum of the input features and weights, which can be expressed as:

                        Weighted_sum = w1 * x1 + w2 * x2 + ... + wn * xn -------------------------------- [3870a]

    Where:

    • w1, w2, ..., wn are the weights for each feature.
    • x1, x2, ..., xn are the input feature values.
  3. Hypothesis: The hypothesis function is given by,

                        h(x) = g(w1 * x1 + w2 * x2 + ... + wn * xn) -------------------------------- [3870b]

    Where:

    • g is the step activation function that compares the weighted sum against a Threshold (see step 4 and Equation 3870d).
    • w1, w2, ..., wn are the weights for each feature.
    • x1, x2, ..., xn are the input feature values.

    The general update rule for θj is given by,

                        θj := θj + α * (y - h(x)) * xj ------------------------ [3870c]

    where α is the learning rate and y is the target value. Equation 3870c indicates that the weight update is given by,

                        wi := wi + α * (actual value - estimate) * xi ------------------------------- [3870cb]

  4. Activation Function: The weighted sum is passed through an activation function. In the original Perceptron algorithm, this activation function is a step function, which means the output is binary (typically -1 or 1), depending on whether the weighted sum is above or below a certain threshold (as shown in Figure 3870a (b)).

                        Output = 1     (if Weighted_sum >= Threshold) ---------------------------- [3870d]
                        Output = -1    (otherwise)

  5. Learning: The Perceptron is trained using a supervised learning approach. During training, it compares its output to the target output (the correct classification) and adjusts its weights to reduce the error. The weight adjustment is done using a simple learning rule:

                        Δwi = Learning_rate * (Target - Output) * xi

    where:

    • Δwi is the change in weight for feature i.
    • Learning_rate is a hyperparameter that controls the step size of weight updates.
    • Target is the correct classification (1 or -1).
    • Output is the Perceptron's current prediction.
    • xi is the value of feature i, e.g. x1 and x2 shown in Figure 3870a (a).
  6. Iteration: The training process involves iterating through the dataset multiple times (epochs) until the Perceptron converges to a solution where it correctly classifies all the training examples or a predefined stopping criterion is met. A runnable sketch of these six steps follows this list.
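The six steps above map directly onto a short implementation. Below is a minimal sketch in Python with NumPy; the toy dataset, learning rate (0.1), and epoch limit (100) are illustrative assumptions rather than values taken from this book.

import numpy as np

def step(z, threshold=0.0):
    # Step activation (step 4): +1 if the weighted sum reaches the threshold, else -1.
    return 1 if z >= threshold else -1

def train_perceptron(X, y, lr=0.1, epochs=100):
    # X: (n_samples, n_features) array of input features (step 1); y: labels in {-1, +1}.
    w = np.zeros(X.shape[1])   # one weight per feature
    b = 0.0                    # bias, acting as a learnable negative threshold
    for _ in range(epochs):    # step 6: iterate over the dataset in epochs
        errors = 0
        for xi, target in zip(X, y):
            output = step(np.dot(w, xi) + b)  # steps 2-4: weighted sum + activation
            delta = lr * (target - output)    # step 5: learning rule
            w += delta * xi                   # wi := wi + alpha * (target - output) * xi
            b += delta
            errors += int(delta != 0.0)
        if errors == 0:   # converged: every training example classified correctly
            break
    return w, b

# Toy linearly separable data (assumed for illustration only).
X = np.array([[0.0, 0.2], [0.3, 0.1], [1.0, 1.2], [0.9, 0.8]])
y = np.array([-1, -1, 1, 1])
w, b = train_perceptron(X, y)
print("weights:", w, "bias:", b)
print("prediction for [0.8, 0.9]:", step(np.dot(w, np.array([0.8, 0.9])) + b))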

         [Figure 3870a (a): image not shown]

         [Figure 3870a (b): image not shown]

Figure 3870a. Perceptron algorithm: (a) 2-dimensional (Python code), and (b) 1-dimensional (Python code). The Threshold, mentioned in Equation 3870b, is about 1.8. The boundary in (a) is the decision boundary (also called the hypothesis), where h(x), the prediction or output, is equal to 0.
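The Python code behind Figure 3870a is linked rather than reproduced here; the following is only a guessed reconstruction of how a plot like panel (a) could be drawn, with placeholder weights, bias, and sample points.

import numpy as np
import matplotlib.pyplot as plt

w, b = np.array([1.0, 1.0]), -1.8   # assumed weights; -b plays the role of the ~1.8 threshold
X = np.array([[0.2, 0.4], [0.5, 0.3], [1.2, 1.5], [1.6, 0.9]])  # placeholder sample points
y = np.array([-1, -1, 1, 1])

x1 = np.linspace(0.0, 2.0, 100)
x2 = -(w[0] * x1 + b) / w[1]        # decision boundary: w1*x1 + w2*x2 + b = 0
plt.plot(x1, x2, "k--", label="decision boundary, h(x) = 0")
plt.scatter(X[y == 1, 0], X[y == 1, 1], marker="o", label="class +1")
plt.scatter(X[y == -1, 0], X[y == -1, 1], marker="x", label="class -1")
plt.xlabel("x1")
plt.ylabel("x2")
plt.legend()
plt.show()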

Note that the Perceptron algorithm does not use the sigmoid function as its activation function; instead, it uses a step function (also known as a threshold function). A numerical contrast between the two is sketched below.
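The following small illustrative sketch (not code from this book) shows the difference: the step function makes hard binary decisions, while the sigmoid produces smooth values between 0 and 1.

import numpy as np

def step(z, threshold=0.0):
    # Hard threshold: outputs only -1 or +1.
    return np.where(z >= threshold, 1, -1)

def sigmoid(z):
    # Smooth squashing to (0, 1), as used in logistic regression.
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-2.0, -0.1, 0.0, 0.1, 2.0])
print("step   :", step(z))      # [-1 -1  1  1  1]: hard binary decisions
print("sigmoid:", sigmoid(z))   # gradual values between 0 and 1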

A perceptron, as a single-layer neural network, is only capable of learning linearly separable patterns in data. It cannot effectively handle datasets that require non-linear decision boundaries for accurate classification. However, the limitations of perceptrons can be overcome by using multi-layer perceptrons (MLPs) or more complex neural network architectures, allowing for the learning of non-linear relationships in data. 
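As a concrete illustration of this limitation, XOR is the classic pattern that is not linearly separable, and the perceptron update rule from above never converges on it. A minimal sketch (the learning rate and epoch limit are arbitrary choices):

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, 1, 1, -1])   # XOR labels in {-1, +1}

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(1000):
    errors = 0
    for xi, target in zip(X, y):
        output = 1 if np.dot(w, xi) + b >= 0 else -1
        delta = lr * (target - output)   # same perceptron learning rule as above
        w += delta * xi
        b += delta
        errors += int(delta != 0.0)
    if errors == 0:
        break
print("misclassifications in final epoch:", errors)  # stays above 0: no linear separator exists

A multi-layer perceptron with even a single hidden layer can represent the XOR boundary; in practice this can be tried with, for example, scikit-learn's MLPClassifier.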

============================================

=================================================================================