Adaline Example and Exercise

Example:

In [1], the same example is considered that was used earlier for the Perceptron with multiple output neurons. We use bipolar output neurons and the training set:

(class 1)



(class 2)



(class 3)



(class 4)



It is clear that N = 2, Q = 8, and the number of classes is 4. The number of output neurons is chosen to be M = 2, so that 2^M = 4 classes can be represented.
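With M = 2 bipolar output neurons, each class is assigned one of the 2^M = 4 possible bipolar code pairs. The particular class-to-code assignment below is an assumption for illustration, since the document's own target table is not reproduced here:

```python
# Hypothetical bipolar target encoding for 4 classes with M = 2 outputs.
# The specific assignment of codes to classes is assumed, not taken
# from the source; any one-to-one assignment of the 2^M codes works.
targets = {
    1: (-1, -1),  # class 1
    2: (-1,  1),  # class 2
    3: ( 1, -1),  # class 3
    4: ( 1,  1),  # class 4
}
```

Each training vector is then paired with the 2-component bipolar code of its class, and each output neuron learns one component of the code.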

Our exact calculation of the weights and bias for the case of a single output neuron can be extended to the case of multiple output neurons.
One can then obtain the following exact results for the weights and biases:




Using these exact results, we can easily see how good or bad our iterative solutions are.
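The exact weights and biases referred to above come from minimizing the mean squared error over the training set, which for a linear unit is an ordinary least-squares problem. A minimal sketch of that computation follows; the training vectors and targets below are placeholders (the document's actual data are not reproduced here), and the bias is folded in as an extra weight on a constant input of 1:

```python
import numpy as np

# Placeholder training data: X is Q x N (training vectors),
# T is Q x M (bipolar targets). These values are assumed for
# illustration only.
X = np.array([[ 1.0,  1.0], [ 1.0,  2.0],
              [ 2.0, -1.0], [ 2.0,  0.0],
              [-1.0,  2.0], [-2.0,  1.0],
              [-1.0, -1.0], [-2.0, -2.0]])
T = np.array([[-1, -1], [-1, -1],
              [-1,  1], [-1,  1],
              [ 1, -1], [ 1, -1],
              [ 1,  1], [ 1,  1]], dtype=float)

# Augment each input with a constant 1 so the bias is learned
# as the last row of the solution.
Xa = np.hstack([X, np.ones((X.shape[0], 1))])

# Exact least-squares solution minimizing ||Xa @ Wb - T||^2.
Wb, *_ = np.linalg.lstsq(Xa, T, rcond=None)
W, b = Wb[:-1], Wb[-1]        # W is N x M, b has one entry per output
```

The same pseudoinverse computation works for any number of output neurons, which is the sense in which the single-output exact solution extends to multiple outputs.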

It should be remarked that the most robust set of weights and biases is determined only by a few training vectors that lie very close to the decision boundaries. In the Delta rule, however, all training vectors contribute in some way. Therefore the set of weights and biases obtained by the Delta rule is not necessarily the most robust.

The Delta rule usually gives convergent results if the learning rate is not too large. The resulting set of weights and biases typically leads to correct classification of all the training vectors, provided such a set exists. How close this set is to the best choice depends on the starting weights and biases, the learning rate, and the number of iterations. We find that for this example much better convergence can be obtained if the learning rate at step k is set to α = 1/k.
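The decaying learning rate α = 1/k can be sketched as follows. The training data here are assumed placeholders, and the random initialization and epoch count are choices of this sketch, not of the source:

```python
import numpy as np

# Placeholder training set: one vector per class, with assumed
# bipolar targets. Not the document's actual data.
X = np.array([[ 1.0,  1.0], [ 2.0, -1.0],
              [-1.0,  2.0], [-1.0, -1.0]])
T = np.array([[-1, -1], [-1,  1],
              [ 1, -1], [ 1,  1]], dtype=float)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(2, 2))   # N x M weight matrix
b = np.zeros(2)                          # one bias per output neuron

k = 1                                    # global step counter
for epoch in range(200):
    for x, t in zip(X, T):
        alpha = 1.0 / k                  # decaying rate alpha = 1/k
        y = x @ W + b                    # Step 1: linear output
        e = t - y                        # Step 2: error
        W += alpha * np.outer(x, e)      # Step 3: Widrow-Hoff update
        b += alpha * e
        k += 1
```

Because α shrinks over time, the early large steps are averaged out and the weights settle toward the least-squares solution rather than oscillating around it, which is why this schedule converges better than a fixed rate here.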



Exercise:


  1. Classify the following set of patterns using the LMS algorithm and an ADALINE network

The initial parameters are:


a) Calculate the weight vector W and the threshold b


SOLUTION

First Epoch:

For the first input/output pair


Step 1: Calculate the output using the network's own activation function:


Step 2: Calculate the error:


Step 3: Using the Widrow-Hoff learning rule, update the weight vector and the threshold:




For the second input/output pair


Step 1: Calculate the output using the network's own activation function:



Step 2: Calculate the error:




Step 3: Using the Widrow-Hoff learning rule, update the weight vector and the threshold:



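The three steps above can be sketched for a single input/output pair. All numbers below are assumed placeholders, since the exercise's actual initial parameters are not reproduced here:

```python
# One Widrow-Hoff (LMS) update for a single input/target pair.
# All numeric values are assumed for illustration only.
alpha = 0.1            # learning rate (assumed)
w = [1.0, -1.0]        # initial weight vector (assumed)
b = 0.5                # initial threshold/bias (assumed)
x = [1.0, 1.0]         # input pattern (assumed)
t = -1.0               # bipolar target (assumed)

# Step 1: output of the linear activation, y = w . x + b
y = sum(wi * xi for wi, xi in zip(w, x)) + b

# Step 2: error, e = t - y
e = t - y

# Step 3: Widrow-Hoff update, w <- w + alpha*e*x and b <- b + alpha*e
w = [wi + alpha * e * xi for wi, xi in zip(w, x)]
b = b + alpha * e
```

Repeating these three steps over every training pair completes one epoch; the epoch is repeated until the error is small enough.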

References:

[1] K. Ming Leung. ADALINE for Pattern Classification. Polytechnic University.


