
ADALINE Introduction

According to [1], the ADALINE, which stands for ADAptive LInear NEuron, comes with a learning rule that is capable, at least in principle, of finding a robust set of weights and biases. According to [2], ADALINE networks were developed by Bernie Widrow at Stanford University shortly after Rosenblatt developed the Perceptron. The name ADALINE is an acronym, but its meaning has changed slightly over the years [2]: initially it stood for ADAptive LInear NEuron, and it became ADAptive LINear Element when neural networks fell out of favor in the 1970s. According to [1], the network architecture of the ADALINE is basically the same as that of the Perceptron, and, like the Perceptron, the ADALINE is capable of classifying patterns into two or more categories. Bipolar neurons are also used. The ADALINE differs from the Perceptron in the way the network is trained and in the transfer function used for the output neurons during training: for the ADALINE, the transfer function during training is linear (the identity), and the thresholded output is used only for the final classification.
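To make that distinction concrete, here is a minimal Python sketch (not taken from [1] or [2]; the weights, bias, and input pattern are made-up values) contrasting the thresholded output a Perceptron trains on with the raw linear output an ADALINE trains on.

```python
import numpy as np

w = np.array([1.0, -0.5])    # hypothetical weights
b = 0.2                      # hypothetical bias
p = np.array([0.8, 0.3])     # hypothetical input pattern

n = w @ p + b                # net input n = w.p + b

perceptron_output = np.sign(n)   # Perceptron trains on the thresholded (bipolar) output
adaline_output = n               # ADALINE trains on the raw linear output

print(n, perceptron_output, adaline_output)
```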

ADALINE Architecture

ADALINE Architecture Description. This content is based on reference [1]. The general structure of an ADALINE-type network uses the following quantities:

p = input pattern
W = weight matrix
b = activation threshold (bias)
a = neuron output

The network output is given by a = Wp + b, that is, the linear (identity) transfer function applied to the net input n = Wp + b. For an ADALINE network with a single neuron and two inputs, the architecture reduces to a single linear combiner. As with the Perceptron, the decision boundary of the ADALINE network is obtained where n = 0, therefore Wp + b = 0. This line separates the input space into two regions: the output of the neuron is greater than zero on one side and less than zero on the other. The ADALINE network can therefore correctly classify linearly separable patterns into two categories. The neural architecture may also have a whole layer of neurons connected to R inputs through a weight matrix W; this network is often called MADALINE, or Multiple ADALINE, and it defines an output vector a with one element per output neuron.
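Below is a small Python sketch of the single-neuron, two-input case described above; the weight and bias values are hypothetical, chosen only to illustrate the decision rule n = Wp + b = 0.

```python
import numpy as np

W = np.array([[2.0, 1.0]])          # 1 x R weight matrix (hypothetical values)
b = np.array([-1.0])                # bias (hypothetical value)

def net_input(p):
    """n = W p + b (the ADALINE output before any thresholding)."""
    return W @ p + b

def classify(p):
    """Class +1 in the region where n > 0, class -1 where n < 0."""
    return 1 if net_input(p)[0] > 0 else -1

print(classify(np.array([1.0, 1.0])))    # n = 2  -> class +1
print(classify(np.array([-1.0, 0.0])))   # n = -3 -> class -1
```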

Learning Algorithm (LMS Rule or Widrow-Hoff)

The learning algorithm proceeds as follows:

1. An input pattern p is applied.
2. The output of the Adaptive Linear Combiner (ALC) is obtained and the error, that is, the difference with respect to the desired output, is calculated.
3. The weights are updated.
4. Steps 1 to 3 are repeated with all input vectors.
5. If the error is an acceptable value, stop; otherwise repeat the algorithm.

The Widrow-Hoff or LMS (Least Mean Square) learning rule, which the ADALINE network uses for its training, makes it possible to perform step 3. The network parameters are updated by means of the following equations:

For the weight vector W: W(k+1) = W(k) + 2α e(k) p(k)ᵀ
For the bias b: b(k+1) = b(k) + 2α e(k)
For the error e: e(k) = t(k) − a(k)

where α is known as the learning rate, such that 0 < α <= 1. A guideline for choosing this parameter comes from the input correlation matrix R = E[p pᵀ]: its eigenvalues λi determine the admissible range of α, that is, the algorithm converges for 0 < α < 1/λmax, where λmax is the largest eigenvalue of R. References: [1] Kim Seng Chia. Predicting the boiling point of
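The following Python sketch implements the Widrow-Hoff (LMS) updates given above. The training data, learning rate, and stopping tolerance are illustrative assumptions, not values from [1].

```python
import numpy as np

def train_adaline(P, T, alpha=0.05, max_epochs=100, tol=1e-3):
    """Train a single-layer ADALINE with the LMS (Widrow-Hoff) rule.

    P : (Q, R) array of Q input patterns with R features
    T : (Q, S) array of Q target vectors with S outputs
    """
    Q, R = P.shape
    S = T.shape[1]
    W = np.zeros((S, R))              # weight matrix
    b = np.zeros(S)                   # bias vector
    for epoch in range(max_epochs):
        sq_error = 0.0
        for p, t in zip(P, T):
            a = W @ p + b             # linear (identity) output used for training
            e = t - a                 # error with respect to the desired output
            W += 2 * alpha * np.outer(e, p)   # W(k+1) = W(k) + 2*alpha*e(k)*p(k)^T
            b += 2 * alpha * e                # b(k+1) = b(k) + 2*alpha*e(k)
            sq_error += float(e @ e)
        if sq_error / Q < tol:        # stop once the mean squared error is acceptable
            break
    return W, b

# Example usage on a toy, linearly separable problem (assumed data):
P = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
T = np.array([[1.0], [1.0], [-1.0], [-1.0]])
W, b = train_adaline(P, T)
print(np.sign(P @ W.T + b))           # thresholded outputs after training
```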

ADALINE Example and Exercise

Example: In [1], the authors consider the same example used before for the Perceptron with multiple output neurons. Bipolar output neurons are used, together with a training set of eight vectors distributed over four classes (class 1, class 2, class 3, and class 4); the vectors themselves are listed in [1]. It is clear that N = 2, Q = 8, and the number of classes is 4. The number of output neurons is chosen to be M = 2 so that 2^M = 4 classes can be represented. The exact calculation of the weights and biases for the case of a single output neuron can be extended to the case of multiple output neurons, which yields exact results for the weights and biases. Using these exact results, we can easily see how good or bad our iterative solutions are. It should be remarked that the most robust set of weights and biases is determined only by a few training vectors that lie very close to the decision boundaries. In the Delta rule, however, all training vectors contribute in some way; therefore the set of weights it finds will not, in general, be the most robust one.
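Since the original training vectors are not reproduced here, the Python sketch below uses eight made-up bipolar-style vectors (two per quadrant, an assumption, not the data from [1]) to illustrate the same setup: N = 2 inputs, Q = 8 patterns, and M = 2 output neurons encoding 2^M = 4 classes, with the minimum mean-square weights and biases obtained by a closed-form least-squares solve.

```python
import numpy as np

# Two points per class, one cluster per quadrant (assumed data); N = 2, Q = 8.
P = np.array([[ 1.0,  1.0], [ 1.2,  0.8],    # class 1
              [-1.0,  1.0], [-0.8,  1.2],    # class 2
              [-1.0, -1.0], [-1.2, -0.8],    # class 3
              [ 1.0, -1.0], [ 0.8, -1.2]])   # class 4

# Bipolar target code for each class, M = 2 outputs.
T = np.array([[ 1,  1], [ 1,  1],
              [-1,  1], [-1,  1],
              [-1, -1], [-1, -1],
              [ 1, -1], [ 1, -1]], dtype=float)

# Augment inputs with a constant 1 so the bias is solved together with W.
X = np.hstack([P, np.ones((P.shape[0], 1))])
Wb, *_ = np.linalg.lstsq(X, T, rcond=None)   # least-squares (minimum MSE) solution
W, b = Wb[:-1].T, Wb[-1]

print(np.sign(P @ W.T + b))   # thresholded outputs reproduce the class codes
```

Comparing these closed-form weights with the ones produced by an iterative LMS run (for example, the train_adaline sketch in the previous post) shows how close the iterative solution gets to the exact minimum mean-square solution.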