ADALINE Introduction

According to [1], ADALINE stands for Adaptive Linear Neuron, and it comes with a learning rule that is capable, at least in principle, of finding a robust set of weights and biases.

According to [2], ADALINE networks were developed by Bernie Widrow at Stanford University shortly after Rosenblatt developed the Perceptron.

The term ADALINE is an acronym, though its meaning has changed slightly over the years [2]. Initially it stood for ADAptive LInear NEuron; it became ADAptive LINear Element when neural networks fell out of favor in the 1970s.

According to [1], the architecture of the ADALINE NN is basically the same as that of the Perceptron, and like the Perceptron, the ADALINE is capable of classifying patterns into two or more categories. Bipolar neurons are also used.

The ADALINE differs from the Perceptron in the way the NNs are trained and in the form of the transfer function used for the output neurons. During training, the ADALINE's transfer function is taken to be the identity function.

However, after training, the transfer function is taken to be the bipolar Heaviside step function when the NN is used to classify input patterns. Thus the transfer function is:

$$
f(y_{in}) =
\begin{cases}
y_{in} & \text{during training} \\
+1 & \text{if } y_{in} \geq 0 \text{ (after training)} \\
-1 & \text{if } y_{in} < 0 \text{ (after training)}
\end{cases}
$$
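As an illustration, here is a minimal Python sketch of this two-mode transfer function (the function names are my own choices, not taken from [1]):

```python
def transfer_train(y_in):
    # During training, ADALINE passes the net input through unchanged
    # (the identity function).
    return y_in

def transfer_classify(y_in):
    # After training, the bipolar Heaviside step function maps the net
    # input to one of the two bipolar class labels, +1 or -1.
    return 1 if y_in >= 0 else -1
```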
We will first consider classification into two categories only, so the NN has only a single output neuron. Extension to the case of multiple categories is treated in the next section.
The total input received by the output neuron is given by:

$$
y_{in} = b + \sum_{i=1}^{n} x_i w_i,
$$

where $b$ is the bias, $w_i$ is the weight connecting input neuron $i$ to the output neuron, and $x_i$ is the $i$-th component of the input pattern.
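Continuing the sketch, the net input and the resulting two-category classification could be computed as follows (the variable names and the example weights are illustrative assumptions, not values from [1]):

```python
import numpy as np

def net_input(x, w, b):
    # Total input received by the output neuron: y_in = b + sum_i x_i * w_i
    return b + np.dot(x, w)

def classify(x, w, b):
    # Apply the bipolar Heaviside step to the net input to assign the
    # pattern to one of the two categories (+1 or -1).
    return 1 if net_input(x, w, b) >= 0 else -1

# Example with made-up weights and a bipolar input pattern
w = np.array([0.5, -0.3])
b = 0.1
x = np.array([1, -1])
print(net_input(x, w, b))  # 0.9
print(classify(x, w, b))   # 1
```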

References:

[1] K. Ming Leung. ADALINE for Pattern Classification. Polytechnic University.

[2] Qiu Tengfei, Wen Xuhui. Adaptive-Linear-Neuron Based Dead-Time Effects Compensation Scheme for PMSM Drives. Chinese Academy of Sciences, 2015.
