REPAIRS TO GLVQ - A NEW FAMILY OF COMPETITIVE LEARNING SCHEMES

Citation
N.B. Karayiannis et al., REPAIRS TO GLVQ - A NEW FAMILY OF COMPETITIVE LEARNING SCHEMES, IEEE Transactions on Neural Networks, 7(5), 1996, pp. 1062-1071
Number of citations
18
Subject Categories
Computer Application, Chemistry & Engineering; Engineering, Electrical & Electronic; Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture; Computer Science, Theory & Methods
Journal ISSN
1045-9227
Volume
7
Issue
5
Year of publication
1996
Pages
1062 - 1071
Database
ISI
SICI code
1045-9227(1996)7:5<1062:RTG-AN>2.0.ZU;2-C
Abstract
First, we identify an algorithmic defect of the generalized learning vector quantization (GLVQ) scheme that causes it to behave erratically for a certain scaling of the input data. We show that GLVQ can behave incorrectly because its learning rates are reciprocally dependent on the sum of squares of distances from an input vector to the node weight vectors. Finally, we propose a new family of models, the GLVQ-F family, that remedies the problem. We derive competitive learning algorithms for each member of the GLVQ-F model and prove that they are invariant to all scalings of the data. We show that GLVQ-F offers a wide range of learning models, since it reduces to LVQ as its weighting exponent (a parameter of the algorithm) approaches one from above. As this parameter increases, GLVQ-F transitions to a model in which either all nodes may be excited according to their (inverse) distances from an input, or in which the winner is excited while the losers are penalized. As this parameter increases without limit, GLVQ-F updates all nodes equally. We illustrate the failure of GLVQ and the success of GLVQ-F with the IRIS data.
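The limiting behaviors described above can be illustrated with a minimal sketch. This is not the paper's exact learning rule; it assumes the GLVQ-F update weights resemble fuzzy c-means memberships governed by a weighting exponent m, and the names `memberships` and `glvqf_step` are illustrative only. Because the weights depend only on ratios of distances, they are unchanged when the data and prototypes are rescaled together, which is the scale invariance that GLVQ's learning rates (reciprocal in the sum of squared distances) lack.

```python
# Sketch of a GLVQ-F-style competitive update (assumed form, not the
# paper's exact formulation), using FCM-like membership weights.
import numpy as np

def memberships(x, prototypes, m):
    """FCM-style weights: u_i = 1 / sum_j (d_i / d_j)^(2/(m-1))."""
    d = np.linalg.norm(prototypes - x, axis=1) + 1e-12  # guard zero distance
    ratios = (d[:, None] / d[None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratios.sum(axis=1)

def glvqf_step(x, prototypes, m=2.0, lr=0.1):
    """Move every prototype toward x, weighted by its membership."""
    u = memberships(x, prototypes, m)
    return prototypes + lr * u[:, None] * (x - prototypes)

rng = np.random.default_rng(0)
V = rng.normal(size=(3, 4))   # three prototype (node weight) vectors in R^4
x = rng.normal(size=4)        # one input vector
for m in (1.05, 2.0, 100.0):
    print(m, np.round(memberships(x, V, m), 3))
# m near 1: the winner's weight approaches 1 (LVQ-like, winner-take-all);
# m large: all weights approach 1/3, so all nodes are updated equally.
```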