N.B. Karayiannis et al., Repairs to GLVQ: A New Family of Competitive Learning Schemes, IEEE Transactions on Neural Networks, 7(5), 1996, pp. 1062-1071
First, we identify an algorithmic defect of the generalized learning vector quantization (GLVQ) scheme that causes it to behave erratically for a certain scaling of the input data. We show that GLVQ can behave incorrectly because its learning rates are reciprocally dependent on the sum of squared distances from an input vector to the node weight vectors. Finally, we propose a new family of models, the GLVQ-F family, that remedies the problem. We derive competitive learning algorithms for each member of the GLVQ-F family and prove that they are invariant to all scalings of the data. We show that GLVQ-F offers a wide range of learning models, since it reduces to LVQ as its weighting exponent (a parameter of the algorithm) approaches one from above. As this parameter increases, GLVQ-F transitions to a model in which either all nodes may be excited according to their (inverse) distances from an input, or in which the winner is excited while losers are penalized. As this parameter increases without limit, GLVQ-F updates all nodes equally. We illustrate the failure of GLVQ and the success of GLVQ-F with the IRIS data.
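
A minimal sketch of the scale-invariance idea described above, assuming GLVQ-F weights each node's update by fuzzy-c-means-style inverse-distance memberships (the paper's exact update rule may differ; the names glvq_f_weights, glvq_f_step, the exponent m, and the step size alpha are illustrative, not from the source):

import numpy as np

def glvq_f_weights(x, prototypes, m=2.0, eps=1e-12):
    """Inverse-distance (FCM-style) membership weights for one input x.

    As m -> 1+ the winner's weight approaches 1 (LVQ behavior);
    as m -> infinity all weights approach 1/c (equal updates).
    """
    d2 = np.sum((prototypes - x) ** 2, axis=1) + eps  # squared distances
    # u_i = 1 / sum_j (d_i^2 / d_j^2)^(1/(m-1)); depends only on distance ratios
    ratios = (d2[:, None] / d2[None, :]) ** (1.0 / (m - 1.0))
    return 1.0 / ratios.sum(axis=1)  # weights sum to 1

def glvq_f_step(x, prototypes, alpha=0.1, m=2.0):
    """One competitive-learning update: each prototype moves toward x
    in proportion to its membership weight."""
    u = glvq_f_weights(x, prototypes, m)
    return prototypes + alpha * u[:, None] * (x - prototypes)

# Scale invariance: the weights depend only on ratios of distances,
# so rescaling the data leaves them unchanged.
rng = np.random.default_rng(0)
x = rng.normal(size=4)
V = rng.normal(size=(3, 4))
w1 = glvq_f_weights(x, V)
w2 = glvq_f_weights(1000.0 * x, 1000.0 * V)  # same data, rescaled
assert np.allclose(w1, w2)

Because the weights are functions of distance ratios alone, a uniform rescaling of the data cancels, which is the invariance property the abstract claims; by contrast, a learning rate that depends reciprocally on the raw sum of squared distances (as in GLVQ) changes magnitude under the same rescaling.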