Several recent papers have described sequential competitive learning algorithms that are curious hybrids of algorithms used to optimize the fuzzy c-means (FCM) and learning vector quantization (LVQ) models. First, we show that these hybrids do not optimize the FCM functional. Then we show that the gradient descent conditions they use are not necessary conditions for optimization of a sequential version of the FCM functional. We give a numerical example that demonstrates some weaknesses of the sequential scheme proposed by Chung and Lee. And finally, we explain why these algorithms may work at times, by exhibiting the stochastic approximation problem that they unknowingly attempt to solve.
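For reference, a minimal statement of the two standard objects the abstract names: the batch FCM functional (due to Bezdek) and the generic LVQ-style sequential prototype update. The notation (u_{ik}, v_i, m, alpha_t) is the conventional one and is not taken from this abstract; the update shown is the generic form of such schemes, not the specific hybrid of Chung and Lee.

% Batch FCM objective, minimized over fuzzy memberships U = [u_{ik}]
% and prototypes V = (v_1, ..., v_c), with fuzzifier m > 1 and each
% column of U constrained to sum to one:
J_m(U, V) = \sum_{k=1}^{n} \sum_{i=1}^{c} (u_{ik})^m \,\lVert x_k - v_i \rVert^2,
\qquad \sum_{i=1}^{c} u_{ik} = 1 \ \ \forall k.

% Generic sequential (LVQ-style) update of a prototype v_i upon
% presentation of a single input x_k; the learning-rate sequence
% {alpha_t} is what links such schemes to stochastic approximation:
v_{i,\,t+1} = v_{i,\,t} + \alpha_t \,\bigl(x_k - v_{i,\,t}\bigr).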