Winner-take-all algorithms are widely used in clustering analysis. However, they suffer from problems ranging from cluster underutilization to extended training times. Solutions to these problems are addressed here. It is shown that using the maximum-likelihood criterion instead of the Euclidean distance metric yields better clustering. The clusters are represented by a set of neurons, each with a Gaussian receptive field. For these Gaussian neurons, the covariance matrices are learned in addition to the centers. The one-winner condition is relaxed by maximizing the likelihood function of the mixture density of the samples. This produces larger likelihood values and more normally distributed clusters. A fast mixture-likelihood clustering algorithm is provided for both batch and pattern learning modes. Convergence analysis and experimental results are also presented.
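The mixture-likelihood clustering described above is closely related to expectation-maximization for Gaussian mixtures: soft responsibilities relax the one-winner condition, and each neuron's center and covariance matrix are re-estimated from the weighted samples. A minimal batch-mode sketch along those lines (a standard Gaussian-mixture EM loop, not the authors' exact update rules; all function and parameter names here are illustrative) might look like:

```python
import numpy as np

def gaussian_pdf(X, mean, cov):
    # Multivariate normal density evaluated at each row of X.
    d = X.shape[1]
    diff = X - mean
    inv = np.linalg.inv(cov)
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    return np.exp(-0.5 * np.sum(diff @ inv * diff, axis=1)) / norm

def mixture_likelihood_clustering(X, k, iters=50, seed=0):
    """Batch EM sketch: each 'neuron' has a center, a covariance
    matrix, and a mixing weight (assumed update scheme)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    means = X[rng.choice(n, k, replace=False)].astype(float)
    covs = np.array([np.eye(d) for _ in range(k)])
    weights = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: soft responsibilities replace the hard one-winner rule.
        resp = np.array([w * gaussian_pdf(X, m, c)
                         for w, m, c in zip(weights, means, covs)]).T
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate centers, covariances, and mixing weights.
        nk = resp.sum(axis=0)
        means = (resp.T @ X) / nk[:, None]
        for j in range(k):
            diff = X - means[j]
            covs[j] = (resp[:, j, None] * diff).T @ diff / nk[j]
            covs[j] += 1e-6 * np.eye(d)  # regularize for numerical stability
        weights = nk / n
    return means, covs, weights, resp
```

Each sample can then be assigned to the neuron with the largest responsibility; the pattern (online) mode in the paper would instead update one neuron per sample presentation.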