This paper presents an analysis of the statistical and convergence properties of Kohonen's self-organizing map of any dimension. Every feature in the map is treated as a sum of a number of random variables. We extend the Central Limit Theorem to a particular case, which is then applied to prove that during learning the feature space tends to multiple Gaussian-distributed stochastic processes. These processes eventually converge in the mean-square sense to the probabilistic centers of the input subsets, forming a quantization mapping with minimum mean-squared distortion, either globally or locally. We also show that the influence of the initial states on the value of the feature map diminishes as training progresses.
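The convergence behavior described above — map weights settling at the probabilistic centers of input subsets — can be illustrated with a minimal one-dimensional sketch. This is not the paper's analytical setup; the two-cluster input distribution, winner-take-all update (neighborhood shrunk to zero), and linearly decaying learning rate are all illustrative assumptions.

```python
# Minimal 1-D Kohonen SOM sketch (illustrative assumptions throughout).
import random

random.seed(0)

# Two well-separated input clusters; each weight should drift toward
# the probabilistic center (mean) of the subset of inputs it wins.
def sample():
    c = random.choice([0.0, 10.0])        # cluster center
    return c + random.uniform(-0.5, 0.5)  # uniform noise around it

# Initial states: the paper shows their effect diminishes with training.
weights = [random.uniform(0.0, 10.0) for _ in range(2)]

T = 5000
for t in range(T):
    x = sample()
    lr = 0.5 * (1.0 - t / T)  # decaying learning rate (assumed schedule)
    # Winner-take-all: update only the best-matching unit.
    i = min(range(len(weights)), key=lambda j: abs(weights[j] - x))
    weights[i] += lr * (x - weights[i])

print(sorted(weights))  # weights should end up near the centers 0 and 10
```

After training, the two weights approximate the cluster means, i.e. the quantization mapping with small mean-squared distortion that the abstract describes, regardless of where the weights were initialized.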