In this paper, we study the convergence of a stochastic neural process in which a "physiologically plausible" Hebb's learning rule gives rise to a self-organization phenomenon. We present preliminary results on the asymptotic behaviour of the network when the neurons are updated sequentially, partially in parallel, or massively in parallel, and we emphasize that Hebbian learning is closely linked to the underlying dynamics of the network. Finally, within the mathematical framework of stochastic approximation, we give conditions under which the learning scheme converges.
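As a point of reference only (this simplified form is a classical textbook version, not necessarily the specific rule analysed in the paper), a Hebbian update of the synaptic weight $w_{ij}$ between neurons $i$ and $j$ with activities $x_i$, $x_j$ can be written as
$$ w_{ij}(n+1) = w_{ij}(n) + \gamma_n \, x_i(n)\, x_j(n), $$
which is an instance of the generic stochastic approximation recursion
$$ \theta_{n+1} = \theta_n + \gamma_n \, H(\theta_n, X_{n+1}), \qquad \sum_n \gamma_n = \infty, \qquad \sum_n \gamma_n^2 < \infty, $$
where the step sizes $\gamma_n$ satisfy the standard Robbins-Monro conditions under which convergence results in this framework are typically established.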