This article proposes a stochastic method for determining the number of hidden nodes of a multilayer perceptron trained by a backpropagation algorithm. During the learning process, an auxiliary Markovian algorithm controls the sizing of the hidden layers. As usual, the main idea is to promote the addition of nodes the closer the net is to a stall configuration, and to remove units that are not sufficiently "lively". The combined algorithm produces families of nets that converge quickly towards well-trained nets with a small number of nodes. Numerical experiments are performed both on conventional benchmarks and on realistic learning problems. These experiments show that, for learning tasks of sufficiently high complexity, the additional computational cost of our method (with respect to conventional fixed-architecture methods) is compensated by faster convergence and a higher success rate in reaching the minimum of the error function.
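The following is a minimal sketch, not the paper's exact algorithm, of how such a stochastic grow-and-prune controller can be coupled to plain backpropagation on a one-hidden-layer perceptron. The stall measure (relative decrease of the error), the "liveliness" measure (variance of a unit's activations weighted by its outgoing weights), and all thresholds and probabilities are illustrative assumptions, since the abstract does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class GrowPruneMLP:
    """One-hidden-layer MLP whose hidden layer can be resized during training."""

    def __init__(self, n_in, n_hidden, n_out, lr=0.5):
        self.lr = lr
        self.W1 = rng.normal(0, 0.5, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.5, (n_out, n_hidden))
        self.b2 = np.zeros(n_out)

    def forward(self, X):
        self.H = sigmoid(X @ self.W1.T + self.b1)       # hidden activations
        return sigmoid(self.H @ self.W2.T + self.b2)    # outputs

    def backprop(self, X, T):
        Y = self.forward(X)
        dY = (Y - T) * Y * (1 - Y)                       # output deltas
        dH = (dY @ self.W2) * self.H * (1 - self.H)      # hidden deltas
        self.W2 -= self.lr * dY.T @ self.H / len(X)
        self.b2 -= self.lr * dY.mean(axis=0)
        self.W1 -= self.lr * dH.T @ X / len(X)
        self.b1 -= self.lr * dH.mean(axis=0)
        return 0.5 * np.mean((Y - T) ** 2)               # mean squared error

    def add_node(self, n_in):
        # Grow: append one randomly initialised hidden unit.
        self.W1 = np.vstack([self.W1, rng.normal(0, 0.5, (1, n_in))])
        self.b1 = np.append(self.b1, 0.0)
        self.W2 = np.hstack([self.W2, rng.normal(0, 0.5, (self.W2.shape[0], 1))])

    def prune_node(self, idx):
        # Prune: delete hidden unit idx.
        self.W1 = np.delete(self.W1, idx, axis=0)
        self.b1 = np.delete(self.b1, idx)
        self.W2 = np.delete(self.W2, idx, axis=1)

def train(X, T, epochs=5000, check_every=50, max_hidden=20):
    net = GrowPruneMLP(X.shape[1], 2, T.shape[1])
    prev_err = np.inf
    err = net.backprop(X, T)
    for epoch in range(epochs):
        err = net.backprop(X, T)
        if epoch % check_every == 0:
            # Stochastic sizing step (a Markov-style decision based only on
            # the current state): the flatter the error curve, the more
            # likely it is that a unit is added.
            stall = max(0.0, 1.0 - 100.0 * (prev_err - err) / (err + 1e-12))
            if rng.random() < 0.2 * stall and net.W1.shape[0] < max_hidden:
                net.add_node(X.shape[1])
            # "Liveliness" proxy: activation variance times outgoing-weight
            # mass; the weakest unit is removed if its score is negligible.
            lively = net.H.var(axis=0) * np.abs(net.W2).sum(axis=0)
            if net.W1.shape[0] > 1 and lively.min() < 1e-3:
                net.prune_node(int(lively.argmin()))
            prev_err = err
    return net, err

if __name__ == "__main__":
    # XOR as a toy benchmark.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)
    net, err = train(X, T)
    print("final hidden units:", net.W1.shape[0], "final error:", err)
```

In this sketch the sizing decisions depend only on quantities computed from the current network state, which is what makes the controller Markovian in spirit; the specific growth probability and pruning threshold would in practice be tuned to the learning task.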