From an analytical study of the multilayer network architecture, we derive a polynomial-time algorithm for learning from examples, which we call JNN, for "Jacobian Neural Network". Although JNN is a randomized algorithm, it yields a correct network with probability 1. The JNN learning algorithm applies to a wide variety of multilayer networks that compute real output vectors from real input vectors through one or several hidden layers, under weak assumptions on the activation functions of the hidden units. Starting from an exact learning algorithm for a given database, we propose a regularization technique that improves performance in applications, as verified on several benchmark problems. Moreover, the JNN algorithm requires no a priori specification of the network architecture, since the number of hidden units of a one-hidden-layer network is determined during learning. Finally, we show that a modular approach allows learning with a reduced number of weights. (C) 1999 Elsevier Science B.V. All rights reserved.