JNN, a randomized algorithm for training multilayer networks in polynomial time

Citation
A. Elisseeff and H. Paugam-Moisy, JNN, a randomized algorithm for training multilayer networks in polynomial time, NEUROCOMPUT, 29(1-3), 1999, pp. 3-24
Number of citations
24
Subject Categories
AI Robotics and Automatic Control
Journal title
NEUROCOMPUTING
ISSN journal
0925-2312
Volume
29
Issue
1-3
Year of publication
1999
Pages
3 - 24
Database
ISI
SICI code
0925-2312(199911)29:1-3<3:JARAFT>2.0.ZU;2-8
Abstract
From an analytical approach to the multilayer network architecture, we deduce a polynomial-time algorithm for learning from examples. We call it JNN, for "Jacobian Neural Network". Although this learning algorithm is a randomized algorithm, it gives a correct network with probability 1. The JNN learning algorithm is defined for a wide variety of multilayer networks, computing real output vectors from real input vectors through one or several hidden layers, with weak assumptions on the activation functions of the hidden units. Starting from an exact learning algorithm for a given database, we propose a regularization technique which improves performance on applications, as can be verified on several benchmark problems. Moreover, the JNN algorithm does not require a priori statements on the network architecture, since the number of hidden units, for a one-hidden-layer network, is computed by learning. Finally, we show that a modular approach allows learning with a reduced number of weights. (C) 1999 Elsevier Science B.V. All rights reserved.