A GENERALIZED LEARNING-PARADIGM EXPLOITING THE STRUCTURE OF FEEDFORWARD NEURAL NETWORKS

Citation
R. Parisi et al., A GENERALIZED LEARNING-PARADIGM EXPLOITING THE STRUCTURE OF FEEDFORWARD NEURAL NETWORKS, IEEE Transactions on Neural Networks, 7(6), 1996, pp. 1450-1460
Citations number
28
Subject categories
Computer Application, Chemistry & Engineering; Engineering, Electrical & Electronic; Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture; Computer Science, Theory & Methods
ISSN journal
1045-9227
Volume
7
Issue
6
Year of publication
1996
Pages
1450 - 1460
Database
ISI
SICI code
1045-9227(1996)7:6<1450:AGLETS>2.0.ZU;2-J
Abstract
In this paper a general class of fast learning algorithms for feedforward neural networks is introduced and described. The approach exploits the separability of each layer into linear and nonlinear blocks and consists of two steps. The first step is the descent of the error functional in the space of the outputs of the linear blocks (descent in the neuron space), which can be performed using any preferred optimization strategy. In the second step, each linear block is optimized separately by using a least squares (LS) criterion. To demonstrate the effectiveness of the new approach, a detailed treatment of a gradient descent in the neuron space is conducted. The main properties of this approach are a higher speed of convergence with respect to methods that employ an ordinary gradient descent in the weight space, such as backpropagation (BP), better numerical conditioning, and a lower computational cost compared to techniques based on the Hessian matrix. Numerical stability is assured by the use of robust LS linear system solvers operating directly on the input data of each layer. Experimental results obtained on three problems are described, which confirm the effectiveness of the new method.
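Illustrative sketch
The abstract describes a two-step scheme: a descent step in the space of linear-block outputs (the "neuron space"), followed by a per-layer least squares refit of the weights. The Python sketch below illustrates that idea for a single hidden layer network under simplifying assumptions (squared error, tanh hidden units, a linear output layer, no bias terms, plain gradient descent in the neuron space). All variable names and settings are hypothetical; this is not the authors' implementation, only a minimal reading of the two-step structure.

```python
import numpy as np

# Minimal sketch of the two-step neuron-space / least-squares idea.
# Assumptions (not from the paper): tanh hidden units, linear output,
# squared error, no biases, fixed step size in the neuron space.

rng = np.random.default_rng(0)

def dtanh(z):
    return 1.0 - np.tanh(z) ** 2

# Toy data: X (samples x inputs), T (samples x outputs)
X = rng.normal(size=(200, 4))
T = np.sin(X @ rng.normal(size=(4, 1)))

n_hidden = 8
W1 = rng.normal(scale=0.5, size=(4, n_hidden))   # hidden linear block
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))   # output linear block
eta = 0.1                                        # neuron-space step size

for epoch in range(200):
    # Forward pass: linear-block outputs (neuron space) and nonlinearities
    S1 = X @ W1              # hidden pre-activations
    H = np.tanh(S1)          # hidden activations
    S2 = H @ W2              # output pre-activations (linear output layer)
    Y = S2

    # Step 1: gradient descent in the neuron space (on S2 and S1)
    G2 = Y - T                        # dE/dS2 for squared error
    S2_target = S2 - eta * G2         # desired outputs of the output linear block
    G1 = (G2 @ W2.T) * dtanh(S1)      # dE/dS1 by the chain rule
    S1_target = S1 - eta * G1         # desired outputs of the hidden linear block

    # Step 2: refit each linear block by least squares on its own input data
    W1, *_ = np.linalg.lstsq(X, S1_target, rcond=None)
    H = np.tanh(X @ W1)               # recompute hidden activations with new W1
    W2, *_ = np.linalg.lstsq(H, S2_target, rcond=None)

mse = float(np.mean((np.tanh(X @ W1) @ W2 - T) ** 2))
print(f"final training MSE: {mse:.4f}")
```

In this reading, the LS solves operate directly on each layer's input data (X for the hidden block, H for the output block), which is the source of the numerical robustness the abstract attributes to the method; any robust LS solver could replace `np.linalg.lstsq` here.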