ACCELERATING THE TRAINING OF FEEDFORWARD NEURAL NETWORKS USING GENERALIZED HEBBIAN RULES FOR INITIALIZING THE INTERNAL REPRESENTATIONS

Authors
N.B. Karayiannis
Citation
N.B. Karayiannis, ACCELERATING THE TRAINING OF FEEDFORWARD NEURAL NETWORKS USING GENERALIZED HEBBIAN RULES FOR INITIALIZING THE INTERNAL REPRESENTATIONS, IEEE Transactions on Neural Networks, 7(2), 1996, pp. 419-426
Citations number
19
Subject Categories
Computer Application, Chemistry & Engineering; Engineering, Electrical & Electronic; Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture; Computer Science, Theory & Methods
ISSN journal
1045-9227
Volume
7
Issue
2
Year of publication
1996
Pages
419 - 426
Database
ISI
SICI code
1045-9227(1996)7:2<419:ATTOFN>2.0.ZU;2-R
Abstract
This paper presents an unsupervised learning scheme for initializing the internal representations of feedforward neural networks, which accelerates the convergence of supervised learning algorithms. It is proposed that the initial set of internal representations can be formed through a bottom-up unsupervised learning process applied before the top-down supervised training algorithm. The synaptic weights that connect the input of the network with the hidden units can be determined through linear or nonlinear variations of a generalized Hebbian learning rule, known as Oja's rule. Various generalized Hebbian rules were experimentally tested and evaluated in terms of their effect on the convergence of the supervised training process. Several experiments indicated that the proposed initialization of the internal representations significantly improves the convergence of gradient-descent-based algorithms used to perform nontrivial training tasks. The improvement in convergence becomes more pronounced as the size and complexity of the training task increase.
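The abstract does not spell out the specific update equations, so the following is a minimal sketch, in Python/NumPy, of one such rule: Sanger's generalized Hebbian algorithm, a multi-unit extension of Oja's rule, applied to pre-train the input-to-hidden weight matrix before supervised training. The function name, learning rate, and epoch count are illustrative assumptions, not taken from the paper.

    import numpy as np

    def gha_init(X, n_hidden, lr=0.01, epochs=10, seed=0):
        # Sanger's generalized Hebbian algorithm (a multi-unit
        # extension of Oja's rule): trains W so its rows converge
        # toward the leading principal components of the inputs.
        # X: (n_samples, n_inputs) array of training inputs.
        rng = np.random.default_rng(seed)
        W = rng.normal(scale=0.1, size=(n_hidden, X.shape[1]))
        for _ in range(epochs):
            for x in X:
                y = W @ x  # linear hidden-unit activations
                # dW = lr * (y x^T - LT[y y^T] W), LT = lower triangle
                W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
        return W

    # Hypothetical usage: run the unsupervised pass first, then hand
    # W0 to an ordinary backpropagation routine as its starting point.
    X = np.random.randn(500, 8)   # toy input data
    W0 = gha_init(X, n_hidden=4)

Under this reading of the scheme, the unsupervised pass aligns the hidden-layer weights with the dominant directions of the input data, so supervised gradient descent starts from an informative rather than random internal representation.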