THE INTERCHANGEABILITY OF LEARNING RATE AND GAIN IN BACKPROPAGATION NEURAL NETWORKS

Citation
G. Thimm et al., THE INTERCHANGEABILITY OF LEARNING RATE AND GAIN IN BACKPROPAGATION NEURAL NETWORKS, Neural computation, 8(2), 1996, pp. 451-460
Citations number
22
Subject Categories
Computer Sciences, Computer Science Artificial Intelligence, Neurosciences
Journal title
Neural Computation
ISSN journal
0899-7667
Volume
8
Issue
2
Year of publication
1996
Pages
451 - 460
Database
ISI
SICI code
0899-7667(1996)8:2<451:TIOLRA>2.0.ZU;2-2
Abstract
The backpropagation algorithm is widely used for training multilayer neural networks. In this publication, the gain of its activation function(s) is investigated. Specifically, it is proven that changing the gain of the activation function is equivalent to changing the learning rate and the weights. This simplifies the backpropagation learning rule by eliminating one of its parameters. The theorem can be extended to hold for some well-known variations on the backpropagation algorithm, such as using a momentum term, flat spot elimination, or adaptive gain. Furthermore, it is successfully applied to compensate for the nonstandard gain of optical sigmoids for optical neural networks.
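The equivalence stated in the abstract can be checked numerically. The sketch below assumes one common form of the relationship: a single sigmoid unit with gain β, weights w, and learning rate η behaves identically to a unit with gain 1, weights βw, and learning rate β²η, both at the output and after a gradient step. The single-neuron setup and squared-error loss here are illustrative choices, not taken from the paper itself.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(w, x, t, lr, gain):
    """One gradient-descent step on squared error for a sigmoid neuron with gain."""
    u = gain * np.dot(w, x)
    y = sigmoid(u)
    # dE/dw for E = 0.5*(y - t)^2, with y = sigmoid(gain * w.x)
    grad = (y - t) * y * (1.0 - y) * gain * x
    return w - lr * grad, y

rng = np.random.default_rng(0)
x = rng.normal(size=3)   # input pattern
w = rng.normal(size=3)   # initial weights
t, beta, lr = 0.7, 2.5, 0.1

# Network A: gain beta, weights w, learning rate lr
wA, yA = train_step(w.copy(), x, t, lr, beta)
# Network B: gain 1, weights beta*w, learning rate beta^2 * lr
wB, yB = train_step(beta * w, x, t, beta**2 * lr, 1.0)

print(np.isclose(yA, yB))          # outputs coincide before the step
print(np.allclose(beta * wA, wB))  # correspondence wB = beta * wA survives the step
```

Because the correspondence is preserved by every update, the two networks stay equivalent throughout training, which is why the gain can be eliminated as a separate parameter.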