Training neural networks with additive noise in the desired signal

Citation
C. Wang and J.C. Principe, "Training neural networks with additive noise in the desired signal," IEEE Transactions on Neural Networks, 10(6), 1999, pp. 1511-1517
Citations number
28
Subject Categories
AI Robotics and Automatic Control
Journal title
IEEE TRANSACTIONS ON NEURAL NETWORKS
ISSN journal
1045-9227
Volume
10
Issue
6
Year of publication
1999
Pages
1511 - 1517
Database
ISI
SICI code
1045-9227(199911)10:6<1511:TNNWAN>2.0.ZU;2-X
Abstract
A new global optimization strategy for training adaptive systems such as neural networks and adaptive filters [finite or infinite impulse response (FIR or IIR)] is proposed in this paper. Instead of adding random noise to the weights as proposed in the past, additive random noise is injected directly into the desired signal. Experimental results show that this procedure also greatly speeds up the backpropagation algorithm. The method is very easy to implement in practice, preserving the backpropagation algorithm and requiring a single random generator with a monotonically decreasing step size per output channel. Hence, this is an ideal strategy to speed up supervised learning and avoid local-minima entrapment when the noise variance is appropriately scheduled.
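
The abstract's recipe reduces to a small change in an ordinary backpropagation loop: perturb the target, not the weights, and shrink the noise over time. The sketch below is illustrative only, not the authors' implementation; the network size, learning rate, toy data, and the 1/(1+t) decay schedule are assumptions made for the example.

# Sketch of backpropagation with additive noise injected into the desired
# signal. Gaussian noise with a monotonically decreasing standard deviation
# is added to the target at each epoch; one noise generator serves the
# single output channel, as the paper prescribes.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task (assumed): learn y = sin(x) on [-pi, pi].
X = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
Y = np.sin(X)

# One-hidden-layer MLP with tanh units (sizes are illustrative).
W1 = rng.normal(0.0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)

lr = 0.05        # learning rate (assumed)
sigma0 = 0.5     # initial noise standard deviation (assumed)

for epoch in range(2000):
    # Monotonically decreasing noise schedule (assumed 1/(1+t) form).
    sigma = sigma0 / (1.0 + epoch / 100.0)

    # Inject additive random noise directly into the desired signal.
    Y_noisy = Y + rng.normal(0.0, sigma, Y.shape)

    # Standard forward pass.
    H = np.tanh(X @ W1 + b1)
    out = H @ W2 + b2

    # Backpropagate the squared error against the *noisy* target; the
    # algorithm itself is unchanged, only the target is perturbed.
    err = out - Y_noisy
    dW2 = H.T @ err / len(X)
    db2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H**2)
    dW1 = X.T @ dH / len(X)
    db1 = dH.mean(axis=0)

    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# Evaluate on the clean (noise-free) targets.
mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2))
print(f"final MSE on clean targets: {mse:.4f}")

Early in training the large noise variance lets the error surface be explored broadly, helping the weights escape poor local minima; as the variance decays toward zero, the loop converges to ordinary backpropagation on the clean desired signal.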