A new global optimization strategy for training adaptive systems such as neural networks and adaptive filters [finite or infinite impulse response (FIR or IIR)] is proposed in this paper. Instead of adding random noise to the weights, as proposed in the past, additive random noise is injected directly into the desired signal. Experimental results show that this procedure also greatly speeds up the backpropagation algorithm. The method is very easy to implement in practice, since it preserves the backpropagation algorithm and requires only a single random generator with a monotonically decreasing step size per output channel. Hence, this is an ideal strategy to speed up supervised learning and to avoid entrapment in local minima when the noise variance is appropriately scheduled.
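
As a rough illustration of the idea, the sketch below trains a small one-hidden-layer network with plain backpropagation while adding zero-mean Gaussian noise, with a monotonically decreasing standard deviation, to the desired signal at every epoch. The network architecture, toy data, learning rate, and the exponential decay schedule are illustrative assumptions, not the experimental setup of the paper.

```python
# Minimal sketch (assumptions noted above): desired-signal noise injection
# during gradient-descent training of a tiny one-hidden-layer network.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: approximate y = sin(x) on [-pi, pi]
X = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
Y = np.sin(X)

# One hidden layer with tanh units, linear output
W1 = rng.normal(scale=0.5, size=(1, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

lr = 0.05
sigma0, decay = 0.5, 0.99   # assumed monotonically decreasing noise schedule

for epoch in range(2000):
    sigma = sigma0 * decay**epoch          # noise std shrinks toward zero

    # Inject additive random noise directly into the desired signal
    # (a single generator, one draw per output channel and sample)
    Y_noisy = Y + rng.normal(scale=sigma, size=Y.shape)

    # Forward pass
    H = np.tanh(X @ W1 + b1)
    Y_hat = H @ W2 + b2

    # Standard backpropagation of the MSE w.r.t. the noisy targets
    err = Y_hat - Y_noisy
    dW2 = H.T @ err / len(X)
    db2 = err.mean(axis=0)
    dH = err @ W2.T * (1.0 - H**2)
    dW1 = X.T @ dH / len(X)
    db1 = dH.mean(axis=0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2))
print(f"final MSE on clean targets: {mse:.4f}")
```

Note that the training loop itself is unchanged; only the targets are perturbed, which is why the approach preserves the backpropagation algorithm as stated in the abstract.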