The backpropagation algorithm is widely used for training multilayer neural networks. In this publication, the gain of its activation function(s) is investigated. Specifically, it is proven that changing the gain of the activation function is equivalent to changing the learning rate and the weights. This simplifies the backpropagation learning rule by eliminating one of its parameters. The theorem can be extended to hold for some well-known variations on the backpropagation algorithm, such as using a momentum term, flat spot elimination, or adaptive gain. Furthermore, it is successfully applied to compensate for the nonstandard gain of optical sigmoids in optical neural networks.
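As an illustrative sketch of the stated equivalence, consider a single weight layer with a logistic activation; the symbols $\gamma$ (gain), $\eta$ (learning rate), $w$ (weights), $x$ (input), and $E$ (error) are introduced here for illustration and are not defined in the abstract itself:
\[
  f_\gamma(x) = \frac{1}{1 + e^{-\gamma x}}
  \quad\Longrightarrow\quad
  f_\gamma(w \cdot x) = f_1\bigl((\gamma w) \cdot x\bigr),
\]
so a network with gain $\gamma$ and weights $w$ produces the same outputs as a gain-$1$ network with weights $\gamma w$. Since $\partial E / \partial(\gamma w) = (1/\gamma)\,\partial E / \partial w$, the gradient-descent updates of the two networks stay in step when the gain-$1$ network uses the learning rate
\[
  \eta_1 = \gamma^{2}\,\eta_\gamma .
\]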