An accelerated learning algorithm (ABP, adaptive back propagation) is proposed for the supervised training of multilayer perceptron networks. The learning algorithm is inspired by the principle of "forced dynamics" for the total error functional. The algorithm updates the weights in the direction of steepest descent, but with a learning rate that is a specific function of the error and of the norm of the error gradient. The specific form of this function is chosen so as to accelerate convergence. Furthermore, ABP introduces none of the additional "tuning" parameters found in other variants of the backpropagation algorithm. Simulation results indicate superior convergence speed, for analog problems only, compared to other competing methods, as well as reduced sensitivity to variations of the step-size parameter of the algorithm.
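
For illustration only, the sketch below shows a steepest-descent update whose learning rate depends on the current error and on the gradient norm, in the spirit of the rule described above. The abstract does not give the exact rate function, so the particular form eta = c * E / ||grad E||^2, the constant c, and the function names are assumptions, not the paper's actual ABP rule.

```python
import numpy as np

def abp_style_update(weights, error, grad, c=1.0, eps=1e-12):
    """One steepest-descent step with an error- and gradient-dependent rate.

    Hypothetical form: eta = c * error / ||grad||**2, chosen here only to
    illustrate a learning rate that adapts as the error and gradient shrink;
    the paper derives its rate from the "forced dynamics" of the error
    functional, which is not reproduced here.
    """
    grad_norm_sq = np.dot(grad, grad) + eps   # guard against a vanishing gradient
    eta = c * error / grad_norm_sq            # rate depends on E and ||grad E||
    return weights - eta * grad               # step in the steepest-descent direction

# Toy usage on the quadratic error E(w) = 0.5 * ||w||**2 (gradient is w itself)
w = np.array([2.0, -1.0])
for _ in range(5):
    E = 0.5 * np.dot(w, w)
    w = abp_style_update(w, E, w, c=0.5)
    print(f"error = {E:.6f}")
```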