The back-propagation method encounters two problems in practice: slow learning progress and convergence to a false local minimum. The present study addresses the latter problem and proposes a modified back-propagation method. The basic idea of the method is to keep the sigmoid derivative relatively large while some of the error signals are still large. For this purpose, each connection weight in the network is multiplied by a factor in the range (0, 1) at constant intervals during learning. Results of numerical experiments substantiate the validity of the method. (C) 1998 Elsevier Science Ltd. All rights reserved.
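The scheme described above can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the network size, learning rate, scaling factor, interval, and the error threshold used to decide whether "some error signals are large" are all assumed values chosen for the example. The key line is the periodic multiplication of every weight by a factor in (0, 1), which shrinks net inputs and pulls saturated sigmoid units back toward the steep part of the curve, where the derivative is larger.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_deriv(y):
    # Derivative of the sigmoid expressed via its output y = sigmoid(x).
    return y * (1.0 - y)

def train_xor(scale=0.9, interval=100, epochs=2000, lr=0.5, seed=0):
    """Backprop on XOR with periodic weight scaling (illustrative sketch).

    Every `interval` epochs, while some output errors are still large,
    each weight is multiplied by `scale` in (0, 1) so that sigmoid
    units stay out of their flat saturation regions.
    """
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)
    W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)
    for epoch in range(1, epochs + 1):
        # Forward pass.
        H = sigmoid(X @ W1 + b1)
        Y = sigmoid(H @ W2 + b2)
        # Backward pass (squared-error gradient through sigmoids).
        dY = (Y - T) * sigmoid_deriv(Y)
        dH = (dY @ W2.T) * sigmoid_deriv(H)
        W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(axis=0)
        W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)
        # Periodic shrinking step: only while errors remain large.
        if epoch % interval == 0 and np.abs(Y - T).max() > 0.1:
            W1 *= scale
            W2 *= scale
    return float(np.abs(Y - T).max())
```

Because sigmoid outputs lie in (0, 1), the returned maximum absolute error is always below 1; scaling a large net input toward zero visibly increases the sigmoid derivative there, which is the effect the method relies on.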