Conventional methods of supervised learning are inevitably faced with the problem of local minima; evidence is presented that second-order methods such as conjugate gradient and quasi-Newton techniques are particularly susceptible to being trapped in sub-optimal solutions. A new technique, expanded range approximation (ERA), is presented which, by the use of a homotopy on the range of the target outputs, allows supervised learning methods to find a global minimum of the error function in almost every case. © 1997 Elsevier Science Ltd. All rights reserved.
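The homotopy idea can be illustrated with a minimal sketch: targets are first collapsed toward their mean (a trivially solvable problem), then the range is expanded back in stages while the network is warm-started from the previous stage. The linear schedule, network size, and gradient-descent settings below are assumptions for illustration, not the paper's specification.

```python
import numpy as np

def era_targets(t, lam):
    # Homotopy on the range of the targets: at lam=0 every target
    # collapses to the mean; at lam=1 the original targets are
    # recovered.  The linear interpolation is an assumed schedule.
    return t.mean() + lam * (t - t.mean())

# Tiny one-hidden-layer network trained by plain gradient descent,
# warm-started as the target range is expanded stage by stage.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 20).reshape(-1, 1)
t = np.sin(3.0 * x)                      # original targets
W1, b1 = rng.normal(size=(1, 5)) * 0.5, np.zeros(5)
W2, b2 = rng.normal(size=(5, 1)) * 0.5, np.zeros(1)

for lam in np.linspace(0.0, 1.0, 11):    # expand the range in stages
    tl = era_targets(t, lam)
    for _ in range(500):                 # a few GD steps per stage
        h = np.tanh(x @ W1 + b1)
        y = h @ W2 + b2
        e = y - tl
        gW2 = h.T @ e; gb2 = e.sum(0)
        dh = (e @ W2.T) * (1 - h ** 2)
        gW1 = x.T @ dh; gb1 = dh.sum(0)
        for p, g in ((W2, gW2), (b2, gb2), (W1, gW1), (b1, gb1)):
            p -= 0.01 * g / len(x)

final_loss = float(np.mean((y - t) ** 2))
```

At each stage the network starts near the solution of the previous, easier problem, which is how the homotopy steers the search away from the sub-optimal solutions that plague a direct fit.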