Piecewise-linear (PWL) neural networks are widely known for their amenability to digital implementation. This paper presents a new algorithm for learning in PWL networks consisting of a single hidden layer. The approach adopted is based on constructing a continuous PWL error function and developing an efficient algorithm to minimize it.
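As a minimal illustration of such an error surface (a single-layer special case with an absolute-error loss, offered as an assumption rather than the paper's exact construction), consider

    E(\mathbf{w}) = \sum_{p=1}^{P} \left| t_p - \mathbf{x}_p^{\top} \mathbf{w} \right|,

which is continuous and PWL in the weight vector \mathbf{w}: its linear regions are separated in weight space by the P hyperplanes \mathbf{x}_p^{\top} \mathbf{w} = t_p.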
The algorithm consists of two basic stages of searching the weight space. The first stage locates a point in the weight space at the intersection of N linearly independent hyperplanes, with N being the number of weights in the network.
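On the toy objective above, this first stage reduces to solving an N-by-N linear system; the sketch below is illustrative only, with all names and sizes assumed:

    import numpy as np

    # Stage 1 (illustrative sketch): pick N kink hyperplanes x_p . w = t_p
    # with linearly independent normals and solve for their intersection.
    rng = np.random.default_rng(0)
    N = 3
    X = rng.standard_normal((10, N))            # one row x_p per sample
    t = rng.standard_normal(10)                 # targets t_p
    idx = [0, 1, 2]                             # N chosen hyperplanes
    assert np.linalg.matrix_rank(X[idx]) == N   # linear independence
    w0 = np.linalg.solve(X[idx], t[idx])        # a vertex of the PWL surface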
The second stage then uses this point as a starting point and continues the search by moving along the single-dimension boundaries between the different linear regions of the error function, hopping from one point (representing the intersection of N hyperplanes) to another.
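A self-contained sketch of this vertex-hopping idea on the toy objective above follows; it is an assumed reconstruction, not the authors' implementation. Dropping one active hyperplane leaves a one-dimensional edge; the search walks along it to the nearest crossing with another hyperplane and accepts the hop whenever the error decreases:

    import numpy as np

    def pwl_error(X, t, w):
        # Toy PWL objective E(w) = sum_p |x_p . w - t_p| (see above).
        return np.abs(X @ w - t).sum()

    def hop_descent(X, t, active, max_hops=50):
        # Stage 2 (illustrative sketch): from the vertex where the 'active'
        # hyperplanes intersect, drop one of them, walk along the remaining
        # one-dimensional edge to the nearest inactive hyperplane, and hop
        # there if the error decreases. Stop when no edge improves.
        w = np.linalg.solve(X[active], t[active])   # current vertex (Stage 1)
        E = pwl_error(X, t, w)
        for _ in range(max_hops):
            improved = False
            for drop in active:
                keep = [i for i in active if i != drop]
                d = np.linalg.svd(X[keep])[2][-1]   # edge direction (null space)
                for s in (d, -d):
                    # step sizes to every inactive hyperplane along direction s
                    steps = [((t[j] - X[j] @ w) / (X[j] @ s), j)
                             for j in range(len(t))
                             if j not in active and abs(X[j] @ s) > 1e-12]
                    steps = [(a, j) for a, j in steps if a > 1e-10]
                    if not steps:
                        continue
                    a, j = min(steps)               # nearest crossing
                    w_new = w + a * s
                    E_new = pwl_error(X, t, w_new)
                    if E_new < E - 1e-12:
                        active, w, E = keep + [j], w_new, E_new
                        improved = True
                        break
                if improved:
                    break
            if not improved:
                break
        return w, E

    # Toy usage: 10 kink hyperplanes in a 3-dimensional weight space.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((10, 3))
    t = rng.standard_normal(10)
    w, E = hop_descent(X, t, active=[0, 1, 2])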
The proposed algorithm exhibits significantly accelerated convergence compared with standard algorithms such as back-propagation and improved variants of it, such as the conjugate gradient algorithm. In addition, it has the distinct advantage of having no parameters to adjust, so there is no time-consuming parameter-tuning step. The new algorithm is expected to find applications in function approximation, time-series prediction, and binary classification problems. (C) 2000 Elsevier Science Ltd. All rights reserved.