A new algorithm for learning in piecewise-linear neural networks

Citation
E.F. Gad et al., A new algorithm for learning in piecewise-linear neural networks, NEURAL NETW, 13(4-5), 2000, pp. 485-505
Number of citations
26
Subject categories
AI Robotics and Automatic Control
Journal title
NEURAL NETWORKS
Journal ISSN
0893-6080
Volume
13
Issue
4-5
Year of publication
2000
Pages
485 - 505
Database
ISI
SICI code
0893-6080(200005/06)13:4-5<485:ANAFLI>2.0.ZU;2-T
Abstract
Piecewise-linear (PWL) neural networks are widely known for their amenability to digital implementation. This paper presents a new algorithm for learning in PWL networks consisting of a single hidden layer. The approach adopted is based upon constructing a continuous PWL error function and developing an efficient algorithm to minimize it. The algorithm consists of two basic stages in searching the weight space. The first stage of the optimization algorithm is used to locate a point in the weight space representing the intersection of N linearly independent hyperplanes, with N being the number of weights in the network. The second stage is then called to use this point as a starting point in order to continue searching by moving along the single-dimension boundaries between the different linear regions of the error function, hopping from one point (representing the intersection of N hyperplanes) to another. The proposed algorithm exhibits significantly accelerated convergence, as compared to standard algorithms such as back-propagation and improved versions of it, such as the conjugate gradient algorithm. In addition, it has the distinct advantage that there are no parameters to adjust, and therefore there is no time-consuming parameter-tuning step. The new algorithm is expected to find applications in function approximation, time series prediction and binary classification problems. (C) 2000 Elsevier Science Ltd. All rights reserved.
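Illustration

For intuition only, the following Python sketch shows the general vertex-hopping idea summarized in the abstract (locate the intersection of N hyperplanes, then move along one-dimensional boundaries to neighbouring intersection points of lower error), applied to a toy piecewise-linear objective. It is not the paper's network-training algorithm; the toy error function, the data, and all names are illustrative assumptions.

# Hypothetical sketch: vertex hopping on a toy PWL error E(w) = sum_i |a_i . w - b_i|.
# This is NOT the authors' PWL-network algorithm; everything below is an assumption
# made only to illustrate the two-stage search described in the abstract.

import numpy as np

rng = np.random.default_rng(0)
N = 2                        # number of weights (dimension of w)
M = 6                        # number of hyperplanes a_i . w = b_i
A = rng.normal(size=(M, N))
b = rng.normal(size=M)

def pwl_error(w):
    # Continuous piecewise-linear error surface.
    return np.abs(A @ w - b).sum()

# Stage 1: locate a vertex, i.e. the intersection of N linearly independent hyperplanes.
idx = [0, 1]                             # indices of the active hyperplanes
w = np.linalg.solve(A[idx], b[idx])      # assumed independent for this toy data

# Stage 2: hop between vertices along one-dimensional boundaries of the linear regions.
improved = True
while improved:
    improved = False
    for drop in range(N):                # release one active hyperplane at a time
        keep = [idx[j] for j in range(N) if j != drop]
        # Edge direction: stays on the kept hyperplanes (null space of A[keep]).
        _, _, vt = np.linalg.svd(A[keep])
        d = vt[-1]
        for direction in (d, -d):
            # Step to the nearest breakpoint where another hyperplane becomes active.
            num = b - A @ w
            den = A @ direction
            with np.errstate(divide="ignore", invalid="ignore"):
                t = num / den
            t[~np.isfinite(t)] = np.inf
            t[t <= 1e-9] = np.inf
            t_hit = t.min()
            if not np.isfinite(t_hit):
                continue
            w_new = w + t_hit * direction
            if pwl_error(w_new) < pwl_error(w) - 1e-12:
                w = w_new
                idx = keep + [int(np.argmin(t))]   # active set at the new vertex
                improved = True
                break
        if improved:
            break

print("final w:", w, "error:", pwl_error(w))

In this toy setting the loop terminates because each accepted hop strictly lowers the error and there are only finitely many vertices; the paper's algorithm applies the analogous idea to the PWL error surface of a single-hidden-layer network.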