A training algorithm for multilayer neural networks of hard-limiting units with random bias

Citation
Hb. Zhu et al., A training algorithm for multilayer neural networks of hard-limiting units with random bias, IEICE T FUN, E83A(6), 2000, pp. 1040-1048
Citations number
12
Subject Categories
Electrical & Electronics Engineering
Journal title
IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES
ISSN journal
0916-8508
Volume
E83A
Issue
6
Year of publication
2000
Pages
1040 - 1048
Database
ISI
SICI code
0916-8508(200006)E83A:6<1040:ATAFMN>2.0.ZU;2-9
Abstract
The conventional back-propagation algorithm cannot be applied to networks of units having hard-limiting output functions, because these functions cannot be differentiated. In this paper, a gradient descent algorithm suitable for training multilayer feedforward networks of units having hard-limiting output functions is presented. To obtain a differentiable output function for a hard-limiting unit, we make use of the fact that if the bias of a unit in such a network is a random variable with a smooth distribution function, the probability of the unit's output being in a particular state is a continuously differentiable function of the unit's inputs. Three simulation results are given, which show that the performance of this algorithm is similar to that of conventional back-propagation.
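The key observation in the abstract can be illustrated numerically. The sketch below (not the paper's implementation; the Gaussian bias, weights, and inputs are assumed for illustration) shows that when a hard-limiting unit's bias is a random variable with a smooth distribution, the probability of the unit firing equals the bias distribution's CDF evaluated at the net input, which is continuously differentiable even though each individual hard-limited output is not:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# A hard-limiting unit: output 1 if w.x + b > 0, else 0.
# Weights and inputs are arbitrary illustrative values.
w = np.array([0.8, -0.5])
x = np.array([1.0, 2.0])
net = float(w @ x)  # fixed net input (here -0.2)

# Assume the bias b is Gaussian with standard deviation sigma
# (one possible smooth distribution). Then
#   P(output = 1) = P(b > -net) = Phi(net / sigma),
# where Phi is the standard normal CDF -- a smooth function of net,
# and hence of the inputs, so gradient descent becomes applicable.
sigma = 1.0
p_fire = 0.5 * (1.0 + erf(net / (sigma * sqrt(2.0))))

# Monte Carlo check: average the hard-limited output over many
# random draws of the bias; it should approach p_fire.
b = rng.normal(0.0, sigma, size=200_000)
mc_estimate = float(np.mean(net + b > 0))

print(f"closed-form P(output=1): {p_fire:.3f}")
print(f"Monte Carlo estimate:    {mc_estimate:.3f}")
```

The gradient of `p_fire` with respect to `net` is simply the Gaussian density evaluated at `net`, which is what makes a back-propagation-style update possible for these otherwise non-differentiable units.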