The conventional back-propagation algorithm cannot be applied to networks of units with hard-limiting output functions, because these functions are not differentiable. This paper presents a gradient descent algorithm suitable for training multilayer feedforward networks of units with hard-limiting output functions. To obtain a differentiable output function for a hard-limiting unit, we exploit the fact that, if the bias of a unit in such a network is a random variable with a smooth distribution function, the probability of the unit's output being in a particular state is a continuously differentiable function of the unit's inputs. Three simulation results are given, showing that the performance of this algorithm is similar to that of conventional back-propagation.
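
As an illustration of the idea (a minimal sketch, not the paper's exact procedure), the code below trains a small two-layer network of hard-limiting units by assuming each unit's bias follows a zero-mean logistic distribution: the probability that a unit fires is then the logistic distribution function of its net input, a continuously differentiable quantity that ordinary gradient descent can use. The choice of noise distribution, the mean-field-style propagation of firing probabilities between layers, the XOR example, and all function names are illustrative assumptions.

import numpy as np

def logistic_cdf(z):
    # P(bias <= z) for a zero-mean logistic-distributed bias,
    # interpreted as the probability that the unit's output is 1.
    return 1.0 / (1.0 + np.exp(-z))

def forward_prob(X, W1, W2):
    # Smoothed forward pass: propagate firing probabilities instead of
    # hard 0/1 outputs (a mean-field-style simplification, assumed here).
    P_h = logistic_cdf(X @ W1)
    P_o = logistic_cdf(P_h @ W2)
    return P_h, P_o

def forward_hard(X, W1, W2):
    # Deployment-time forward pass with true hard-limiting units.
    H = (X @ W1 > 0).astype(float)
    return (H @ W2 > 0).astype(float)

def train(X, T, hidden=4, lr=0.5, epochs=20000, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, T.shape[1]))
    for _ in range(epochs):
        P_h, P_o = forward_prob(X, W1, W2)
        # Gradient of the squared error taken through the smooth probabilities.
        d_o = (P_o - T) * P_o * (1.0 - P_o)
        d_h = (d_o @ W2.T) * P_h * (1.0 - P_h)
        W2 -= lr * (P_h.T @ d_o)
        W1 -= lr * (X.T @ d_h)
    return W1, W2

# Toy check: XOR, with a constant 1 appended to each input to act as a bias input.
X = np.array([[0., 0., 1.], [0., 1., 1.], [1., 0., 1.], [1., 1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])
W1, W2 = train(X, T)
print(forward_hard(X, W1, W2).ravel())  # typically [0. 1. 1. 0.]

Because the smoothed firing probabilities tend to saturate toward 0 or 1 as training proceeds, the hard-threshold network used at deployment usually agrees with the smoothed network that was actually trained.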