N. Kamiura et al., A learning algorithm with activation function manipulation for fault tolerant neural networks, IEICE Transactions on Information and Systems, E84-D(7), 2001, pp. 899-905
In this paper we propose a learning algorithm that enhances the fault tolerance of feedforward neural networks (NNs for short) by manipulating the gradient of the neurons' sigmoid activation function. We assume stuck-at-0 and stuck-at-1 faults of connection links. For the output layer, we employ a function with a relatively gentle gradient to enhance its fault tolerance. To enhance the fault tolerance of the hidden layer, we steepen the gradient of the function after convergence. Experimental results for a character recognition problem show that our NN is superior in fault tolerance, learning cycles, and learning time to other NNs trained with algorithms employing fault injection, forcible weight limits, and the calculation of each weight's relevance to the output error. Besides, the gradient manipulation incorporated in our algorithm does not degrade the generalization ability.
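The abstract only outlines the gradient manipulation. As a minimal illustrative sketch (not the authors' code), the Python fragment below assumes a standard gain-parameterized logistic sigmoid; the gain values and constant names are assumptions chosen for illustration, not taken from the paper.

    import numpy as np

    def sigmoid(x, gain=1.0):
        # Logistic sigmoid 1 / (1 + exp(-gain * x)); a larger gain gives a steeper gradient.
        return 1.0 / (1.0 + np.exp(-gain * x))

    def sigmoid_grad(x, gain=1.0):
        # Derivative w.r.t. x: gain * s * (1 - s), as used in backpropagation.
        s = sigmoid(x, gain)
        return gain * s * (1.0 - s)

    # Illustrative settings (assumed values): a gentle gradient for output
    # neurons, and a hidden-layer gain that is increased (steepened) only
    # after training has converged.
    OUTPUT_GAIN = 0.5        # assumed "relatively gentle" slope for the output layer
    HIDDEN_GAIN_TRAIN = 1.0  # assumed hidden-layer slope during training
    HIDDEN_GAIN_AFTER = 4.0  # assumed steepened hidden-layer slope after convergence

    x = np.linspace(-3.0, 3.0, 7)
    print(sigmoid(x, OUTPUT_GAIN))        # flatter response, less sensitive to weight faults
    print(sigmoid(x, HIDDEN_GAIN_AFTER))  # sharper response after convergence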