BACKPROPAGATION IS NOT EFFICIENT

Authors
J. Sima
Citation
J. Sima, BACKPROPAGATION IS NOT EFFICIENT, Neural networks, 9(6), 1996, pp. 1017-1023
Citations number
18
Subject categories
Mathematical Methods, Biology & Medicine; Computer Sciences, Special Topics; Computer Science, Artificial Intelligence; Neurosciences; Physics, Applied
Journal title
Neural networks
ISSN journal
0893-6080
Volume
9
Issue
6
Year of publication
1996
Pages
1017 - 1023
Database
ISI
SICI code
0893-6080(1996)9:6<1017:BINE>2.0.ZU;2-U
Abstract
The back-propagation learning algorithm for multi-layered neural networks, which is often successfully used in practice, appears very time consuming even for small network architectures or training tasks. However, no results are yet known concerning the complexity of this algorithm. Blum and Rivest proved that training even a three-node network is NP-complete for the case when a neuron computes the discrete linear threshold function. We generalize the technique from their NP-hardness proof for a continuous sigmoidal function used in back-propagation. We show that training a three-node sigmoid network with an additional constraint on the output neuron function (e.g., zero threshold) is NP-hard. As a consequence of this, we find training sigmoid feedforward networks, with a single hidden layer and with zero threshold of output neuron, to be intractable. This implies that back-propagation is generally not an efficient algorithm, unless at least P = NP. We take advantage of these results by showing the NP-hardness of a special nonlinear programming problem. Copyright (C) 1996 Elsevier Science Ltd.
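For context, a minimal sketch of the decision problem behind the abstract's result, in notation assumed here rather than drawn from the paper: the three-node network has two sigmoid hidden units feeding a sigmoid output unit whose threshold (bias) is fixed at zero,

\[
  f(x) \;=\; \sigma\!\big( w_1\,\sigma(a^{\top}x + \alpha) + w_2\,\sigma(b^{\top}x + \beta) \big),
  \qquad \sigma(t) = \frac{1}{1 + e^{-t}} .
\]

Training (the loading problem) then asks: given a finite sample $S \subseteq \{0,1\}^n \times \{0,1\}$, do weights $a, b, w_1, w_2$ and hidden biases $\alpha, \beta$ exist such that $f$ classifies every $(x, y) \in S$ correctly (say, $f(x) \ge 1/2$ exactly when $y = 1$, one common convention, not necessarily the paper's)? NP-hardness of this decision problem means no algorithm solves it in polynomial time unless P = NP.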