It is well known that (McCulloch-Pitts) neurons are efficiently trainable to learn an unknown halfspace from examples, using linear-programming methods. We analyze how the learning performance degrades when the representational power of the neuron is overstrained, i.e., if concepts more complex than halfspaces are allowed. We show that the problem of learning a probably almost optimal weight vector for a neuron is so difficult that the minimum error cannot even be approximated to within a constant factor in polynomial time (unless RP = NP); we obtain the same hardness result for several variants of this problem. We considerably strengthen these negative results for neurons with binary weights 0 or 1. We also show that neither heuristic learning nor learning by sigmoidal neurons with a constant reject rate is efficiently possible (unless RP = NP).
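
To illustrate the positive result the abstract starts from, here is a minimal sketch of training a single neuron on a consistent sample via linear programming. It assumes the labels really are produced by some halfspace (the easy case, before the representational power is overstrained); the function name and the use of scipy are our choices, not the paper's.

    # Sketch: learn a consistent halfspace (McCulloch-Pitts neuron) by an LP.
    # Assumption: the sample IS separable by some halfspace; names are ours.
    import numpy as np
    from scipy.optimize import linprog

    def learn_halfspace(X, y):
        """Find (w, t) with sign(w @ x_i - t) == y_i for all i, via an LP.

        X: (m, n) array of examples; y: length-m array of labels in {-1, +1}.
        Returns (w, t) on success, or None if no consistent halfspace exists.
        """
        m, n = X.shape
        # Variables z = (w_1, ..., w_n, t). Enforce y_i * (w @ x_i - t) >= 1,
        # i.e. -y_i * (w @ x_i - t) <= -1: a pure feasibility LP (zero cost).
        A_ub = -y[:, None] * np.hstack([X, -np.ones((m, 1))])
        b_ub = -np.ones(m)
        c = np.zeros(n + 1)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * (n + 1), method="highs")
        if not res.success:
            return None  # sample is not linearly separable
        return res.x[:n], res.x[n]

    # Usage: Boolean AND is a halfspace, so the LP is feasible.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([-1, -1, -1, 1])
    w, t = learn_halfspace(X, y)
    print(w, t, np.sign(X @ w - t))  # predicted signs match y

The margin constant 1 on the right-hand side is arbitrary: any positive value works, since a consistent (w, t) can be rescaled to meet it.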
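The hardness claim concerns the inconsistent case, where one must minimize the number of disagreements. One way to formalize the quantity the abstract calls the minimum error (our notation, not verbatim from the paper):

    % Minimum disagreement for halfspaces on a sample
    % S = ((x_1, y_1), ..., (x_m, y_m)), x_i in Q^n, y_i in {-1, +1}.
    \[
      \mathrm{opt}(S) \;=\; \min_{w \in \mathbb{Q}^n,\; t \in \mathbb{Q}}
        \bigl|\{\, i : \operatorname{sign}(w \cdot x_i - t) \neq y_i \,\}\bigr| .
    \]
    % The abstract's claim: unless RP = NP, no polynomial-time learner can
    % guarantee a hypothesis with at most $c \cdot \mathrm{opt}(S)$
    % disagreements, for any constant $c \ge 1$.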