We address the precision required by a threshold element in order to implement a linearly separable mapping of N binary inputs.
In contrast to previous work, we require only that the mapping of P randomly chosen training examples be implemented correctly with high probability, rather than demanding a perfect mapping for every possible set of inputs. Our results are obtained within the statistical mechanics approach and are thus average-case results, as opposed to the worst-case analyses in the computational learning theory literature. We show that as long as the ratio P/N is finite, then with probability close to 1 as N → ∞, a finite number of bits suffices to implement the mapping. This should be compared with the worst-case analysis, which requires at least N bits. We also calculate the ability of the constrained network to predict novel examples and compare its predictions with those of an unconstrained network, showing that as long as the training set is not too large, the learning and generalization abilities of the constrained network are not very different from those of the unconstrained one.
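
To make the setting concrete, the following Python sketch simulates the scenario described above: a threshold element whose N weights are restricted to b bits (2^b evenly spaced levels) is trained on P = αN random binary examples labelled by a random linearly separable rule, and its accuracy on novel examples is measured. Everything here (the parameter values, the clipped-perceptron update, the random "teacher" rule) is an illustrative assumption, not the paper's method or calculation.

```python
import numpy as np

rng = np.random.default_rng(0)

N, alpha, b = 200, 0.5, 4          # inputs, load P/N, bits per weight
P = int(alpha * N)
levels = 2 ** b                    # number of representable weight values
step = 2.0 / (levels - 1)          # spacing of the weight grid in [-1, 1]

# Random linearly separable rule ("teacher") and P random binary examples.
teacher = rng.standard_normal(N)
X = rng.choice([-1.0, 1.0], size=(P, N))
y = np.sign(X @ teacher)

def quantize(w, levels):
    """Snap each weight to the nearest of `levels` evenly spaced values in [-1, 1]."""
    grid = np.linspace(-1.0, 1.0, levels)
    idx = np.argmin(np.abs(w[:, None] - grid[None, :]), axis=1)
    return grid[idx]

# Clipped perceptron training (an assumed rule, not the paper's): on each
# misclassified example, move every weight one grid level in the direction
# of the standard perceptron update, then re-quantize.
w = quantize(rng.uniform(-1.0, 1.0, N), levels)
for epoch in range(200):
    errors = 0
    for mu in rng.permutation(P):
        if y[mu] * (X[mu] @ w) <= 0:       # misclassified training example
            w = quantize(w + step * y[mu] * X[mu], levels)
            errors += 1
    if errors == 0:                        # training set implemented exactly
        break

train_acc = np.mean(np.sign(X @ w) == y)

# Generalization: agreement with the teacher on fresh random inputs.
X_new = rng.choice([-1.0, 1.0], size=(2000, N))
gen_acc = np.mean(np.sign(X_new @ w) == np.sign(X_new @ teacher))
print(f"training accuracy: {train_acc:.3f}, generalization: {gen_acc:.3f}")
```

Varying b and α = P/N in this sketch lets one probe the abstract's claim qualitatively: for moderate loads α, a few bits per weight already implement the training set with high probability, and the generalization of the quantized element tracks that of an unconstrained one.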