Learning in a perceptron having a discrete weight space, where each weight can take $2L+1$ different values, is examined analytically and numerically. The learning algorithm is based on the training of the continuous perceptron and prediction following the clipped weights. The learning is described by a new set of order parameters, composed of the overlaps between the teacher and the continuous/clipped students. Different scenarios are examined, among them on-line learning with discrete and continuous transfer functions.
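For concreteness, the order parameters referred to above are overlaps of the usual kind; the following is a minimal sketch, assuming the notation $\mathbf{J}$ for the continuous student, $\mathbf{J}^c$ for its clipped counterpart, and $\mathbf{B}$ for the teacher (none of these symbols appear in the abstract itself):
\[
R = \frac{\mathbf{J}\cdot\mathbf{B}}{N}, \qquad
R^c = \frac{\mathbf{J}^c\cdot\mathbf{B}}{N}, \qquad
Q = \frac{\mathbf{J}\cdot\mathbf{J}}{N}, \qquad
D = \frac{\mathbf{J}\cdot\mathbf{J}^c}{N},
\]
with the generalization error of each student a function of its normalized overlap with the teacher.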
The generalization error of the clipped weights decays asymptotically as $\exp(-K\alpha^2)$ in the case of on-line learning with binary activation functions, and as $\exp(-e^{\lambda\alpha})$ in the case of on-line learning with a continuous one, where $\alpha$ is the number of examples divided by $N$, the size of the input vector, and $K$ is a positive constant. For finite $N$ and $L$, perfect agreement between the discrete student and the teacher is obtained for $\alpha \propto L\sqrt{\ln(NL)}$. A crossover to a generalization error proportional to $1/\alpha$, characterizing continuous weights with binary output, is obtained for synaptic depth $L > O(\sqrt{N})$.
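Since the abstract only names the algorithm, a minimal on-line sketch may help fix ideas: a continuous student is trained with the standard perceptron rule on labels produced by a discrete teacher, and prediction uses the clipped student. All symbols, sizes, the learning rate, and the rescaling applied before clipping are illustrative assumptions for this sketch, not the paper's exact prescription.

```python
import numpy as np

rng = np.random.default_rng(0)

N, L = 500, 2          # input size and synaptic depth (illustrative values)
alpha_max = 50         # number of examples per input dimension

def clip(w, L):
    """Project continuous weights onto the 2L+1 allowed values -L, ..., L."""
    return np.clip(np.rint(w), -L, L)

def eps_g(u, v):
    """Generalization error of a binary-output perceptron with Gaussian
    inputs: eps = arccos(rho) / pi, rho being the normalized overlap."""
    rho = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(rho, -1.0, 1.0)) / np.pi

B = rng.integers(-L, L + 1, size=N).astype(float)  # discrete teacher
W = 0.01 * rng.standard_normal(N)                  # continuous student

for _ in range(alpha_max * N):
    x = rng.standard_normal(N)
    tau = np.sign(B @ x)                 # teacher label
    if np.sign(W @ x) != tau:            # classic perceptron update on W
        W += tau * x / np.sqrt(N)

# Predict with the clipped student; the rescaling before rounding is an
# ad hoc choice for this sketch, not the paper's normalization.
Wc = clip(W * (L / np.abs(W).max()), L)

print(f"continuous student eps_g: {eps_g(W, B):.4f}")
print(f"clipped    student eps_g: {eps_g(Wc, B):.4f}")
```

Comparing the two printed errors as $\alpha$ grows illustrates the regimes discussed above: the continuous student's error falls off algebraically, while the clipped student can lock onto the discrete teacher exactly once every rounded weight matches.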