Analog neural networks of limited precision are essentially k-ary neural networks. That is, their processors classify the input space into k regions, bounded by k - 1 parallel hyperplanes, by computing k-ary weighted multilinear threshold functions. The ability of k-ary neural networks to learn k-ary weighted multilinear threshold functions is examined. The well-known perceptron learning algorithm is generalized to a k-ary perceptron algorithm with a guaranteed convergence property. Littlestone's winnow algorithm is superior to the perceptron learning algorithm when the ratio of the sum of the weights to the threshold value of the function being learned is small. A k-ary winnow algorithm is presented whose mistake bound depends on this ratio and on the ratio between the largest and smallest thresholds. (C) 1994 Academic Press, Inc.
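The classification rule described above can be sketched as follows. This is a minimal illustration, not the paper's own code; it assumes the standard definition of a k-ary threshold unit, in which k - 1 sorted thresholds over a single weight vector define k - 1 parallel hyperplanes, and the output region is the number of thresholds the weighted sum meets or exceeds.

```python
import bisect

def k_ary_threshold(w, thresholds, x):
    # Weighted sum against a single normal vector w; the k - 1 sorted
    # thresholds t_1 < ... < t_{k-1} define the parallel hyperplanes
    # w . x = t_j that bound the k regions.
    s = sum(wi * xi for wi, xi in zip(w, x))
    # Region index = number of thresholds the sum meets or exceeds.
    return bisect.bisect_right(thresholds, s)

# Toy example: w = (1, 1) with thresholds 2 and 5 splits the plane
# into three parallel bands (regions 0, 1, 2).
w = [1.0, 1.0]
thresholds = [2.0, 5.0]
print(k_ary_threshold(w, thresholds, [0.5, 0.5]))  # region 0 (sum 1.0 < 2)
print(k_ary_threshold(w, thresholds, [2.0, 1.5]))  # region 1 (2 <= 3.5 < 5)
print(k_ary_threshold(w, thresholds, [4.0, 3.0]))  # region 2 (sum 7.0 >= 5)
```

With k = 2 and a single threshold this reduces to an ordinary linear threshold unit, which is why the binary perceptron and winnow algorithms are the natural starting points for the k-ary generalizations studied here.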