A probabilistic perceptron is a device that computes a representation of a predicate as a linear threshold function. In contrast to the classical perceptron, this representation is only correct with probability close to 1. The device is constructed using the method of superimposed coding. The resulting advantages over the original perceptron may be summarized as follows: there is a trivial learning theorem; the size of the coefficients remains small; and the condition of linear separability is not required. It is argued that these advantages outweigh the disadvantages. To support this claim, experimental work on full-text retrieval is presented. In passing, a qualitative comparison between several information retrieval methods is obtained.
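The abstract does not spell out the construction, but the core idea of superimposed coding can be illustrated independently. The following is a minimal sketch under assumed parameters (code width, codeword weight, and the hash-based codeword scheme are all illustrative, not taken from the paper): each item receives a sparse random binary codeword, a set of items is encoded as the bitwise OR (union) of the member codewords, and membership is tested by checking that all of an item's bits are present. Members are always accepted; non-members are rejected with probability close to 1, which mirrors the "correct with probability close to 1" behaviour described above.

```python
import hashlib
import random


def codeword(item, n_bits=256, n_ones=8):
    # Deterministic sparse codeword for an item: n_ones positions
    # out of n_bits, chosen pseudo-randomly from a hash of the item.
    # (Illustrative scheme, not the paper's exact construction.)
    seed = int.from_bytes(hashlib.sha256(item.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return frozenset(rng.sample(range(n_bits), n_ones))


def superimpose(items, n_bits=256, n_ones=8):
    # The code of a set is the union (bitwise OR) of member codewords.
    code = set()
    for item in items:
        code |= codeword(item, n_bits, n_ones)
    return code


def probably_contains(code, item, n_bits=256, n_ones=8):
    # Membership test: every bit of the item's codeword must be set.
    # No false negatives; false positives occur with small probability
    # that grows as more codewords are superimposed.
    return codeword(item, n_bits, n_ones) <= code


memory = superimpose(["cat", "dog", "fish"])
print(probably_contains(memory, "cat"))   # member: always True
print(probably_contains(memory, "bird"))  # non-member: False w.h.p.
```

Because such a superimposed code yields a fixed-width binary feature vector on which a simple threshold test decides membership, it lends itself to the linear-threshold representation the abstract describes; the price is the small, controllable error probability.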