The ability of neural net classifiers to deal with a priori information is investigated. For this purpose, back-propagation classifiers are trained on data drawn from known distributions with variable a priori probabilities, and their performance on separate test sets is evaluated. It is found that back-propagation employs a priori information in a slightly suboptimal fashion, but that this has no serious consequences for the performance of the classifier. Furthermore, it is found that the inferior generalization that results when an excessive number of network parameters is used can be (partially) ascribed to this suboptimality.
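
The abstract does not specify the distributions, priors, or network architecture used. Purely as an illustrative sketch of the kind of experiment described, the following Python fragment trains a small back-propagation classifier on data with a known class prior and compares its test error with that of the Bayes-optimal rule computed from the known distributions. All concrete choices here (two 1-D Gaussian classes, a class-1 prior of 0.8, an 8-unit hidden layer, the learning schedule) are assumptions, not the paper's setup.

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed setup (not given in the abstract): two 1-D Gaussian
    # classes, N(-1, 1) for class 0 and N(+1, 1) for class 1, with a
    # variable a priori probability P1 for class 1.
    MU0, MU1, P1 = -1.0, 1.0, 0.8

    def sample(n):
        """Draw n labelled points with class priors (1 - P1, P1)."""
        y = (rng.random(n) < P1).astype(float)
        x = rng.normal(np.where(y == 1, MU1, MU0), 1.0)
        return x[:, None], y[:, None]

    X, Y = sample(2000)       # training set
    Xt, Yt = sample(20000)    # separate test set

    # Bayes-optimal rule for this setup: decide class 1 when the
    # log-posterior odds are positive, i.e. x > 0.5 * log((1-P1)/P1).
    bayes_pred = (Xt > 0.5 * np.log((1 - P1) / P1)).astype(float)
    bayes_err = np.mean(bayes_pred != Yt)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

    # Single-hidden-layer network trained by full-batch back-propagation
    # on the cross-entropy loss; the sigmoid output estimates P(class 1 | x).
    H, LR, STEPS = 8, 0.5, 3000
    W1 = rng.normal(0, 0.5, (1, H)); b1 = np.zeros(H)
    W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)

    for _ in range(STEPS):
        a1 = np.tanh(X @ W1 + b1)           # hidden activations
        p = sigmoid(a1 @ W2 + b2)           # output probability
        dz2 = (p - Y) / len(X)              # backprop: output layer
        dW2, db2 = a1.T @ dz2, dz2.sum(0)
        dz1 = (dz2 @ W2.T) * (1 - a1**2)    # backprop: hidden layer
        dW1, db1 = X.T @ dz1, dz1.sum(0)
        W1 -= LR * dW1; b1 -= LR * db1
        W2 -= LR * dW2; b2 -= LR * db2

    pt = sigmoid(np.tanh(Xt @ W1 + b1) @ W2 + b2)
    net_err = np.mean((pt > 0.5).astype(float) != Yt)

    print(f"Bayes test error:   {bayes_err:.3f}")
    print(f"Network test error: {net_err:.3f}")

Because the Bayes rule above uses the true priors exactly, the gap between the two reported error rates gives a direct handle on how well the trained network has absorbed the a priori information, which is the quantity the abstract's comparison turns on.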