The idea that a trained network can assign a confidence number to its prediction, indicating the level of its reliability, is addressed and exemplified by an analytical examination of a perceptron with discrete and continuous output units. Results are derived for both the Gibbs and the Bayes scenarios. The information gain provided by the confidence number is estimated with several entropy-based measures. [S1063-651X(99)06606-4].
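As a purely illustrative sketch (the symbols below are not taken from the paper): writing C for the correctness of a single prediction and z for the confidence number assigned to it, one entropy-based estimate of the information gained from z is the reduction in uncertainty about C,

\begin{equation*}
  I(C;z) \;=\; H(C) \;-\; \big\langle H(C \mid z) \big\rangle_{z},
  \qquad
  H(C) \;=\; -\sum_{c} p(c)\,\log_{2} p(c),
\end{equation*}

i.e., the mutual information between the confidence number and the outcome of the prediction; the specific measures used in the paper may differ.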