S. Santini and A. Del Bimbo, RECURRENT NEURAL NETWORKS CAN BE TRAINED TO BE MAXIMUM A-POSTERIORI PROBABILITY CLASSIFIERS, Neural Networks, 8(1), 1995, pp. 25-29
This paper proves that the supervised learning algorithms used to train recurrent neural networks have an equilibrium point when the network implements a maximum a posteriori probability (MAP) classifier. The result holds in the limit as the size of the training set goes to infinity. The result is general, because it arises as a property of cost-minimizing algorithms, but proving it implicitly assumes that the network being trained has enough computing power to actually implement the MAP classifier. This assumption can be satisfied by using a universal dynamic system approximator. We frame our discussion in terms of Block Feedback Neural Networks (BFNs) and show that they indeed have this universal approximation property.
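
As a hedged illustration of why cost minimization pushes a sufficiently powerful network toward MAP behavior, the sketch below follows the standard posterior-estimation argument; the squared-error cost and one-hot class targets are assumptions made here for the sake of the example, not the paper's exact cost functional or network dynamics.

% Sketch (assumptions: squared-error cost, one-hot targets t_k in {0,1}
% with E[t_k | x] = P(C_k | x), infinite-data limit so expectations are exact).
% The expected cost of an output y_k(x) for class k, conditioned on input x, is
\[
  \mathbb{E}\!\left[(y_k(x) - t_k)^2 \,\middle|\, x\right]
  = y_k(x)^2 - 2\, y_k(x)\, P(C_k \mid x) + P(C_k \mid x),
\]
% which is minimized pointwise by
\[
  y_k^{*}(x) = \arg\min_{y_k} \;\mathbb{E}\!\left[(y_k - t_k)^2 \,\middle|\, x\right]
  = P(C_k \mid x).
\]
% Hence the cost-minimizing outputs are the class posteriors, and deciding by
% argmax_k y_k^*(x) is exactly the MAP rule, provided the network can actually
% represent y_k^*(x) (the universal approximation assumption discussed above).

In other words, the equilibrium of the training algorithm coincides with the MAP classifier only when the architecture is rich enough to realize the posterior map, which is why the abstract ties the result to a universal dynamic system approximation property of BFNs.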