RECURRENT NEURAL NETWORKS CAN BE TRAINED TO BE MAXIMUM A-POSTERIORI PROBABILITY CLASSIFIERS

Citation
S. Santini and A. Del Bimbo, RECURRENT NEURAL NETWORKS CAN BE TRAINED TO BE MAXIMUM A-POSTERIORI PROBABILITY CLASSIFIERS, Neural Networks, 8(1), 1995, pp. 25-29
Number of citations
7
Subject categories
Mathematical Methods, Biology & Medicine; Computer Sciences, Special Topics; Computer Science, Artificial Intelligence; Neurosciences; Physics, Applied
Journal title
Neural Networks
ISSN journal
0893-6080
Volume
8
Issue
1
Year of publication
1995
Pages
25 - 29
Database
ISI
SICI code
0893-6080(1995)8:1<25:RNNCBT>2.0.ZU;2-4
Abstract
This paper proves that supervised learning algorithms used to train recurrent neural networks have an equilibrium point when the network implements a maximum a posteriori probability (MAP) classifier. The result holds as a limit when the size of the training set goes to infinity. The result is general, because it arises as a property of cost-minimizing algorithms, but to prove it we implicitly assume that the network we are training has enough computing power to actually implement the MAP classifier. This assumption can be satisfied using a universal dynamic system approximator. We refer our discussion to Block Feedback Neural Networks (B(F)Ns) and show that they actually have the universal approximation property.
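The equilibrium claim rests on a standard property of mean-squared-error minimization that the abstract only alludes to. The following is a minimal worked sketch of that property in our own notation (f, x, c_k, d_k are assumptions of this illustration, not the paper's symbols), not the paper's proof:

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Sketch: why cost minimization pushes a trained classifier toward
% posterior probabilities, and hence toward the MAP decision rule.
% Let $x$ be the input, $c_1,\dots,c_K$ the classes, and let the
% $k$-th target be the indicator $d_k = 1$ if $x \in c_k$, else $0$.
% For network output $f_k(x)$, the expected squared-error cost is
\[
  J[f_k] \;=\; \mathbb{E}\!\left[\bigl(f_k(x) - d_k\bigr)^2\right]
  \;=\; \mathbb{E}_x\!\left[\, f_k(x)^2
        - 2\, f_k(x)\,\mathbb{E}[d_k \mid x]
        + \mathbb{E}[d_k^2 \mid x] \,\right].
\]
% Minimizing the integrand pointwise in $f_k(x)$ gives the equilibrium
\[
  f_k^{*}(x) \;=\; \mathbb{E}[d_k \mid x] \;=\; P(c_k \mid x),
\]
% since $d_k$ is a 0/1 indicator. Selecting the class with the largest
% output then realizes the MAP classifier,
\[
  \hat{c}(x) \;=\; \arg\max_{k}\, P(c_k \mid x),
\]
% provided the network can actually represent $f_k^{*}$ -- the
% computing-power assumption that the paper discharges via the
% universal approximation property of B(F)Ns.
\end{document}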