ONLINE GIBBS LEARNING - II - APPLICATION TO PERCEPTRON AND MULTILAYER NETWORKS

Citation
Jw. Kim and H. Sompolinsky, ONLINE GIBBS LEARNING - II - APPLICATION TO PERCEPTRON AND MULTILAYER NETWORKS, Physical review. E, Statistical physics, plasmas, fluids, and related interdisciplinary topics, 58(2), 1998, pp. 2348-2362
Citations number
32
Subject Categories
Physics, Mathematical; Physics, Fluids & Plasmas
ISSN journal
1063-651X
Volume
58
Issue
2
Year of publication
1998
Part
B
Pages
2348 - 2362
Database
ISI
SICI code
1063-651X(1998)58:2<2348:OGL-I->2.0.ZU;2-Z
Abstract
In the preceding paper (''On-line Gibbs Learning. I. General Theory'') we have presented the on-line Gibbs algorithm (OLGA) and studied analytically its asymptotic convergence. In this paper we apply OLGA to on-line supervised learning in several network architectures: a single-layer perceptron, two-layer committee machine, and a winner-takes-all (WTA) classifier. The behavior of OLGA for a single-layer perceptron is studied both analytically and numerically for a variety of rules: a realizable perceptron rule, a perceptron rule corrupted by output and input noise, and a rule generated by a committee machine. The two-layer committee machine is studied numerically for the cases of learning a realizable rule as well as a rule that is corrupted by output noise. The WTA network is studied numerically for the case of a realizable rule. The asymptotic results reported in this paper agree with the predictions of the general theory of OLGA presented in paper I. In all the studied cases, OLGA converges to a set of weights that minimizes the generalization error. When the learning rate is chosen as a power law with an optimal power, OLGA converges with a power law that is the same as that of batch learning.
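To make the setting of the abstract concrete, the following is a minimal sketch of on-line learning of a realizable single-layer perceptron rule with a power-law learning-rate schedule. It is illustrative only: the actual OLGA update rule is defined in paper I and is not reproduced here; the update used below is a plain perceptron correction, and the dimension N, schedule parameters eta0 and p, and number of examples are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                  # input dimension (assumption)

# Teacher defining the realizable perceptron rule y = sign(w0 . x)
teacher = rng.standard_normal(N)
teacher /= np.linalg.norm(teacher)

w = rng.standard_normal(N)               # student weights, random start

eta0, p = 0.5, 0.5                       # power-law schedule eta_t = eta0 * t**(-p) (assumption)
for t in range(1, 20001):
    x = rng.standard_normal(N)           # one fresh example per time step (on-line setting)
    y = np.sign(teacher @ x)             # label from the realizable rule
    eta = eta0 * t ** (-p)
    if np.sign(w @ x) != y:              # simple perceptron-style correction on a mistake
        w += eta * y * x

# For a perceptron, the generalization error is eps = arccos(R) / pi,
# where R is the overlap between student and teacher directions.
R = (w @ teacher) / np.linalg.norm(w)
eps = np.arccos(np.clip(R, -1.0, 1.0)) / np.pi
print(f"overlap R = {R:.3f}, generalization error eps = {eps:.3f}")
```

With a decaying rate the student direction aligns with the teacher over time, so the overlap R grows toward 1 and eps falls; the choice of the power p controls the asymptotic decay rate of the generalization error, which is the quantity analyzed in the paper.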