LEARNING VIEWPOINT-INVARIANT FACE REPRESENTATIONS FROM VISUAL EXPERIENCE IN AN ATTRACTOR NETWORK

Citation
M.S. Bartlett and T.J. Sejnowski, Learning viewpoint-invariant face representations from visual experience in an attractor network, Network, 9(3), 1998, pp. 399-417
Citations number
53
Categorie Soggetti
Computer Science, Artificial Intelligence; Neurosciences; Engineering, Electrical & Electronic
Journal title
Network
ISSN journal
0954898X
Volume
9
Issue
3
Year of publication
1998
Pages
399 - 417
Database
ISI
SICI code
0954-898X(1998)9:3<399:LVFRFV>2.0.ZU;2-#
Abstract
In natural visual experience, different views of an object or face tend to appear in close temporal proximity as an animal manipulates the object or navigates around it, or as a face changes expression or pose. A set of simulations is presented which demonstrate how viewpoint-invariant representations of faces can be developed from visual experience by capturing the temporal relationships among the input patterns. The simulations explored the interaction of temporal smoothing of activity signals with Hebbian learning in both a feedforward layer and a second, recurrent layer of a network. The feedforward connections were trained by competitive Hebbian learning with temporal smoothing of the post-synaptic unit activities. The recurrent layer was a generalization of a Hopfield network with a low-pass temporal filter on all unit activities. The combination of basic Hebbian learning with temporal smoothing of unit activities produced an attractor network learning rule that associated temporally proximal input patterns into basins of attraction. These two mechanisms were demonstrated in a model that took grey-level images of faces as input. Following training on image sequences of faces as they changed pose, multiple views of a given face fell into the same basin of attraction, and the system acquired representations of faces that were approximately viewpoint-invariant.
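The core mechanism in the feedforward layer, Hebbian learning driven by a low-pass temporal filter (trace) of post-synaptic activity, can be illustrated with a minimal toy sketch. This is not the paper's model: the layer sizes, learning rate, smoothing constant, winner-take-all competition, and the Foldiak-style bounded weight update below are all simplifying assumptions chosen for illustration. The sketch shows how two "views" presented in temporal proximity come to drive the same output unit.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 8, 4   # toy layer sizes (hypothetical, not from the paper)
lam = 0.5            # temporal smoothing constant for the activity trace
beta = 0.5           # strength of the trace's bias on the competition
lr = 0.1             # Hebbian learning rate

W = rng.normal(scale=0.01, size=(n_out, n_in))
y_trace = np.zeros(n_out)

# Two "views" of the same toy "face", always shown in temporal proximity.
views = [np.eye(n_in)[0], np.eye(n_in)[1]]

for _ in range(100):
    for x in views:
        # Competition biased by the low-pass-filtered (traced) activity:
        # the unit that just won keeps an advantage on the next input.
        a = W @ x + beta * y_trace
        y = np.zeros(n_out)
        y[np.argmax(a)] = 1.0
        # Low-pass temporal filter on the post-synaptic activity.
        y_trace = (1 - lam) * y_trace + lam * y
        # Hebbian update driven by the smoothed activity; the
        # (x - W) form keeps the weights bounded without explicit
        # normalization.
        W += lr * y_trace[:, None] * (x - W)

# After training, both views should drive the same output unit,
# i.e. the representation is invariant across the two views.
u0 = int(np.argmax(W @ views[0]))
u1 = int(np.argmax(W @ views[1]))
print(u0, u1)
```

Because the trace carries activity across time steps, the unit that responds to one view is biased to win on the temporally adjacent view as well, and the Hebbian update then binds both views to its weight vector, a simplified version of how the paper's temporal smoothing associates temporally proximal patterns.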