AN ANALOG VLSI RECURRENT NEURAL-NETWORK LEARNING A CONTINUOUS-TIME TRAJECTORY

Authors
G. Cauwenberghs
Citation
G. Cauwenberghs, AN ANALOG VLSI RECURRENT NEURAL-NETWORK LEARNING A CONTINUOUS-TIME TRAJECTORY, IEEE Transactions on Neural Networks, 7(2), 1996, pp. 346-361
Citations number
47
Subject Categories
Computer Application, Chemistry & Engineering; Engineering, Electrical & Electronic; Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture; Computer Science, Theory & Methods
ISSN journal
1045-9227
Volume
7
Issue
2
Year of publication
1996
Pages
346 - 361
Database
ISI
SICI code
1045-9227(1996)7:2<346:AAVRNL>2.0.ZU;2-T
Abstract
Real-time algorithms for gradient descent supervised learning in recurrent dynamical neural networks fail to support scalable VLSI (very large scale integration) implementation, due to their complexity, which grows sharply with the network dimension. We present an alternative implementation in analog VLSI, which employs a stochastic perturbative algorithm to observe the gradient of the error index directly on the network in random directions of the parameter space, thereby avoiding the tedious task of deriving the gradient from an explicit model of the network dynamics. The network contains six fully recurrent neurons with continuous-time dynamics, providing 42 free parameters which comprise connection strengths and thresholds. The chip implementing the network includes local provisions supporting both the learning and storage of the parameters, integrated in a scalable architecture which can be readily expanded for applications of learning recurrent dynamical networks requiring larger dimensionality. We describe and characterize the functional elements comprising the implemented recurrent network and integrated learning system, and include experimental results obtained from training the network to produce two outputs following a circular trajectory, representing a quadrature-phase oscillator.
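The following is a minimal software sketch of the random-direction perturbative gradient estimate described in the abstract (akin to weight perturbation / simultaneous perturbation), applied to a small continuous-time recurrent network trained toward a circular two-output target. All names, network equations, and hyperparameters (N, TAU, simulate, trajectory_error, beta, eta) are illustrative assumptions, not details of the chip or of the paper's exact formulation.

```python
import numpy as np

N = 6                       # fully recurrent neurons
P = N * N + N               # 42 free parameters: connection strengths + thresholds
TAU, DT, T_STEPS = 1.0, 0.05, 400
OMEGA = 2 * np.pi / (T_STEPS * DT)   # one target revolution per trial (assumed)

def unpack(theta):
    W = theta[:N * N].reshape(N, N)  # connection strengths
    b = theta[N * N:]                # thresholds
    return W, b

def simulate(theta):
    """Integrate tau*dx/dt = -x + W*tanh(x) + b and return two output traces."""
    W, b = unpack(theta)
    x = np.zeros(N)
    outputs = np.empty((T_STEPS, 2))
    for k in range(T_STEPS):
        x += (DT / TAU) * (-x + W @ np.tanh(x) + b)
        outputs[k] = np.tanh(x[:2])          # first two neurons read out (assumption)
    return outputs

def trajectory_error(theta):
    """Mean squared distance to a circular (quadrature-phase) target trajectory."""
    t = np.arange(T_STEPS) * DT
    target = 0.8 * np.stack([np.sin(OMEGA * t), np.cos(OMEGA * t)], axis=1)
    return np.mean((simulate(theta) - target) ** 2)

def perturbative_step(theta, rng, beta=0.01, eta=0.05):
    """One update: probe the error in a random direction, no analytic gradient model."""
    pi = rng.choice([-1.0, 1.0], size=P)             # random perturbation direction
    e_plus = trajectory_error(theta + beta * pi)
    e_minus = trajectory_error(theta - beta * pi)
    grad_est = (e_plus - e_minus) / (2 * beta) * pi  # projected-gradient estimate
    return theta - eta * grad_est

rng = np.random.default_rng(0)
theta = 0.1 * rng.standard_normal(P)
for epoch in range(2000):
    theta = perturbative_step(theta, rng)
print("final trajectory error:", trajectory_error(theta))
```

The key property this sketch illustrates is that each update needs only two scalar error measurements of the running network, which is what makes the approach attractive for parallel analog hardware where an explicit model-based gradient would be impractical.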