COMPUTATIONAL CAPABILITIES OF RECURRENT NARX NEURAL NETWORKS

Citation
H.T. Siegelmann et al., Computational Capabilities of Recurrent NARX Neural Networks, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 27(2), 1997, pp. 208-215
Citations number
30
Subject Categories
"Control Theory & Cybernetics", "Computer Science, Cybernetics", "Robotics & Automatic Control"
ISSN journal
1083-4419
Volume
27
Issue
2
Year of publication
1997
Pages
208 - 215
Database
ISI
SICI code
1083-4419(1997)27:2<208:CCORNN>2.0.ZU;2-O
Abstract
Recently, fully connected recurrent neural networks have been proven to be computationally rich: at least as powerful as Turing machines. This work focuses on another class of networks, which is popular in control applications and has been found to be very effective at learning a variety of problems. These networks are based upon Nonlinear AutoRegressive models with eXogenous Inputs (NARX models), and are therefore called NARX networks. As opposed to other recurrent networks, NARX networks have limited feedback, which comes only from the output neuron rather than from hidden states. They are formalized by y(t) = Psi(u(t-n(u)), ..., u(t-1), u(t), y(t-n(y)), ..., y(t-1)), where u(t) and y(t) represent the input and output of the network at time t, n(u) and n(y) are the input and output orders, and the function Psi is the mapping performed by a Multilayer Perceptron. We constructively prove that NARX networks with a finite number of parameters are computationally as strong as fully connected recurrent networks and thus Turing machines. We conclude that, in theory, one can use NARX models rather than conventional recurrent networks without any computational loss, even though their feedback is limited. Furthermore, these results raise the issue of what amount of feedback or recurrence is necessary for any network to be Turing equivalent, and what restrictions on feedback limit computational power.
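The NARX recurrence in the abstract can be sketched in a few lines of code. The following is a minimal illustrative implementation, not the paper's construction: the one-hidden-layer tanh MLP, the weight shapes, and the zero-initialized output history are all assumptions made here for concreteness.

```python
import numpy as np

def mlp(w1, b1, w2, b2, x):
    """One-hidden-layer MLP with tanh hidden units (an assumed form of Psi)."""
    h = np.tanh(w1 @ x + b1)
    return float(w2 @ h + b2)

def narx_run(u, n_u, n_y, params):
    """Run a NARX network: y(t) = Psi(u(t-n_u), ..., u(t), y(t-n_y), ..., y(t-1)).

    Feedback comes only from past outputs, never from hidden states;
    taps that would reach before t = 0 are padded with zeros.
    """
    w1, b1, w2, b2 = params
    y = [0.0] * n_y                     # assumed zero-initialized output history
    for t in range(len(u)):
        # input taps u(t-n_u), ..., u(t-1), u(t)
        u_taps = [u[t - k] if t - k >= 0 else 0.0 for k in range(n_u, -1, -1)]
        y_taps = y[-n_y:]               # output taps y(t-n_y), ..., y(t-1)
        x = np.array(u_taps + y_taps)
        y.append(mlp(w1, b1, w2, b2, x))
    return y[n_y:]                      # drop the initial padding
```

For example, with input order n_u = 2 and output order n_y = 3, the MLP sees an input vector of dimension (n_u + 1) + n_y = 6 at every time step, and each new output is fed back as a tap for the next n_y steps.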