In this paper we present a vector space framework to study short-term memory filters in dynamic neural networks. We define parameters to quantify the function of feedforward and recursive linear memory filters. We show, using vector spaces, which optimization problem is solved by the PEs of the first hidden layer of the single-input focused network architecture. Due to the special properties of the gamma bases, recursion introduces an extra parameter lambda (the time constant of the leaky integrator) that displaces the memory manifold towards the desired signal when the mean square error is minimized. In contrast, for the feedforward memory filter the angle between the desired signal and the memory manifold is fixed for a given memory order. The feedback parameter can be adapted by gradient descent, but the optimization is nonconvex.
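To make the role of the feedback parameter concrete, the following is a minimal sketch assuming the standard gamma memory recursion x_k(n) = (1 - lambda) x_k(n-1) + lambda x_{k-1}(n-1); the signals u, d and the helper names are hypothetical, not from the paper. Setting lambda = 1 recovers the ordinary feedforward tapped delay line. For each fixed lambda the tap weights are a least-squares projection of the desired signal onto the memory manifold, so sweeping lambda traces the (generally nonconvex) error surface that gradient descent on the feedback parameter would have to navigate.

```python
import numpy as np

def gamma_taps(u, K, lam):
    """Tap signals of a gamma memory of order K.

    Recursion (assumed standard form):
        x_0(n) = u(n)
        x_k(n) = (1 - lam) * x_k(n-1) + lam * x_{k-1}(n-1),  k >= 1
    lam = 1 reduces to an ordinary tapped delay line (feedforward case).
    """
    N = len(u)
    x = np.zeros((K + 1, N))
    x[0] = u
    for n in range(1, N):
        for k in range(1, K + 1):
            x[k, n] = (1 - lam) * x[k, n - 1] + lam * x[k - 1, n - 1]
    return x

def mse_for_lambda(u, d, K, lam):
    """MSE after projecting d onto the memory manifold for fixed lam.

    The optimal tap weights are the least-squares solution, i.e. the
    orthogonal projection of the desired signal onto the span of the taps.
    """
    X = gamma_taps(u, K, lam).T              # (N, K+1) design matrix
    w, *_ = np.linalg.lstsq(X, d, rcond=None)
    e = d - X @ w
    return np.mean(e ** 2)

# Toy demo (hypothetical signals): desired signal is a delayed, noisy input.
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
d = np.roll(u, 5) + 0.1 * rng.standard_normal(500)

# Sweep lam to trace the error surface; the feedforward case (lam = 1)
# is a single point, while recursion exposes a whole curve of manifolds.
for lam in (0.2, 0.4, 0.6, 0.8, 1.0):
    print(f"lambda = {lam:.1f}  MSE = {mse_for_lambda(u, d, K=3, lam=lam):.4f}")
```

In this sketch the memory order K is held fixed; only the manifold's orientation changes with lambda, which is the geometric picture behind the claim that recursion displaces the memory manifold towards the desired signal while the feedforward angle stays fixed.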