Experimental data show that biological synapses behave quite differently from the symbolic synapses in all common artificial neural network models. Biological synapses are dynamic: their "weight" changes on a short timescale by several hundred percent, depending on the past input to the synapse. In this article we address the question of how this inherent synaptic dynamics (which should not be confused with long-term learning) affects the computational power of a neural network. In particular, we analyze computations on temporal and spatiotemporal patterns, and we give a complete mathematical characterization of all filters that can be approximated by feedforward neural networks with dynamic synapses. It turns out that even with just a single hidden layer, such networks can approximate a very rich class of nonlinear filters: all filters that can be characterized by Volterra series. This result is robust with regard to various changes in the model for synaptic dynamics. Our characterization result provides, for all nonlinear filters that are approximable by Volterra series, a new complexity hierarchy related to the cost of implementing such filters in neural systems.
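
For orientation, a filter characterized by a (finite) Volterra series maps an input function x(·) to an output of the form shown below, where the integral kernels h_n are the Volterra kernels of order n. This is the standard textbook definition; the notation is not taken from the article itself.

\[
  y(t) \;=\; h_0 \;+\; \sum_{n=1}^{N} \int_0^{\infty}\!\!\cdots\!\int_0^{\infty}
  h_n(\tau_1,\dots,\tau_n)\, x(t-\tau_1)\cdots x(t-\tau_n)\; d\tau_1 \cdots d\tau_n .
\]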
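As an illustration of the kind of short-term synaptic dynamics meant here (not the specific model analyzed in the article), the following sketch implements a Markram-Tsodyks-style per-spike recursion in which the amplitude of each postsynaptic response depends on the preceding spike history through a facilitation variable u and a depression variable R; all parameter values are placeholders chosen only for illustration.

```python
import numpy as np

def synaptic_amplitudes(spike_times, w=1.0, U=0.5, D=1.1, F=0.05):
    """Amplitude w * u_k * R_k of the k-th postsynaptic response to a spike train.

    Illustrative Markram-Tsodyks-style recursion (not the article's exact model):
      u_k = U + u_{k-1} (1 - U) exp(-dt / F)        (facilitation decays toward U)
      R_k = 1 + (R_{k-1} - u_{k-1} R_{k-1} - 1) exp(-dt / D)   (recovery toward 1)
    with u_1 = U, R_1 = 1 and dt the preceding inter-spike interval.
    """
    u, R = U, 1.0                      # state at the first spike
    amps = [w * u * R]
    for k in range(1, len(spike_times)):
        dt = spike_times[k] - spike_times[k - 1]
        u_new = U + u * (1.0 - U) * np.exp(-dt / F)          # facilitation
        R_new = 1.0 + (R - u * R - 1.0) * np.exp(-dt / D)    # depression/recovery
        u, R = u_new, R_new
        amps.append(w * u * R)
    return np.array(amps)

# Example: for these placeholder parameters a regular 20 Hz train yields
# successively smaller responses, i.e. a depression-dominated synapse.
print(synaptic_amplitudes(np.arange(0.0, 0.5, 0.05)))
```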