This article studies finite-size networks that consist of interconnections of synchronously evolving processors: each processor updates its state by applying an activation function to a linear combination of the previous states of all units. We prove that any function for which the left and right limits exist and are different can be applied to the neurons to yield a network that is at least as strong computationally as a finite automaton. We conclude that if this is the power required, one may choose any of the aforementioned neurons, according to the hardware available or the learning software preferred for the particular application.
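As an illustration of the kind of construction the result permits, the following is a minimal sketch, not taken from the paper: it simulates a two-state parity automaton with a small synchronous network whose single activation is the Heaviside step, a function whose left and right limits at the origin exist and differ (0 and 1). The unit layout, weights, and the two-ticks-per-input-symbol convention are all hypothetical choices made for this sketch.

```python
def step(z: float) -> float:
    # Heaviside activation: left limit 0, right limit 1 at the origin,
    # so its one-sided limits exist and are different.
    return 1.0 if z > 0 else 0.0

def run_parity_network(bits):
    """Simulate a 2-state parity automaton with a synchronous net.

    Units: s holds the automaton state (1 = odd number of 1s seen),
    a and b are helper units. Each input bit is held for two ticks so
    the helpers can compute OR and AND before s is updated, using
    XOR(s, u) = OR(s, u) AND NOT AND(s, u).
    All weights below are a hypothetical choice for this sketch.
    """
    s = 0.0
    for u in bits:
        # tick 1: helper units read the previous state s and input u
        a = step(s + u - 0.5)   # OR(s, u)
        b = step(s + u - 1.5)   # AND(s, u)
        # tick 2: the state unit reads the helpers
        s = step(a - b - 0.5)   # XOR(s, u)
    return int(s)
```

For example, `run_parity_network([1, 0, 1, 1])` returns 1 (three 1s, odd parity). Smoother activations with distinct one-sided limits would, per the theorem, support an analogous construction with suitably scaled weights.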