We examine the representational capabilities of first-order and second-order single-layer recurrent neural networks (SLRNNs) with hard-limiting neurons. We show that a second-order SLRNN is strictly more powerful than a first-order SLRNN. However, if the first-order SLRNN is augmented with output layers of feedforward neurons, it can implement any finite-state recognizer, but only if state-splitting is employed. When a state is split, it is divided into two equivalent states. The judicious use of state-splitting allows for efficient implementation of finite-state recognizers using augmented first-order SLRNNs.
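For reference, the two architectures can be sketched as follows, assuming the conventional hard-limiting formulation (the symbols $h$, $a_{ji}$, $b_{jk}$, and $w_{jik}$ are illustrative, not necessarily the paper's notation): a first-order SLRNN combines state units and input units additively, while a second-order SLRNN combines them multiplicatively, so that each input symbol effectively selects its own state-transition map.

\[
s_j(t+1) = h\!\Big(\sum_i a_{ji}\, s_i(t) + \sum_k b_{jk}\, x_k(t)\Big)
\qquad \text{(first order)}
\]
\[
s_j(t+1) = h\!\Big(\sum_{i,k} w_{jik}\, s_i(t)\, x_k(t)\Big)
\qquad \text{(second order)}
\]

where $h$ is a hard limiter, e.g. $h(u) = 1$ if $u > 0$ and $h(u) = 0$ otherwise.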
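The following is a minimal sketch of state-splitting on a deterministic finite-state recognizer; the representation and function name are hypothetical, not taken from the paper. The split state and its copy share all outgoing transitions, so the recognized language is unchanged, while the incoming transitions may be divided between the two copies.

    # Hypothetical illustration of state-splitting: replace state q with
    # two equivalent copies. Both copies inherit q's outgoing transitions,
    # and q's incoming edges are partitioned between them, so the machine
    # accepts exactly the same language.

    def split_state(delta, accepting, q, incoming_to_copy):
        """delta: dict mapping (state, symbol) -> state (transition table).
        accepting: set of accepting states.
        q: the state to split.
        incoming_to_copy: set of (state, symbol) edges redirected to the copy.
        Returns the new transition table and accepting set."""
        q_copy = f"{q}'"
        new_delta = {}
        for (s, a), t in delta.items():
            # Redirect a chosen subset of q's incoming edges to the copy.
            target = q_copy if t == q and (s, a) in incoming_to_copy else t
            new_delta[(s, a)] = target
            # The copy mirrors all of q's outgoing transitions.
            if s == q:
                new_delta[(q_copy, a)] = target
        new_accepting = accepting | ({q_copy} if q in accepting else set())
        return new_delta, new_accepting

    # Example: a two-state recognizer over {0, 1}; split state "B".
    delta = {("A", "0"): "A", ("A", "1"): "B",
             ("B", "0"): "A", ("B", "1"): "B"}
    delta2, acc2 = split_state(delta, {"B"}, "B", {("A", "1")})
    # "A" --1--> "B'" now, while "B'" behaves exactly like "B".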