Pollack (1991) demonstrated that second-order recurrent neural networks can act as dynamical recognizers for formal languages when trained on positive and negative examples, and observed both phase transitions in learning and iterated function system-like fractal state sets. Follow-on work focused mainly on the extraction and minimization of a finite state automaton (FSA) from the trained network. However, such networks are capable of inducing languages that are not regular and therefore not equivalent to any FSA. Indeed, it may be simpler for a small network to fit its training data by inducing such a nonregular language. But when is the network's language not regular? In this article, using a low-dimensional network capable of learning all the Tomita data sets, we present an empirical method for testing whether the language induced by the network is regular. We also provide a detailed epsilon-machine analysis of trained networks for both regular and nonregular languages.
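
For readers unfamiliar with the architecture, the sketch below illustrates the second-order (multiplicative) state update underlying Pollack-style dynamical recognizers: the next state is a squashed bilinear function of the current state and a one-hot input symbol. The weight-tensor shape, sigmoid nonlinearity, readout convention, and all parameter names here are illustrative assumptions, not details from the article itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SecondOrderRNN:
    """Minimal second-order recurrent recognizer (illustrative sketch)."""

    def __init__(self, n_states, n_symbols, seed=0):
        rng = np.random.default_rng(seed)
        # W[i, j, k]: weight coupling state unit j and input symbol k
        # to next-state unit i -- the "second-order" interaction.
        self.W = rng.normal(scale=0.5, size=(n_states, n_states, n_symbols))
        self.b = np.zeros(n_states)
        self.s0 = np.full(n_states, 0.5)  # fixed initial state

    def step(self, s, x):
        # Bilinear update: s'_i = g(sum_{j,k} W[i,j,k] * s[j] * x[k] + b[i])
        return sigmoid(np.einsum('ijk,j,k->i', self.W, s, x) + self.b)

    def accepts(self, string, alphabet='01'):
        s = self.s0
        for ch in string:
            x = np.eye(len(alphabet))[alphabet.index(ch)]  # one-hot symbol
            s = self.step(s, x)
        return s[0] > 0.5  # assumed convention: unit 0 is the accept unit

# Example: an untrained recognizer classifying a few binary strings.
rnn = SecondOrderRNN(n_states=3, n_symbols=2)
for w in ['', '0', '01', '110']:
    print(repr(w), rnn.accepts(w))
```

Because the state update multiplies state and input, each input symbol selects a different affine-plus-squashing map on the state space; iterating such maps is what gives rise to the iterated function system-like fractal state sets mentioned above.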