A potential problem for connectionist accounts of inflectional morphology is the need to learn a "default" inflection (Prasada & Pinker, 1993). The early connectionist work of Rumelhart and McClelland (1986) might be interpreted as suggesting that a network can learn to treat a given inflection as the "elsewhere" case only if it applies to a much larger class of items than any other inflection. This claim is true of Rumelhart and McClelland's (1986) model, which was a two-layer network subject to the computational limitations on networks of that class (Minsky & Papert, 1969). However, it does not generalise to current models, which have available to them more sophisticated architectures and learning algorithms. In this paper, we explain the basis of the distinction, and demonstrate that given more appropriate architectural assumptions, connectionist models are perfectly capable of learning a default category and generalising as required, even in the absence of superior type frequency.
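The computational limitation at issue is that of Minsky and Papert (1969): a two-layer network (input units connected directly to output units) can only compute linearly separable mappings. A minimal sketch of the contrast, using the classic XOR mapping as a stand-in for a non-linearly-separable categorisation problem (the specific weight values below are illustrative, not drawn from any of the models discussed):

```python
import itertools
import numpy as np

# XOR: the classic non-linearly-separable mapping (Minsky & Papert, 1969).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

def two_layer(w1, w2, b):
    """Single threshold unit: a two-layer network with no hidden layer."""
    return (X @ np.array([w1, w2]) + b > 0).astype(int)

# Brute-force search over a weight grid: no two-layer threshold network
# reproduces XOR, because no line separates the two output classes.
grid = np.linspace(-2.0, 2.0, 21)
solvable = any(np.array_equal(two_layer(w1, w2, b), y)
               for w1, w2, b in itertools.product(grid, repeat=3))
print(solvable)  # False

# A hidden layer removes the limitation: hand-set weights computing
# XOR(x1, x2) = OR(x1, x2) AND NOT AND(x1, x2).
def three_layer(x):
    h_or = int(x[0] + x[1] - 0.5 > 0)    # hidden unit 1: OR
    h_and = int(x[0] + x[1] - 1.5 > 0)   # hidden unit 2: AND
    return int(h_or - h_and - 0.5 > 0)   # output: OR and not AND

print([three_layer(x) for x in X])  # [0, 1, 1, 0]
```

The brute-force search only samples a finite grid, but the negative result holds for all real-valued weights, since XOR is provably not linearly separable; the hand-wired hidden layer shows how one additional layer of trainable units changes the class of learnable mappings.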