If connectionism is to be an adequate theory of mind, we must have a theory of representation for neural networks that allows for individual differences in weighting and architecture while preserving sameness, or at least similarity, of content. In this paper we propose a procedure for measuring sameness of content of neural representations. We argue that the correct way to compare neural representations is through analysis of the distances between neural activations, and we present a method for doing so. We then use the technique to demonstrate empirically that different artificial neural networks trained by backpropagation on the same categorization task, even with different representational encodings of the input patterns and different numbers of hidden units, reach states in which representations at the hidden units are similar. We discuss how this work provides a rebuttal to Fodor and Lepore's critique of Paul Churchland's state space semantics.
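The measurement procedure itself is developed in the body of the paper; purely as an illustrative sketch of the general idea stated here (comparing the distance structure of two activation spaces rather than the activations themselves, so that networks with different numbers of hidden units remain comparable), one simple instance might look like the following, where representation_similarity and its arguments are hypothetical names, not the authors' actual procedure:

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

def representation_similarity(acts_a, acts_b):
    """Compare two networks' hidden-unit responses to the same inputs.

    acts_a, acts_b: arrays of shape (n_patterns, n_hidden_a) and
    (n_patterns, n_hidden_b); the hidden-layer widths may differ,
    but the rows must correspond to the same input patterns.
    Returns the Pearson correlation between the two networks'
    pattern-to-pattern distance profiles.
    """
    # Pairwise Euclidean distances between activation vectors:
    # one condensed vector of n*(n-1)/2 distances per network.
    d_a = pdist(acts_a, metric="euclidean")
    d_b = pdist(acts_b, metric="euclidean")
    # If the two networks represent the inputs similarly, inputs that
    # lie close together in one activation space should lie close
    # together in the other, whatever the dimensionality or
    # orientation of each space.
    r, _ = pearsonr(d_a, d_b)
    return r

# Example: two random "networks" responding to the same 50 patterns.
rng = np.random.default_rng(0)
acts_a = rng.standard_normal((50, 8))   # 8 hidden units
acts_b = rng.standard_normal((50, 12))  # 12 hidden units
print(representation_similarity(acts_a, acts_b))

The design point the sketch illustrates is that correlating distance profiles sidesteps the problem of aligning hidden units across networks: no unit-to-unit mapping is assumed, only a shared set of input patterns.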