This article shows that the weights of continuous-time feedback neural networks are uniquely identifiable from input/output measurements. Under weak genericity assumptions, the following is true: given two nets whose neurons all have the same nonlinear activation function sigma, if the two nets have equal behaviors as "black boxes," then they must have the same number of neurons and, except possibly for sign reversals at each node, the same weights. Moreover, even if the activations are not known a priori to coincide, they too are essentially determined by the external measurements.
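The sign-reversal ambiguity mentioned above can be illustrated numerically. The sketch below (a hypothetical example, not taken from the article) uses an Euler discretization of a continuous-time net x' = tanh(Ax + Bu), y = Cx: because tanh is odd, conjugating the weights by a diagonal matrix of ±1 entries yields a second net with flipped signs at some nodes but identical input/output behavior.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 3, 2, 1                     # state, input, output dimensions
A = rng.standard_normal((n, n))       # recurrent weights
B = rng.standard_normal((n, m))       # input weights
C = rng.standard_normal((p, n))       # output weights

# Sign reversal at nodes 2 and 3: conjugate by a diagonal +/-1 matrix T.
# Since tanh is odd, tanh(Tv) = T tanh(v), so the I/O map is unchanged.
T = np.diag([1.0, -1.0, -1.0])
A2, B2, C2 = T @ A @ T, T @ B, C @ T

def simulate(A, B, C, u_seq, dt=0.01):
    """Euler discretization of x' = tanh(A x + B u), y = C x, from x(0) = 0."""
    x = np.zeros(A.shape[0])
    ys = []
    for u in u_seq:
        x = x + dt * np.tanh(A @ x + B @ u)
        ys.append(C @ x)
    return np.array(ys)

u_seq = rng.standard_normal((200, m))   # an arbitrary input signal
y1 = simulate(A, B, C, u_seq)
y2 = simulate(A2, B2, C2, u_seq)
print(np.allclose(y1, y2))   # → True: the two nets agree as black boxes
```

The internal states of the two nets differ (x2 = T x1), so the nets have genuinely different weights, yet no external measurement can distinguish them; this is exactly the residual ambiguity the uniqueness result permits.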