This investigation identifies linear independence of the internal representation of the multilayer perceptron as an essential property for exact learning. The sigmoidal hidden-unit activation function is shown to be capable of producing linearly independent outputs. As a result, the minimum number of hidden units required for a specified set of input patterns is the number of patterns less the rank of the input-pattern matrix. In addition, the mechanism underlying many training algorithms is shown to inherently increase the number of linearly independent vectors in the internal representation, thereby increasing the likelihood of exact learning.
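A minimal numerical sketch of the counting argument may help. Assuming a network in which the output layer draws on both the raw inputs and the hidden activations (a direct input-to-output connection, which is my reading of the bound rather than a detail stated here), exact learning of P arbitrary targets requires the combined representation to have rank P, so the hidden layer need only contribute H = P - rank(X) further independent directions. The dimensions, random weights, and seed below are illustrative, not taken from the source:

    import numpy as np

    rng = np.random.default_rng(0)

    P, d = 8, 3                       # P input patterns of dimension d
    X = rng.standard_normal((P, d))   # pattern matrix; generically rank d

    # Hidden units per the stated bound: patterns minus input rank.
    H = P - np.linalg.matrix_rank(X)

    # Random sigmoidal hidden layer applied to all P patterns.
    W = rng.standard_normal((d, H))
    b = rng.standard_normal(H)
    A = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # P x H hidden outputs

    # Combined representation seen by the output layer (assumed
    # direct connections): exact learning needs rank P here.
    R = np.hstack([X, A])
    print("rank of inputs:        ", np.linalg.matrix_rank(X))
    print("rank of hidden outputs:", np.linalg.matrix_rank(A))
    print("combined rank (want P):", np.linalg.matrix_rank(R), "of", P)

With generic random weights the sigmoid columns are almost surely linearly independent of one another and of the input columns, which is the capability the abstract attributes to the sigmoidal activation; running the sketch typically reports a combined rank of P with only H = P - rank(X) hidden units.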