We present an artificial neural network that self-organizes in an unsupervised manner to form a sparse distributed representation of the underlying causes in data sets. This coding is achieved by introducing several rectification constraints to a PCA network, based on our prior beliefs about the data. Through experimentation we investigate the relative performance of these rectifications when applied to the weights and/or outputs of the network. We find that applying an exponential function to the outputs of the network is the most reliable way to discover all of the causes in a data set, even when the input data are strongly corrupted by random noise. Preprocessing the inputs to unit variance on each component is very effective in helping us to discover all underlying causes when the power of each cause is variable. Our resulting network methodologies are straightforward yet extremely robust over many trials.
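As a rough illustration of these ideas, the sketch below combines a negative-feedback Hebbian (PCA-style) network with an exponential rectification on the outputs and unit-variance preprocessing, trained on synthetic data built from a few hypothetical binary causes. The data generator, constants, and the exact form of the nonlinearity (exp(a) - 1 here) are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: 8 non-overlapping binary "causes" over a
# 16-dimensional input; each sample activates a random subset of causes.
n_inputs, n_causes, n_outputs = 16, 8, 8
causes = np.zeros((n_causes, n_inputs))
for i in range(n_causes):
    causes[i, 2 * i:2 * i + 2] = 1.0            # each cause covers two inputs

def sample_batch(n):
    present = (rng.random((n, n_causes)) < 0.25).astype(float)
    x = present @ causes                         # superpose the active causes
    x += 0.05 * rng.standard_normal(x.shape)     # corrupt with random noise
    return x

# Preprocess so each input component has (approximately) unit variance.
std = sample_batch(5000).std(axis=0) + 1e-8

# Negative-feedback Hebbian network with an exponential output rectification.
W = 0.01 * rng.standard_normal((n_outputs, n_inputs))
eta = 0.005
for epoch in range(50):
    for x in sample_batch(200) / std:
        a = np.clip(W @ x, -5.0, 5.0)            # clip only for numerical safety
        y = np.exp(a) - 1.0                      # exponential rectification of outputs
        e = x - W.T @ y                          # residual after negative feedback
        W += eta * np.outer(y, e)                # Hebbian update on (output, residual)

# After training, each row of W should respond mainly to one underlying cause.
print(np.round(W, 2))
```

In this sketch the exponential nonlinearity plays the role of the rectification constraint: it suppresses negative activations and amplifies positive ones, which pushes each output unit toward coding a single cause rather than a principal component.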