We present a new training-out algorithm for neural networks that permits good performance on nonideal hardware with limited analog neuron and weight accuracy. Optical neural networks are emphasized, with error sources including nonuniform beam illumination and nonlinear device characteristics. We compensate for processor nonidealities during gated learning (off-line training); thus our algorithm does not require real-time neural networks with adaptive weights. This permits the use of high-accuracy nonadaptive weights and reduces hardware complexity. The specific neural network we consider is the Ho-Kashyap associative processor, because it provides the largest storage capacity.
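For context (textbook background, not a restatement of our specific optical implementation), the classical Ho-Kashyap procedure from which this associative processor takes its name solves \(Y\mathbf{a} = \mathbf{b}\) with \(\mathbf{b} > 0\) by alternating a pseudoinverse solve with a positivity-preserving margin update:

\[
\mathbf{a}(k) = Y^{+}\,\mathbf{b}(k), \qquad
\mathbf{e}(k) = Y\,\mathbf{a}(k) - \mathbf{b}(k), \qquad
\mathbf{b}(k+1) = \mathbf{b}(k) + \rho\,\bigl(\mathbf{e}(k) + |\mathbf{e}(k)|\bigr), \quad 0 < \rho \le 1,
\]

where \(Y^{+}\) is the pseudoinverse of the stored-pattern matrix \(Y\) and \(\mathbf{b}(1) > 0\).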
Simulation results and optical laboratory data are provided.
The storage measure we use is the ratio M/N of the number of vectors stored (M) to the dimensionality of the vectors stored (N). We show a storage capacity of M/N = 1.5 on our optical laboratory system with excellent recall accuracy (> 95%). The theoretical maximum storage is M/N = 2 (as N approaches infinity), and thus the storage and performance we demonstrate are impressive considering the processor nonidealities we present. Our techniques can be applied to other neural network algorithms and other nonideal processing hardware.
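To make the training-out idea concrete, the following is a minimal sketch (ours, not code from the paper): a nonideal forward model, with hypothetical stand-ins for nonuniform beam illumination (a per-row gain) and a saturating device nonlinearity (tanh), is folded into an off-line fit so that the fixed, nonadaptive weights learned compensate for the hardware errors. All names and parameters (gain, f, lr, and the simple gradient update used in place of the Ho-Kashyap recursion) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 32, 48                          # dimensionality N, stored vectors M (M/N = 1.5)
X = rng.choice([-1.0, 1.0], (N, M))    # bipolar key vectors (columns)
T = X.copy()                           # autoassociative targets: recall the keys

# Hypothetical nonideality models (illustrative assumptions, not measured data):
gain = 0.8 + 0.4 * rng.random(N)       # nonuniform beam illumination as a per-row gain
f = np.tanh                            # saturating nonlinear device characteristic

def recall(W, x):
    """Nonideal optical forward pass: illumination gain, then device nonlinearity."""
    return f(gain * (W @ x))

# Training-out: fit the weights off-line THROUGH the nonideal forward model,
# so the fixed (nonadaptive) weights absorb the hardware errors.
W = 0.01 * rng.standard_normal((N, N))
lr = 0.05
for _ in range(2000):
    U = gain[:, None] * (W @ X)        # preactivations under nonuniform illumination
    Y = f(U)                           # nonideal recall of all stored vectors
    E = Y - T                          # recall error
    grad = (gain[:, None] * (E * (1.0 - Y**2))) @ X.T / M   # gradient of squared recall error
    W -= lr * grad

ok = np.mean(np.sign(recall(W, X[:, 0])) == T[:, 0])
print(f"fraction of bits recalled correctly: {ok:.2f}")
```

A Ho-Kashyap version of this sketch would replace the gradient update with the pseudoinverse and margin recursion given above; the essential point is only that the nonideal forward model, rather than an idealized one, appears inside the off-line training loop.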