This brief explores a useful self-scaling property of a hybrid (analog-digital) artificial neural network architecture based on distributed neurons. In conventional sigmoidal neural networks with lumped neurons, the effect of weight quantization errors becomes more noticeable at the output as the network grows larger. However, it is shown here, based on a stochastic model, that the inherent self-scaling property of a distributed-neuron architecture controls the output quantization noise-to-signal ratio as the number of inputs to an Adaline increases. This property contributes to a robust hybrid VLSI architecture consisting of digital synaptic weights and analog distributed neurons.
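
To make the contrast concrete, the following is a minimal Monte Carlo sketch, not the brief's actual stochastic model: inputs and weights are drawn uniformly, weights are uniformly quantized to 8 bits, the lumped neuron applies a single fixed-range tanh to the full weighted sum, and the distributed neuron is approximated by letting the nonlinearity's linear range scale with the fan-in N (each synapse contributing its own neuron segment). All names here (quantize, lumped, distributed, nsr) are hypothetical helpers for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(w, bits=8, w_max=1.0):
    """Uniformly quantize weights in [-w_max, w_max] to the given bit width."""
    step = 2.0 * w_max / 2 ** bits
    return np.round(w / step) * step

def lumped(x, w):
    """Lumped neuron: one fixed-range sigmoid after the full weighted sum."""
    return np.tanh(x @ w)

def distributed(x, w, n):
    """Distributed neuron, crudely modeled: each synapse brings its own
    neuron segment, so the effective linear range scales with the fan-in n."""
    return np.tanh(x @ w / n)

def nsr(y_ideal, y_quant):
    """RMS quantization-induced output error over RMS output signal."""
    return np.sqrt(np.mean((y_quant - y_ideal) ** 2)) / np.std(y_ideal)

for n in (4, 16, 64, 256):
    lump, dist = [], []
    for _ in range(20):                      # average over weight realizations
        x = rng.uniform(-1.0, 1.0, size=(20_000, n))
        w = rng.uniform(-1.0, 1.0, size=n)   # ideal (analog) weights
        wq = quantize(w)                     # digitally stored weights
        lump.append(nsr(lumped(x, w), lumped(x, wq)))
        dist.append(nsr(distributed(x, w, n), distributed(x, wq, n)))
    print(f"N={n:4d}  lumped NSR={np.mean(lump):.5f}  "
          f"distributed NSR={np.mean(dist):.5f}")
```

In this simplified setting, the lumped neuron's output noise-to-signal ratio tends to creep upward with N as the fixed-range sigmoid is driven deeper into saturation, while the distributed model's ratio stays roughly constant near its weight-domain value, mirroring the self-scaling behavior described above.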