This paper studies Hopfield neural networks from the perspective of self-stabilizing distributed computation. Known self-stabilization results on Hopfield networks are surveyed, and the key ingredients of their proofs are given. Novel applications of self-stabilization arising in the context of neural networks, namely associative memories and optimization, are discussed. Two new results at the intersection of Hopfield networks and distributed systems are obtained: one concerns convergence under a fine-grained implementation; the other is a perturbation analysis. Some possibilities for further research at the intersection of the two fields are discussed.
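The convergence behavior underlying these results can be illustrated with the classical asynchronous Hopfield dynamics: with symmetric weights and zero self-coupling, updating one neuron at a time never increases the network's energy, so the dynamics settle into a fixed point regardless of the (possibly corrupted) initial state. The following is a minimal sketch on a hypothetical random instance, not the paper's own construction; the weight matrix, size, and update schedule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance: symmetric weights with zero diagonal,
# the standard setting in which asynchronous updates converge.
n = 8
W = rng.standard_normal((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)

# Arbitrary starting state in {-1, +1}^n (any "corrupted" state works).
s = rng.choice([-1.0, 1.0], size=n)

def energy(W, s):
    # Hopfield energy function; it never increases under the updates below.
    return -0.5 * s @ W @ s

e0 = energy(W, s)

# Asynchronous (fine-grained) dynamics: one randomly chosen neuron
# updates at a time, using the current states of all the others.
prev = e0
for _ in range(100 * n):
    i = rng.integers(n)
    s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    cur = energy(W, s)
    assert cur <= prev + 1e-12  # energy is monotonically non-increasing
    prev = cur
```

Because each state flip strictly decreases the energy and the state space is finite, the trajectory must reach a state where no neuron wants to flip, i.e. a stable fixed point; this is the basic argument behind the self-stabilization results surveyed above.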