In a recent paper we presented a methodological framework describing the two-iteration performance of Hopfield-like attractor neural networks with history-dependent Bayesian dynamics. We now extend this analysis in a number of directions: input patterns applied to small subsets of neurons, general connectivity architectures, and more efficient use of history. We show that the optimal signal (activation) function has a slanted sigmoidal shape, and we provide an intuitive account of activation functions with a non-monotone shape. This function endows the analytical model with some properties characteristic of the firing of cortical neurons.
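
The exact functional form of the optimal signal function is not reproduced here; as a rough illustration only, a "slanted sigmoid" can be sketched as a standard logistic nonlinearity plus a linear tilt term. The parameters `gain` and `slant` below are hypothetical and not taken from the paper.

```python
import math

def slanted_sigmoid(h, gain=4.0, slant=0.1):
    """Illustrative slanted sigmoid: a logistic curve tilted by a
    linear term. This is a sketch of the qualitative shape only,
    not the paper's derived optimal signal function."""
    return 1.0 / (1.0 + math.exp(-gain * h)) + slant * h

# At h = 0 the logistic part contributes 0.5 and the slant term vanishes.
print(slanted_sigmoid(0.0))
```

With a positive `slant` the curve remains monotone but no longer saturates flat; a sufficiently negative `slant` would make the output decrease again for large inputs, which gives one crude picture of how a non-monotone activation shape can arise.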