I consider layered neural networks in which the weights are trained by
optimizing an arbitrary performance function with respect to a set of
examples. Using the cavity method and many-body diagrammatic techniques, I show that the evolution of the network can be described by an overlap parameter and a noise parameter. Parameter pairs corresponding to various input conditions are found to collapse onto a universal curve. Simulations with the
maximally stable network confirm the theory.
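For readers unfamiliar with the terminology, here is a minimal sketch of how such parameters are conventionally defined in the teacher-student perceptron literature; this is an assumption for illustration, as the abstract itself fixes no notation. With normalized student weights $\vec{J}$ and teacher weights $\vec{B}$, the overlap is
\[
  R \;=\; \frac{\vec{J}\cdot\vec{B}}{\lVert\vec{J}\rVert\,\lVert\vec{B}\rVert},
\]
and, conditioned on the teacher field $h_B$ of a random example, the student field is Gaussian, $h_J \mid h_B \sim \mathcal{N}\!\left(R\,h_B,\; 1-R^2\right)$; the width of this effective Gaussian is one natural candidate for the noise parameter.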