We consider learning and generalization of real-valued functions by a multi-interacting feed-forward network model with continuous outputs and invertible transfer functions. The expansion in different multi-interacting orders provides a classification of the functions to be learnt and suggests the learning rules, which reduce to the Hebb learning rule only for the second-order (linear) perceptron. The over-sophistication problem is straightforwardly overcome by a natural cutoff in the multi-interacting synapses: the student is able to learn the architecture of the target rule; that is, the simpler the rule, the faster the multi-interacting perceptron can learn it. Simulation results are in excellent agreement with analytical calculations.
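To make the second-order case concrete, the following is a minimal sketch of the standard Hebb learning rule for a linear perceptron learning a linear teacher rule. All names (teacher vector `B`, student vector `J`, pattern count `P`, input dimension `N`) are illustrative assumptions, not the paper's notation, and the setup is the generic teacher-student scenario rather than the multi-interacting model itself.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 2000  # input dimension and number of training examples (hypothetical sizes)

B = rng.standard_normal(N)        # teacher weight vector defining the target rule
X = rng.standard_normal((P, N))   # random input patterns
y = X @ B / np.sqrt(N)            # continuous teacher outputs (linear transfer function)

# Hebb rule: the student weight is the output-weighted average of the inputs,
# J_i ∝ (1/P) Σ_μ y^μ ξ_i^μ
J = np.sqrt(N) * (X.T @ y) / P

# Generalization quality, measured by the normalized overlap J·B / (|J||B|);
# it approaches 1 as the load P/N grows.
overlap = (J @ B) / (np.linalg.norm(J) * np.linalg.norm(B))
```

With P/N = 20 as above, the Hebbian student already aligns closely with the teacher, consistent with the abstract's point that simpler (lower-order) rules are learnt quickly.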