The special character of certain degrees of freedom in two-layered neural networks is investigated for on-line learning of realizable rules. Our analysis shows that the dynamics of these degrees of freedom can be put on a faster timescale than that of the remaining ones, thereby speeding up the overall adaptation process. This is shown for two groups of degrees of freedom: second-layer weights and bias weights. For the former case our analysis provides a theoretical explanation of phenomenological findings. The resulting learning algorithm is compared with natural gradient descent in order to check whether the proposed scaling can be derived naturally from that type of learning rule.
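To make the proposed timescale separation concrete, the following minimal sketch illustrates on-line gradient descent in a teacher-student setup where the second-layer weights and the biases are updated with an O(1) learning rate while the first-layer weights keep the usual O(1/N) step, so the former evolve faster by a factor of N. The tanh activation, the quadratic loss, the matched teacher architecture, and the specific constants are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100     # input dimension
K = 3       # hidden units in student and teacher (realizable rule)
eta = 0.05  # base learning rate (illustrative value)

g = np.tanh
g_prime = lambda a: 1.0 - np.tanh(a) ** 2

# Teacher network: defines the realizable rule to be learned.
W_t = rng.standard_normal((K, N)) / np.sqrt(N)
v_t = rng.standard_normal(K)
b_t = rng.standard_normal(K)

# Student network, randomly initialized.
W = rng.standard_normal((K, N)) / np.sqrt(N)
v = rng.standard_normal(K)
b = np.zeros(K)

def forward(W, v, b, x):
    """Two-layer network output and hidden pre-activations."""
    h = W @ x + b
    return v @ g(h), h

for step in range(200_000):
    x = rng.standard_normal(N)             # on-line: a fresh example each step
    y_teacher, _ = forward(W_t, v_t, b_t, x)
    y_student, h = forward(W, v, b, x)
    delta = y_teacher - y_student          # error signal for the quadratic loss

    # Plain gradient steps, but with different timescales:
    # first-layer weights take O(1/N) steps, while second-layer
    # weights and biases take O(1) steps, i.e. adapt faster by a factor N.
    W += (eta / N) * delta * (v * g_prime(h))[:, None] * x[None, :]
    v += eta * delta * g(h)
    b += eta * delta * v * g_prime(h)
```

Under this scaling the fast variables effectively equilibrate between updates of the slow first-layer weights, which is the mechanism by which the overall adaptation is accelerated.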