In an approximation problem with a neural network, a low output root mean square (rms) error is not always a sufficient quality criterion. In this paper, we investigate problems where the Jacobians (the first derivatives of an output value with respect to an input value) of the approximation model are needed, and we propose to add a quality criterion on these Jacobians during the learning step.
More specifically, we focus here on the approximation of functionals A, mapping a space of continuous functions (discretized in practice) to a scalar space. In this case, the approximation is confronted with the compensation phenomenon: a smaller contribution of one input can be compensated by a larger contribution of a neighboring input. The profiles (with respect to the input index) of the neural Jacobians are then very irregular instead of smooth, and the approximation of A becomes an ill-posed problem, because many solutions can be chosen by the learning process. We propose to introduce the smoothness of the Jacobian profiles as a priori information via a regularization technique and develop a new and efficient learning algorithm, called "weight smoothing."
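Continuing the sketch above, one way to realize this idea is to add a roughness penalty to the rms cost; the squared-second-difference form and the regularization weight lam below are assumptions for illustration, not necessarily the authors' exact formulation. Since the Jacobian component dy/dx_i inherits its profile over i from the columns of the first-layer weight matrix, smoothing those weights along the input index smooths the Jacobian profile, hence the name "weight smoothing."

import jax
import jax.numpy as jnp

def weight_smoothing_penalty(W1):
    # roughness of each hidden unit's weight profile along the input index:
    # squared discrete second differences over axis 1 (the input axis)
    d2 = W1[:, 2:] - 2.0 * W1[:, 1:-1] + W1[:, :-2]
    return jnp.sum(d2 ** 2)

def cost(params, x_batch, y_batch, lam=1e-3):
    # usual data-fit (mean squared error) term, reusing mlp from the sketch above
    preds = jax.vmap(lambda x: mlp(params, x))(x_batch)
    fit = jnp.mean((preds - y_batch) ** 2)
    return fit + lam * weight_smoothing_penalty(params[0])

Training then proceeds by minimizing this regularized cost with any gradient-based optimizer, e.g. via jax.grad(cost).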
We assess the robustness of the weight smoothing algorithm by testing it on a real and complex problem stemming from meteorology: the neural approximation of the forward model of the radiative transfer equation in the atmosphere. The stabilized Jacobians of this model are then used in an inversion process to illustrate the improvement of the Jacobians after weight smoothing.
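As an illustration of how the stabilized Jacobians enter such an inversion, the following fragment (again an assumption-laden sketch, not the paper's retrieval scheme) performs one gradient step of a least-squares retrieval of an input profile x from an observed output y_obs, with the network acting as the forward model; the step size eta is arbitrary.

import jax

def inversion_step(params, x, y_obs, eta=0.1):
    # least-squares misfit between the forward model and the observation;
    # its gradient is the Jacobian-weighted residual, so a smoother Jacobian
    # yields a smoother, more physical retrieval update
    misfit = lambda x: 0.5 * (mlp(params, x) - y_obs) ** 2
    return x - eta * jax.grad(misfit)(x)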