Finite difference methods, such as the mid-point rule, have been applied successfully to the numerical solution of ordinary and partial differential equations. If such formulas are applied to observational data in order to determine derivatives, the results can be disastrous. The reason for this is that measurement errors, and even rounding errors in computer approximations, are strongly amplified in the differentiation process, especially if small step-sizes are chosen and higher derivatives are required.
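To illustrate the amplification, the following sketch (not taken from the paper; the test function, the noise level, and the step-sizes are all assumptions made for demonstration) applies the mid-point rule to noisy samples and shows the error in the computed derivative growing as h shrinks:

    import numpy as np

    # Mid-point (central) difference applied to noisy samples of f(x) = sin(x),
    # whose exact derivative is cos(x).  The noise level 1e-4 is an assumed
    # stand-in for measurement error.
    rng = np.random.default_rng(0)
    for h in (1e-1, 1e-2, 1e-3, 1e-4):
        x = np.arange(0.0, 2 * np.pi, h)
        f = np.sin(x) + 1e-4 * rng.standard_normal(x.size)
        df = (f[2:] - f[:-2]) / (2 * h)          # mid-point rule on the samples
        err = np.max(np.abs(df - np.cos(x[1:-1])))
        print(f"h = {h:.0e}  max error in f' = {err:.3e}")

For exact data the error would decrease like h^2; in the presence of noise of size epsilon it grows like epsilon/h once h is small enough.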
A number of authors have examined the use of various forms of averaging which allow the stable computation of low-order derivatives from observational data. The size of the averaging set acts like a regularization parameter and has to be chosen as a function of the grid size h.
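One simple scheme of this kind (a minimal sketch only: the moving-average kernel, the coupling m ~ h^(-1/2), and the function names are illustrative assumptions, not the specific schemes analysed by these authors) smooths the samples before differencing:

    import numpy as np

    def averaged_derivative(f, h, m):
        """Mid-point rule applied to an m-point moving average of the samples."""
        kernel = np.ones(m) / m                         # simple averaging kernel
        f_smooth = np.convolve(f, kernel, mode="valid")
        return (f_smooth[2:] - f_smooth[:-2]) / (2 * h)

    rng = np.random.default_rng(1)
    h = 1e-3
    x = np.arange(0.0, 2 * np.pi, h)
    f = np.sin(x) + 1e-4 * rng.standard_normal(x.size)

    m = max(1, int(round(h ** -0.5)) | 1)   # assumed coupling m ~ h^(-1/2), forced odd
    df = averaged_derivative(f, h, m)
    offset = (m - 1) // 2 + 1                # centre of the first usable window
    exact = np.cos(x[offset : offset + df.size])
    print(f"m = {m}, max error = {np.max(np.abs(df - exact)):.3e}")

A larger averaging set m suppresses the noise term but increases the smoothing bias, which is exactly the regularization trade-off controlled by the choice of m as a function of h.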
In this paper, it is initially shown how first (and higher) order single-variate numerical differentiation of higher-dimensional observational data can be stabilized with less loss of accuracy than occurs for the corresponding differentiation of one-dimensional data. The result is then extended to the multivariate differentiation of higher-dimensional data. The nature of the trade-off between convergence and stability is explicitly characterized, and the complexity of various implementations is examined.
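For orientation, the classical one-dimensional form of this trade-off for the mid-point rule (a standard textbook bound, stated here for reference rather than as a result of the paper) reads

    \left| \frac{f(x+h) - f(x-h)}{2h} - f'(x) \right|
        \le \frac{h^2}{6}\,\|f'''\|_\infty + \frac{\varepsilon}{h},

where \varepsilon bounds the error in the data. The right-hand side is minimized at h^\ast = (3\varepsilon/\|f'''\|_\infty)^{1/3}, giving a best attainable error of order \varepsilon^{2/3}: refining the grid improves convergence only until the stability term \varepsilon/h dominates.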