Testing for lack of fit of the experimental points to the regression line is an important step in linear regression. When lack of fit exists, the standard deviations of both regression line coefficients are overestimated, which gives rise, for instance, to confidence intervals that are too wide. If these confidence intervals are then used in hypothesis tests, bias may go undetected, so there is a greater probability of committing a beta (Type II) error. In this paper, we present a statistical test that analyses the variance of the residuals from the regression line whenever the data to be handled have errors in both axes. The theoretical expressions developed were validated by applying the Monte Carlo simulation method to two real and nine simulated data sets. Two other real data sets were used to provide examples of application. (C) 2000 Elsevier Science B.V. All rights reserved.
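As background to the abstract, the sketch below shows the classical lack-of-fit F-test for ordinary least squares with replicated measurements, in which the residual sum of squares is partitioned into pure-error and lack-of-fit components. This is not the errors-in-both-axes test proposed in the paper, and the data are hypothetical; it only illustrates the general idea of testing residual variance against the scatter of replicates.

```python
import numpy as np

# Hypothetical calibration data: 5 concentration levels, 3 replicates each
x = np.repeat([1.0, 2.0, 3.0, 4.0, 5.0], 3)
rng = np.random.default_rng(0)
y = 2.0 + 0.5 * x + rng.normal(scale=0.1, size=x.size)

# Fit y = b0 + b1*x by ordinary least squares
b1, b0 = np.polyfit(x, y, 1)          # polyfit returns slope, then intercept
resid = y - (b0 + b1 * x)
ss_resid = np.sum(resid**2)           # total residual sum of squares

# Pure-error SS: scatter of the replicates around their level means
levels = np.unique(x)
ss_pe = sum(np.sum((y[x == lv] - y[x == lv].mean())**2) for lv in levels)

# Lack-of-fit SS is the remainder (non-negative, since the cell-means
# model nests the straight line)
ss_lof = ss_resid - ss_pe

n, m, p = x.size, levels.size, 2      # points, levels, fitted parameters
F = (ss_lof / (m - p)) / (ss_pe / (n - m))
print(f"Lack-of-fit F = {F:.3f}  (compare with F({m - p}, {n - m}))")
```

A large F relative to the F(m-p, n-m) reference distribution indicates that the straight-line model does not describe the data, which is the situation the abstract warns inflates the coefficient standard deviations.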