It is demonstrated that the statistical reliability of experimentally determined confidence intervals in some linear calibration problems in instrumental analysis can be greatly enhanced if the standard deviation (s, sigma(M)) of the measurements, which is a theoretical prediction of the instrumental response error, is incorporated into the usual statistical equation instead of the residual of least-squares fitting. The reliability of the calibration depends not only on the fluctuation of calibration lines due to the response error, sigma(M), but also on the error of parametrization associated with the a priori response error prediction, i.e., the variance of the response, Var(sigma(M)). The error prediction requires the Fourier transform of an instrumental baseline, the signal shape, and other factors (e.g., sample injection error). The variability in the confidence interval from five calibration standards is as small in the probabilistic approach as that for 50 standards in the statistical method. Liquid chromatography and capillary electrophoresis are taken as examples.
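The contrast between the two routes can be sketched numerically. The following is a minimal illustration, not the paper's actual derivation: it computes the half-width of the confidence interval for the mean response of a fitted calibration line, once with the residual-based standard deviation and a t quantile (the usual statistical method), and once with an a priori response error sigma(M) and a normal quantile (the probabilistic idea). The concentrations, the value of sigma_M, and the 95 % level are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Five hypothetical calibration standards (concentration, response).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
sigma_M = 0.05                     # assumed a priori response error prediction
y = 0.8 * x + 0.1 + rng.normal(0.0, sigma_M, x.size)

# Least-squares fit of the calibration line.
n = x.size
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
s_resid = np.sqrt(resid @ resid / (n - 2))   # residual-based estimate of s

# Leverage factor for the mean response at a query concentration x0.
x0 = 3.0
leverage = np.sqrt(1.0 / n + (x0 - x.mean()) ** 2 / ((x - x.mean()) ** 2).sum())

# Statistical route: residual s, Student t quantile with n - 2 dof.
ci_stat = stats.t.ppf(0.975, n - 2) * s_resid * leverage
# Probabilistic route: a priori sigma_M, standard normal quantile.
ci_prob = stats.norm.ppf(0.975) * sigma_M * leverage

print(f"residual-based CI half-width: {ci_stat:.4f}")
print(f"a priori CI half-width:       {ci_prob:.4f}")
```

With only five standards, s_resid has few degrees of freedom and varies strongly from one calibration to the next, which is why the residual-based interval fluctuates; the a priori interval depends only on sigma_M and the design points, which is the stability the abstract describes.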