A common practice in immunoassay is to prepare standard samples in a desired concentration range by sequential dilution of an initial stock solution of the antigen of interest. Nonlinear, heteroscedastic regression models are a common framework for analysis, and the usual methods for fitting such models assume that measured responses on the standards are independent. However, the dilution procedure propagates random measurement error from each dilution step to all subsequent standards, which may invalidate this assumption.
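As a minimal sketch of this mechanism (the notation here is ours, not necessarily the paper's): suppose the stock has concentration $x_0$ and each dilution step $k$ multiplies the concentration by its nominal factor $d$ times a random multiplicative error $e_k$. The true concentration of the $j$th standard is then

$x_j = x_0 \prod_{k=1}^{j} d\,e_k = x_0\, d^{j} \prod_{k=1}^{j} e_k,$

so two standards at positions $j < j'$ in the series share the errors $e_1, \dots, e_j$. Their true concentrations, and hence the responses measured on them, are therefore correlated, contrary to the independence assumption.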
We demonstrate that failure to account for serial dilution error in calibration inference on unknown samples leads to seriously inaccurate assessments of assay precision, such as confidence intervals and precision profiles. Techniques for taking serial dilution error into account based on data from multiple assay runs are discussed and shown to yield valid calibration inferences.
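As an illustration, the short simulation below (a sketch under assumptions of our own choosing: a 1:2 dilution series with lognormal dilution errors; all numerical values are hypothetical and not taken from the paper) shows the correlation among the standards' true concentrations that this propagation induces:

    import numpy as np

    rng = np.random.default_rng(0)
    n_series, n_steps = 10_000, 8
    d = 0.5          # nominal 1:2 dilution factor (assumed)
    sigma = 0.05     # SD of log dilution error per step (assumed)

    # Each step multiplies the concentration by d * e_k, with e_k lognormal
    # and mean roughly 1, so errors accumulate down the dilution series.
    e = rng.lognormal(mean=-sigma**2 / 2, sigma=sigma, size=(n_series, n_steps))
    log_x = np.log(100.0) + np.cumsum(np.log(d * e), axis=1)

    # Standards at different positions share the early-step errors and are
    # therefore correlated, contrary to the independence assumption.
    print(np.corrcoef(log_x[:, 1], log_x[:, 2])[0, 1])  # approx. 0.82
    print(np.corrcoef(log_x[:, 1], log_x[:, 7])[0, 1])  # approx. 0.50

Even the first and last standards remain substantially correlated, since all of them inherit the error made at the first dilution step.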