We show that the maximum likelihood (ML) estimates of the parameters of a well-known software reliability model are not consistent as the period over which software failures are observed extends to infinity. The behavior of the ML estimators over a long observation period is particularly important when that period corresponds to the test interval, since extending the test interval is the most natural way to improve the reliability of the software prior to its release. In addition to providing insight into how to interpret the ML estimators in actual applications, our result also has pedagogical value as an illustration that the asymptotic properties of ML estimators cannot be taken for granted.
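Here consistency is meant in the standard statistical sense, stated with notation introduced only for this remark: writing $\hat{\theta}_T$ for the ML estimate computed from the failures observed over $[0, T]$ and $\theta_0$ for the true parameter value, consistency would require
\[
  \hat{\theta}_T \xrightarrow{\;P\;} \theta_0 \qquad \text{as } T \to \infty ,
\]
and the claim that the ML estimates are not consistent is precisely the statement that this convergence fails to hold for the model considered.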