A new, efficient procedure estimates the number of errors in a system. A known number of seeded errors are inserted into the system. The failure intensities of the seeded and real errors are allowed to differ and to depend on time. When an error is detected during testing, it is removed from the system. The testing process is observed for a fixed amount of time τ. Martingale theory is used to derive a class of estimators of the number of real errors in a continuous-time setting.
Some of the estimators and their associated standard deviations have explicit expressions. An optimal estimator within this class is obtained. A simulation study assesses the performance of the proposed estimators.
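
For intuition, here is a minimal simulation sketch of the classical error-seeding (Lincoln-Petersen type) estimator, which the paper's martingale-based estimators generalize. It assumes, purely for illustration, that seeded and real errors share a constant (exponential) failure intensity; the function and parameter names (n_real, lam_real, tau, etc.) are hypothetical, and the optimal martingale estimator itself is not reproduced here.

```python
# Illustrative sketch only: equal, constant intensities for seeded and
# real errors (the paper allows them to differ and to depend on time).
import numpy as np

rng = np.random.default_rng(0)

def simulate_estimate(n_real=100, n_seeded=50, lam_real=0.02,
                      lam_seeded=0.02, tau=40.0):
    """One test run of length tau; detected errors are removed.

    With exponential detection times, removal on detection means each
    error is detected at most once, so detection by time tau reduces
    to a Bernoulli event with probability 1 - exp(-lam * tau).
    """
    t_real = rng.exponential(1.0 / lam_real, n_real)
    t_seeded = rng.exponential(1.0 / lam_seeded, n_seeded)
    R = np.sum(t_real <= tau)      # real errors detected by time tau
    S = np.sum(t_seeded <= tau)    # seeded errors detected by time tau
    if S == 0:
        return np.nan              # ratio estimator undefined this run
    # When the two detection probabilities coincide, E[R]/E[S] = n/m,
    # so R * n_seeded / S estimates the number of real errors.
    return R * n_seeded / S

estimates = np.array([simulate_estimate() for _ in range(10_000)])
estimates = estimates[~np.isnan(estimates)]
print("true number of real errors: 100")  # default n_real above
print(f"mean estimate: {estimates.mean():.2f}, "
      f"sd: {estimates.std():.2f}")
```

Repeating the run many times, as above, mirrors the role of the paper's simulation study: the empirical mean and standard deviation of the estimates indicate the estimator's bias and variability for a given τ and set of intensities.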