The fault exposure ratio K is an important factor that controls the per-fault hazard rate, and hence the effectiveness of software testing. The paper examines the variation of K with fault density, which declines with testing time. Because faults become harder to find, K should decline if testing is strictly random. However, it is shown that at lower fault densities K tends to increase; this is explained by the hypothesis that real testing is more efficient than strictly random testing, especially toward the end of the test phase. Data sets from several different projects (in the USA and Japan) are analyzed. When the two factors, namely the shift in the detectability profile and the nonrandomness of testing, are combined, the analysis leads to the logarithmic model, which is known to have superior predictive capability.
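For context, a minimal sketch of the relations behind these statements, assuming Musa's standard execution-time notation (f for the linear execution frequency, T_L for the linear execution time, N(t) for the residual fault count, and beta_0, beta_1 for fitted parameters); the abstract itself does not state these formulas:

% Failure intensity for N(t) residual faults, and the resulting
% per-fault hazard rate, in terms of the fault exposure ratio K
% (Musa's execution-time formulation; assumed notation).
\begin{align}
  \lambda(t) &= f \, K \, N(t), \\
  \lambda_p  &= \frac{\lambda(t)}{N(t)} = f\,K = \frac{K}{T_L}.
\end{align}

% Logarithmic (Musa-Okumoto) model: expected number of faults found
% by time t, with fitted parameters \beta_0 and \beta_1.
\begin{equation}
  \mu(t) = \beta_0 \ln\bigl(1 + \beta_1 t\bigr)
\end{equation}

Under this decomposition, the per-fault hazard rate varies only through K once f is fixed, so the observed rise of K at low fault densities directly reflects the nonrandomness of testing argued for in the abstract.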