The use of test-coverage measures (e.g., block-coverage) to control the software test process has become an increasingly common practice. This is justified by the assumption that higher test-coverage helps achieve higher defect-coverage and therefore improves software quality. In practice, data often show that defect-coverage and test-coverage grow over time, as additional testing is performed. However, it is unclear whether this phenomenon of concurrent growth can be attributed to a causal dependency, or if it is coincidental, simply due to the cumulative nature of both measures. Answering such a question is important as it determines whether a given test-coverage measure should be monitored for quality control and used to drive testing.
Although there is no general answer to this problem, a procedure is proposed to investigate whether any test-coverage criterion has a genuine additional impact on defect-coverage when compared to the impact of just running additional test cases. This procedure applies in typical testing conditions, where the software is tested once, according to a given strategy, and where coverage measures are collected along with defect data.
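The abstract does not spell out the procedure itself; as a rough illustration of the underlying question only, the sketch below contrasts a model that explains cumulative defect-coverage by the number of test cases alone with one that also includes test-coverage, and reports how much the extra predictor helps. The function name, the nested least-squares comparison, and the synthetic data are assumptions made for illustration, not the procedure proposed in the paper.

```python
# Hypothetical sketch (not the authors' procedure): given per-test records of
# cumulative test-coverage and cumulative defects found, ask whether coverage
# explains defect growth beyond the sheer number of test cases executed, by
# comparing the residual error of two nested least-squares models.
import numpy as np

def extra_coverage_effect(n_tests, coverage, defects):
    """Return the relative reduction in residual sum of squares obtained by
    adding coverage to a model that already uses the number of tests run."""
    n_tests = np.asarray(n_tests, dtype=float)
    coverage = np.asarray(coverage, dtype=float)
    defects = np.asarray(defects, dtype=float)

    # Reduced model: defects ~ intercept + number of tests run
    X_reduced = np.column_stack([np.ones_like(n_tests), n_tests])
    # Full model: defects ~ intercept + number of tests + coverage
    X_full = np.column_stack([np.ones_like(n_tests), n_tests, coverage])

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, defects, rcond=None)
        residuals = defects - X @ beta
        return float(residuals @ residuals)

    rss_reduced, rss_full = rss(X_reduced), rss(X_full)
    # 0 means coverage adds no explanatory power beyond test count.
    return (rss_reduced - rss_full) / rss_reduced

if __name__ == "__main__":
    # Made-up cumulative measures after each of 20 test cases.
    n = np.arange(1, 21)
    cov = 1 - np.exp(-0.15 * n)                   # synthetic coverage growth
    dfx = np.floor(10 * (1 - np.exp(-0.1 * n)))   # synthetic defect growth
    print(f"Relative RSS reduction from adding coverage: "
          f"{extra_coverage_effect(n, cov, dfx):.3f}")
```

Because both series grow cumulatively, a naive correlation between them is almost guaranteed to be high; the point of controlling for the number of tests is precisely to separate that mechanical growth from any genuine coverage effect.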
This procedure is tested on published data, and the results are compared with the original findings. The study outcomes do not support the assumption of a causal dependency between test-coverage and defect-coverage, a result for which several plausible explanations are provided.