A.A. Porter et al., "An Experiment to Assess the Cost-Benefits of Code Inspections in Large-Scale Software Development," IEEE Transactions on Software Engineering, 23(6), 1997, pp. 329-346
We conducted a long-term experiment to compare the costs and benefits of several different software inspection methods. These methods were applied by professional developers to a commercial software product they were creating. Because the laboratory for this experiment was a live development effort, we took special care to minimize cost and risk to the project while maximizing our ability to gather useful data. This article has several goals: 1) to describe the experiment's design and show how we used simulation techniques to optimize it, 2) to present our results and discuss their implications for both software practitioners and researchers, and 3) to discuss several new questions raised by our findings. For each inspection, we randomly assigned three independent variables: 1) the number of reviewers on each inspection team (1, 2, or 4), 2) the number of teams inspecting the code unit (1 or 2), and 3) the requirement that defects be repaired between the first and second team's inspections. The reviewers for each inspection were randomly selected without replacement from a pool of 11 experienced software developers. The dependent variables for each inspection included inspection interval (elapsed time), total effort, and the defect detection rate. Our results showed that these treatments did not significantly influence the defect detection effectiveness, but that certain combinations of changes dramatically increased the inspection interval.
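The random treatment assignment described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual procedure; all function and variable names are hypothetical, and the handling of the repair condition (applying it only when two teams inspect) is an assumption drawn from the design as stated.

```python
import random

def assign_inspection(pool, rng):
    """Sketch of one inspection's random treatment assignment
    (hypothetical; the paper does not publish assignment code)."""
    team_size = rng.choice([1, 2, 4])   # reviewers per team
    num_teams = rng.choice([1, 2])      # teams inspecting the code unit
    # Repair-between-inspections only applies when two teams inspect (assumption).
    repair_between = rng.choice([True, False]) if num_teams == 2 else False
    # Reviewers are drawn without replacement from the developer pool.
    reviewers = rng.sample(pool, team_size * num_teams)
    teams = [reviewers[i * team_size:(i + 1) * team_size]
             for i in range(num_teams)]
    return {"teams": teams, "repair_between": repair_between}

pool = [f"dev{i}" for i in range(11)]   # pool of 11 experienced developers
rng = random.Random(0)
inspection = assign_inspection(pool, rng)
```

Drawing reviewers with `random.sample` guarantees no developer serves twice on the same inspection, mirroring the "without replacement" constraint in the design.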