A COVERAGE ANALYSIS TOOL FOR THE EFFECTIVENESS OF SOFTWARE TESTING

Citation
M.R. Lyu et al., A COVERAGE ANALYSIS TOOL FOR THE EFFECTIVENESS OF SOFTWARE TESTING, IEEE Transactions on Reliability, 43(4), 1994, pp. 527-535
Citations number
19
Subject Categories
Computer Sciences","Engineering, Eletrical & Electronic","Computer Science Hardware & Architecture","Computer Science Software Graphycs Programming
ISSN journal
0018-9529
Volume
43
Issue
4
Year of publication
1994
Pages
527 - 535
Database
ISI
SICI code
0018-9529(1994)43:4<527:ACATFT>2.0.ZU;2-W
Abstract
This paper describes the software testing and analysis tool, "ATAC (Automatic Test Analysis for C)", developed as a research instrument at Bellcore to measure the effectiveness of testing data. It is also a tool to facilitate the design and evaluation of test cases during software development. To demonstrate the capability and applicability of ATAC, we obtained 12 program versions of a critical industrial application developed in a recent university/industry N-Version Software project, and used ATAC to analyze and compare coverage of the testing on the program versions. Preliminary results from this investigation show that ATAC is a powerful testing tool to provide testing metrics and quality control guidance for the certification of high quality software components or systems. In using ATAC to derive high quality test data, we assume that a good test has a high data-flow coverage score. This hypothesis requires that we show that good data-flow testing implies good software, viz., software with higher reliability. One would hope, for example, that code tested to 85% c-uses coverage would have a lower field-failure rate than similar code tested to 20% c-uses coverage. The establishment of a correlation between good data-flow testing and a low (or zero) rate of field failures is the ultimate and critical test of the usefulness of data-flow coverage testing. We demonstrated by ATAC that the 12 program versions obtained from the U. of Iowa and Rockwell NVS project (a project that has been subjected to a stringent design, implementation, and testing procedure) had very high testing coverage scores of blocks, decisions, c-uses, and p-uses. Results from the field testing (in which only one fault was found) confirmed this belief. The ultimate question that we hope ATAC can help us answer is a typical question for all software reliability engineers: "When is a program considered acceptable?" Software reliability analysts have proposed several models to answer this question. However, none of these models address the issues of program structure or testing coverages, which are important in understanding software quality.
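
As an illustration of the coverage measures named in the abstract, the hypothetical C fragment below (not taken from the paper) shows the usual distinction: a c-use is a use of a variable in a computation, a p-use is a use in a branch predicate, and a data-flow coverage tool such as ATAC reports how many of these definition-use associations a given test set exercises.

/* Hypothetical example, not from the paper: a definition, a p-use, and a c-use. */
#include <stdio.h>

int scale(int x)
{
    int y = x * 2;       /* definition of y                              */

    if (y > 10)          /* p-use of y: y appears in a branch predicate  */
        return y + 1;    /* c-use of y: y appears in a computation       */

    return 0;
}

int main(void)
{
    /* A test set containing only scale(1) exercises the false branch of the
       predicate; adding scale(10) also covers the true branch and the c-use. */
    printf("%d %d\n", scale(1), scale(10));
    return 0;
}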