Evaluating testing methods by delivered reliability

Frankl, P. G., Hamlet, R. G., Littlewood, B. & Strigini, L. (1998). Evaluating testing methods by delivered reliability. IEEE Transactions on Software Engineering, 24(8), pp. 586-601. doi: 10.1109/32.707695


Abstract

There are two main goals in testing software: (1) to achieve adequate quality (debug testing), where the objective is to probe the software for defects so that these can be removed, and (2) to assess existing quality (operational testing), where the objective is to gain confidence that the software is reliable. Debug methods tend to ignore random selection of test data from an operational profile, while for operational methods this selection is all-important. Debug methods are thought to be good at uncovering defects so that these can be repaired, but having done so they do not provide a technically defensible assessment of the reliability that results. On the other hand, operational methods provide accurate assessment, but may not be as useful for achieving reliability. This paper examines the relationship between the two testing goals, using a probabilistic analysis. We define simple models of programs and their testing, and try to answer the question of how to attain program reliability: is it better to test by probing for defects as in debug testing, or to assess reliability directly as in operational testing? Testing methods are compared in a model where program failures are detected and the software changed to eliminate them. The “better” method delivers higher reliability after all test failures have been eliminated. Special cases are exhibited in which each kind of testing is superior. An analysis of the distribution of the delivered reliability indicates that even simple models have unusual statistical properties, suggesting caution in interpreting theoretical comparisons.
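The comparison the abstract describes can be illustrated with a toy Monte Carlo sketch (not the paper's model; the defect probabilities and test counts below are made-up assumptions). Each hypothetical defect has one probability of being triggered by an input drawn from the operational profile and another of being triggered by a uniformly drawn "probing" input; any defect hit during testing is removed, and delivered unreliability is the profile probability mass of the defects that remain.

```python
import random

def simulate(method, trials=200, tests=50, seed=0):
    """Toy model of delivered unreliability after testing.

    method: "operational" (inputs drawn from the usage profile) or
            "debug" (inputs drawn uniformly over the input domain).
    Returns the mean profile-probability of the defects left unfixed.
    """
    rng = random.Random(seed)
    # Hypothetical defects: (profile trigger prob., uniform trigger prob.)
    # One defect is common in use but hard to probe, one is the reverse.
    defects = [(0.05, 0.001), (0.001, 0.05), (0.01, 0.01)]
    total = 0.0
    for _ in range(trials):
        remaining = list(defects)
        for _ in range(tests):
            # Each test input may trigger (and thus reveal) each defect.
            hit = []
            for p_op, p_uni in remaining:
                p = p_op if method == "operational" else p_uni
                if rng.random() < p:
                    hit.append((p_op, p_uni))
            for d in hit:
                remaining.remove(d)  # defect is repaired once revealed
        # Delivered unreliability = profile mass of surviving defects.
        total += sum(p_op for p_op, _ in remaining)
    return total / trials
```

Under these particular (assumed) numbers, operational testing tends to deliver lower unreliability because it preferentially removes the defects that matter most in use; the paper's point is that other parameter choices reverse the ordering, so neither method dominates in general.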

Item Type: Article
Additional Information: © 1998 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/ republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
Uncontrolled Keywords: reliability, debugging, software testing, statistical testing theory, software, partition
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Divisions: School of Informatics > Centre for Software Reliability
URI: http://openaccess.city.ac.uk/id/eprint/259
