Abstract:
The Jelinski-Moranda, Shooman, and Musa software reliability models all predict that the error detection rate in a software system is a linear function of the number of detected errors. The basic difference among the models is that the error rates are expressed, respectively, in terms of calendar time, manpower, and computer time. The models are simple to use for estimating the number of errors remaining in the tested software. Published studies generally show that error rates during system testing correlate best with the Musa model, and progressively less well with the Shooman and Jelinski-Moranda models. Simulation shows that, with respect to the number of detected errors, 1) testing the functions of a software system in a random or round-robin order gives linearly decaying system-error rates, 2) testing each function exhaustively one at a time gives flat system-error rates, 3) testing different functions at widely different frequencies gives exponentially decaying system-error rates, and 4) testing strategies that result in linearly decaying error rates tend to require the fewest tests to detect a given number of errors.
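The kind of simulation summarized above can be illustrated with a small sketch. The code below is a toy model, not the paper's actual simulation: the input-domain failure model, the parameter values, and the three strategy definitions ("round_robin", "one_at_a_time", "skewed") are assumptions chosen only to let a reader compare the shape of the detected-error rate under different test-ordering strategies.

```python
import random
from collections import defaultdict

def make_system(num_funcs=10, domain_size=200, errors_per_func=20, seed=1):
    """Each function has `domain_size` inputs; `errors_per_func` of them fail (assumed model)."""
    rng = random.Random(seed)
    return [set(rng.sample(range(domain_size), errors_per_func))
            for _ in range(num_funcs)]

def run(system, strategy, num_tests=4000, domain_size=200, seed=2):
    """Run tests under a strategy; return errors detected in each tenth of the test budget."""
    rng = random.Random(seed)
    remaining = [set(errs) for errs in system]        # a detected error is fixed immediately
    num_funcs = len(remaining)
    per_interval = defaultdict(int)
    interval = max(1, num_tests // 10)

    for t in range(num_tests):
        if strategy == "round_robin":                 # spread tests evenly over the functions
            f = t % num_funcs
        elif strategy == "one_at_a_time":             # concentrate all tests on one function
            f = min(t // (num_tests // num_funcs), num_funcs - 1)
        elif strategy == "skewed":                    # test functions at widely different frequencies
            weights = [2.0 ** -i for i in range(num_funcs)]
            f = rng.choices(range(num_funcs), weights=weights)[0]
        else:
            raise ValueError(strategy)

        x = rng.randrange(domain_size)                # pick a test input for function f
        if x in remaining[f]:                         # input triggers a not-yet-detected error
            remaining[f].remove(x)                    # record the detection and fix the error
            per_interval[t // interval] += 1
    return [per_interval[i] for i in range(10)]

if __name__ == "__main__":
    system = make_system()
    for s in ("round_robin", "one_at_a_time", "skewed"):
        print(f"{s:>14}: errors detected per interval -> {run(system, s)}")
```

Printing the per-interval detection counts side by side gives a rough picture of how the system-error rate decays under each ordering; how closely the decay shapes match the abstract's three characterizations depends on the assumed failure model and parameters.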