The Unnecessity of Assuming Statistically Independent Tests in Bayesian Software Reliability Assessments
Abstract: When assessing a software-based system, the results of Bayesian statistical inference on operational testing data can provide strong support for software reliability claims. For inference, these data (i.e., software successes and failures) are often assumed to arise in an independent, identically distributed (i.i.d.) manner. In this paper we show how conservative Bayesian approaches make this assumption unnecessary, by incorporating one's doubts about the assumption into the assessment. We derive conservative confidence bounds on a system's probability of failure on demand (pfd) when operational testing reveals no failures. The generality and utility of the confidence bounds are illustrated in the assessment of a nuclear power-plant safety-protection system, under varying levels of skepticism about the i.i.d. assumption. The analysis suggests that the i.i.d. assumption can make Bayesian reliability assessments extremely optimistic: such assessments do not explicitly account for how software can be very likely to exhibit no failures during extensive operational testing even when its pfd is undesirably large.
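To make the abstract's setting concrete, the sketch below (not taken from the paper; all prior parameters and test counts are assumed values for illustration) shows the standard i.i.d. Bayesian update it refers to: with a Beta(a, b) prior on the pfd and n failure-free demands, the i.i.d. likelihood (1 - pfd)^n yields a Beta(a, b + n) posterior, from which a confidence claim P(pfd <= p) follows.

```python
# Minimal sketch of the standard i.i.d. Bayesian pfd update (assumed
# values, not from the paper). Each demand is treated as an independent
# Bernoulli trial, so n failure-free demands have likelihood (1 - pfd)^n
# and a Beta(a, b) prior conjugates to a Beta(a, b + n) posterior.
from scipy.stats import beta

a, b = 1.0, 1.0   # assumed uniform Beta(1, 1) prior on pfd
n = 4096          # assumed number of failure-free demands observed
p = 1e-3          # pfd bound of interest

prior_conf = beta.cdf(p, a, b)       # P(pfd <= p) before testing
post_conf = beta.cdf(p, a, b + n)    # P(pfd <= p | n failure-free demands)

print(f"prior     P(pfd <= {p}): {prior_conf:.4f}")
print(f"posterior P(pfd <= {p}): {post_conf:.4f}")
```

With these assumed numbers the posterior confidence rises from 0.001 to roughly 0.98, purely because every demand is modelled as an independent trial. This steep climb illustrates the optimism the abstract warns about; the paper's conservative bounds instead treat doubts about the i.i.d. assumption as an explicit input to the assessment.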