Proceeding Paper

A Testing Coverage Based SRGM Subject to the Uncertainty of the Operating Environment †

1 Department of Mathematics, Birla Institute of Technology and Science Pilani, K. K. Birla Goa Campus, Zuarinagar 403726, Goa, India
2 Department of Mathematics, Amity Institute of Applied Sciences, Amity University, Noida 201313, Uttar Pradesh, India
* Authors to whom correspondence should be addressed.
† Presented at the 1st International Online Conference on Mathematics and Applications, 1–15 May 2023; Available online: https://iocma2023.sciforum.net/.
These authors contributed equally to this work.
Comput. Sci. Math. Forum 2023, 7(1), 44; https://doi.org/10.3390/IOCMA2023-14436
Published: 29 April 2023

Abstract

The number of software failures, software reliability, and failure rates can be measured and predicted by a software reliability growth model (SRGM). An SRGM is developed and tested in a controlled environment, whereas the operating environment in the field is typically different. Many SRGMs have been developed under the assumption that the operating and development environments are the same. In this paper, we develop a new SRGM that incorporates imperfect debugging and a testing coverage function under the uncertainty of the operating environment. The proposed model's parameters are estimated from two real datasets, and the model is compared with several existing SRGMs based on five goodness-of-fit criteria. The results show that the proposed model gives better descriptive and predictive performance than the selected existing models.

1. Introduction

During the past four decades, various software reliability growth models (SRGMs) [1,2,3,4,5,6,7] have been proposed to estimate reliability, predict the number of faults, determine the release time of the software, and so on. These models rest on different assumptions. For example, some models consider perfect debugging [1,7], while others consider imperfect debugging [3,4]. Some researchers have studied SRGMs with a constant fault detection rate [1] or with a learning phenomenon [7]. Other studies have addressed resource allocation [8,9,10,11,12] and testing effort [13,14] during the testing and debugging process.
Most models assume that the operating and testing environments are the same. In practice, however, the software is deployed in the real operating environment after the in-house testing process. Since the early 2000s, researchers have proposed SRGMs that incorporate the uncertainty of the operating environment. Teng and Pham [15] proposed a generalized SRGM considering the effect of the uncertainty of the operating environment on the software failure rate. Pham [6,16] presented SRGMs incorporating Loglog and Vtub-shaped fault detection rates, respectively, subject to random environments. Li and Pham [17] discussed an SRGM in which fault removal efficiency and error generation are incorporated together with the uncertainty of the operating environment. Li and Pham [18] proposed a generalized SRGM incorporating the uncertainty of the operating environment.
This paper presents an SRGM that incorporates imperfect debugging and a testing coverage function under the effect of a random operating environment. The models discussed above [6,15,16,17,18] assume that the random variable η follows a gamma distribution. In our model, we instead assume an exponential distribution in order to keep the number of parameters small. We validate the goodness-of-fit and predictive ability of the proposed model on two datasets. The remainder of the paper is organized as follows. In Section 2, the explicit solution of the mean value function is derived. Numerical and data analysis is performed in Section 3. Section 4 summarizes the conclusions of the paper.

2. Software Reliability Growth Model

The cumulative number of detected software faults N(t) follows a non-homogeneous Poisson process (NHPP) and is expressed as
P\{N(t) = n\} = \frac{[m(t)]^{n}}{n!}\, e^{-m(t)}, \quad n = 0, 1, 2, \ldots \quad (1)
The mean value function of the fault counting process is expressed in terms of the intensity function \lambda(t) as
m(t) = \int_{0}^{t} \lambda(s)\, ds. \quad (2)
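As an illustration of Equations (1) and (2), the following Python sketch evaluates the mean value function by numerically integrating an intensity function and then computes the corresponding Poisson probability. The intensity function and parameter values are illustrative placeholders, not quantities from this paper.

```python
import math

import numpy as np
from scipy import integrate


def mean_value(intensity, t):
    """Equation (2): m(t) = integral of lambda(s) over [0, t]."""
    value, _ = integrate.quad(intensity, 0.0, t)
    return value


def nhpp_pmf(intensity, t, n):
    """Equation (1): P{N(t) = n}, a Poisson pmf with mean m(t)."""
    m = mean_value(intensity, t)
    return m ** n / math.factorial(n) * math.exp(-m)


# Illustrative intensity function (not from the paper): exponentially decaying detection rate.
lam = lambda s: 5.0 * np.exp(-0.3 * s)

print(mean_value(lam, 10.0))    # expected cumulative number of faults by t = 10
print(nhpp_pmf(lam, 10.0, 12))  # probability of observing exactly 12 faults by t = 10
```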
The following assumptions are taken for the proposed model:
  • The software fault detection process follows a non-homogeneous Poisson process.
  • The fault detection rate is proportional to the number of faults remaining in the software.
  • Detected faults are debugged immediately.
  • During the debugging process, new faults may be introduced into the software (imperfect debugging).
  • The fault detection rate is expressed through the testing coverage function.
  • The random operating environment affects the fault detection rate.
Under the above assumptions, the SRGM subject to the uncertainty of the operating environment is
\frac{dm(t)}{dt} = \eta\, \frac{c'(t)}{1 - c(t)}\, \bigl[N(t) - m(t)\bigr], \qquad m(0) = 0, \quad (3)
where \eta is a random variable representing the uncertainty of the operating environment, c(t) is the testing coverage function, N(t) is the total fault content of the software at time t, and m(t) is the expected cumulative number of software failures detected by time t.
Initially, the software contains N faults. During the debugging phase, new faults are introduced at rate d. Therefore, the fault content function is
N(t) = N + d\, m(t). \quad (4)
Solving Equation (3) with the fault content function (4), the general solution for the mean value function m(\eta, t), conditional on \eta, is
m(\eta, t) = \frac{N}{1-d}\left[1 - e^{-\eta (1-d) \int_{0}^{t} \frac{c'(\tau)}{1 - c(\tau)}\, d\tau}\right]. \quad (5)
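For completeness, a brief derivation sketch of how Equation (5) follows from Equations (3) and (4):

```latex
% Substituting N(t) = N + d m(t) from Equation (4) into Equation (3):
\frac{dm}{dt} = \eta\,\frac{c'(t)}{1-c(t)}\bigl[N - (1-d)\,m(t)\bigr].
% This linear ODE is solved with the integrating factor
% \exp\bigl(\eta(1-d)\int_0^t \frac{c'(\tau)}{1-c(\tau)}\,d\tau\bigr); with m(0) = 0 this gives
m(\eta,t) = \frac{N}{1-d}\left[1 - \exp\!\left(-\eta(1-d)\int_{0}^{t} \frac{c'(\tau)}{1-c(\tau)}\,d\tau\right)\right].
```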
To obtain the unconditional mean value function m(t), we assume that the random variable \eta follows an exponential distribution, i.e., \eta \sim \exp(\alpha), with probability density function
f(\eta) = \alpha e^{-\alpha \eta}, \qquad \alpha > 0,\ \eta \ge 0. \quad (6)
Taking the expectation of Equation (5) with respect to \eta, which amounts to applying the Laplace transform of the exponential density, the mean value function m(t) is given by
m(t) = \frac{N}{1-d}\left[1 - \frac{\alpha}{\alpha + (1-d)\int_{0}^{t} \frac{c'(\tau)}{1 - c(\tau)}\, d\tau}\right]. \quad (7)
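The key step behind Equation (7) is the Laplace transform of the exponential density. Writing x(t) = (1-d)\int_0^t \frac{c'(\tau)}{1-c(\tau)}\,d\tau:

```latex
\mathbb{E}_{\eta}\!\left[e^{-\eta x(t)}\right]
  = \int_{0}^{\infty} e^{-\eta x(t)}\,\alpha e^{-\alpha\eta}\,d\eta
  = \frac{\alpha}{\alpha + x(t)},
\qquad\text{so}\qquad
m(t) = \mathbb{E}_{\eta}\bigl[m(\eta,t)\bigr]
     = \frac{N}{1-d}\left[1 - \frac{\alpha}{\alpha + x(t)}\right].
```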
In this paper, we consider the following testing coverage function c(t):
c(t) = 1 - e^{\,1 - c^{\,t^{b}}}, \qquad c > 1,\ b > 0. \quad (8)
Substituting the testing coverage function c(t) of Equation (8) into Equation (7), we obtain the following closed-form expression for the mean value function m(t):
m(t) = \frac{N}{1-d}\left[1 - \frac{\alpha}{\alpha + (1-d)\,\bigl(c^{\,t^{b}} - 1\bigr)}\right]. \quad (9)
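A minimal Python sketch of the proposed mean value function in Equation (9); the parameter values are illustrative only and are not the fitted estimates of Table 2. It also checks the closed form against a numerical evaluation of the coverage integral in Equation (7).

```python
import numpy as np
from scipy import integrate


def mvf_closed_form(t, N, d, alpha, c, b):
    """Proposed mean value function, Equation (9)."""
    return (N / (1.0 - d)) * (1.0 - alpha / (alpha + (1.0 - d) * (c ** (t ** b) - 1.0)))


def mvf_integral_form(t, N, d, alpha, c, b):
    """Equation (7), with the coverage integral evaluated numerically."""
    def rate(tau):
        # c'(tau) / (1 - c(tau)) for the coverage function of Equation (8)
        return np.log(c) * b * tau ** (b - 1.0) * c ** (tau ** b)
    integral, _ = integrate.quad(rate, 0.0, t)
    return (N / (1.0 - d)) * (1.0 - alpha / (alpha + (1.0 - d) * integral))


# Illustrative parameters (not the fitted values reported in Table 2).
params = dict(N=30.0, d=0.1, alpha=2.0, c=1.5, b=0.8)
for t in (1.0, 5.0, 10.0):
    print(t, mvf_closed_form(t, **params), mvf_integral_form(t, **params))
```

The two forms should agree, since the integral of c'(τ)/(1 − c(τ)) for the coverage function in Equation (8) evaluates to c^{t^b} − 1.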
Table 1 summarizes the MVFs of the proposed model and of the other selected models used for comparison.

3. Numerical and Data Analysis

3.1. Software Failure Data

The first dataset (DS-I) used in this paper was collected from the online IBM entry software package [2]. During 21 weeks of testing, 46 failures were observed. The second dataset (DS-II) was collected from system testing at AT&T [23]. The system was tested for a total of 14 weeks, during which 22 faults were observed.

3.2. Parameter Estimation and Goodness-of-Fit Criteria

Usually, the parameters of SRGMs are estimated using the least squares estimation (LSE) or maximum likelihood estimation (MLE) method. We use least squares estimation to estimate the parameters of the proposed model; the estimated parameters are shown in Table 2.
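A minimal sketch of the least squares fit, assuming SciPy's curve_fit is used as the LSE routine; the failure-count arrays, initial guess, and bounds below are illustrative placeholders rather than DS-I or DS-II.

```python
import numpy as np
from scipy.optimize import curve_fit


def mvf(t, N, d, alpha, c, b):
    """Proposed mean value function, Equation (9)."""
    return (N / (1.0 - d)) * (1.0 - alpha / (alpha + (1.0 - d) * (c ** (t ** b) - 1.0)))


# Placeholder data: weekly test times and cumulative observed failures
# (replace with DS-I [2] or DS-II [23]).
t_obs = np.arange(1, 15, dtype=float)
y_obs = np.array([1, 3, 5, 7, 9, 11, 13, 15, 16, 18, 19, 20, 21, 22], dtype=float)

# Least squares estimation: minimize sum_i (m(t_i) - y_i)^2 over the parameters.
p0 = [25.0, 0.1, 1.0, 1.5, 1.0]            # illustrative initial guess: N, d, alpha, c, b
lower = [1.0, 0.0, 1e-6, 1.0 + 1e-6, 1e-6]  # illustrative parameter bounds
upper = [1e3, 0.99, 1e3, 1e2, 1e1]
popt, _ = curve_fit(mvf, t_obs, y_obs, p0=p0, bounds=(lower, upper))
print(dict(zip(["N", "d", "alpha", "c", "b"], popt)))
```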
Several goodness-of-fit criteria are available in the literature for selecting the best-fit model. Among these, the criteria used here to compare the proposed model with the selected existing models are the mean squared error (MSE), predictive ratio risk (PRR), bias, variance, and root mean square prediction error (RMSPE). For each criterion, a smaller value indicates a better fit.
The MSE measures the average squared deviation between the predicted values and the actual data [24] and is defined as
\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\bigl(m(t_i) - y_i\bigr)^{2}, \quad (10)
where n is the number of observations.
The predictive ratio risk (PRR) measures the deviation of the model estimates from the actual data, relative to the model estimates, and is defined as [25]
\mathrm{PRR} = \sum_{i=1}^{n}\left(\frac{m(t_i) - y_i}{m(t_i)}\right)^{2}. \quad (11)
The bias is defined as the average deviation of the model estimates from the actual data [26]:
\mathrm{Bias} = \frac{1}{n}\sum_{i=1}^{n}\bigl(m(t_i) - y_i\bigr). \quad (12)
The variance is defined as [27]:
\mathrm{Variance} = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\bigl(y_i - m(t_i) - \mathrm{Bias}\bigr)^{2}}. \quad (13)
The root mean square prediction error (RMSPE) is defined as [27]
\mathrm{RMSPE} = \sqrt{\mathrm{Variance}^{2} + \mathrm{Bias}^{2}}, \quad (14)
where m(t_i) is the predicted number of faults at time t_i and y_i is the observed number of faults at time t_i.
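A small Python helper that computes the five criteria of Equations (10)-(14); the array names and example values are illustrative.

```python
import numpy as np


def fit_criteria(m_pred, y_obs):
    """MSE, PRR, Bias, Variance, and RMSPE as defined in Equations (10)-(14)."""
    m_pred = np.asarray(m_pred, dtype=float)
    y_obs = np.asarray(y_obs, dtype=float)
    n = len(y_obs)
    resid = m_pred - y_obs
    mse = np.mean(resid ** 2)
    prr = np.sum((resid / m_pred) ** 2)
    bias = np.mean(resid)
    variance = np.sqrt(np.sum((y_obs - m_pred - bias) ** 2) / (n - 1))
    rmspe = np.sqrt(variance ** 2 + bias ** 2)
    return {"MSE": mse, "PRR": prr, "Bias": bias, "Variance": variance, "RMSPE": rmspe}


# Example with placeholder values (not DS-I or DS-II):
print(fit_criteria(m_pred=[1.2, 3.1, 4.8, 7.2], y_obs=[1, 3, 5, 7]))
```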

3.3. Model Comparison for DS-I

Table 3 shows that the proposed model performs well with respect to the MSE, PRR, Bias, Variance, and RMSPE criteria. The MSE, PRR, Bias, Variance, and RMSPE values of the proposed model are 1.168283, 0.381616, −0.04598, 1.105182, and 1.106138, respectively, which are smaller than or comparable to those of the other selected software reliability models. Figure 1a compares the proposed and selected existing models with the observed failure data. Figure 1b shows the relative errors of the proposed model over the testing weeks, which approach zero more closely than those of the other models. Overall, the proposed model performs better than the other selected existing models.

3.4. Model Comparison for DS-II

The performance of the proposed model fitted by LSE is evaluated in terms of MSE, PRR, Bias, Variance, and RMSPE and is shown in Table 4. From Table 4, we observe that the proposed model has the smallest values of MSE (0.708826), PRR (0.117530), Bias (0.051773), Variance (0.878641), and RMSPE (0.880165) among the compared models. The comparison between the MVFs of the proposed and selected models is depicted in Figure 1c. Figure 1d shows the relative errors of the different models; the relative error of the proposed model approaches zero more rapidly than those of the other selected models. Overall, the proposed model also fits DS-II better.

4. Conclusions

Many SRGMs have been proposed to address different realistic issues. This paper incorporates imperfect debugging and a testing coverage function into the model. The models discussed in Section 1 that account for the operating environment assume that its uncertainty follows a gamma distribution. The main contribution of this model is that the random variable describing the operating environment is assumed to follow an exponential distribution, which is a special case of the gamma distribution and keeps the number of parameters in the model smaller. The proposed model's parameters are estimated on two datasets and validated against five goodness-of-fit criteria. The results show that the proposed model fits the data better than the other selected models. In future work, we will incorporate multi-release and change-point concepts.

Author Contributions

Conceptualization, S.K.P., A.K. and V.K.; methodology, S.K.P., A.K. and V.K.; formal analysis, S.K.P.; validation, S.K.P.; writing (original draft), S.K.P.; writing (review and editing), S.K.P., A.K. and V.K. All authors have read and agreed to the published version of the manuscript.

Funding

The authors did not receive funding from any organization.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

We confirm that the data supporting the findings of this study are cited and available within the article.

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

  1. Goel, A.L.; Okumoto, K. Time-dependent error-detection rate model for software reliability and other performance measures. IEEE Trans. Reliab. 1979, 28, 206–211. [Google Scholar] [CrossRef]
  2. Ohba, M. Software reliability analysis models. IBM J. Res. Dev. 1984, 28, 428–443. [Google Scholar] [CrossRef]
  3. Yamada, S.; Tokuno, K.; Osaki, S. Imperfect debugging models with fault introduction rate for software reliability assessment. Int. J. Syst. Sci. 1992, 23, 2241–2252. [Google Scholar] [CrossRef]
  4. Pham, H. An imperfect-debugging fault-detection dependent-parameter software. Int. J. Autom. Comput. 2007, 4, 325. [Google Scholar] [CrossRef]
  5. Pham, H. A software cost model with imperfect debugging, random life cycle and penalty cost. Int. J. Syst. Sci. 1996, 27, 455–463. [Google Scholar] [CrossRef]
  6. Pham, H. Loglog fault-detection rate and testing coverage software reliability models subject to random environments. Vietnam. J. Comput. Sci. 2014, 1, 39–45. [Google Scholar] [CrossRef]
  7. Yamada, S.; Ohba, M.; Osaki, S. S-shaped reliability growth modeling for software error detection. IEEE Trans. Reliab. 1983, 32, 475–484. [Google Scholar] [CrossRef]
  8. Pradhan, S.K.; Kumar, A.; Kumar, V. An Optimal Resource Allocation Model Considering Two-Phase Software Reliability Growth Model with Testing Effort and Imperfect Debugging. Reliab. Theory Appl. 2021, 16, 241–255. [Google Scholar]
  9. Pradhan, S.K.; Kumar, A.; Kumar, V. An Effort Allocation Model for a Three Stage Software Reliability Growth Model. In Predictive Analytics in System Reliability; Springer: Cham, Switzerland, 2023; pp. 263–282. [Google Scholar]
  10. Pradhan, S.K.; Kumar, A.; Kumar, V. An optimal software enhancement and customer growth model: A control-theoretic approach. Int. J. Qual. Reliab. Manag. 2023. [Google Scholar] [CrossRef]
  11. Kumar, V.; Sahni, R. Dynamic testing resource allocation modeling for multi-release software using optimal control theory and genetic algorithm. Int. J. Qual. Reliab. Manag. 2020, 37, 1049–1069. [Google Scholar] [CrossRef]
  12. Kumar, V.; Kapur, P.K.; Taneja, N.; Sahni, R. On allocation of resources during testing phase incorporating flexible software reliability growth model with testing effort under dynamic environment. Int. J. Oper. Res. 2017, 30, 523–539. [Google Scholar] [CrossRef]
  13. Kapur, P.; Goswami, D.; Bardhan, A. A general software reliability growth model with testing effort dependent learning process. Int. J. Model. Simul. 2007, 27, 340–346. [Google Scholar] [CrossRef]
  14. Samal, U.; Kushwaha, S.; Kumar, A. A Testing-Effort Based Srgm Incorporating Imperfect Debugging and Change Point. Reliab. Theory Appl. 2023, 18, 86–93. [Google Scholar]
  15. Teng, X.; Pham, H. A new methodology for predicting software reliability in the random field environments. IEEE Trans. Reliab. 2006, 55, 458–468. [Google Scholar] [CrossRef]
  16. Pham, H. A new software reliability model with Vtub-shaped fault-detection rate and the uncertainty of operating environments. Optimization 2014, 63, 1481–1490. [Google Scholar] [CrossRef]
  17. Li, Q.; Pham, H. NHPP software reliability model considering the uncertainty of operating environments with imperfect debugging and testing coverage. Appl. Math. Model. 2017, 51, 68–85. [Google Scholar] [CrossRef]
  18. Li, Q.; Pham, H. A generalized software reliability growth model with consideration of the uncertainty of operating environments. IEEE Access 2019, 7, 84253–84267. [Google Scholar] [CrossRef]
  19. Yamada, S.; Ohba, M.; Osaki, S. S-shaped software reliability growth models and their applications. IEEE Trans. Reliab. 1984, 33, 289–292. [Google Scholar]
  20. Yamada, S.; Ohtera, H.; Narihisa, H. Software reliability growth models with testing-effort. IEEE Trans. Reliab. 1986, 35, 19–23. [Google Scholar] [CrossRef]
  21. Pham, H.; Zhang, X. An NHPP software reliability model and its comparison. Int. J. Reliab. Qual. Saf. Eng. 1997, 4, 269–282. [Google Scholar] [CrossRef]
  22. Pham, H. System Software Reliability; Springer Science & Business Media: Berlin, Germany, 2007. [Google Scholar]
  23. Ehrlich, W.; Prasanna, B.; Stampfel, J.; Wu, J. Determining the cost of a stop-test decision (software reliability). IEEE Softw. 1993, 10, 33–42. [Google Scholar] [CrossRef]
  24. Kapur, P.; Goswami, D.; Bardhan, A.; Singh, O. Flexible software reliability growth model with testing effort dependent learning process. Appl. Math. Model. 2008, 32, 1298–1307. [Google Scholar] [CrossRef]
  25. Pham, H.; Deng, C. Predictive-ratio risk criterion for selecting software reliability models. In Proceedings of the 9th International Conference on Reliability and Quality in Design, Waikiki, HI, USA, 6–8 August 2003; pp. 17–21. [Google Scholar]
  26. Pillai, K.; Nair, V.S. A model for software development effort and cost estimation. IEEE Trans. Softw. Eng. 1997, 23, 485–497. [Google Scholar] [CrossRef]
  27. Kapur, P.; Pham, H.; Anand, S.; Yadav, K. A unified approach for developing software reliability growth models in the presence of imperfect debugging and error generation. IEEE Trans. Reliab. 2011, 60, 331–340. [Google Scholar] [CrossRef]
Figure 1. (a) Estimated MVFs of the selected and proposed models (DS-I). (b) Relative error curves of the selected and proposed models (DS-I). (c) Estimated MVFs of the selected and proposed models (DS-II). (d) Relative error curves of the selected and proposed models (DS-II).
Table 1. Summary of SRGMs.

No. | Model | MVF
1 | Goel-Okumoto model [1] | m(t) = a\left(1 - e^{-bt}\right)
2 | Delayed S-shaped model [19] | m(t) = a\left(1 - (1 + bt)\,e^{-bt}\right)
3 | Yamada ID model-I [3] | m(t) = \frac{ab}{\alpha + b}\left(e^{\alpha t} - e^{-bt}\right)
4 | Yamada ID model-II [3] | m(t) = a\left(1 - e^{-bt}\right)\left(1 - \frac{\alpha}{b}\right) + \alpha a t
5 | Yamada et al. (YExp) [20] | m(t) = a\left(1 - e^{-\gamma\alpha\left(1 - e^{-\beta t}\right)}\right)
6 | Yamada et al. (YRay) [20] | m(t) = a\left(1 - e^{-\gamma\alpha\left(1 - e^{-\beta t^{2}/2}\right)}\right)
7 | Pham-Zhang model [21] | m(t) = \frac{1}{1 + \beta e^{-bt}}\left[(c + a)\left(1 - e^{-bt}\right) - \frac{ab}{b - \alpha}\left(e^{-\alpha t} - e^{-bt}\right)\right]
8 | Pham-Zhang ID model [22] | m(t) = a\left(1 - e^{-bt}\right)\left(1 + (b + d)t + b d t^{2}\right)
9 | Proposed model | m(t) = \frac{N}{1-d}\left[1 - \frac{\alpha}{\alpha + (1-d)\left(c^{\,t^{b}} - 1\right)}\right]
Table 2. Parameter estimates for DS-I [2] and DS-II [23].

No. | Model | Parameter Estimates (DS-I) | Parameter Estimates (DS-II)
1 | Goel-Okumoto model [1] | a = 192.3303, b = 0.0121 | a = 23.0127, b = 0.1884
2 | Delayed S-shaped model [19] | a = 77.253, b = 0.0966 | a = 20.0045, b = 0.5198
3 | Yamada ID model-I [3] | a = 38.1884, b = 0.0439, α = 0.055 | a = 26.4815, b = 0.13204, α = 0.00004
4 | Yamada ID model-II [3] | a = 1.7105, b = 0.2959, α = 1.5247 | a = 17.5163, b = 0.2657, α = 0.0251
5 | Yamada et al. (YExp) [20] | a = 1935.6515, γ = 1.6636, α = 2.4954, β = 0.0003 | a = 34.7439, γ = 4.2853, α = 0.2857, β = 0.1061
6 | Yamada et al. (YRay) [20] | a = 77.8791, γ = 2.5357, α = 0.5513, β = 0.0045 | a = 20.7737, γ = 3.7091, α = 0.7458, β = 0.0520
7 | Pham-Zhang model [21] | a = 83.1886, b = 0.0747, β = 0.0533, α = 0.0838, c = 8.0682 | a = 22.7269, b = 0.223, β = 0.0011, α = 4.1802, c = 0.1
8 | Pham-Zhang ID model [22] | a = 75.7344, b = 0.1001, d = 0.001 | a = 19.4765, b = 0.6924, d = 0.113
9 | Proposed model | N = 3.1569, d = 0.9527, α = 6.5911, b = 0.4806, c = 3.8052 | N = 29.773, d = 0.001, α = 0.0006, b = 1.082, c = 1.0001
Table 3. Comparison criteria for DS-I.

No. | MSE | PRR | Bias | Variance | RMSPE
1 | 8.973872 | 1.019703 | 0.890848 | 3.452888 | 3.565957
2 | 1.480686 | 26.32063 | −0.23211 | 1.313173 | 1.333528
3 | 4.057239 | 0.371741 | −0.65302 | 2.06001 | 2.164841
4 | 1.458347 | 2.676588 | −0.05303 | 1.241017 | 1.242149
5 | 6.245228 | 0.822893 | 0.844226 | 2.56076 | 2.696332
6 | 1.959292 | 55.68604 | −0.39852 | 1.434314 | 1.488648
7 | 1.278945 | 2.729673 | −0.0696 | 1.165397 | 1.167473
8 | 1.533603 | 40.66148 | −0.25021 | 1.344427 | 1.367511
9 | 1.168283 | 0.381616 | −0.04598 | 1.105182 | 1.106138
Table 4. Comparison criteria for DS-II.

No. | MSE | PRR | Bias | Variance | RMSPE
1 | 0.837889 | 0.126591 | 0.294310 | 1.087284 | 1.126412
2 | 1.409615 | 0.385109 | −0.12266 | 1.251662 | 1.257659
3 | 1.394307 | 0.144932 | −0.16151 | 1.225382 | 1.235980
4 | 0.718011 | 0.146291 | 0.168120 | 0.929816 | 0.944893
5 | 0.746532 | 0.135072 | 0.061377 | 0.896637 | 0.898735
6 | 2.027924 | 1.355159 | −0.17989 | 1.477809 | 1.488717
7 | 1.526415 | 0.118110 | 0.879679 | 2.035660 | 2.217600
8 | 1.969308 | 2.921449 | −0.17227 | 1.488848 | 1.498781
9 | 0.708826 | 0.117530 | 0.051773 | 0.878641 | 0.880165
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
