NHPP Software Reliability Model with Inflection Factor of the Fault Detection Rate Considering the Uncertainty of Software Operating Environments and Predictive Analysis

1 Department of Industrial and Systems Engineering, Rutgers University, 96 Frelinghuysen Road, Piscataway, NJ 08855-8018, USA
2 Department of Computer Science and Statistics, Chosun University, 309 Pilmun-daero Dong-gu, Gwangju 61452, Korea
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(4), 521; https://doi.org/10.3390/sym11040521
Submission received: 8 March 2019 / Revised: 1 April 2019 / Accepted: 8 April 2019 / Published: 10 April 2019

Abstract: Software plays a crucial role in computer systems and is used in a wide variety of environments, yet it is developed and tested in a controlled environment, while real-world operating environments may differ. Accordingly, the uncertainty of the operating environment must be considered. Moreover, predicting software failures is an important area of study, not only for software developers but also for companies and research institutes. A software reliability model can measure and predict the number of software failures, software failure intervals, software reliability, and failure rates. In this paper, we propose a new non-homogeneous Poisson process (NHPP) model with an inflection factor in the fault detection rate function that accounts for the uncertainty of operating environments, and we analyze how the predicted values of the proposed model differ from those of other models. We compare the proposed model with several existing NHPP software reliability models on real software failure datasets using ten criteria. The results show that the proposed model has significantly better goodness-of-fit and predictability than the other models.

1. Introduction

The core technologies of the fourth industrial revolution, such as artificial intelligence (AI), big data, and the Internet of Things (IoT), are implemented in software, and software is essential as a mediator for creating new value by fusing these technologies across all industries. As the importance and role of software in computer systems keep growing, a fatal software error can cause significant damage. For the effective operation of software, it is imperative to reduce the possibility of software failures and maintain a high level of reliability. Software reliability is defined as the probability that the software will run without a fault for a certain period, and it is vital to develop techniques and theories that improve it. However, the development of a software system is a difficult and complex process, so the main focus of software development is on improving the reliability and stability of the system. The number of software failures and the time interval between failures have a significant influence on software reliability; hence, the prediction of software failures is a research field that is important not only for software developers, but also for companies and research institutes. Software reliability models can be classified according to the phase of the software development cycle in which they are applied. Before the testing phase, a reliability prediction model is used, which predicts reliability from information such as past data, the language used, the development domain, complexity, and architecture. After the testing phase, a software reliability model is used, which is a mathematical model of software failure behavior such as failure frequency and failure interval times. Such a model makes it easier to evaluate software reliability using the fault data collected in the test or operating environment. In addition, the model can measure the number of software failures, software failure intervals, and software reliability, and the failure rate can be estimated and predicted in various ways.
Although various types of software reliability models have been studied, software defects and failures generally do not occur at equal time intervals. Based on this observation, non-homogeneous Poisson process (NHPP) software reliability models were developed. NHPP models give software reliability a tractable mathematical form and are used extensively because of their potential for various applications. Most previous NHPP software reliability models were developed under the assumptions that faults detected in the testing phase are removed immediately with no debugging time delay, that no new faults are introduced, and that software systems used in the field are the same as, or close to, those used in the development-testing environment. Goel and Okumoto [1] presented a stochastic model for the software failure phenomenon using an NHPP; this model describes the failure observation phenomenon by an exponential curve. There have also been other software reliability models that describe either S-shaped curves or a mixture of exponential and S-shaped curves [2,3,4]. As the Internet became popular in the mid-1990s and industrial structures and environments changed rapidly, software reliability models for a variety of operating environments began to be studied. In the early 2000s, considering the uncertainty of the operating environment, researchers began to try new approaches such as the application of calibration factors [5,6,7]. Teng and Pham [8] generalized the software reliability model considering the uncertainty of the environment and its effects on software failure rates. Recently, Inoue et al. [9] proposed a software reliability model with uncertainty in the testing environment. Li and Pham [10,11] proposed NHPP software reliability models considering fault removal efficiency and error generation, and the uncertainty of operating environments with imperfect debugging and testing coverage. Song et al. [12,13,14,15] studied NHPP software reliability models with various fault detection rates considering the uncertainty of operating environments. Zhu and Pham [16] proposed an NHPP software reliability model with the pioneering idea of considering software fault dependency and imperfect fault removal. However, previous NHPP software reliability models [1,2,3,4,17,18,19,20,21,22,23,24,25] did not take into account the uncertainty of the software operating environment, or did not consider a learning curve in the fault detection rate function [8,9,10,11,13,14,26,27].
In this paper, we discuss a new model with an inflection factor in the fault detection rate function that considers the uncertainty of operating environments, together with a predictive analysis. We examine the goodness-of-fit and predictability of the new software reliability model and other existing NHPP models based on several datasets. The explicit solution of the mean value function for the new software reliability model is derived in Section 2. Criteria for model comparison, prediction, and selection of the best model are discussed in Section 3. Model analysis and results through numerical examples are discussed in Section 4. Section 5 presents conclusions and remarks.

2. NHPP Software Reliability Modeling

2.1. A General NHPP Software Reliability Model

Let N(t), t \ge 0, denote the cumulative number of failures up to execution time t. When the software failures/defects follow an NHPP,
\Pr\{N(t) = n\} = \frac{[m(t)]^n}{n!}\, e^{-m(t)}, \quad n = 0, 1, 2, \ldots
Assuming that m(t) is the mean value function, the relationship between m(t) and the intensity function \lambda(t) is
m(t) = \int_0^t \lambda(s)\, ds.
A general mean value function m(t) of NHPP software reliability models is obtained from the following differential equation [19]:
\frac{d\, m(t)}{dt} = b(t)\, [a(t) - m(t)]. \quad (1)
Solving Equation (1) with different functions a(t) and b(t), the following mean value function m(t) is obtained [19]:
m(t) = e^{-B(t)} \left[ m_0 + \int_{t_0}^{t} a(s)\, b(s)\, e^{B(s)}\, ds \right], \quad (2)
where B(t) = \int_{t_0}^{t} b(s)\, ds, and m(t_0) = m_0 is the marginal condition of Equation (2), with t_0 representing the start time of the testing process.
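To make these definitions concrete, the Python sketch below evaluates \Pr\{N(t) = n\} for an NHPP whose mean value function is the Goel–Okumoto form m(t) = a(1 − e^{−bt}); the parameter values are illustrative, not fitted ones, and the pmf is computed in log space to avoid overflow for large n.

```python
import math

def nhpp_pmf(n, m_t):
    """P{N(t) = n} = m(t)^n / n! * exp(-m(t)); evaluated in log space."""
    return math.exp(n * math.log(m_t) - m_t - math.lgamma(n + 1))

def m_go(t, a=100.0, b=0.1):
    """Goel-Okumoto mean value function m(t) = a*(1 - e^{-b t}) (illustrative values)."""
    return a * (1.0 - math.exp(-b * t))

# At any fixed t, N(t) is Poisson with mean m(t), so the pmf sums to 1.
m5 = m_go(5.0)
total = sum(nhpp_pmf(n, m5) for n in range(200))
```

Here `m_go`, `a`, and `b` are placeholders; any mean value function from Table 1 could be substituted.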

2.2. A New NHPP Software Reliability Model

A general mean value function m(t) of NHPP software reliability models that incorporates the uncertainty of operating environments is obtained from the following differential equation [26]:
\frac{d\, m(t)}{dt} = \eta\, b(t)\, [N - m(t)],
where m(t) is the mean value function, b(t) is the fault detection rate function, N is the expected number of faults that exist in the software before testing, and \eta is a random variable that represents the uncertainty of the system fault detection rate in the operating environments, with probability distribution g. The resulting mean value function is [26]
m(t) = \int_{\eta} N \left( 1 - e^{-\eta \int_0^t b(x)\, dx} \right) dg(\eta).
We obtain the following mean value function m(t) from the differential equation by taking the random variable \eta to have a generalized probability density function with two parameters, \alpha \ge 0 and \beta \ge 0, under the initial condition m(0) = 0:
m(t) = N \left( 1 - \left( \frac{\beta}{\beta + \int_0^t b(s)\, ds} \right)^{\alpha} \right). \quad (5)
In this paper, we consider the fault detection rate function b(t) to be
b(t) = \frac{b}{1 + a\, e^{-bt}}, \quad a, b > 0,
where b is the failure detection rate and a represents the inflection factor.
Substituting this b(t) into Equation (5), we obtain a new mean value function m(t) for an NHPP software reliability model subject to the uncertainty of operating environments, which gives the expected number of software failures detected by time t:
m(t) = N \left( 1 - \left( \frac{\beta}{\beta + \ln\!\left( \frac{a + e^{bt}}{1 + a} \right)} \right)^{\alpha} \right).
The advantage of the proposed model is that it takes into account both a learning curve in the fault detection rate function and the uncertainty of the operating environments.
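As a sanity check on the derivation, the sketch below (Python, with arbitrary illustrative parameter values) implements the proposed b(t) and m(t) and verifies numerically that \int_0^t b(s)\, ds equals the closed form \ln((a + e^{bt})/(1 + a)) appearing in the new mean value function:

```python
import math

def b_rate(t, a, b):
    """Inflection-type fault detection rate b(t) = b / (1 + a*e^{-b t})."""
    return b / (1.0 + a * math.exp(-b * t))

def m_new(t, N, a, b, alpha, beta):
    """Proposed mean value function:
    m(t) = N * (1 - (beta / (beta + ln((a + e^{b t}) / (1 + a))))**alpha)."""
    B = math.log((a + math.exp(b * t)) / (1.0 + a))
    return N * (1.0 - (beta / (beta + B)) ** alpha)

def integral_b(t, a, b, steps=100000):
    """Midpoint-rule approximation of the integral of b(s) ds over [0, t]."""
    h = t / steps
    return sum(b_rate((i + 0.5) * h, a, b) for i in range(steps)) * h

a, b = 2.0, 0.3   # illustrative inflection factor and detection rate
closed = math.log((a + math.exp(b * 10.0)) / (1.0 + a))
numeric = integral_b(10.0, a, b)
```

Note that m(0) = 0 by construction and m(t) increases monotonically toward N, matching the model assumptions.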

3. Parameter Estimation and Criteria for Model Comparisons

3.1. Parameter Estimation and Models for Comparison

Many NHPP software reliability models use the least squares estimation (LSE) and maximum likelihood estimation (MLE) methods to estimate parameters. However, if the expression of the mean value function m(t) of a software reliability model is too complicated, an accurate estimate may not be obtained from the MLE method. Here, we derived the parameters of the mean value function m(t) using Matlab and R programs based on the LSE method. Table 1 summarizes the mean value functions of existing NHPP software reliability models and the proposed new model; among them, models 18, 19, and 20 consider the uncertainty of the environment.
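The fitting step can be sketched as follows. This is an illustrative Python stand-in for the Matlab/R LSE procedure, using a crude grid search on synthetic data generated from the Goel–Okumoto model; none of the values below come from the paper's datasets.

```python
import math

def m_go(t, a, b):
    """Goel-Okumoto mean value function m(t) = a*(1 - e^{-b t})."""
    return a * (1.0 - math.exp(-b * t))

# Synthetic cumulative-failure data over 21 weeks with a small deterministic
# perturbation (illustrative only).
ts = list(range(1, 22))
true_a, true_b = 120.0, 0.08
ys = [m_go(t, true_a, true_b) + 0.5 * math.sin(t) for t in ts]

def sse(a, b):
    """Sum of squared errors minimized by LSE."""
    return sum((m_go(t, a, b) - y) ** 2 for t, y in zip(ts, ys))

# Crude grid search standing in for a proper least-squares solver.
a_hat, b_hat = min(((a, b)
                    for a in range(80, 161)                      # a = 80 .. 160
                    for b in [i / 1000 for i in range(40, 121)]),  # b = 0.040 .. 0.120
                   key=lambda p: sse(*p))
```

In practice one would use a numerical optimizer rather than a grid, but the objective being minimized is the same.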

3.2. Criteria for Model Comparison

We use ten criteria to estimate the goodness-of-fit of the proposed model, and use one criterion to compare the predicted values.
(1)  Mean squared error (MSE)
MSE = \frac{\sum_{i=1}^{n} (\hat{m}(t_i) - y_i)^2}{n - m}
The MSE measures the average of the squared errors, that is, the average squared difference between the estimated values and the actual data.
(2)  Root mean square error (RMSE)
RMSE = \sqrt{ \frac{\sum_{i=1}^{n} (\hat{m}(t_i) - y_i)^2}{n - m} }
The RMSE is a frequently used measure of the differences between values predicted by a model or an estimator and the values observed.
(3)  Predictive ratio risk (PRR) [22]
PRR = \sum_{i=1}^{n} \left( \frac{\hat{m}(t_i) - y_i}{\hat{m}(t_i)} \right)^2
The PRR measures the distance of the model estimates from the actual data relative to the model estimates.
(4)  Predictive power (PP) [22]
PP = \sum_{i=1}^{n} \left( \frac{\hat{m}(t_i) - y_i}{y_i} \right)^2
The PP measures the distance of the model estimates from the actual data relative to the actual data.
(5)  Akaike’s information criterion (AIC) [28]
AIC = -2 \log L + 2m
The AIC compares the capability of each model in terms of maximizing the likelihood function L, while considering the degrees of freedom. L and \log L are given as follows:
L = \prod_{i=1}^{n} \frac{(m(t_i) - m(t_{i-1}))^{\,y_i - y_{i-1}}}{(y_i - y_{i-1})!}\, e^{-(m(t_i) - m(t_{i-1}))},
\log L = \sum_{i=1}^{n} \left\{ (y_i - y_{i-1}) \ln(m(t_i) - m(t_{i-1})) - (m(t_i) - m(t_{i-1})) - \ln((y_i - y_{i-1})!) \right\}.
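A minimal sketch of the log-likelihood and AIC computation, assuming cumulative model values m(t_i) and cumulative observed failures y_i are available; the numbers below are hypothetical, not from the paper's datasets.

```python
import math

def log_likelihood(m_vals, y_vals):
    """log L for an NHPP from cumulative model values m(t_i) and cumulative
    observed failures y_i; index 0 corresponds to t_0 with y_0 = 0."""
    ll = 0.0
    for i in range(1, len(y_vals)):
        dm = m_vals[i] - m_vals[i - 1]   # expected failures in interval i
        dy = y_vals[i] - y_vals[i - 1]   # observed failures in interval i
        ll += dy * math.log(dm) - dm - math.log(math.factorial(dy))
    return ll

# Hypothetical fitted mean values and observed cumulative failures at t_0..t_4.
m_vals = [0.0, 4.8, 9.1, 12.9, 16.2]
y_vals = [0, 5, 9, 13, 16]
ll = log_likelihood(m_vals, y_vals)
aic = -2.0 * ll + 2 * 2   # m = 2 unknown parameters in this hypothetical model
```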
(6)  R-square (R2) [10]
R^2 = 1 - \frac{\sum_{i=1}^{n} (\hat{m}(t_i) - y_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}
The R² measures how successful the fit is in explaining the variation of the data.
(7)  Adjusted R-square (Adj R2) [10]
Adj\ R^2 = 1 - \frac{(1 - R^2)(n - 1)}{n - m - 1}
The Adjusted R2 is a modification to R2 that adjusts for the number of explanatory terms in a model relative to the number of data points.
(8)  Sum of absolute errors (SAE) [13]
SAE = \sum_{i=1}^{n} \left| \hat{m}(t_i) - y_i \right|
The SAE measures the sum of the absolute deviations between the model estimates and the actual data.
(9)  Mean absolute errors (MAE) [29]
MAE = \frac{\sum_{i=1}^{n} \left| \hat{m}(t_i) - y_i \right|}{n - m}
The MAE measures the mean absolute deviation between the model estimates and the actual data.
(10)  Variance [16]
Variance = \sqrt{ \frac{\sum_{i=1}^{n} (y_i - \hat{m}(t_i) - Bias)^2}{n - 1} }
The Variance measures the standard deviation of the prediction bias, where Bias is given as
Bias = \frac{1}{n} \sum_{i=1}^{n} \left( \hat{m}(t_i) - y_i \right).
(11)  Sum of squared errors for predicted value (Pre SSE) [11]
Pre\ SSE = \sum_{i=k+1}^{n} (\hat{m}(t_i) - y_i)^2
We use the data points up to time t_k to estimate the parameters of the mean value function m(t), and then measure the squared error between the estimated values and the actual data after time t_k, obtained by substituting the estimated parameters into the mean value function.
Here, \hat{m}(t_i) is the estimated cumulative number of failures at t_i for i = 1, 2, \ldots, n; y_i is the total number of failures observed at time t_i; n is the total number of observations; and m is the number of unknown parameters in the model.
The smaller the value of the nine criteria MSE, RMSE, PRR, PP, AIC, SAE, MAE, Variance, and Pre SSE, the better the fit of the model (closer to 0). Conversely, the higher the value of the two criteria R² and Adj R², the better the fit of the model (closer to 1).
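The fit criteria above can be computed directly from the model estimates and the observed data. The Python sketch below implements them as defined in this section; the estimates and observations are hypothetical, not the paper's datasets.

```python
import math

def fit_criteria(m_hat, y, num_params):
    """Goodness-of-fit criteria from Section 3.2; m_hat are model estimates
    at t_1..t_n, y the observed cumulative failures, num_params = m."""
    n = len(y)
    resid = [mh - yi for mh, yi in zip(m_hat, y)]
    mse = sum(r * r for r in resid) / (n - num_params)
    rmse = math.sqrt(mse)
    prr = sum((r / mh) ** 2 for r, mh in zip(resid, m_hat))
    pp = sum((r / yi) ** 2 for r, yi in zip(resid, y))
    sae = sum(abs(r) for r in resid)
    mae = sae / (n - num_params)
    y_bar = sum(y) / n
    r2 = 1.0 - sum(r * r for r in resid) / sum((yi - y_bar) ** 2 for yi in y)
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - num_params - 1)
    bias = sum(resid) / n
    variance = math.sqrt(sum((yi - mh - bias) ** 2
                             for yi, mh in zip(y, m_hat)) / (n - 1))
    return {"MSE": mse, "RMSE": rmse, "PRR": prr, "PP": pp, "SAE": sae,
            "MAE": mae, "R2": r2, "AdjR2": adj_r2, "Variance": variance}

# Hypothetical estimates vs. observations.
m_hat = [4.8, 9.1, 12.9, 16.2, 19.0]
y_obs = [5, 9, 13, 16, 19]
crit = fit_criteria(m_hat, y_obs, num_params=2)
```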

3.3. Confidence Interval

For each time point t_i, we can check whether the observed value is contained in the confidence interval of the mean value function, and how often the interval actually contains the value. We use the following Equation (7) to obtain the confidence interval [22] of the proposed new model and the existing NHPP software reliability models:
\hat{m}(t) \pm z_{\alpha/2} \sqrt{\hat{m}(t)}, \quad (7)
where z_{\alpha/2} is the 100(1 - \alpha/2) percentile of the standard normal distribution.
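Equation (7) translates directly into code. The sketch below computes the 95% band (α = 0.05, z_{α/2} ≈ 1.96) for a hypothetical estimate \hat{m}(t) = 25:

```python
import math

# z_{alpha/2} for alpha = 0.05 (95% confidence), i.e. the 97.5th percentile
# of the standard normal distribution.
Z_975 = 1.959964

def confidence_interval(m_hat_t, z=Z_975):
    """Confidence band m_hat(t) +/- z * sqrt(m_hat(t)) from Equation (7)."""
    half = z * math.sqrt(m_hat_t)
    return m_hat_t - half, m_hat_t + half

lo, hi = confidence_interval(25.0)   # hypothetical estimate m_hat(t_i) = 25
```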

4. Numerical Examples

4.1. Data Information

Datasets #1 and #2 were reported in [22], based on system test data for a telecommunication system. Both automated and human-involved tests were executed on multiple test beds, and the system recorded the cumulative number of faults each week. In Datasets #1 and #2, the week index runs from week 1 to 21, with 26 and 43 cumulative failures in 21 weeks, respectively. Detailed information can be found in [22]. Datasets #3, #4, and #5 were also reported in [22], based on an online communication system; here as well, the system recorded the cumulative number of faults each week. In Datasets #3, #4, and #5, the week index runs from week 1 to 12, with 26, 55, and 55 cumulative failures in 12 weeks, respectively. Detailed information can be found in [22].

4.2. Results of the Estimated Parameters

Table 2, Table 3, Table 4, Table 5 and Table 6 summarize the results of the estimated parameters using the LSE technique and the values of the ten criteria (MSE, RMSE, PRR, PP, AIC, R², Adj R², SAE, MAE, and Variance) for all 21 models in Table 1. First, for comparison of the goodness-of-fit, we obtained the parameter estimates and the criteria of all models using all datasets: t = 1, 2, …, 21 for Datasets #1 and #2, and t = 1, 2, …, 12 for Datasets #3, #4, and #5. As shown in Table 2, Table 3, Table 4, Table 5 and Table 6, the proposed new model gives the best results on the ten criteria compared with the other models.
As can be seen from Table 2, the MSE, RMSE, PRR, SAE, MAE, and Variance values for the proposed new model are the lowest among all models in Table 1: the MSE value is 0.5864, the RMSE value is 0.7658, the PRR value is 0.5024, the SAE value is 11.3783, the MAE value is 0.7111, and the Variance value is 0.6903, all smaller than the corresponding values of the other models. The R² and Adj R² values for the proposed new model are the largest among all models: R² is 0.9947 and Adj R² is 0.9929.
From Table 3, we can see that the MSE, RMSE, PRR, PP, SAE, MAE, and Variance values for the proposed new model are the lowest among all models in Table 1: the MSE value is 0.8470, the RMSE value is 0.9203, the PRR value is 0.1159, the PP value is 0.1355, the SAE value is 14.0367, the MAE value is 0.8773, and the Variance value is 0.8232, all smaller than the corresponding values of the other models. The AIC value is 77.0423, the second lowest. The R² and Adj R² values for the proposed new model are the largest among all models: R² is 0.9970 and Adj R² is 0.9960.
As can be seen from Table 4, the MSE, RMSE, PP, AIC, SAE, and Variance values for the proposed new model are the lowest among all models in Table 1: the MSE value is 4.4412, the RMSE value is 2.1074, the PP value is 0.7376, the AIC value is 54.3482, the SAE value is 15.4691, and the Variance value is 1.7189, all smaller than the corresponding values of the other models. The MAE value is 2.2099, the second lowest. The R² and Adj R² values for the proposed new model are the largest among all models: R² is 0.9682 and Adj R² is 0.9416.
As seen from Table 5, the MSE, RMSE, PRR, PP, AIC, SAE, MAE, and Variance values for the proposed new model are the lowest among all models in Table 1: the MSE value is 6.7120, the RMSE value is 2.5908, the PRR value is 0.1812, the PP value is 0.1363, the AIC value is 70.5195, the SAE value is 18.2230, the MAE value is 2.6033, and the Variance value is 2.0735. The R² and Adj R² values for the proposed new model are the largest among all models: R² is 0.9877 and Adj R² is 0.9774.
As depicted in Table 6, the MSE, RMSE, PRR, PP, SAE, MAE, and Variance values for the proposed new model are the lowest among all models in Table 1: the MSE value is 2.3671, the RMSE value is 1.5385, the PRR value is 0.04121, the PP value is 0.0333, the SAE value is 11.4867, the MAE value is 1.6410, and the Variance value is 1.2284. The AIC value is 58.7819, the second lowest. The R² and Adj R² values for the proposed new model are the largest among all models: R² is 0.9940 and Adj R² is 0.9890.
Figure 1, Figure 2, Figure 3, Figure 4 and Figure 5 show graphs of the mean value functions for all models based on Datasets #1–#5, respectively. Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10 show graphs of the 95% confidence interval of the proposed new model, which serve to confirm whether the value of the mean value function is included in the confidence interval of each time point. Figure 11, Figure 12, Figure 13, Figure 14 and Figure 15 show graphs of the relative error value of all models, which serve to confirm its ability to provide better accuracy.

4.3. Prediction Analysis

In this paper, we use Datasets #1 and #2 to compare how the predicted values of each model differ. We compare the goodness-of-fit of all models using the first 75% of each dataset and compare the predicted values of all models using the remaining 25%. For comparison of the goodness-of-fit, we obtained the parameter estimates and the criteria (MSE, RMSE, PRR, PP, AIC, R², Adj R², SAE, MAE, and Variance) for all models when t = 1, 2, …, 16, and, for comparison of the predicted values, we obtained the Pre SSE value of all models when t = 17, 18, …, 21 for Datasets #1 and #2.
First, as seen in Table 7 and Table 8 for the comparison of the goodness-of-fit, the proposed new model gives the best results on the ten criteria compared with the other models. As seen from Table 7, the MSE, RMSE, PRR, SAE, and Variance values for the proposed new model are the lowest among all models in Table 1: the MSE value is 0.5915, the RMSE value is 0.7691, the PRR value is 0.3380, the SAE value is 8.2769, and the Variance value is 0.6591. The R² and Adj R² values for the proposed new model are the largest among all models: R² is 0.9923 and Adj R² is 0.9885. As seen from Table 8, the MSE, RMSE, PRR, PP, SAE, MAE, and Variance values for the proposed new model are the lowest among all models in Table 1: the MSE value is 0.7827, the RMSE value is 0.8847, the PRR value is 0.1103, the PP value is 0.1306, the SAE value is 9.9671, the MAE value is 0.7576, and the Variance value is 0.7576. The R² and Adj R² values are the largest among all models: R² is 0.9959 and Adj R² is 0.9939.
Finally, as shown in Table 7 and Table 8 for the comparison of the predicted values, the proposed new model gives the best results on the Pre SSE criterion: its Pre SSE value is the lowest among all models in Table 1, at 2.6780 for Dataset #1 (Table 7) and 8.6532 for Dataset #2 (Table 8). Figure 16 and Figure 17 show graphs of the goodness-of-fit and prediction of the mean value functions of all models for Datasets #1 and #2, respectively.
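The 75%/25% protocol above can be sketched as follows, again with a grid-search LSE stand-in and synthetic Goel–Okumoto data rather than the paper's datasets:

```python
import math

def m_go(t, a, b):
    """Goel-Okumoto mean value function (illustrative stand-in for Table 1 models)."""
    return a * (1.0 - math.exp(-b * t))

# 21 weeks of synthetic cumulative-failure data (not the paper's datasets).
ts = list(range(1, 22))
ys = [m_go(t, 40.0, 0.12) + 0.3 * math.sin(t) for t in ts]

# Fit on the first 16 weeks (about 75% of the data) by grid-search LSE ...
train_t, train_y = ts[:16], ys[:16]

def sse(a, b):
    return sum((m_go(t, a, b) - y) ** 2 for t, y in zip(train_t, train_y))

a_hat, b_hat = min(((a, b)
                    for a in [i / 2 for i in range(60, 121)]       # a = 30.0 .. 60.0
                    for b in [i / 1000 for i in range(80, 161)]),  # b = 0.080 .. 0.160
                   key=lambda p: sse(*p))

# ... then score the remaining 5 weeks (t = 17..21) with Pre SSE.
pre_sse = sum((m_go(t, a_hat, b_hat) - y) ** 2 for t, y in zip(ts[16:], ys[16:]))
```

The same split applies unchanged to any of the mean value functions in Table 1; only `m_go` and its parameter grid change.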

5. Conclusions

Software is used in a variety of environments; however, it is typically developed and tested in a controlled environment, so the uncertainty of the operating environment must be considered. We therefore considered the uncertainty of the operating environment and a learning curve in the fault detection rate function. In this paper, we discussed a new model with an inflection factor in the fault detection rate function considering the uncertainty of operating environments, and analyzed how the predicted values of the proposed new model differ from those of other models. We provided numerical evidence of goodness-of-fit, predicted the values for all models, and compared the proposed new model with several existing NHPP software reliability models based on eleven criteria (MSE, RMSE, PRR, PP, AIC, R², Adj R², SAE, MAE, Variance, and Pre SSE). As the numerical examples show, the proposed new model has significantly better goodness-of-fit and predicts values better than the other existing models. Future work will involve broader validation of this conclusion based on recent datasets. In addition, we plan to apply Bayesian and big-data estimation methods to estimate parameters, and to consider multiple release points.

Author Contributions

K.Y.S. analyzed the data; K.Y.S. contributed analysis tools; K.Y.S. and I.H.C. wrote the paper; K.Y.S. and I.H.C. supported funding; H.P. suggested development of new proposed model; I.H.C. and H.P. designed the paper.

Funding

This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2018R1D1A1B07045734, NRF-2018R1A6A3A03011833).

Acknowledgments

This research was supported by the National Research Foundation of Korea.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Goel, A.L.; Okumoto, K. Time-dependent error-detection rate model for software reliability and other performance measures. IEEE Trans. Reliab. 1979, 28, 206–211. [Google Scholar] [CrossRef]
  2. Yamada, S.; Ohba, M.; Osaki, S. S-shaped reliability growth modeling for software fault detection. IEEE Trans. Reliab. 1983, 32, 475–484. [Google Scholar] [CrossRef]
  3. Ohba, M. Inflexion S-shaped software reliability growth models. In Stochastic Models in Reliability Theory; Osaki, S., Hatoyama, Y., Eds.; Springer: Berlin, Germany, 1984; pp. 144–162. [Google Scholar]
  4. Yamada, S.; Ohtera, H.; Narihisa, H. Software Reliability Growth Models with Testing-Effort. IEEE Trans. Reliab. 1986, 35, 19–23. [Google Scholar] [CrossRef]
  5. Yang, B.; Xie, M. A study of operational and testing reliability in software reliability analysis. Reliab. Eng. Syst. Saf. 2000, 70, 323–329. [Google Scholar] [CrossRef]
  6. Huang, C.Y.; Kuo, S.Y.; Lyu, M.R.; Lo, J.H. Quantitative software reliability modeling from testing to operation. In Proceedings of the International Symposium on Software Reliability Engineering, IEEE, Los Alamitos, CA, USA, 8–11 October 2000; pp. 72–82. [Google Scholar]
  7. Zhang, X.; Jeske, D.; Pham, H. Calibrating software reliability models when the test environment does not match the user environment. Appl. Stoch. Models Bus. Ind. 2002, 18, 87–99. [Google Scholar] [CrossRef]
  8. Teng, X.; Pham, H. A new methodology for predicting software reliability in the random field environments. IEEE Trans. Reliab. 2006, 55, 458–468. [Google Scholar] [CrossRef]
  9. Inoue, S.; Ikeda, J.; Yamada, S. Bivariate change-point modeling for software reliability assessment with uncertainty of testing-environment factor. Ann. Oper. Res. 2016, 244, 209–220. [Google Scholar] [CrossRef]
  10. Li, Q.; Pham, H. A testing-coverage software reliability model considering fault removal efficiency and error generation. PLoS ONE 2017, 12, e0181524. [Google Scholar] [CrossRef] [PubMed]
  11. Li, Q.; Pham, H. NHPP software reliability model considering the uncertainty of operating environments with imperfect debugging and testing coverage. Appl. Math. Model. 2017, 51, 68–85. [Google Scholar] [CrossRef]
  12. Song, K.Y.; Chang, I.H.; Pham, H. A Three-parameter fault-detection software reliability model with the uncertainty of operating environments. J. Syst. Sci. Syst. Eng. 2017, 26, 121–132. [Google Scholar] [CrossRef]
  13. Song, K.Y.; Chang, I.H.; Pham, H. A software reliability model with a Weibull fault detection rate function subject to operating environments. Appl. Sci. 2017, 7, 983. [Google Scholar] [CrossRef]
  14. Song, K.Y.; Chang, I.H.; Pham, H. An NHPP software reliability model with S-shaped growth curve subject to random operating environments and optimal release time. Appl. Sci. 2017, 7, 1304. [Google Scholar] [CrossRef]
  15. Song, K.Y.; Chang, I.H.; Pham, H. Optimal Release Time and Sensitivity Analysis Using a New NHPP Software Reliability Model with Probability of Fault Removal Subject to Operating Environments. Appl. Sci. 2018, 8, 714. [Google Scholar] [CrossRef]
  16. Zhu, M.; Pham, H. A two-phase software reliability modeling involving with software fault dependency and imperfect fault removal. Comput. Lang. Syst. Struct. 2018, 53, 27–42. [Google Scholar] [CrossRef]
  17. Yamada, S.; Tokuno, K.; Osaki, S. Imperfect debugging models with fault introduction rate for software reliability assessment. Int. J. Syst. Sci. 1992, 23, 2241–2252. [Google Scholar] [CrossRef]
  18. Hossain, S.A.; Dahiya, R.C. Estimating the parameters of a non-homogeneous Poisson-process model for software reliability. IEEE Trans. Reliab. 1993, 42, 604–612. [Google Scholar] [CrossRef]
  19. Pham, H.; Nordmann, L.; Zhang, X. A general imperfect software debugging model with S-shaped fault detection rate. IEEE Trans. Reliab. 1999, 48, 169–175. [Google Scholar] [CrossRef]
  20. Pham, H.; Zhang, X. An NHPP software reliability models and its comparison. Int. J. Reliab. Qual. Saf. Eng. 1997, 4, 269–282. [Google Scholar] [CrossRef]
  21. Zhang, X.M.; Teng, X.L.; Pham, H. Considering fault removal efficiency in software reliability assessment. IEEE Trans. Syst. Man. Cybern. Part Syst. Hum. 2003, 33, 114–120. [Google Scholar] [CrossRef]
  22. Pham, H. System Software Reliability; Springer: London, UK, 2006. [Google Scholar]
  23. Pham, H. An Imperfect-debugging Fault-detection Dependent-parameter Software. Int. J. Automat. Comput. 2007, 4, 325–328. [Google Scholar] [CrossRef]
  24. Kapur, P.K.; Pham, H.; Anand, S.; Yadav, K. A unified approach for developing software reliability growth models in the presence of imperfect debugging and error generation. IEEE Trans. Reliab. 2011, 60, 331–340. [Google Scholar] [CrossRef]
  25. Roy, P.; Mahapatra, G.S.; Dey, K.N. An NHPP software reliability growth model with imperfect debugging and error generation. Int. J. Reliab. Qual. Saf. Eng. 2014, 21, 1–3. [Google Scholar] [CrossRef]
  26. Pham, H. A new software reliability model with Vtub-Shaped fault detection rate and the uncertainty of operating environments. Optimization 2014, 63, 1481–1490. [Google Scholar] [CrossRef]
  27. Chang, I.H.; Pham, H.; Lee, S.W.; Song, K.Y. A testing-coverage software reliability model with the uncertainty of operation environments. Int. J. Syst. Sci. Oper. Logist. 2014, 1, 220–227. [Google Scholar]
  28. Akaike, H. A new look at statistical model identification. IEEE Trans. Autom. Control 1974, 19, 716–719. [Google Scholar] [CrossRef]
  29. Anjum, M.; Haque, M.A.; Ahmad, N. Analysis and ranking of software reliability models based on weighted criteria value. Int. J. Inf. Technol. Comput. Sci. 2013, 2, 1–14. [Google Scholar] [CrossRef]
Figure 1. Mean value functions of all models for Dataset #1.
Figure 2. Mean value functions of all models for Dataset #2.
Figure 3. Mean value functions of all models for Dataset #3.
Figure 4. Mean value functions of all models for Dataset #4.
Figure 5. Mean value functions of all models for Dataset #5.
Figure 6. 95% confidence interval of the proposed new model for Dataset #1.
Figure 7. 95% confidence interval of the proposed new model for Dataset #2.
Figure 8. 95% confidence interval of the proposed new model for Dataset #3.
Figure 9. 95% confidence interval of the proposed new model for Dataset #4.
Figure 10. 95% confidence interval of the proposed new model for Dataset #5.
Figure 11. Relative error value of all models for Dataset #1.
Figure 12. Relative error value of all models for Dataset #2.
Figure 13. Relative error value of all models for Dataset #3.
Figure 14. Relative error value of all models for Dataset #4.
Figure 15. Relative error value of all models for Dataset #5.
Figure 16. Goodness-of-fit and prediction of mean value functions of all models for Dataset #1.
Figure 17. Goodness-of-fit and prediction of mean value functions of all models for Dataset #2.
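Figures 6–10 plot pointwise 95% confidence intervals around the proposed model's fitted mean value function. The construction below is a minimal sketch assuming the convention commonly used for NHPP models, in which the variance of the cumulative failure count N(t) is approximated by the Poisson variance Var[N(t)] = m(t); the function name and the illustrative inputs are ours, not the paper's.

```python
import math

def nhpp_confidence_band(m_hat, z=1.96):
    """Pointwise confidence bounds around an NHPP mean value function.
    Assumes the Poisson variance approximation Var[N(t)] = m(t), giving
    m(t) +/- z*sqrt(m(t)); the lower bound is clipped at zero faults."""
    lower = [max(0.0, m - z * math.sqrt(m)) for m in m_hat]
    upper = [m + z * math.sqrt(m) for m in m_hat]
    return lower, upper

# Illustrative fitted mean values at three observation times:
low, high = nhpp_confidence_band([1.0, 4.0, 16.0])
```

An observed cumulative failure count that stays inside such a band is the visual check applied in Figures 6–10.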
Table 1. Software reliability models.

No. | Model | m(t)
1 | Goel–Okumoto (GO) [1] | $m(t) = a(1 - e^{-bt})$
2 | Yamada et al. (Y-DS) [2] | $m(t) = a(1 - (1 + bt)e^{-bt})$
3 | Ohba (O-IS) [3] | $m(t) = \frac{a(1 - e^{-bt})}{1 + \beta e^{-bt}}$
4 | Yamada et al. (Y-Exp) [4] | $m(t) = a(1 - e^{-\gamma\alpha(1 - e^{-\beta t})})$
5 | Yamada et al. (Y-Ray) [4] | $m(t) = a(1 - e^{-\gamma\alpha(1 - e^{-\beta t^{2}/2})})$
6 | Yamada et al. (Y-ID 1) [17] | $m(t) = \frac{ab}{\alpha + b}(e^{\alpha t} - e^{-bt})$
7 | Yamada et al. (Y-ID 2) [17] | $m(t) = a(1 - e^{-bt})\left(1 - \frac{\alpha}{b}\right) + \alpha a t$
8 | Hossain–Dahiya (HD-GO) [18] | $m(t) = \log\left[\frac{e^{a} + c}{e^{ae^{-bt}} + c}\right]$
9 | Pham et al. (P-GID 1) [19] | $m(t) = \frac{a(1 - e^{-bt})\left(1 - \frac{\alpha}{b}\right) + \alpha a t}{1 + \beta e^{-bt}}$
10 | Pham–Zhang (P-GID 2) [20] | $m(t) = \frac{1}{1 + \beta e^{-bt}}\left[(c + a)(1 - e^{-bt}) - \frac{ab}{b - \alpha}(e^{-\alpha t} - e^{-bt})\right]$
11 | Zhang et al. (Z-FR) [21] | $m(t) = \frac{a}{p - \beta}\left[1 - \left(\frac{(1 + \alpha)e^{-bt}}{1 + \alpha e^{-bt}}\right)^{\frac{c}{b}(p - \beta)}\right]$
12 | Teng–Pham (TP) [8] | $m(t) = \frac{a}{p - q}\left[1 - \left(\frac{\beta}{\beta + (p - q)\ln\frac{c + e^{bt}}{c + 1}}\right)^{\alpha}\right]$
13 | Pham–Zhang IFD (PZ-IFD) [22] | $m(t) = a(1 - e^{-bt})(1 + (b + d)t + bdt^{2})$
14 | Pham (DP 1) [23] | $m(t) = \alpha(\gamma t + 1)(\gamma t - 1 + e^{-\gamma t})$
15 | Pham (DP 2) [23] | $m(t) = m_{0}\left(\frac{\gamma t + 1}{\gamma t_{0} + 1}\right)e^{-\gamma(t - t_{0})} + \alpha(\gamma t + 1)\left(\gamma t - 1 + (1 - \gamma t_{0})e^{-\gamma(t - t_{0})}\right)$
16 | Kapur et al. (SRGM-3) [24] | $m(t) = \frac{A}{1 - \alpha}\left[1 - \left(\left(1 + bt + \frac{b^{2}t^{2}}{2}\right)e^{-bt}\right)^{p(1 - \alpha)}\right]$
17 | Roy et al. (R-M-D) [25] | $m(t) = a\alpha(1 - e^{-bt}) - \frac{ab}{b - \beta}(e^{-\beta t} - e^{-bt})$
18 | Chang et al. (C-TC) [27] | $m(t) = N\left[1 - \left(\frac{\beta}{\beta + (at)^{b}}\right)^{\alpha}\right]$
19 | Pham (P-Vtub) [26] | $m(t) = N\left[1 - \left(\frac{\beta}{\beta + a^{t^{b}} - 1}\right)^{\alpha}\right]$
20 | Song et al. (S-3PFD) [12] | $m(t) = N\left[1 - \frac{\beta}{\beta + \frac{a}{b}\ln\left(\frac{(1 + c)e^{bt}}{1 + ce^{bt}}\right)}\right]$
21 | Proposed New Model | $m(t) = N\left[1 - \left(\frac{\beta}{\beta + \ln\left(\frac{a + e^{bt}}{1 + a}\right)}\right)^{\alpha}\right]$
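The proposed model's mean value function (row 21 of Table 1) can be evaluated directly. A minimal sketch; the parameter values plugged in are the Dataset #1 estimates reported in Table 2, and the function name is ours.

```python
import math

def m_proposed(t, N, a, b, alpha, beta):
    """Mean value function of the proposed NHPP model:
    m(t) = N * (1 - (beta / (beta + ln((a + exp(b*t)) / (1 + a))))**alpha)."""
    g = math.log((a + math.exp(b * t)) / (1 + a))  # inflection-type detection term, 0 at t = 0
    return N * (1.0 - (beta / (beta + g)) ** alpha)

# Dataset #1 estimates from Table 2 (row 21):
params = dict(N=47.7965, a=108232.8195, b=1.0047, alpha=0.2176, beta=155.5011)
print(m_proposed(0, **params))  # 0.0: no failures expected at t = 0
```

Since the log term is zero at t = 0 and grows without bound, m(t) rises monotonically from 0 toward the total fault content N.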
Table 2. Results of Model Parameter Estimation and Criteria for Comparison from Dataset #1.

No. | Model | Parameter Estimation | MSE | RMSE | PRR | PP | AIC | R² | Adj. R² | SAE | MAE | Variation
1 | GO | â = 3,923,854.7292, b̂ = 3.2 × 10⁻⁷ | 3.8672 | 1.9665 | 1.3107 | 4.7001 | 65.3539 | 0.9582 | 0.9535 | 33.7895 | 1.7784 | 2.0637
2 | Y-DS | â = 39.82198, b̂ = 0.1104 | 1.4938 | 1.2222 | 12.0730 | 0.9675 | 63.9400 | 0.9838 | 0.9820 | 19.9951 | 1.0524 | 1.1926
3 | O-IS | â = 26.6845, b̂ = 0.2918, β̂ = 21.6851 | 0.6745 | 0.8213 | 2.8475 | 0.6556 | 64.1770 | 0.9931 | 0.9919 | 12.9369 | 0.7187 | 0.7996
4 | Y-Exp | â = 92,075.4308, α̂ = 0.3562, β̂ = 0.0005147, γ̂ = 0.07514 | 4.3392 | 2.0831 | 1.3380 | 4.8827 | 69.3634 | 0.9580 | 0.9475 | 33.9888 | 1.9993 | 2.1300
5 | Y-Ray | â = 29.0366, α̂ = 5.9677, β̂ = 0.000374, γ̂ = 4.8158 | 1.1421 | 1.0687 | 30.3583 | 1.2435 | 67.6217 | 0.9889 | 0.9862 | 16.6930 | 0.9819 | 0.9920
6 | Y-ID 1 | â = 1091.828, b̂ = 0.00098, α̂ = 0.0209 | 3.4470 | 1.8566 | 0.9414 | 2.7339 | 67.6756 | 0.9647 | 0.9584 | 30.9144 | 1.7175 | 1.8285
7 | Y-ID 2 | â = 2.3351, b̂ = 0.2451, α̂ = 0.6469 | 2.5766 | 1.6052 | 0.6847 | 0.9081 | 66.5843 | 0.9736 | 0.9689 | 25.0483 | 1.3916 | 1.5265
8 | HD-GO | â = 709.7827, b̂ = 0.00181, ĉ = 0.0998 | 4.1786 | 2.0442 | 1.3722 | 5.0982 | 67.3785 | 0.9572 | 0.9496 | 34.3708 | 1.9095 | 2.1867
9 | P-GID 1 | â = 10.6281, b̂ = 0.37304, α̂ = 0.0817, β̂ = 17.0709 | 1.2140 | 1.1018 | 8.5407 | 1.1317 | 66.6636 | 0.9883 | 0.9853 | 17.7928 | 1.0466 | 1.0894
10 | P-GID 2 | â = 7.1732, b̂ = 0.2784, α̂ = 0.2249, β̂ = 16.7796, ĉ = 19.9096 | 0.8041 | 0.8967 | 3.4790 | 0.7162 | 68.1690 | 0.9927 | 0.9902 | 13.3027 | 0.8314 | 0.8203
11 | Z-FR | â = 0.5193, b̂ = 0.4377, α̂ = 5.5458, β̂ = 6.9059, ĉ = 4.6241, p̂ = 6.9172 | 1.7715 | 1.3310 | 1.8532 | 0.6332 | 70.9195 | 0.9849 | 0.9784 | 19.2629 | 1.2842 | 1.1597
12 | TP | â = 6.7122, b̂ = 0.1818, α̂ = 0.0687, β̂ = 0.053, ĉ = 0.7196, p̂ = 0.1544, q̂ = 0.1534 | 3.6887 | 1.9206 | 0.7722 | 1.9488 | 74.7613 | 0.9706 | 0.9548 | 27.8156 | 1.9868 | 1.6781
13 | PZ-IFD | â = 6.3355, b̂ = 0.1287, d̂ = 0.0129 | 2.8339 | 1.6834 | 0.7156 | 1.6161 | 66.8070 | 0.9710 | 0.9658 | 27.3902 | 1.5217 | 1.6445
14 | P-DP 1 | α̂ = 2.7 × 10⁻⁶, γ̂ = 165.8689 | 14.5826 | 3.8187 | 172.8372 | 3.7900 | 75.9409 | 0.8423 | 0.8248 | 65.7605 | 3.4611 | 4.7568
15 | P-DP 2 | α̂ = 6879.0649, γ̂ = 0.00408, t̂₀ = 0.3483, m̂₀ = 3.9986 | 9.1284 | 3.0213 | 2.1272 | 22.1461 | 78.5680 | 0.9117 | 0.8896 | 48.5622 | 2.8566 | 2.7857
16 | K-SRGM 3 | Â = 24.989, b̂ = 0.1385, α̂ = 0.1012, p̂ = 3.5204 | 1.2295 | 1.1088 | 768.1366 | 2.3759 | 70.3312 | 0.9881 | 0.9851 | 17.7006 | 1.0412 | 1.1036
17 | R-M-D | â = 40.2018, b̂ = 0.1152, α̂ = 0.9319, β̂ = 0.1402 | 2.0059 | 1.4163 | 6,379,037.0905 | 1.7234 | 80.0129 | 0.9806 | 0.9757 | 22.3624 | 1.3154 | 1.5591
18 | C-TC | â = 0.00432, b̂ = 2.234, α̂ = 9959.1698, β̂ = 15.2504, N̂ = 26.8334 | 1.0939 | 1.0459 | 102.3904 | 1.7481 | 70.5589 | 0.9900 | 0.9867 | 16.0422 | 1.0026 | 0.9842
19 | P-Vtub | â = 1.0985, b̂ = 1.2978, α̂ = 1.5176, β̂ = 11.3848, N̂ = 25.7412 | 0.7178 | 0.8472 | 4.3863 | 0.7200 | 69.0114 | 0.9935 | 0.9913 | 12.7522 | 0.7970 | 0.7803
20 | S-3PFD | â = 0.038, b̂ = 0.292, β̂ = 0.002, N̂ = 26.889, ĉ = 1488.598 | 0.7590 | 0.8712 | 2.8538 | 0.6565 | 68.1694 | 0.9931 | 0.9908 | 12.9491 | 0.8093 | 0.7980
21 | New Model | â = 108,232.8195, b̂ = 1.0047, α̂ = 0.2176, β̂ = 155.5011, N̂ = 47.7965 | 0.5864 | 0.7658 | 0.5024 | 1.2025 | 66.7301 | 0.9947 | 0.9929 | 11.3783 | 0.7111 | 0.6906
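Tables 2–6 rank the models on ten criteria. The sketch below computes a few of them, assuming the definitions conventional in this literature (n observations y_k, fitted values m(t_k), p estimated parameters; MSE divides by n − p to penalize extra parameters); the toy numbers are ours, not from the datasets.

```python
def fit_criteria(y, m, p):
    """A few of the comparison criteria from Tables 2-6, assuming their
    conventional definitions: y = observed cumulative failure counts,
    m = fitted mean values at the same times, p = number of parameters."""
    n = len(y)
    sse = sum((mi - yi) ** 2 for yi, mi in zip(y, m))
    ybar = sum(y) / n
    return {
        "MSE": sse / (n - p),  # fit error penalized by model size
        "PRR": sum(((mi - yi) / mi) ** 2 for yi, mi in zip(y, m)),  # error relative to the model
        "PP":  sum(((mi - yi) / yi) ** 2 for yi, mi in zip(y, m)),  # error relative to the data
        "R2":  1.0 - sse / sum((yi - ybar) ** 2 for yi in y),
        "SAE": sum(abs(mi - yi) for yi, mi in zip(y, m)),
    }

crit = fit_criteria(y=[3, 5, 8, 12], m=[2.5, 5.5, 8.0, 11.5], p=2)
```

Lower is better for all of these except R², which approaches 1 for a good fit.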
Table 3. Results of Model Parameter Estimation and Criteria for Comparison from Dataset #2.

No. | Model | Parameter Estimation | MSE | RMSE | PRR | PP | AIC | R² | Adj. R² | SAE | MAE | Variation
1 | GO | â = 5899.3694, b̂ = 0.0036 | 6.6537 | 2.5795 | 0.6859 | 1.0870 | 78.3163 | 0.9718 | 0.9687 | 43.1731 | 2.2723 | 2.6349
2 | Y-DS | â = 62.3045, b̂ = 0.1185 | 3.2732 | 1.8092 | 44.3612 | 1.4298 | 81.0873 | 0.9862 | 0.9846 | 32.5216 | 1.7117 | 1.8123
3 | O-IS | â = 46.5437, b̂ = 0.2409, β̂ = 12.2242 | 1.8704 | 1.3676 | 5.9546 | 0.8965 | 76.9477 | 0.9925 | 0.9912 | 21.9605 | 1.2200 | 1.3693
4 | Y-Exp | â = 616.2702, α̂ = 0.1998, β̂ = 0.00048, γ̂ = 36.8345 | 7.9598 | 2.8213 | 0.7015 | 1.1835 | 82.2610 | 0.9699 | 0.9623 | 44.2608 | 2.6036 | 2.7046
5 | Y-Ray | â = 47.0086, α̂ = 6.2232, β̂ = 0.00029, γ̂ = 6.3708 | 3.2147 | 1.7929 | 111.4178 | 1.8451 | 85.7632 | 0.9878 | 0.9848 | 27.7602 | 1.6330 | 1.8734
6 | Y-ID 1 | â = 647.4658, b̂ = 0.00295, α̂ = 0.0158 | 6.2463 | 2.4993 | 0.6833 | 0.7512 | 80.9818 | 0.9750 | 0.9705 | 40.9177 | 2.2732 | 2.4195
7 | Y-ID 2 | â = 68.4424, b̂ = 0.0265, α̂ = 0.0511 | 5.9996 | 2.4494 | 0.7308 | 0.6707 | 80.8924 | 0.9760 | 0.9717 | 39.9577 | 2.2199 | 2.3415
8 | HD-GO | â = 709.7826, b̂ = 0.00307, ĉ = 0.7659 | 7.3398 | 2.7092 | 0.7035 | 1.2045 | 80.2859 | 0.9706 | 0.9654 | 44.4142 | 2.4675 | 2.7776
9 | P-GID 1 | â = 46.4854, b̂ = 0.2410, α̂ = 0.000067, β̂ = 12.2127 | 1.9814 | 1.4076 | 5.9524 | 0.8964 | 78.9480 | 0.9925 | 0.9906 | 21.9749 | 1.2926 | 1.3673
10 | P-GID 2 | â = 0.000032, b̂ = 0.2409, α̂ = 1.6139, β̂ = 12.2219, ĉ = 46.5445 | 2.1042 | 1.4506 | 5.9510 | 0.8963 | 80.9477 | 0.9925 | 0.9900 | 21.9644 | 1.3728 | 1.3683
11 | Z-FR | â = 283.299, b̂ = 0.173997, α̂ = 520.118, β̂ = 0.27156, ĉ = 1.8366, p̂ = 6.9716 | 1.8601 | 1.3639 | 3.9072 | 0.7296 | 84.2304 | 0.9938 | 0.9911 | 20.1020 | 1.3401 | 1.2316
12 | TP | â = 192.3412, b̂ = 0.3131, α̂ = 0.029, β̂ = 0.1954, ĉ = 10.0202, p̂ = 0.5938, q̂ = 0.3842 | 3.5100 | 1.8735 | 6.2127 | 0.9182 | 86.2875 | 0.9891 | 0.9832 | 28.6293 | 2.0450 | 1.6235
13 | PZ-IFD | â = 13.1083, b̂ = 0.1154, d̂ = 0.0057 | 5.1126 | 2.2611 | 1.0650 | 0.6546 | 80.1375 | 0.9795 | 0.9759 | 36.4678 | 2.0260 | 2.1645
14 | P-DP 1 | α̂ = 7.6 × 10⁻⁷, γ̂ = 403.0753 | 43.7600 | 6.6151 | 613.6285 | 4.5758 | 104.7474 | 0.8149 | 0.7943 | 122.0195 | 6.4221 | 8.8176
15 | P-DP 2 | α̂ = 3343.5848, γ̂ = 0.00728, t̂₀ = 0.3771, m̂₀ = 7.7754 | 21.3006 | 4.6153 | 1.4148 | 5.4631 | 96.7740 | 0.9194 | 0.8992 | 76.4876 | 4.4993 | 4.2597
16 | K-SRGM 3 | Â = 21.2662, b̂ = 0.3874, α̂ = 0.6255, p̂ = 0.8962 | 3.9525 | 1.9881 | 439.7196 | 2.0575 | 90.5520 | 0.9850 | 0.9813 | 32.8146 | 1.9303 | 1.9720
17 | R-M-D | â = 157.3012, b̂ = 0.0144, α̂ = 1.2327, β̂ = 0.3454 | 4.6378 | 2.1536 | 6.7102 | 0.9139 | 83.3777 | 0.9824 | 0.9781 | 34.4805 | 2.0283 | 2.0045
18 | C-TC | â = 0.00607, b̂ = 1.864, α̂ = 9445.7865, β̂ = 93.4027, N̂ = 48.9629 | 3.3003 | 1.8167 | 57.3169 | 1.5788 | 86.4442 | 0.9882 | 0.9843 | 28.8335 | 1.8021 | 1.7301
19 | P-Vtub | â = 1.2575, b̂ = 0.98699, α̂ = 1.4559, β̂ = 16.4583, N̂ = 45.2916 | 1.9851 | 1.4089 | 4.7454 | 0.8092 | 80.7744 | 0.9929 | 0.9906 | 21.7860 | 1.3616 | 1.3047
20 | S-3PFD | â = 3.078, b̂ = 0.2410, β̂ = 0.170, N̂ = 46.8430, ĉ = 999.493 | 2.1046 | 1.4507 | 5.9567 | 0.8967 | 80.9477 | 0.9925 | 0.9900 | 21.9661 | 1.3729 | 1.3680
21 | New Model | â = 9198.8054, b̂ = 0.7274, α̂ = 0.2584, β̂ = 5.9777, N̂ = 50.2841 | 0.8470 | 0.9203 | 0.1159 | 0.1355 | 77.0423 | 0.9970 | 0.9960 | 14.0367 | 0.8773 | 0.8232
Table 4. Results of Model Parameter Estimation and Criteria for Comparison from Dataset #3.

No. | Model | Parameter Estimation | MSE | RMSE | PRR | PP | AIC | R² | Adj. R² | SAE | MAE | Variation
1 | GO | â = 464.5247, b̂ = 0.00536 | 9.0920 | 3.0153 | 0.9284 | 1.4443 | 62.4128 | 0.9069 | 0.8862 | 28.2200 | 2.8220 | 2.8899
2 | Y-DS | â = 35.8316, b̂ = 0.2396 | 7.0034 | 2.6464 | 13.4415 | 1.6025 | 63.3503 | 0.9283 | 0.9124 | 24.4671 | 2.4467 | 2.5391
3 | O-IS | â = 26.9254, b̂ = 0.6204, β̂ = 30.0163 | 5.1138 | 2.2614 | 22.9009 | 1.4436 | 57.8013 | 0.9529 | 0.9352 | 18.2801 | 2.0311 | 2.1767
4 | Y-Exp | â = 6456.5057, α̂ = 0.1461, β̂ = 0.00495, γ̂ = 0.5332 | 11.3652 | 3.3712 | 0.9284 | 1.4442 | 66.4149 | 0.9069 | 0.8537 | 28.2211 | 3.5276 | 2.8900
5 | Y-Ray | â = 28.4774, α̂ = 23.884, β̂ = 2.3 × 10⁻⁶, γ̂ = 746.3198 | 7.4818 | 2.7353 | 36.8584 | 1.5855 | 66.0682 | 0.9387 | 0.9037 | 21.3831 | 2.6729 | 2.5201
6 | Y-ID 1 | â = 316.6384, b̂ = 0.00789, α̂ = 0.0016 | 10.1040 | 3.1787 | 0.9243 | 1.4515 | 64.3742 | 0.9069 | 0.8720 | 28.1806 | 3.1312 | 2.8893
7 | Y-ID 2 | â = 4.4246, b̂ = 0.4623, α̂ = 0.5756 | 9.9746 | 3.1583 | 1.2645 | 1.2271 | 64.9967 | 0.9081 | 0.8736 | 29.0765 | 3.2307 | 2.8578
8 | HD-GO | â = 446.4583, b̂ = 0.00558, ĉ = 1 × 10⁻⁹ | 10.1022 | 3.1784 | 0.9279 | 1.4447 | 64.4008 | 0.9069 | 0.8720 | 28.2149 | 3.1350 | 2.8889
9 | P-GID 1 | â = 27.1316, b̂ = 0.5709, α̂ = 2.1 × 10⁻¹¹, β̂ = 22.163 | 5.8233 | 2.4132 | 14.4703 | 1.3672 | 59.5427 | 0.9523 | 0.9250 | 18.2613 | 2.2827 | 2.1505
10 | P-GID 2 | â = 1 × 10⁻¹⁰, b̂ = 0.6204, α̂ = 0.0387, β̂ = 30.0163, ĉ = 26.9254 | 6.5749 | 2.5642 | 22.9009 | 1.4436 | 61.8013 | 0.9529 | 0.9136 | 18.2801 | 2.6114 | 2.1767
11 | Z-FR | â = 46.9742, b̂ = 0.1457, α̂ = 0.1382, β̂ = 0.1987, ĉ = 0.0596, p̂ = 0.4732 | 14.9849 | 3.8710 | 0.9416 | 1.4419 | 70.2105 | 0.9079 | 0.7975 | 28.0342 | 4.6724 | 2.8709
12 | TP | â = 107.787, b̂ = 2.6 × 10⁻⁸, α̂ = 0.2044, β̂ = 1.1 × 10⁻⁷, ĉ = 1.1232, p̂ = 0.00142, q̂ = 0.000061 | 18.2859 | 4.2762 | 0.9453 | 1.4258 | 72.7010 | 0.9064 | 0.7426 | 28.3832 | 5.6766 | 2.9222
13 | PZ-IFD | â = 281.2703, b̂ = 0.0083, d̂ = 0.000031 | 10.2029 | 3.1942 | 1.0336 | 1.2844 | 64.9647 | 0.9060 | 0.8707 | 29.1055 | 3.2339 | 2.8893
14 | P-DP 1 | α̂ = 2.0 × 10⁻⁶, γ̂ = 1115.343 | 34.4416 | 5.8687 | 246.9020 | 2.6126 | 86.7862 | 0.6474 | 0.5690 | 54.1442 | 5.4144 | 6.8563
15 | P-DP 2 | α̂ = 2070.3183, γ̂ = 0.0125, t̂₀ = 8.7785, m̂₀ = 19.516 | 21.2667 | 4.6116 | 1.0306 | 1.5407 | 72.4094 | 0.8258 | 0.7263 | 41.3919 | 5.1740 | 3.9346
16 | K-SRGM 3 | Â = 27.2536, b̂ = 0.1691, α̂ = 0.00, p̂ = 9.476 | 7.2970 | 2.7013 | 444.6336 | 1.9569 | 71.8813 | 0.9402 | 0.9061 | 20.6929 | 2.5866 | 2.5289
17 | R-M-D | â = 35.969, b̂ = 0.24269, α̂ = 0.9901, β̂ = 0.2427 | 8.7525 | 2.9585 | 15.6161 | 1.6246 | 67.6920 | 0.9283 | 0.8873 | 24.4048 | 3.0506 | 2.5408
18 | C-TC | â = 0.2739, b̂ = 2.604, α̂ = 12.2099, β̂ = 50.5848, N̂ = 26.7229 | 7.9859 | 2.8259 | 302.4876 | 1.9056 | 71.5591 | 0.9428 | 0.8951 | 19.7573 | 2.8225 | 2.4696
19 | P-Vtub | â = 1.9764, b̂ = 0.8427, α̂ = 33.7789, β̂ = 804.4101, N̂ = 25.8336 | 6.1408 | 2.4781 | 9.4191 | 1.1932 | 59.4576 | 0.9560 | 0.9193 | 18.0194 | 2.5742 | 2.0513
20 | S-3PFD | â = 10.6227, b̂ = 0.6211, β̂ = 1.3819, N̂ = 27.808, ĉ = 397.0025 | 6.6075 | 2.5705 | 23.0454 | 1.4412 | 61.9886 | 0.9526 | 0.9132 | 18.3461 | 2.6209 | 2.1986
21 | New Model | â = 9643.4774, b̂ = 1.3046, α̂ = 0.3131, β̂ = 1.4073, N̂ = 27.70003 | 4.4412 | 2.1074 | 1.7190 | 0.7376 | 54.3482 | 0.9682 | 0.9416 | 15.4691 | 2.2099 | 1.7189
Table 5. Results of Model Parameter Estimation and Criteria for Comparison from Dataset #4.

No. | Model | Parameter Estimation | MSE | RMSE | PRR | PP | AIC | R² | Adj. R² | SAE | MAE | Variation
1 | GO | â = 270.1056, b̂ = 0.02075 | 18.3634 | 4.2853 | 0.3249 | 0.4503 | 80.3676 | 0.9519 | 0.9412 | 41.4895 | 4.1490 | 4.1060
2 | Y-DS | â = 69.8210, b̂ = 0.2627 | 14.0418 | 3.7472 | 6.7145 | 0.8711 | 82.3170 | 0.9632 | 0.9550 | 35.4185 | 3.5418 | 3.6596
3 | O-IS | â = 59.0235, b̂ = 0.4417, β̂ = 9.7336 | 10.8677 | 3.2966 | 2.6138 | 0.6298 | 75.1254 | 0.9744 | 0.9647 | 28.1022 | 3.1225 | 3.0973
4 | Y-Exp | â = 5981.4323, α̂ = 0.0778, β̂ = 0.0199, γ̂ = 0.6051 | 22.9599 | 4.7916 | 0.3250 | 0.4498 | 84.3686 | 0.9518 | 0.9243 | 41.5089 | 5.1886 | 4.1045
5 | Y-Ray | â = 57.701, α̂ = 58.8436, β̂ = 2.6 × 10⁻⁸, γ̂ = 30,017.6386 | 16.6108 | 4.0756 | 20.8256 | 1.0967 | 86.7089 | 0.9652 | 0.9453 | 31.4668 | 3.9334 | 3.9710
6 | Y-ID 1 | â = 270.1054, b̂ = 0.02075, α̂ = 1.4 × 10⁻⁸ | 20.4038 | 4.5171 | 0.3249 | 0.4503 | 82.3676 | 0.9519 | 0.9338 | 41.4896 | 4.6100 | 4.1060
7 | Y-ID 2 | â = 270.1041, b̂ = 0.0208, α̂ = 9.9 × 10⁻⁸ | 20.4110 | 4.5179 | 0.3238 | 0.4546 | 82.3842 | 0.9518 | 0.9338 | 41.3885 | 4.5987 | 4.1220
8 | HD-GO | â = 270.1056, b̂ = 0.0208, ĉ = 1 × 10⁻⁹ | 20.4111 | 4.5179 | 0.3238 | 0.4546 | 82.3843 | 0.9518 | 0.9338 | 41.3882 | 4.5987 | 4.1221
9 | P-GID 1 | â = 59.0239, b̂ = 0.4417, α̂ = 1.4 × 10⁻¹⁰, β̂ = 9.7338 | 12.2261 | 3.4966 | 2.6139 | 0.6298 | 77.1255 | 0.9744 | 0.9597 | 28.1025 | 3.5128 | 3.0972
10 | P-GID 2 | â = 1 × 10⁻¹⁰, b̂ = 0.4418, α̂ = 0.1819, β̂ = 9.7336, ĉ = 59.0235 | 13.9727 | 3.7380 | 2.6113 | 0.6298 | 79.1245 | 0.9744 | 0.9530 | 28.0991 | 4.0142 | 3.0944
11 | Z-FR | â = 88.285, b̂ = 0.2295, α̂ = 1.1342, β̂ = 0.1063, ĉ = 0.1169, p̂ = 1.0616 | 25.7905 | 5.0784 | 0.4037 | 0.4399 | 85.8969 | 0.9594 | 0.9107 | 38.1771 | 6.3629 | 3.7579
12 | TP | â = 5.5526, b̂ = 0.1157, α̂ = 2.7637, β̂ = 0.2783, ĉ = 0.1523, p̂ = 0.0844, q̂ = 0.0632 | 36.2811 | 6.0234 | 0.3289 | 0.4475 | 90.2214 | 0.9524 | 0.8692 | 41.3061 | 8.2612 | 4.0814
13 | PZ-IFD | â = 13.351, b̂ = 0.3015, d̂ = 1 × 10⁻¹⁰ | 18.9582 | 4.3541 | 0.5563 | 0.4514 | 82.7552 | 0.9553 | 0.9385 | 41.5660 | 4.6184 | 3.9404
14 | P-DP 1 | α̂ = 5.5 × 10⁻⁸, γ̂ = 3036.6077 | 146.2340 | 12.0927 | 193.5342 | 2.9279 | 132.5762 | 0.6166 | 0.5314 | 120.1415 | 12.0141 | 15.5642
15 | P-DP 2 | α̂ = 71,469.8104, γ̂ = 0.00313, t̂₀ = 8.7 × 10⁻⁶, m̂₀ = 13.8578 | 64.8252 | 8.0514 | 0.7452 | 1.5864 | 108.7149 | 0.8640 | 0.7863 | 71.5589 | 8.9449 | 6.8673
16 | K-SRGM 3 | Â = 47.998, b̂ = 0.9041, α̂ = 0.2937, p̂ = 0.394 | 18.9589 | 4.3542 | 23.2982 | 1.1112 | 91.8189 | 0.9602 | 0.9375 | 34.9075 | 4.3634 | 3.8929
17 | R-M-D | â = 66.7919, b̂ = 0.2313, α̂ = 1.10797, β̂ = 0.23129 | 16.9597 | 4.1182 | 2.1336 | 0.6528 | 83.0580 | 0.9644 | 0.9441 | 35.9526 | 4.4941 | 3.5372
18 | C-TC | â = 0.3805, b̂ = 1.7675, α̂ = 6021.5496, β̂ = 33,686.7619, N̂ = 61.0142 | 18.1240 | 4.2572 | 7.6284 | 0.8839 | 86.1250 | 0.9667 | 0.9390 | 32.2656 | 4.6094 | 3.5264
19 | P-Vtub | â = 2.3187, b̂ = 0.6928, α̂ = 13.81799, β̂ = 269.3212, N̂ = 55.7854 | 12.2331 | 3.4976 | 1.2508 | 0.4708 | 76.2949 | 0.9775 | 0.9588 | 26.5196 | 3.7885 | 2.8320
20 | S-3PFD | â = 223.3416, b̂ = 0.4424, β̂ = 4.4956, N̂ = 59.245, ĉ = 1213.0757 | 13.9742 | 3.7382 | 2.6264 | 0.6308 | 79.1281 | 0.9744 | 0.9530 | 28.0662 | 4.0095 | 3.0997
21 | New Model | â = 42,763.1241, b̂ = 1.6385, α̂ = 0.2005, β̂ = 12.6879, N̂ = 65.868 | 6.7120 | 2.5908 | 0.1812 | 0.1363 | 70.5195 | 0.9877 | 0.9774 | 18.2230 | 2.6033 | 2.0735
Table 6. Results of Model Parameter Estimation and Criteria for Comparison from Dataset #5.

No. | Model | Parameter Estimation | MSE | RMSE | PRR | PP | AIC | R² | Adj. R² | SAE | MAE | Variation
1 | GO | â = 94.3479, b̂ = 0.0733 | 4.0245 | 2.0061 | 0.2932 | 0.1627 | 57.7077 | 0.9855 | 0.9822 | 19.4198 | 1.9420 | 1.9150
2 | Y-DS | â = 57.5047, b̂ = 0.3437 | 8.2095 | 2.8652 | 7.3903 | 0.6184 | 69.6185 | 0.9704 | 0.9638 | 20.9577 | 2.0958 | 3.0188
3 | O-IS | â = 65.8343, b̂ = 0.2055, β̂ = 1.2874 | 4.0555 | 2.0138 | 0.4802 | 0.1903 | 60.1404 | 0.9868 | 0.9819 | 17.0533 | 1.8948 | 1.8479
4 | Y-Exp | â = 3344.357, α̂ = 0.1819, β̂ = 0.0718, γ̂ = 0.1584 | 5.0365 | 2.2442 | 0.2929 | 0.1625 | 61.7078 | 0.9855 | 0.9771 | 19.4216 | 2.4277 | 1.9171
5 | Y-Ray | â = 62.3356, α̂ = 0.038, β̂ = 0.0315, γ̂ = 51.9474 | 15.0669 | 3.8816 | 18.8397 | 0.8508 | 81.3798 | 0.9565 | 0.9316 | 26.5145 | 3.3143 | 3.7589
6 | Y-ID 1 | â = 94.3479, b̂ = 0.0733, α̂ = 1.1 × 10⁻⁹ | 4.4716 | 2.1146 | 0.2932 | 0.1627 | 59.7077 | 0.9855 | 0.9800 | 19.4198 | 2.1578 | 1.9150
7 | Y-ID 2 | â = 94.3479, b̂ = 0.0733, α̂ = 1 × 10⁻¹⁰ | 4.4716 | 2.1146 | 0.2932 | 0.1627 | 59.7077 | 0.9855 | 0.9800 | 19.4198 | 2.1578 | 1.9150
8 | HD-GO | â = 94.3479, b̂ = 0.0733, ĉ = 0.000109 | 4.4716 | 2.1146 | 0.2932 | 0.1627 | 59.7077 | 0.9855 | 0.9800 | 19.4198 | 2.1578 | 1.9150
9 | P-GID 1 | â = 65.8343, b̂ = 0.2055, α̂ = 3.8 × 10⁻⁸, β̂ = 1.2874 | 4.5624 | 2.1360 | 0.4802 | 0.1903 | 62.1404 | 0.9868 | 0.9793 | 17.0533 | 2.1317 | 1.8479
10 | P-GID 2 | â = 0.0023, b̂ = 0.2055, α̂ = 0.2328, β̂ = 1.2874, ĉ = 65.832 | 5.2142 | 2.2835 | 0.4803 | 0.1903 | 64.1404 | 0.9868 | 0.9759 | 17.0539 | 2.4363 | 1.8480
11 | Z-FR | â = 27.83996, b̂ = 0.1217, α̂ = 5.0642, β̂ = 0.2853, ĉ = 1.3182, p̂ = 0.7382 | 6.0591 | 2.4615 | 0.4518 | 0.1844 | 66.1861 | 0.9869 | 0.9711 | 17.0776 | 2.8463 | 1.8421
12 | TP | â = 14.3726, b̂ = 0.1574, α̂ = 10.3379, β̂ = 2.9351, ĉ = 0.1765, p̂ = 0.2112, q̂ = 0.05296 | 7.8879 | 2.8085 | 0.3120 | 0.1652 | 67.7199 | 0.9858 | 0.9609 | 18.9981 | 3.7996 | 1.8967
13 | PZ-IFD | â = 6.4775, b̂ = 0.6673, d̂ = 1 × 10⁻¹⁰ | 9.0608 | 3.0101 | 0.8594 | 0.2657 | 62.8008 | 0.9706 | 0.9595 | 25.3217 | 2.8135 | 2.9951
14 | P-DP 1 | α̂ = 0.0115, γ̂ = 6.5411 | 183.4378 | 13.5439 | 432.0333 | 3.4669 | 124.8012 | 0.3380 | 0.1909 | 136.0472 | 13.6047 | 18.4397
15 | P-DP 2 | α̂ = 36,611.3949, γ̂ = 0.004045, t̂₀ = 15.3658, m̂₀ = 90.9356 | 43.7504 | 6.6144 | 0.5473 | 1.0956 | 81.2894 | 0.8737 | 0.8015 | 57.2136 | 7.1517 | 5.6431
16 | K-SRGM 3 | Â = 55.9248, b̂ = 5.8366, α̂ = 0.2215, p̂ = 0.029 | 6.6314 | 2.5751 | 1.9490 | 0.3686 | 66.8324 | 0.9809 | 0.9699 | 19.5164 | 2.4396 | 2.3443
17 | R-M-D | â = 39.8594, b̂ = 0.18996, α̂ = 1.7852, β̂ = 0.1900 | 4.6755 | 2.1623 | 0.4548 | 0.1884 | 62.0841 | 0.9865 | 0.9788 | 17.5748 | 2.1969 | 1.8633
18 | C-TC | â = 0.1375, b̂ = 1.0738, α̂ = 16,035.1043, β̂ = 24,002.2196, N̂ = 80.5179 | 5.6419 | 2.3753 | 0.4285 | 0.1883 | 64.2419 | 0.9857 | 0.9739 | 18.3880 | 2.6269 | 1.9083
19 | P-Vtub | â = 2.3789, b̂ = 0.6047, α̂ = 1.2364, β̂ = 14.6987, N̂ = 64.9314 | 4.4144 | 2.1010 | 0.2444 | 0.1304 | 62.5181 | 0.9888 | 0.9796 | 16.5855 | 2.3694 | 1.6903
20 | S-3PFD | â = 0.05496, b̂ = 0.2072, β̂ = 0.0245, N̂ = 68.5181, ĉ = 25.0097 | 5.2143 | 2.2835 | 0.4812 | 0.1905 | 64.1366 | 0.9868 | 0.9759 | 17.0484 | 2.4355 | 1.8470
21 | New Model | â = 276.2278, b̂ = 1.1084, α̂ = 0.2693, β̂ = 54.0622, N̂ = 93.8052 | 2.3671 | 1.5385 | 0.0412 | 0.0333 | 58.7819 | 0.9940 | 0.9890 | 11.4867 | 1.6410 | 1.2284
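Figures 11–15 plot each model's relative error as the test data accumulate. A minimal sketch assuming the usual definition RE_k = (m(t_k) − y_k) / y_k; the sample numbers are ours, not from the datasets.

```python
def relative_errors(y, m):
    """Relative error RE_k = (m(t_k) - y_k) / y_k of the fitted mean
    value against each observed cumulative failure count; values near
    zero mean the model tracks the data closely."""
    return [(mi - yi) / yi for yi, mi in zip(y, m)]

print(relative_errors([10, 20, 40], [11.0, 19.0, 40.0]))  # [0.1, -0.05, 0.0]
```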
Table 7. Results of model parameter estimation, criteria, and prediction for comparison from Dataset #1.

No. | Model | Parameter Estimation | MSE | RMSE | PRR | PP | AIC | R² | Adj. R² | SAE | MAE | Variation | PreSSE
1 | GO | â = 2439.1963, b̂ = 0.00052 | 4.8579 | 2.2041 | 1.3269 | 4.8721 | 53.2386 | 0.9199 | 0.9076 | 29.4319 | 2.1023 | 2.5172 | 5.7456
2 | Y-DS | â = 81.5159, b̂ = 0.0657 | 0.8455 | 0.9195 | 25.1885 | 1.1392 | 52.1261 | 0.9861 | 0.9839 | 11.1007 | 0.7929 | 0.9257 | 128.7087
3 | O-IS | â = 32.2358, b̂ = 0.2378, β̂ = 16.9353 | 0.6685 | 0.8176 | 1.4552 | 0.4588 | 52.1602 | 0.9898 | 0.9872 | 9.4132 | 0.7241 | 0.7726 | 36.1311
4 | Y-Exp | â = 8326.9505, α̂ = 0.5383, β̂ = 0.00029, γ̂ = 0.9697 | 5.6573 | 2.3785 | 1.3088 | 4.7500 | 57.2550 | 0.9201 | 0.8910 | 29.2644 | 2.4387 | 2.4581 | 6.0419
5 | Y-Ray | â = 54.2003, α̂ = 0.0486, β̂ = 0.00296, γ̂ = 35.7066 | 1.0206 | 1.0102 | 39.9442 | 1.3655 | 56.1363 | 0.9856 | 0.9803 | 11.9100 | 0.9925 | 0.9885 | 70.2926
6 | Y-ID 1 | â = 174.82114, b̂ = 0.00407, α̂ = 0.08606 | 1.0997 | 1.0487 | 0.4140 | 0.6004 | 53.9407 | 0.9832 | 0.9790 | 10.7461 | 0.8266 | 0.9927 | 518.5369
7 | Y-ID 2 | â = 26.9978, b̂ = 0.0173, α̂ = 0.3124 | 0.8788 | 0.9374 | 0.9227 | 0.4135 | 53.3683 | 0.9865 | 0.9832 | 8.7974 | 0.6767 | 0.8735 | 299.6604
8 | HD-GO | â = 709.7826, b̂ = 0.00179, ĉ = 27.17296 | 5.3433 | 2.3116 | 1.3283 | 4.8597 | 55.3403 | 0.9182 | 0.8978 | 29.6288 | 2.2791 | 2.4842 | 6.2455
9 | P-GID 1 | â = 12.7655, b̂ = 0.2324, α̂ = 0.095, β̂ = 6.7409 | 0.7999 | 0.8944 | 1.9120 | 0.5175 | 54.5868 | 0.9887 | 0.9846 | 9.6560 | 0.8047 | 0.8168 | 89.6441
10 | P-GID 2 | â = 0.00079, b̂ = 0.2378, α̂ = 6.79798, β̂ = 16.9353, ĉ = 32.235 | 0.7900 | 0.8888 | 1.4552 | 0.4588 | 56.1602 | 0.9898 | 0.9847 | 9.4132 | 0.8557 | 0.7726 | 36.1309
11 | Z-FR | â = 339.2109, b̂ = 0.2114, α̂ = 208.6339, β̂ = 0.1286, ĉ = 0.2734, p̂ = 13.3842 | 0.7917 | 0.8897 | 1.3398 | 0.4410 | 57.6773 | 0.9907 | 0.9845 | 9.2198 | 0.9220 | 0.7383 | 4.4041
12 | TP | â = 1.9773, b̂ = 0.2819, α̂ = 0.6614, β̂ = 0.2226, ĉ = 2.2607, p̂ = 0.0576, q̂ = 0.0702 | 1.2346 | 1.1111 | 0.7866 | 0.4119 | 61.1905 | 0.9869 | 0.9755 | 8.9309 | 0.9923 | 0.8616 | 258.9145
13 | PZ-IFD | â = 3.0369, b̂ = 0.1407, d̂ = 0.1038 | 0.8589 | 0.9268 | 1.1519 | 0.4317 | 53.2961 | 0.9869 | 0.9836 | 8.8436 | 0.6803 | 0.8636 | 268.2888
14 | P-DP 1 | α̂ = 0.00053, γ̂ = 13.6882 | 2.5693 | 1.6029 | 89.2144 | 2.1491 | 55.2758 | 0.9577 | 0.9511 | 19.9041 | 1.4217 | 2.0091 | 803.4099
15 | P-DP 2 | α̂ = 16,554.0442, γ̂ = 0.00323, t̂₀ = 1.1159, m̂₀ = 1.8408 | 1.4601 | 1.2083 | 0.6654 | 2.0357 | 56.3830 | 0.9794 | 0.9719 | 12.3365 | 1.0280 | 1.0812 | 598.1091
16 | K-SRGM 3 | Â = 43.9312, b̂ = 1.3062, α̂ = 2.2423, p̂ = 0.0265 | 1.0199 | 1.0099 | 20.5291 | 0.9348 | 56.6049 | 0.9856 | 0.9804 | 10.1636 | 0.8470 | 0.9148 | 264.3312
17 | R-M-D | â = 153.5748, b̂ = 0.0327, α̂ = 1.0556, β̂ = 0.0463 | 0.9091 | 0.9535 | 2.6861 | 0.5630 | 55.2528 | 0.9872 | 0.9825 | 9.2852 | 0.7738 | 0.8606 | 188.6321
18 | C-TC | â = 0.0288, b̂ = 1.6825, α̂ = 193.5851, β̂ = 167.6135, N̂ = 86.0872 | 1.0215 | 1.0107 | 8.9612 | 0.8232 | 57.6870 | 0.9868 | 0.9802 | 10.0127 | 0.9102 | 0.8859 | 160.1096
19 | P-Vtub | â = 1.2816, b̂ = 0.9844, α̂ = 1.1473, β̂ = 20.7523, N̂ = 31.4297 | 0.7830 | 0.8849 | 1.3879 | 0.4508 | 56.1136 | 0.9899 | 0.9848 | 9.3983 | 0.8544 | 0.7681 | 32.5458
20 | S-3PFD | â = 0.1496, b̂ = 0.2372, β̂ = 0.1982, N̂ = 36.9478, ĉ = 62.4407 | 0.8172 | 0.9040 | 1.3089 | 0.4455 | 56.1928 | 0.9894 | 0.9841 | 9.7423 | 0.8857 | 0.7774 | 45.9267
21 | New Model | â = 10,453.17249, b̂ = 0.53348, α̂ = 0.4174, β̂ = 0.1175, N̂ = 24.9924 | 0.5915 | 0.7691 | 0.3380 | 0.6053 | 54.8572 | 0.9923 | 0.9885 | 8.2769 | 0.7524 | 0.6591 | 2.6780
Table 8. Results of model parameter estimation, criteria, and prediction for comparison from Dataset #2.

No. | Model | Parameter Estimation | MSE | RMSE | PRR | PP | AIC | R² | Adj. R² | SAE | MAE | Variation | PreSSE
1 | GO | â = 3631.5077, b̂ = 0.00058856 | 8.0914 | 2.8445 | 0.6812 | 1.1134 | 61.1966 | 0.9463 | 0.9380 | 36.5962 | 2.6140 | 3.0456 | 13.3815
2 | Y-DS | â = 92.9084, b̂ = 0.08688 | 2.7180 | 1.6486 | 70.6100 | 1.6218 | 65.0441 | 0.9820 | 0.9792 | 19.8586 | 1.4185 | 1.8435 | 157.7642
3 | O-IS | â = 76.8382, b̂ = 0.1513, β̂ = 9.348 | 1.5104 | 1.2290 | 2.7478 | 0.6084 | 61.0251 | 0.9907 | 0.9884 | 15.2338 | 1.1718 | 1.1949 | 276.1112
4 | Y-Exp | â = 16,029.2962, α̂ = 0.2561, β̂ = 0.000375, γ̂ = 1.3887 | 9.4224 | 3.0696 | 0.6816 | 1.1158 | 65.1857 | 0.9464 | 0.9269 | 36.5870 | 3.0489 | 3.0523 | 13.4557
5 | Y-Ray | â = 83.3575, α̂ = 0.06002, β̂ = 0.005, γ̂ = 20.2075 | 3.7509 | 1.9367 | 122.7334 | 1.9015 | 70.1373 | 0.9787 | 0.9709 | 20.9782 | 1.7482 | 2.1406 | 64.8807
6 | Y-ID 1 | â = 1829.4685, b̂ = 0.00074, α̂ = 0.0672 | 1.7208 | 1.3118 | 1.5540 | 0.4759 | 61.5783 | 0.9894 | 0.9867 | 16.5054 | 1.2696 | 1.2638 | 826.9275
7 | Y-ID 2 | â = 2404.9698, b̂ = 0.00049, α̂ = 0.1324 | 1.6375 | 1.2796 | 2.2863 | 0.5616 | 61.5103 | 0.9899 | 0.9874 | 15.9021 | 1.2232 | 1.2062 | 605.1401
8 | HD-GO | â = 709.7827, b̂ = 0.00306, ĉ = 1 × 10⁻⁹ | 9.1979 | 3.0328 | 0.6912 | 1.1752 | 63.4061 | 0.9433 | 0.9292 | 37.7317 | 2.9024 | 3.1490 | 12.6426
9 | P-GID 1 | â = 45.3264, b̂ = 0.1489, α̂ = 0.0343, β̂ = 5.1894 | 1.6792 | 1.2958 | 2.9736 | 0.6315 | 63.1427 | 0.9904 | 0.9870 | 15.3907 | 1.2826 | 1.2188 | 306.8552
10 | P-GID 2 | â = 0.00085, b̂ = 0.1513, α̂ = 0.8954, β̂ = 9.34796, ĉ = 76.8381 | 1.7850 | 1.3360 | 2.7477 | 0.6084 | 65.0251 | 0.9907 | 0.9860 | 15.2335 | 1.3849 | 1.1948 | 276.1485
11 | Z-FR | â = 110.9274, b̂ = 0.1443, α̂ = 243.7368, β̂ = 0.04498, ĉ = 2.4367, p̂ = 2.1721 | 1.8974 | 1.3774 | 2.9162 | 0.6253 | 66.7569 | 0.9910 | 0.9850 | 14.8415 | 1.4841 | 1.1887 | 129.7523
12 | TP | â = 3.1572, b̂ = 0.1481, α̂ = 5.3435, β̂ = 0.1162, ĉ = 18.143, p̂ = 0.0997, q̂ = 0.0547 | 2.1636 | 1.4709 | 2.7535 | 0.6092 | 68.9644 | 0.9908 | 0.9827 | 15.1312 | 1.6812 | 1.1850 | 244.6593
13 | PZ-IFD | â = 31.2103, b̂ = 0.038, d̂ = 0.0429 | 1.6279 | 1.2759 | 2.2668 | 0.5577 | 61.4700 | 0.9900 | 0.9875 | 15.9819 | 1.2294 | 1.2277 | 583.1510
14 | P-DP 1 | α̂ = 0.00055, γ̂ = 17.3503 | 10.7490 | 3.2786 | 328.0224 | 2.9526 | 72.2485 | 0.9287 | 0.9177 | 44.4948 | 3.1782 | 4.6393 | 2211.9360
15 | P-DP 2 | α̂ = 59,270.7316, γ̂ = 0.00215, t̂₀ = 6.7491, m̂₀ = 10.6115 | 2.4626 | 1.5693 | 0.2884 | 0.4953 | 63.5555 | 0.9860 | 0.9809 | 17.6701 | 1.4725 | 1.4047 | 1310.6154
16 | K-SRGM 3 | Â = 7.5932, b̂ = 0.799, α̂ = 0.9886, p̂ = 0.6217 | 3.2377 | 1.7994 | 153.8475 | 1.7084 | 71.3605 | 0.9816 | 0.9749 | 20.3997 | 1.7000 | 1.9248 | 253.8031
17 | R-M-D | â = 3618.8593, b̂ = 0.00207, α̂ = 1.1458, β̂ = 0.0255 | 1.7907 | 1.3382 | 2.7803 | 0.6101 | 63.5428 | 0.9898 | 0.9861 | 16.0428 | 1.3369 | 1.2495 | 500.8745
18 | C-TC | â = 0.0624, b̂ = 1.3996, α̂ = 58.5612, β̂ = 3755.7385, N̂ = 2468.2417 | 2.3527 | 1.5338 | 8.7659 | 0.9162 | 67.1476 | 0.9877 | 0.9816 | 17.5609 | 1.5964 | 1.3852 | 383.3217
19 | P-Vtub | â = 1.8042, b̂ = 0.6499, α̂ = 2.9707, β̂ = 103.1556, N̂ = 66.0397 | 1.5601 | 1.2490 | 1.2753 | 0.4419 | 64.0424 | 0.9919 | 0.9878 | 14.5886 | 1.3262 | 1.0982 | 233.1903
20 | S-3PFD | â = 0.0649, b̂ = 0.1509, β̂ = 0.07298, N̂ = 83.3809, ĉ = 64.9276 | 1.7861 | 1.3365 | 2.7395 | 0.6073 | 65.0284 | 0.9907 | 0.9860 | 15.2673 | 1.3879 | 1.2011 | 278.3163
21 | New Model | â = 14,718.555, b̂ = 0.5631, α̂ = 0.3272, β̂ = 0.1884, N̂ = 41.8677 | 0.7827 | 0.8847 | 0.1103 | 0.1306 | 61.2342 | 0.9959 | 0.9939 | 9.9671 | 0.9061 | 0.7576 | 8.6532
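For the predictive comparison in Tables 7 and 8, each model is fitted on the first portion of a dataset and its extrapolated mean values are scored against the held-out observations; PreSSE is the resulting sum of squared prediction errors. A minimal sketch under that reading; the numbers are illustrative, not taken from the datasets.

```python
def pre_sse(y_future, m_future):
    """Predictive sum of squared errors (PreSSE): squared gap between
    held-out observations y_k and the model's extrapolated mean values
    m(t_k), summed over the prediction window.  Smaller is better."""
    return sum((mi - yi) ** 2 for yi, mi in zip(y_future, m_future))

print(pre_sse([20, 22, 25], [19.0, 22.5, 24.0]))  # 2.25
```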

Song, K.Y.; Chang, I.H.; Pham, H. NHPP Software Reliability Model with Inflection Factor of the Fault Detection Rate Considering the Uncertainty of Software Operating Environments and Predictive Analysis. Symmetry 2019, 11, 521. https://doi.org/10.3390/sym11040521