
Study of a New Software Reliability Growth Model under Uncertain Operating Environments and Dependent Failures

1 Department of Computer Science and Statistics, Chosun University, 146 Chosundae-gil, Dong-gu, Gwangju 61452, Republic of Korea
2 Department of Industrial and Systems Engineering, Rutgers University, 96 Frelinghuysen Road, Piscataway, NJ 08855-8018, USA
* Authors to whom correspondence should be addressed.
Mathematics 2023, 11(18), 3810; https://doi.org/10.3390/math11183810
Submission received: 28 July 2023 / Revised: 30 August 2023 / Accepted: 4 September 2023 / Published: 5 September 2023

Abstract: The coronavirus disease (COVID-19) outbreak has prompted various industries to embark on digital transformation efforts, with software playing a critical role. Ensuring the reliability of software is of the utmost importance given its widespread use across multiple industries. For example, software has extensive applications in areas such as transportation, aviation, and military systems, where reliability problems can result in personal injuries and significant financial losses. Numerous studies have focused on software reliability; in particular, the software reliability growth model (SRGM) has served as a prominent tool for measuring it. Previous studies have often assumed that the testing environment is representative of the operating environment and that software failures occur independently. However, the testing and operating environments can differ, and software failures can sometimes occur dependently. In this study, we propose a new model that assumes uncertain operating environments and dependent failures; in other words, the proposed model takes a wider range of environments into account. The numerical examples in this study demonstrate that the goodness of fit of the new model is significantly better than that of existing SRGMs. Additionally, we show how the sequential probability ratio test (SPRT) based on the new model can be used to assess the reliability of a dataset.

1. Introduction

A software reliability growth model (SRGM) is employed to assess the reliability and quality of software products. This enables consumers to evaluate products by referring to reliability information, and developers can efficiently manage development plans based on reliability considerations. For instance, using the mean value function m(t), it is possible to predict the number of failures at a future time point t. Additionally, it can be used to establish policies for determining the optimal release timing for selling products. In other words, the SRGM is used as a tool for predicting the number of failures in future time periods, predicting product reliability, determining release policies, and estimating development costs. A software reliability growth model is represented by a mean value function m(t) that exhibits unique characteristics. The form of m(t) varies depending on the assumed environments (such as the development, testing, and operating phases). Although there have been numerous studies on software reliability models, it is generally observed that software defects and failures do not occur at regular time intervals. In response to this, existing SRGMs predominantly adopt a nonhomogeneous Poisson process (NHPP) framework. NHPP SRGMs provide a mathematical framework for handling software reliability and are widely utilized due to their versatility in various applications. Previous NHPP software reliability models were primarily built on the assumptions that any faults detected during the testing phase are promptly resolved without any debugging delays, no new faults are introduced, and the software systems deployed in real-world environments are either identical to or closely resemble those used during development and testing.
Most SRGMs follow an NHPP and assume that the testing environments are the same as the operating environments and that failures occur independently. In addition, SRGM modeling studies consider assumptions such as debugging environments, the testing coverage function, the total number of faults, and the fault detection rate function. Huang et al. [1] introduced an NHPP SRGM that considered imperfect debugging, various errors, and change-points during the testing phase. Luo et al. [2] discussed a generalized NHPP SRGM with imperfect debugging. Imperfect debugging refers to the state in which not all faults or bugs within the software are eliminated when they occur. Chiu et al. [3] proposed an SRGM in which the number of potential errors fluctuates throughout the debugging period. Gupta et al. [4] introduced a model that considered the coverage factor and power functions in development environments. Zhang et al. [5] proposed a model in which new errors are introduced during the debugging period owing to imperfect debugging. Nguyen et al. [6] developed a new NHPP SRGM with a three-parameter S-shaped fault detection rate function.
Uncertain operating environments refer to the actual environments in which consumers use the software, including factors such as the operating system, background environment, and hardware specifications; this encompasses various scenarios and possibilities. Pradhan et al. [7,8,9,10] developed models that incorporated an inflection S-shaped testing coverage function and considered uncertain operating environments. Environmental factors (EFs) include various internal and external conditions such as the testing environment, programming tools, programming effort, program structure, and hardware specifications. The testing environment itself is also not consistent. Haque et al. [11] considered uncertain testing environments in terms of testing effort, testing skill, and testing coverage. Chatterjee et al. [12] studied randomness of effort under uncertain testing and operating environments. The NHPP SRGM studies mentioned earlier assumed independent failures; however, software failures can sometimes occur dependently due to interactions among EFs. Lee et al. [13] and Kim et al. [14] assumed that failures occur dependently.
SRGMs can be used not only to determine reliability but also to plan release or warranty policies. Raheem et al. [15] devised an optimal release policy based on an SRGM considering imperfect debugging. Minamino et al. [16] and Ke et al. [17] introduced an optimal release policy based on a change-point model. Several studies have been conducted on software reliability using various approaches. Saxena et al. [18], Kumar et al. [19], and Garg et al. [20] developed criteria to assess the goodness of fit of SRGMs. These criteria were derived from a combination of the entropy principle and existing evaluation measures. Several studies have focused on criteria evaluating the reliability of the software and hardware. Yaghoobi [21] proposed two multicriteria decision-making methods for comparing SRGMs. Zhu [22] introduced the concept of complex reliability, which considered both hardware and software components, and proposed maintenance policies applicable to such systems. Several recent software reliability studies have employed machine-learning and deep-learning techniques [23,24,25,26].
Hypothesis testing is worth considering to determine software reliability. However, classical hypothesis testing requires large datasets, which is often a limitation because most software failure datasets are small. To address this issue, we introduce the sequential probability ratio test (SPRT) pioneered by Wald [27], which enables testing with small datasets. Unlike traditional statistical hypothesis testing, the SPRT provides test results at each data collection point, saving time and reducing the cost of data collection by drawing conclusions with less data. In other words, the SPRT is an efficient hypothesis-testing method in terms of time and cost. Stieber [28] successfully applied Wald’s SPRT to ensure software reliability. In this study, we extend the SPRT methodology to estimate software reliability.
The aims of this study are as follows. First, we present a new SRGM that considers both uncertain operating environments and dependent failures. Most SRGM research assumes either uncertain operating environments or fault dependency alone; in this study, we develop a model that takes both assumptions into account. Subsequently, we evaluate the performance of each model using real datasets. Numerical examples demonstrate that the proposed model outperforms models that account solely for uncertain operating environments or dependent failures, providing a more accurate prediction of the number of failures. Additionally, we demonstrate the effectiveness of the SPRT by utilizing optimal assumption cases based on our proposed model, which allows testers to determine when to stop testing based on software reliability.
In Section 2, we provide the basic background of NHPP SRGMs and introduce the existing NHPP SRGM models as well as the model proposed in this paper. The SPRT procedure is outlined in Section 3. Section 4 presents the datasets and criteria used in this numerical study. We compare the fit of each model to the datasets and apply the SPRT. In Section 5, we discuss the results of the numerical example. Finally, Section 6 presents the conclusions of this study.

2. Software Reliability Growth Model

2.1. Nonhomogeneous Poisson Process

Most SRGMs assume the nonhomogeneous Poisson process (NHPP), which can be represented by the following equation:
Pr{N(t) = n} = ({m(t)}^n / n!) e^{−m(t)},  n = 0, 1, 2, 3, …  (1)
It characterizes the cumulative number of failures, denoted as N(t) (t ≥ 0), up to a given execution time t. The mean value function m(t) represents the expected cumulative number of failures at time t. The function m(t) can be obtained by integrating the intensity function λ(t) from 0 to t as follows:
m(t) = ∫₀ᵗ λ(s) ds.  (2)
Based on the NHPP, the reliability function can be expressed using m(t) [29]. The reliability function R(t) is defined as the probability that no failures occur in the time interval (0, t):
R(t) = P(N(t) = 0) = e^{−m(t)}.  (3)
Equation (3) gives the probability that no software error occurs in the interval (0, t). For a further interval of length x, the software reliability can be expressed as the conditional probability R(x|t) in Equation (4):
R(x|t) = P(N(t + x) − N(t) = 0) = e^{−[m(t + x) − m(t)]}  (4)
Here, R(x|t) is the probability that no software error occurs in the interval (t, t + x), where t ≥ 0 and x > 0. The density function of x is given by
f(x) = λ(t + x) e^{−[m(t + x) − m(t)]}  (5)
where λ(x) = (d/dx) m(x).
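For readers who want to experiment with these quantities, the following minimal Python sketch evaluates m(t) and R(x|t) numerically for an assumed intensity function; the exponential intensity used here is a hypothetical stand-in, not one of the models compared later.

```python
import numpy as np
from scipy.integrate import quad

def intensity(t, a=50.0, b=0.3):
    # Hypothetical intensity lambda(t) = a*b*e^(-bt); integrating it yields a
    # GO-type mean value function m(t) = a(1 - e^(-bt)).
    return a * b * np.exp(-b * t)

def mean_value(t):
    # Equation (2): m(t) = integral of lambda(s) ds from 0 to t
    val, _ = quad(intensity, 0.0, t)
    return val

def conditional_reliability(x, t):
    # Equation (4): R(x|t) = exp(-[m(t + x) - m(t)])
    return np.exp(-(mean_value(t + x) - mean_value(t)))

print(mean_value(5.0))                    # expected cumulative failures by t = 5
print(conditional_reliability(0.1, 5.0))  # P(no failure in (5, 5.1])
```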

2.2. Existing SRGMs

The mean value function m(t) of the NHPP SRGM is obtained by solving a differential equation. The form of the mean value function depends on the assumptions and specific environments being studied. The commonly used differential equation is as follows [30]:
dm(t)/dt = b(t)[a(t) − m(t)]  (6)
where a(t) represents the expected number of initial failures plus the errors newly introduced by the start of the testing period, and b(t) represents the failure detection rate per fault.
This paper presents a specific software reliability model with consideration of the uncertainty of the operating environment based on the work by Pham [31].
The NHPP SRGM is typically characterized by a differential equation that is widely recognized in the field. To account for the uncertain operating conditions considered in this study, the mean value function of the proposed model is obtained as follows [31]:
dm(t)/dt = η b(t)[a(t) − m(t)]  (7)
where η is a random variable. To capture the uncertain operating environment, this equation utilizes the random variable η, which follows a gamma distribution Γ(α, β).

2.3. Proposed Model

Most existing NHPP SRGMs assume that testing and operating environments are the same and that failures occur independently. However, software failures sometimes occur dependently, and the operating environments may differ from the testing environments. For example, if an error occurs in a particular class within a program code, it may cause errors in other classes that refer to the affected class, and conflicts between program codes due to background processes can potentially impact other codes. These situations lead to dependent failures. Furthermore, constructing a testing environment that covers all operating environments is difficult for testers. The operating environment is the environment in which consumers use the software, including hardware specifications (CPU, GPU, RAM, etc.), operating systems (Windows, Mac, Linux, etc.), and the various programs running concurrently in the background. The proposed model considers both dependent failures and uncertain operating environments. Because quantifying the operating environments numerically is difficult, the uncertain operating environments in Equation (7) are represented by the random variable η. The assumption of dependent failure occurrences is reflected in the parameters of the gamma distribution followed by η, which is discussed in detail with Equation (10).
In this paper, we propose a model that incorporates both uncertain operating environments and dependent failures. The inclusion of the latter is motivated by the need to consider situations in which failures can propagate from one component to another. With the functions a(t) = N and b(t) = c/(1 + α e^{−bt}), we can obtain the mean value function m(t) from Equation (7), as shown below:
m(t) = ∫ N(1 − e^{−η ∫₀ᵗ b(s) ds}) dg(η),  (8)
m(t) = N[1 − (β/(β + ∫₀ᵗ b(s) ds))^α] = N[1 − (β/(β + ∫₀ᵗ c/(1 + α e^{−bs}) ds))^α]  (9)
= N[1 − (β/(β + (c/b) ln((α + e^{bt})/(1 + α))))^α]  (10)
The proposed model has five parameters, namely b, c, α, β, and N. In Equation (10), the parameters α and β are also the parameters of the gamma density g(η) = β^α η^{α−1} e^{−βη} / Γ(α), where η represents the uncertain operating environments in Equation (7).
The assumption of dependent failures in the proposed model arises from the interdependence of model parameters. Specifically, the values of α and β in Equation (10) depend on the probability distribution of η, which characterizes the uncertain operating environments. As these parameters appear in Equation (7), which expresses the failure detection rate, the assumption of dependent failures is a natural consequence of the model design. Therefore, the correlation between model parameters is a crucial factor that underlies the assumption of dependent failures.
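As a concrete illustration, the closed form of Equation (10) can be implemented directly. This is a minimal sketch with all parameter values left to the caller; the helper name m_new and the argument order are our own choices, not notation from the paper.

```python
import numpy as np

def m_new(t, b, c, alpha, beta, N):
    # Proposed model, Equation (10):
    # m(t) = N * [1 - (beta / (beta + (c/b) ln((alpha + e^(bt)) / (1 + alpha))))^alpha]
    # B below is the integrated fault detection rate, i.e. the integral of
    # b(s) = c / (1 + alpha * e^(-bs)) from 0 to t appearing in Equation (9).
    B = (c / b) * np.log((alpha + np.exp(b * t)) / (1.0 + alpha))
    return N * (1.0 - (beta / (beta + B)) ** alpha)
```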
Table 1 lists the mean value functions for the existing NHPP SRGMs and the proposed model. Each model is referred to by abbreviations of its characteristics or author names. DPF1 and DPF2 assume dependent failures, whereas the others assume independent failures. VTUB assumes uncertain operating environments, whereas the proposed model (NEW) assumes dependent failures and uncertain operating environments.

3. Sequential Probability Ratio Test

Wald’s SPRT is widely used as a hypothesis-testing technique [27]. It tests the probability ratio of two hypotheses, p₀ and p₁, against predetermined threshold values at each time point. The SPRT is iterative and requires additional data collection and testing while the probability ratio remains within the acceptance region bounded by A and B. Equation (11) expresses the relationship between p₀, p₁, and the thresholds A and B:
B < p₁/p₀ < A  (11)
where A and B are constants used to determine acceptance and rejection of the null hypothesis H₀. If p₁/p₀ ≥ A, then H₀ is rejected. If p₁/p₀ ≤ B, then H₀ is accepted.
Moreover, A and B depend on α and β, as shown in Equations (12) and (13). Here, α and β are the type 1 and type 2 error probabilities, respectively; in other words, α is the producer’s risk, and β is the consumer’s risk.
1 − β ≥ Aα,  β ≤ (1 − α)B  (12)
A ≈ (1 − β)/α,  B ≈ β/(1 − α)  (13)
The values of A and B depend on the prespecified risk probabilities α and β, which are typically set to 0.05 or 0.1. The upper line that determines rejection is denoted N_U(t), and the lower line that determines acceptance is denoted N_L(t); they are given by
N_L(t) = a·t − b₁,  N_U(t) = a·t + b₂  (14)
where a, b₁, and b₂ are given as follows:
a = (λ₁ − λ₀)/ln(λ₁/λ₀),  b₁ = ln((1 − α)/β)/ln(λ₁/λ₀),  b₂ = ln((1 − β)/α)/ln(λ₁/λ₀)  (15)
Figure 1 shows the reliable region of SPRT. If the data value (blue dot) at a certain time point exists within the reliable region, then it is labeled as “Continue”. If the value is outside the region, then a conclusion of “Reject” or “Accept” is made.
Stieber [28] applied the SPRT to estimate the reliability of NHPP SRGMs by redefining the probabilities p₀ and p₁ of Equation (11) in terms of the mean value function m(t). Here, p₀ and p₁ are expressed as follows:
p₀ = e^{−m₀(t)} [m₀(t)]^{N(t)} / N(t)!,  p₁ = e^{−m₁(t)} [m₁(t)]^{N(t)} / N(t)!  (16)
[ln(β/(1 − α)) + m₁(t) − m₀(t)] / [ln m₁(t) − ln m₀(t)] < N(t) < [ln((1 − β)/α) + m₁(t) − m₀(t)] / [ln m₁(t) − ln m₀(t)].  (17)
The left side of Equation (17) corresponds to the constant B of Equation (11), whereas the right side corresponds to the constant A. Equation (17) is obtained by substituting Equation (16) into Equation (11) and solving for N(t), the observed cumulative number of failures.
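The decision rule of Equation (17) is straightforward to code. The sketch below assumes m₁(t) > m₀(t) so that the denominator is positive; the function and argument names are illustrative.

```python
import numpy as np

def sprt_decision(n_t, m0_t, m1_t, alpha=0.05, beta=0.05):
    # Equation (17): compare the observed cumulative failure count N(t) = n_t
    # against the acceptance (lower) and rejection (upper) bounds.
    denom = np.log(m1_t) - np.log(m0_t)          # assumes m1(t) > m0(t)
    lower = (np.log(beta / (1.0 - alpha)) + m1_t - m0_t) / denom
    upper = (np.log((1.0 - beta) / alpha) + m1_t - m0_t) / denom
    if n_t <= lower:
        return "Accept"
    if n_t >= upper:
        return "Reject"
    return "Continue"
```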

4. Numerical Example

In this section, we fit the proposed model and the existing models to actual data, estimate the criteria, and compare their goodness of fit; we then apply the sequential probability ratio test (SPRT) to evaluate the reliability of the data. First, we fit each model (mean value function) to the datasets and estimate its parameters using least-squares estimation (LSE). Next, we calculate the criteria using the fitted mean value functions m̂(t) and compare the goodness of fit. Finally, we construct an equidistant scale for the parameter set of the proposed model, determine the SPRT thresholds based on this parameter set, and examine the results of applying the SPRT to Dataset 1.
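As a hedged sketch of the fitting step (assuming m_new from the Section 2.3 sketch is in scope), the LSE fit of the proposed model to Dataset 1 might look as follows. The initial guess p0 is illustrative, and convergence of curve_fit depends on it.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(1, 13)  # weeks
y = np.array([10, 12, 16, 22, 28, 36, 40, 43, 44, 50, 51, 55])  # Dataset 1

# Parameter order matches m_new(t, b, c, alpha, beta, N).
p0 = [1.0, 10.0, 2.0, 50.0, 60.0]  # illustrative starting point
params, _ = curve_fit(m_new, t, y, p0=p0, maxfev=20000)
print(dict(zip(["b", "c", "alpha", "beta", "N"], params)))
```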

4.1. Datasets

We employed two datasets to compare the goodness of fit of the different models [29]. The first dataset (Table 2) was collected by ABC Software Company. The project team comprised a unit manager, one user interface software engineer, and ten software engineers/testers. The dataset was observed over a period of 12 weeks (the unit of time in the table is weeks), and 55 failures were observed during this time.
The second dataset (Table 3) was collected from a real-time command and control system developed by Bell Laboratories. The failure data corresponds to the observed failures during system testing, and 136 failures were recorded within a period of 25 h.

4.2. Criteria

Different criteria have been suggested for evaluating how well a model fits the data [9]. This study discusses 10 criteria (MSE, PRR, PP, SAE, R2, AIC, PRV, RMSPE, MAE, and MEOP) to compare the proposed model with 10 existing NHPP SRGMs.
Table 4 presents various criteria used to evaluate the goodness of fit of different NHPP SRGMs and the proposed model. These criteria measure the distance or error between the predicted number of failures based on the mean value function of the model, denoted as m t i , and the actual observed data, denoted as y i . The number of data points is represented as n, and the number of parameters in the model is represented as m. The shorter the distance between the predicted and actual values, the better the mean value function of the model is at predicting the number of failures in the dataset.
The criteria used in the evaluation include the following: The MSE considers the number of parameters in the model, and the number of data points used to measure the distance between the predicted and actual values. The PRR measures the distance between the predicted and actual values while considering the value predicted by the model. The PP measures the distance between the predicted and actual values while considering only the actual data. The SAE measures the total distance between the predicted and actual values.
The coefficient of determination (R2) is a measure of the regression fit. It represents the proportion of the regression sum of squares to the total sum of squares in the model. The closer the value is to 1, the better the fit of the model.
The AIC is a statistical measure that evaluates the ability of a model to fit the data. The likelihood function (L) of the model is maximized, and the AIC penalizes the number of parameters: although a model with more parameters typically fits better, the AIC guards against overfitting by adding a penalty term. Specifically, the AIC is calculated as −2 log L plus a penalty term 2m that grows with the number of parameters m. The likelihood function (L) and the log-likelihood function (log L) are defined as follows:
L = ∏ᵢ₌₁ⁿ ([m̂(tᵢ) − m̂(tᵢ₋₁)]^{yᵢ − yᵢ₋₁} / (yᵢ − yᵢ₋₁)!) e^{−[m̂(tᵢ) − m̂(tᵢ₋₁)]},  (18)
log L = Σᵢ₌₁ⁿ {(yᵢ − yᵢ₋₁) log[m̂(tᵢ) − m̂(tᵢ₋₁)] − [m̂(tᵢ) − m̂(tᵢ₋₁)] − log[(yᵢ − yᵢ₋₁)!]}.  (19)
The PRV is the standard deviation of the prediction bias, where the bias is defined as Bias = (1/n) Σᵢ₌₁ⁿ (m̂(tᵢ) − yᵢ); a smaller value indicates a better model fit. The RMSPE measures how close the predicted values are to the actual data, combining the bias and the PRV. The MAE measures the mean absolute error between the predicted values and the actual data. The MEOP divides the SAE by n − m + 1.
In summary, the goodness of fit of a model can be evaluated using 10 criteria. A larger value of R2 indicates a better fit of the model. Other criteria, such as the MSE, PRR, PP, SAE, AIC, PRV, RMSPE, MAE, and MEOP, indicate the degree of closeness between the predicted and actual values in comparison with other models on the same dataset. In general, smaller values for these criteria suggest a better fit of the model.
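A compact sketch of how the Table 4 criteria can be computed from fitted values m̂(tᵢ) and observed cumulative counts yᵢ (NumPy arrays of equal length) is shown below; the grouped-data log-likelihood follows Equation (19), and the function name is our own.

```python
import numpy as np
from math import lgamma

def fit_criteria(m_hat, y, n_params):
    n, m = len(y), n_params
    resid = m_hat - y
    bias = resid.mean()
    prv = np.sqrt(np.sum((resid - bias) ** 2) / (n - 1))
    # Grouped NHPP log-likelihood, Equation (19); log(k!) = lgamma(k + 1).
    d_m = np.diff(np.concatenate(([0.0], m_hat)))
    d_y = np.diff(np.concatenate(([0.0], y.astype(float))))
    loglik = np.sum(d_y * np.log(d_m) - d_m
                    - np.array([lgamma(k + 1) for k in d_y]))
    return {
        "MSE":   np.sum(resid ** 2) / (n - m),
        "PRR":   np.sum((resid / m_hat) ** 2),
        "PP":    np.sum((resid / y) ** 2),
        "SAE":   np.sum(np.abs(resid)),
        "R2":    1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2),
        "AIC":   -2 * loglik + 2 * m,
        "PRV":   prv,
        "RMSPE": np.sqrt(bias ** 2 + prv ** 2),
        "MAE":   np.sum(np.abs(resid)) / (n - m),
        "MEOP":  np.sum(np.abs(resid)) / (n - m + 1),
    }
```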

4.3. Results of Goodness of Fit

Table 5 and Table 6 present the estimated parameters of the models, which are obtained through the application of least-squares estimation.
Table 7 and Table 8 present the estimated criteria values of the models for the two datasets. For Dataset 1, the proposed model shows the smallest MSE, PRR, PP, SAE, AIC, RMSPE, MAE, and MEOP at 1.7560, 0.0078, 0.0079, 9.0969, 57.1869, 1.0571, 1.0571, and 1.2996, respectively. The proposed model shows the largest R2 at 0.9956 and the second smallest PRV at 59.6115. For Dataset 2, the proposed model shows the smallest MSE, PRR, PP, SAE, AIC, PRV, RMSPE, MAE, and MEOP at 7.2890, 0.0163, 0.0161, 45.6681, 116.7360, 122.8303, 2.4646, 2.4646, and 2.2834, respectively. The proposed model shows the largest R2 at 0.9936. The results indicate that the proposed model performs better than the other models in predicting the cumulative number of failures in the datasets.
The MSE, PRR, and PP are particularly commonly used criteria. Figure 2 and Figure 3 show the top three models for the criteria (MSE, PRR, and PP) in Table 7 and Table 8. In Figure 2, the goodness of fit of the proposed model for Dataset 1 is better than that of the DPF1 and DPF2 models, which assume only dependent failures. Similarly, Figure 3 shows that the goodness of fit of the proposed model for Dataset 2 is better than that of the VTUB model, which assumes only uncertain operating environments. Thus, the proposed model, which considers both dependent failures and uncertain operating environments, is a reasonable approach for studying software reliability.

4.4. Results of SPRT

As the model proposed herein is the best fit for the datasets, we propose a method for measuring reliability by applying the SPRT based on the proposed model. Dataset 1 is used in this study. To test reliability, the SPRT is used on individual parameters or a set of parameters. For the proposed model, applying the SPRT to the parameters α and β can lead to sensitivity issues and potentially skew the SPRT results. Therefore, the SPRT is applied specifically to parameters b, N, and c.
m₀(t) = N₀[1 − (β/(β + (c₀/b₀) ln((α + e^{b₀t})/(1 + α))))^α],  m₁(t) = N₁[1 − (β/(β + (c₁/b₁) ln((α + e^{b₁t})/(1 + α))))^α].  (20)
Equation (20) shows the null and alternative hypotheses, m₀(t) and m₁(t), constructed from the interval scale of the parameter groups. The parameters (b₀, α, β, N₀, c₀) define m₀(t), whereas m₁(t) is defined by (b₁, α, β, N₁, c₁). The values of b₀ and b₁ are calculated as b̂ − δ and b̂ + δ, respectively, where δ is set as a percentage of the parameter value. For instance, when δ is taken to be 1% of each parameter value, b₀ is computed as b̂ − 0.01 × b̂, and b₁ as b̂ + 0.01 × b̂. Percentage values are likewise used to determine the interval scales (N₀, N₁, c₀, c₁) for N and c. The values m₀(t) and m₁(t) in Equation (20) are substituted into Equation (17); if N(t) satisfies Equation (17), the conclusion is “Continue”. If N(t) is smaller than the left-hand term of Equation (17), the conclusion is “Accept”; if N(t) is larger than the right-hand term, the conclusion is “Reject”.
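Putting the pieces together, the δ-based test can be sketched as below, reusing m_new and sprt_decision from the earlier sketches. The argument names are illustrative; b_hat, N_hat, and c_hat would be the Table 9 estimates, and frac = 0.25, for example, would correspond to δ being 25% of each estimate (Case 25).

```python
def run_sprt(t_grid, data, b_hat, N_hat, c_hat, alpha_hat, beta_hat,
             frac=0.25, risk_a=0.05, risk_b=0.05):
    # Build m0 (parameters shifted down by delta) and m1 (shifted up), then
    # test N(t) against the Equation (17) bounds at each observation time.
    d_b, d_N, d_c = frac * b_hat, frac * N_hat, frac * c_hat
    for t, n_t in zip(t_grid, data):
        m0 = m_new(t, b_hat - d_b, c_hat - d_c, alpha_hat, beta_hat, N_hat - d_N)
        m1 = m_new(t, b_hat + d_b, c_hat + d_c, alpha_hat, beta_hat, N_hat + d_N)
        verdict = sprt_decision(n_t, m0, m1, risk_a, risk_b)
        print(f"t = {t:2d}  N(t) = {n_t:3d}  ->  {verdict}")
        if verdict != "Continue":
            return verdict  # stop testing as soon as a decision is reached
    return "Continue"
```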
To compare the SPRT results, various cases of δ are considered in this study, and Table 9 presents 30 cases of δ values for b , N , and c .
Table 10, Table 11, Table 12, Table 13, Table 14 and Table 15 show the SPRT results for Dataset 1. The SPRT results from Case 1 to Case 20 are “Continue”, which indicates “collect data for the next time point and test again.” From Case 21 to Case 30, the result is “Reject” at t = 6, which indicates “stop data collection and reject the reliability.” If the result were “Accept”, it would indicate “stop data collection and accept the reliability.” As the value of δ increases, the acceptance and rejection regions grow; as the value of δ decreases, the “Continue” region grows. Therefore, determining an appropriate level of δ is important for the SPRT.
From Section 2.1, we can estimate the reliability function R(x|t) of Dataset 1, where x is given as 0.1. Figure 4 shows the results: the reliability decreases sharply until just before time point 5, which is attributable to the rapid increase in the number of failures in Dataset 1 (Table 2). The SPRT results (Cases 21–30) rejected product reliability at the 6th time point, which aligns with the substantial number of failures observed in Dataset 1 up to that point. Although Dataset 1 was tested for 12 weeks, according to the SPRT results, testing could have been discontinued at the 6th week and efforts directed toward improving reliability.
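The shape of the Figure 4 curve can be reproduced in the same framework, as in the sketch below. The parameter values here are illustrative placeholders rather than the paper’s fitted estimates (only b̂, N̂, and ĉ are listed explicitly in Table 9), so the printed values will not match the figure exactly.

```python
import numpy as np

# Illustrative placeholder parameters (not the fitted values from the paper).
params = dict(b=1.5, c=17.5, alpha=2.0, beta=80.0, N=60.0)

for t in np.arange(1.0, 12.5, 0.5):
    # R(x|t) with x = 0.1, as in Figure 4 (Equation (4)).
    r = np.exp(-(m_new(t + 0.1, **params) - m_new(t, **params)))
    print(f"t = {t:4.1f}   R(0.1|t) = {r:.4f}")
```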

5. Discussion

Most NHPP SRGMs assume that the testing environment is the same as the operating environment and that software failures occur independently. In this study, we propose a new NHPP SRGM that assumes uncertain operating environments and dependent failures. The numerical examples demonstrate the superiority of the proposed model over models that consider only uncertain operating environments (VTUB) or only dependent failures (DPF1 and DPF2); thus, the proposed model estimates the number of failures better than the existing NHPP SRGMs. This study also demonstrates how software reliability can be estimated with the proposed model by applying the SPRT. As the value of δ increases, the “Continue” region becomes narrower and the “Accept”/“Reject” regions become wider; therefore, it is important to choose an appropriate level of δ, and further research on this matter is needed. Wood [43] explained that SRGMs can be used to predict the number of failures and provide software reliability information to consumers. This study illustrates that the proposed model can be utilized in real environments.

6. Conclusions

This study had two objectives. First, we proposed a model that considered both dependent failures and uncertain operating environments. The results of the numerical examples demonstrated that the proposed model exhibited a significantly better fit than the models that considered only dependent failures (DPF 1 and DPF 2) or uncertain operating environments (VTUB).
Second, by leveraging the proposed model, we introduced a method for assessing software reliability through the application of the SPRT. Specifically, although the dataset was actually tested for 12 weeks, the SPRT results indicate that testing could have been discontinued at the 6th week, with the conclusion that measures should be taken to improve reliability. From the dataset and the values of the reliability function, it was observed that more failures occurred up to that time point than after it. In other words, even with a limited dataset, this study achieved the goal of early reliability assessment by applying the SPRT. Further studies linking the SPRT with software release policies will contribute to efficient development planning processes.

Author Contributions

Conceptualization, H.P.; funding acquisition, I.C.; software, D.L.; writing—original draft, D.L.; writing—review and editing, I.C. and H.P. All three authors contributed equally to this study. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the Basic Science Research Program of the National Research Foundation of Korea (NRF), funded by the Ministry of Education (NRF-2021R1F1A1048592 and 2021R1A6A3A01086716).

Data Availability Statement

The data that support the findings of this study are openly available in reference number [29].

Acknowledgments

Many thanks to the reviewers for their careful reading and valuable comments, which improved the presentation of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Huang, Y.-S.; Chiu, K.-C.; Chen, W.-M. A software reliability growth model for imperfect debugging. J. Syst. Softw. 2022, 188, 111267. [Google Scholar] [CrossRef]
  2. Luo, H.; Xu, L.; He, L.; Jiang, L.; Long, T. A Novel Software Reliability Growth Model Based on Generalized Imperfect Debugging NHPP Framework. IEEE Access 2023, 11, 71573–71593. [Google Scholar] [CrossRef]
  3. Chiu, K.C.; Huang, Y.S.; Huang, I.C. A study of software reliability growth with imperfect debugging for time-dependent potential errors. Int. J. Ind. Eng.-Theory Appl. Pract. 2019, 26. [Google Scholar] [CrossRef]
  4. Gupta, R.; Jain, M.; Jain, A. Software Reliability Growth Model in Distributed Environment Subject to Debugging Time Lag; Springer: Berlin/Heidelberg, Germany, 2019; pp. 105–118. [Google Scholar] [CrossRef]
  5. Zhang, C.; Yuan, Y.; Jiang, W.; Sun, Z.; Ding, Y.; Fan, M.; Li, W.; Wen, Y.; Song, W.; Liu, K. Software Reliability Model Related to Total Number of Faults under Imperfect Debugging; Springer: Berlin/Heidelberg, Germany, 2022; pp. 48–60. [Google Scholar] [CrossRef]
  6. Nguyen, H.C.; Huynh, Q.T. New non-homogeneous Poisson process software reliability model based on a 3-parameter S-shaped function. IET Softw. 2022, 16, 214–232. [Google Scholar] [CrossRef]
  7. Pradhan, V.; Dhar, J.; Kumar, A.; Bhargava, A. An S-Shaped Fault Detection and Correction SRGM Subject to Gamma-Distributed Random Field Environment and Release Time Optimization; Springer: Berlin/Heidelberg, Germany, 2020; pp. 285–300. [Google Scholar]
  8. Pradhan, S.K.; Kumar, A.; Kumar, V. A Testing Coverage Based SRGM Subject to the Uncertainty of the Operating Environment. In Proceedings of the 1st International Online Conference on Mathematics and Applications, online, 1–15 May 2023; MDPI: Basel, Switzerland, 2023; p. 44. [Google Scholar]
  9. Pradhan, S.K.; Kumar, A.; Kumar, V. A New Software Reliability Growth Model with Testing Coverage and Uncertainty of Operating Environment. Comput. Sci. Math. Forum. 2023, 7, 44. [Google Scholar] [CrossRef]
  10. Pradhan, V.; Dhar, J.; Kumar, A. Testing coverage-based software reliability growth model considering uncertainty of operating environment. Syst. Eng. 2023, 26, 449–462. [Google Scholar] [CrossRef]
  11. Haque, M.A.; Ahmad, N. Software reliability modeling under an uncertain testing environment. Int. J. Model. Simul. 2023, 1–7. [Google Scholar] [CrossRef]
  12. Chatterjee, S.; Saha, D.; Sharma, A.; Verma, Y. Reliability and optimal release time analysis for multi up-gradation software with imperfect debugging and varied testing coverage under the effect of random field environments. Ann. Oper. Res. 2022, 312, 65–85. [Google Scholar] [CrossRef]
  13. Lee, D.H.; Chang, I.H.; Pham, H. Software Reliability Model with Dependent Failures and SPRT. Mathematics 2020, 8, 1366. [Google Scholar] [CrossRef]
  14. Kim, Y.S.; Song, K.Y.; Pham, H.; Chang, I.H. A Software Reliability Model with Dependent Failure and Optimal Release Time. Symmetry 2022, 14, 343. [Google Scholar] [CrossRef]
  15. Raheem, A.R.; Akthar, S.; Rafi, S.M. An Imperfect Debugging Software Reliability Growth Model: Optimal Release Problems through Warranty Period based on Software Maintenance Cost Model. Rev. Geintec 2021, 11, 4623–4631. [Google Scholar] [CrossRef]
  16. Minamino, Y.; Inoue, S.; Yamada, S. Change-point–based software reliability modeling and its application for software development management. In Recent Advancements in Software Reliability Assurance; CRC Press: Boca Raton, FL, USA, 2019; pp. 59–92. [Google Scholar]
  17. Ke, S.Z.; Huang, C.Y. Software reliability prediction and management: A multiple change-point model approach. Qual. Reliab. Eng. Int. 2020, 36, 1678–1707. [Google Scholar] [CrossRef]
  18. Saxena, P.; Kumar, V.; Ram, M. A novel CRITIC-TOPSIS approach for optimal selection of software reliability growth model (SRGM). Qual. Reliab. Eng. Int. 2022, 38, 2501–2520. [Google Scholar] [CrossRef]
  19. Kumar, V.; Saxena, P.; Garg, H. Selection of optimal software reliability growth models using an integrated entropy–Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) approach. In Mathematical Methods in the Applied Sciences; Wiley: Hoboken, NJ, USA, 2021. [Google Scholar] [CrossRef]
  20. Garg, R.; Raheja, S.; Garg, R.K. Decision Support System for Optimal Selection of Software Reliability Growth Models Using a Hybrid Approach. IEEE Trans. Reliab. 2022, 71, 149–161. [Google Scholar] [CrossRef]
  21. Yaghoobi, T. Selection of optimal software reliability growth model using a diversity index. Soft Comput. 2021, 25, 5339–5353. [Google Scholar] [CrossRef]
  22. Zhu, M. A new framework of complex system reliability with imperfect maintenance policy. Ann. Oper. Res. 2022, 312, 553–579. [Google Scholar] [CrossRef]
  23. Wang, J.; Zhang, C. Software reliability prediction using a deep learning model based on the RNN encoder–decoder. Reliab. Eng. Syst. Saf. 2018, 170, 73–82. [Google Scholar] [CrossRef]
  24. San, K.K.; Washizaki, H.; Fukazawa, Y.; Honda, K.; Taga, M.; Matsuzaki, A. Deep Cross-Project Software Reliability Growth Model Using Project Similarity-Based Clustering. Mathematics 2021, 9, 2945. [Google Scholar] [CrossRef]
  25. Li, L. Software reliability growth fault correction model based on machine learning and neural network algorithm. Microprocess. Microsyst. 2021, 80, 103538. [Google Scholar] [CrossRef]
  26. Banga, M.; Bansal, A.; Singh, A. Implementation of machine learning techniques in software reliability: A framework. In Proceedings of the 2019 International Conference on Automation, Computational and Technology Management (ICACTM), London, UK, 24–26 April 2019; IEEE: New York, NY, USA, 2019; pp. 241–245. [Google Scholar] [CrossRef]
  27. Wald, A. Sequential Analysis; Dover Publications: Mineola, NY, USA, 2004; ISBN 978-0-486-61579-0. [Google Scholar]
  28. Stieber, H.A. Statistical quality control: How to detect unreliable software components. In Proceedings of the Eighth International Symposium on Software Reliability Engineering, Albuquerque, NM, USA, 2–5 November 1997; IEEE Computer Society: Washington, DC, USA, 1997; pp. 8–12. [Google Scholar]
  29. Pham, H. System Software Reliability; Springer: London, UK, 2006. [Google Scholar]
  30. Pham, H.; Nordmann, L.; Zhang, X. A general imperfect-software-debugging model with S-shaped fault-detection rate. IEEE Trans. Reliab. 1999, 48, 169–175. [Google Scholar] [CrossRef]
  31. Pham, H. A new software reliability model with Vtub-shaped fault-detection rate and the uncertainty of operating environments. Optimization 2014, 63, 1481–1490. [Google Scholar] [CrossRef]
  32. Yamada, S.; Ohba, M.; Osaki, S. S-shaped reliability growth modeling for software fault detection. IEEE Trans. Reliab. 1983, 32, 475–484. [Google Scholar] [CrossRef]
  33. Goel, A.L.; Okumoto, K. Time-Dependent Error-Detection Rate Model for Software Reliability and Other Performance Measures. IEEE Trans. Reliab. 1979, 28, 206–211. [Google Scholar] [CrossRef]
  34. Yamada, S.; Ohba, M.; Osaki, S. S-shaped Software Reliability Growth Models and Their Applications. IEEE Trans. Reliab. 1984, 33, 289–292. [Google Scholar] [CrossRef]
  35. Yamada, S.; Tokuno, K.; Osaki, S. Imperfect debugging models with fault introduction rate for software reliability assessment. Int. J. Syst. Sci. 1992, 23, 2241–2252. [Google Scholar] [CrossRef]
  36. Pham, H.; Zhang, X. An NHPP Software Reliability Model and Its Comparison. Int. J. Reliab. Qual. Saf. Eng. 1997, 04, 269–282. [Google Scholar] [CrossRef]
  37. Chang, I.H.; Pham, H.; Lee, S.W.; Song, K.Y. A testing-coverage software reliability model with the uncertainty of operating environments. Int. J. Syst. Sci. Oper. Logist. 2014, 1, 220–227. [Google Scholar] [CrossRef]
  38. Song, K.Y.; Chang, I.H.; Pham, H. A software reliability model with a Weibull fault detection rate function subject to operating environments. Appl. Sci. 2017, 7, 983. [Google Scholar] [CrossRef]
  39. Li, Q.; Pham, H. A testing-coverage software reliability model considering fault removal efficiency and error generation. PLoS ONE 2017, 12, e0181524. [Google Scholar] [CrossRef]
  40. Akaike, H. A new look at the statistical model identification. IEEE Trans. Automat. Contr. 1974, 19, 716–723. [Google Scholar] [CrossRef]
  41. Pillai, K.; Sukumaran Nair, V.S. A model for software development effort and cost estimation. IEEE Trans. Softw. Eng. 1997, 23, 485–497. [Google Scholar] [CrossRef]
  42. Anjum, M.; Haque, M.A.; Ahmad, N. Analysis and ranking of software reliability models based on weighted criteria value. Int. J. Inf. Technol. Comput. Sci. 2013, 5, 1–14. [Google Scholar] [CrossRef]
  43. Wood, A. Software Reliability Growth Models; TANDEM Technical Report; Tandem Computers: Cupertino, CA, USA, 1996; Volume 96. [Google Scholar]
Figure 1. Reliable region of SPRT: (a) rejection at the final time point (red dot); (b) acceptance at the final time point (red dot).
Figure 2. The top three models of main criteria values for Dataset 1: (a) MSE; (b) PRR; (c) PP.
Figure 3. The top three models of main criteria values for Dataset 2: (a) MSE; (b) PRR; (c) PP.
Figure 4. Reliability function of Dataset 1.
Table 1. Mean value functions for the existing NHPP SRGMs and the proposed model.

| No. | Model | m(t) | a(t), b(t) |
|---|---|---|---|
| 1 | DPF1 [13] (dependent failure model 1) | a / (1 + ((a − h)/h)((b + c)/(c + b e^{bt}))^{a/b}) | a(t) = a; b(t) = b/(1 + c e^{−bt}) |
| 2 | DPF2 [14] (dependent failure model 2) | a / (1 + ((a − h)/h)((1 + c)/(c + e^{bt}))^{a}) | a(t) = a; b(t) = b²t/(1 + bt) |
| 3 | DS [32] (delayed S-shaped model) | a(1 − (1 + bt) e^{−bt}) | a(t) = a; b(t) = b²t/(1 + bt) |
| 4 | GO [33] (by Goel–Okumoto) | a(1 − e^{−bt}) | a(t) = a; b(t) = b |
| 5 | IS [34] (inflection S-shaped model) | a(1 − e^{−bt}) / (1 + β e^{−bt}) | a(t) = a; b(t) = b/(1 + β e^{−bt}) |
| 6 | YID [35] (imperfect debugging model by Yamada) | a(1 − e^{−bt})(1 − α/b) + α a t | a(t) = a(1 + αt); b(t) = b |
| 7 | PNZ [30] (by Pham–Nordmann–Zhang) | a[(1 − e^{−bt})(1 − α/b) + αt] / (1 + β e^{−bt}) | a(t) = a(1 + αt); b(t) = b/(1 + β e^{−bt}) |
| 8 | PZ [36] (by Pham–Zhang) | [(c + a)(1 − e^{−bt}) − (ab/(b − α))(e^{−αt} − e^{−bt})] / (1 + β e^{−bt}) | a(t) = c + a(1 − e^{−αt}); b(t) = b/(1 + β e^{−bt}) |
| 9 | TC [37] (testing coverage model) | N(1 − (β/(β + (at)^b))^α) | a(t) = N; c(t) = 1 − e^{−(at)^b} |
| 10 | VTUB [31] (Vtub model) | N(1 − (β/(β + a^{t^b} − 1))^α) | a(t) = N; b(t) = b ln(a) t^{b−1} a^{t^b} |
| 11 | NEW | N(1 − (β/(β + (c/b) ln((α + e^{bt})/(1 + α))))^α) | a(t) = N; b(t) = c/(1 + α e^{−bt}) |
Table 2. Dataset 1 (the unit of time is weeks).

| Time | Failures | Cumulative Failures | Time | Failures | Cumulative Failures |
|---|---|---|---|---|---|
| 1 | 10 | 10 | 7 | 4 | 40 |
| 2 | 2 | 12 | 8 | 3 | 43 |
| 3 | 4 | 16 | 9 | 1 | 44 |
| 4 | 6 | 22 | 10 | 6 | 50 |
| 5 | 6 | 28 | 11 | 1 | 51 |
| 6 | 8 | 36 | 12 | 4 | 55 |
Table 3. Dataset 2 (the unit of time is hours).

| Time | Failures | Cumulative Failures | Time | Failures | Cumulative Failures |
|---|---|---|---|---|---|
| 1 | 27 | 27 | 14 | 5 | 111 |
| 2 | 16 | 43 | 15 | 5 | 116 |
| 3 | 11 | 54 | 16 | 6 | 122 |
| 4 | 10 | 64 | 17 | 0 | 122 |
| 5 | 11 | 75 | 18 | 5 | 127 |
| 6 | 8 | 83 | 19 | 1 | 128 |
| 7 | 1 | 84 | 20 | 1 | 129 |
| 8 | 5 | 89 | 21 | 2 | 131 |
| 9 | 3 | 92 | 22 | 1 | 132 |
| 10 | 1 | 93 | 23 | 2 | 134 |
| 11 | 4 | 97 | 24 | 1 | 135 |
| 12 | 7 | 104 | 25 | 1 | 136 |
| 13 | 2 | 106 |  |  |  |
Table 4. Criteria.

| No. | Criterion | Formula |
|---|---|---|
| 1 | Mean-square error (MSE) [29] | Σᵢ₌₁ⁿ (m̂(tᵢ) − yᵢ)² / (n − m) |
| 2 | Predictive ratio risk (PRR) [29] | Σᵢ₌₁ⁿ ((m̂(tᵢ) − yᵢ)/m̂(tᵢ))² |
| 3 | Predictive power (PP) [29] | Σᵢ₌₁ⁿ ((m̂(tᵢ) − yᵢ)/yᵢ)² |
| 4 | Sum of absolute errors (SAE) [38] | Σᵢ₌₁ⁿ ∣m̂(tᵢ) − yᵢ∣ |
| 5 | R-square (R²) [39] | 1 − Σᵢ₌₁ⁿ (m̂(tᵢ) − yᵢ)² / Σᵢ₌₁ⁿ (yᵢ − ȳ)² |
| 6 | Akaike’s information criterion (AIC) [40] | −2 log L + 2m |
| 7 | Predicted relative variation (PRV) [41] | √( Σᵢ₌₁ⁿ (m̂(tᵢ) − yᵢ − Bias)² / (n − 1) ) |
| 8 | Root-mean-square prediction error (RMSPE) [41] | √( Bias² + PRV² ) |
| 9 | Mean absolute error (MAE) [42] | Σᵢ₌₁ⁿ ∣m̂(tᵢ) − yᵢ∣ / (n − m) |
| 10 | Mean error of prediction (MEOP) [42] | Σᵢ₌₁ⁿ ∣m̂(tᵢ) − yᵢ∣ / (n − m + 1) |
Table 5. Parameter estimation for Dataset 1.

| No. | Model | â | b̂ | α̂ | β̂ | N̂ | ĉ |
|---|---|---|---|---|---|---|---|
| 1 | GO | 94.344 | 0.0733 | - | - | - | - |
| 2 | DS | 57.478 | 0.344 | - | - | - | - |
| 3 | IS | 65.781 | 0.206 | - | 1.293 | - | - |
| 4 | PNZ | 64.922 | 0.208 | 0.001 | 1.286 | - | - |
| 5 | PZ | 7.617 | 0.210 | 0.005 | 1.321 | - | 64.992 |
| 6 | TC | 0.005 | 1.075 | 2001.000 | 84.681 | 80.373 | - |
| 7 | VTUB | 5.0693 | 1.793 | 0.0181 | 0.0004 | 57.6685 | - |
| 8 | DPF1 | 55.893 | 0.004 | - | - | - | 0.548 |
| 9 | DPF2 | 56.058 | 0.008 | - | - | - | 0.093 |
| 10 | NEW | - | 1.490 | 43.371 | 2.098 | 83.496 | 17.472 |
Table 6. Parameter estimation for Dataset 2.

| No. | Model | â | b̂ | α̂ | β̂ | N̂ | ĉ |
|---|---|---|---|---|---|---|---|
| 1 | GO | 135.800 | 0.139 | - | - | - | - |
| 2 | DS | 124.630 | 0.357 | - | - | - | - |
| 3 | IS | 136.090 | 0.138 | - | 0.0001 | - | - |
| 4 | PNZ | 80.564 | 0.347 | 0.034 | 0.0001 | - | - |
| 5 | PZ | 0.0002 | 0.139 | 10,000 | 0.0001 | - | 135.800 |
| 6 | TC | 0.9920 | 0.716 | 0.468 | 4.740 | 335.580 | - |
| 7 | VTUB | 1.5416 | 0.7043 | 90.2943 | 0.6996 | 187.0933 | - |
| 8 | DPF1 | 136.830 | 0.001 | - | - | - | 0.724 |
| 9 | DPF2 | 137.416 | 0.007 | - | - | - | 4.477 |
| 10 | NEW | - | 6.571 | 0.480 | 0.484 | 232.760 | 0.010 |
Table 7. Comparison of the criteria values of the models for Dataset 1.

| Model | MSE | PRR | PP | SAE | R² | AIC | PRV | RMSPE | MAE | MEOP |
|---|---|---|---|---|---|---|---|---|---|---|
| GO | 4.0245 | 0.2932 | 0.1627 | 19.4170 | 0.9855 | 57.7076 | 58.6775 | 1.9120 | 1.9127 | 1.9417 |
| DS | 8.2096 | 7.3679 | 0.6177 | 20.9540 | 0.9704 | 69.6251 | 70.5950 | 2.6305 | 2.7236 | 2.0954 |
| IS | 4.0555 | 0.4815 | 0.1905 | 17.0520 | 0.9868 | 60.1451 | 61.5998 | 1.8126 | 1.8208 | 1.8947 |
| PNZ | 4.5632 | 0.4818 | 0.1906 | 17.0566 | 0.9868 | 62.1389 | 64.0786 | 1.8128 | 1.8210 | 2.1321 |
| PZ | 5.2153 | 0.4890 | 0.1917 | 17.0459 | 0.9868 | 64.1689 | 66.5934 | 1.8125 | 1.8210 | 2.4351 |
| TC | 5.6420 | 0.4307 | 0.1888 | 18.3723 | 0.9857 | 64.2519 | 66.6765 | 1.8906 | 1.8945 | 2.6246 |
| VTUB | 2.9516 | 0.0320 | 0.0296 | 12.6030 | 0.9925 | 59.5514 | 61.9760 | 1.3688 | 1.3704 | 1.8004 |
| DPF1 | 2.8201 | 0.0276 | 0.0270 | 12.7389 | 0.9919 | 58.6958 | 60.6354 | 1.4321 | 1.4321 | 1.5924 |
| DPF2 | 2.7946 | 0.0283 | 0.0275 | 12.7515 | 0.9919 | 58.6274 | 60.5670 | 1.4256 | 1.4256 | 1.5939 |
| NEW | 1.7560 | 0.0078 | 0.0079 | 9.0969 | 0.9956 | 57.1869 | 59.6115 | 1.0571 | 1.0571 | 1.2996 |
Table 8. Comparison of the criteria values of the models for Dataset 2.

| Model | MSE | PRR | PP | SAE | R² | AIC | PRV | RMSPE | MAE | MEOP |
|---|---|---|---|---|---|---|---|---|---|---|
| GO | 33.8121 | 0.4650 | 0.2567 | 119.2816 | 0.9658 | 122.0078 | 124.4456 | 5.6246 | 5.6897 | 5.1862 |
| DS | 134.5741 | 12.6273 | 1.1757 | 239.4995 | 0.8641 | 210.6486 | 213.0863 | 11.1425 | 11.3479 | 10.4130 |
| IS | 35.3621 | 0.4784 | 0.2614 | 118.4177 | 0.9658 | 123.8758 | 127.5324 | 5.6203 | 5.6905 | 5.3826 |
| PNZ | 9.8972 | 0.0324 | 0.0293 | 60.7847 | 0.9909 | 118.4497 | 123.3252 | 2.9414 | 2.9427 | 2.8945 |
| PZ | 38.8875 | 0.4653 | 0.2567 | 119.2828 | 0.9658 | 128.0082 | 134.1026 | 5.6246 | 5.6899 | 5.9641 |
| TC | 7.6738 | 0.0200 | 0.0201 | 47.3312 | 0.9933 | 116.9172 | 123.0116 | 2.5288 | 2.5288 | 2.3666 |
| VTUB | 7.7044 | 0.0223 | 0.0226 | 47.6464 | 0.9932 | 116.8753 | 122.9696 | 2.5337 | 2.5338 | 2.3823 |
| DPF1 | 27.9981 | 0.1921 | 0.3654 | 86.1298 | 0.9742 | 141.7791 | 146.6546 | 4.9481 | 4.9495 | 4.1014 |
| DPF2 | 29.5188 | 0.2080 | 0.3988 | 84.9504 | 0.9728 | 143.8089 | 148.6844 | 5.0737 | 5.0819 | 4.0453 |
| NEW | 7.2890 | 0.0163 | 0.0161 | 45.6681 | 0.9936 | 116.7360 | 122.8303 | 2.4646 | 2.4646 | 2.2834 |
Table 9. Estimation parameters of Dataset 1.

| Parameter | δ (30 cases) |
|---|---|
| b̂ = 1.490 | δ₁ = 1.490 × 0.01, δ₂ = 1.490 × 0.02, δ₃ = 1.490 × 0.03, …, δ₂₈ = 1.490 × 0.28, δ₂₉ = 1.490 × 0.29, δ₃₀ = 1.490 × 0.30 |
| N̂ = 43.371 | δ₁ = 43.371 × 0.01, δ₂ = 43.371 × 0.02, δ₃ = 43.371 × 0.03, …, δ₂₈ = 43.371 × 0.28, δ₂₉ = 43.371 × 0.29, δ₃₀ = 43.371 × 0.30 |
| ĉ = 17.472 | δ₁ = 17.472 × 0.01, δ₂ = 17.472 × 0.02, δ₃ = 17.472 × 0.03, …, δ₂₈ = 17.472 × 0.28, δ₂₉ = 17.472 × 0.29, δ₃₀ = 17.472 × 0.30 |
Table 10. Comparison of SPRT results for Dataset 1 (Cases 1–5).

| T | Data | Case 1 B | Case 1 A | Case 2 B | Case 2 A | Case 3 B | Case 3 A | Case 4 B | Case 4 A | Case 5 B | Case 5 A |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 10 | −91.6485 | 111.9572 | −40.7429 | 61.0500 | −23.7732 | 44.0777 | −15.2878 | 35.5887 | −10.1965 | 30.4928 |
| 2 | 12 | −63.4548 | 86.9885 | −25.8408 | 49.3760 | −13.3007 | 36.8385 | −7.0287 | 30.5700 | −3.2636 | 26.8096 |
| 3 | 16 | −37.5227 | 69.4399 | −10.7830 | 42.7032 | −1.8689 | 33.7940 | 2.5897 | 29.3425 | 5.2665 | 26.6746 |
| 4 | 22 | −27.5646 | 72.2082 | −2.6239 | 47.2618 | 5.6860 | 38.9424 | 9.8369 | 34.7784 | 12.3229 | 32.2753 |
| 5 | 28 | −24.5062 | 82.1106 | 2.1452 | 55.4461 | 11.0220 | 46.5477 | 15.4519 | 42.0875 | 18.1005 | 39.4000 |
| 6 | 36 | −23.4268 | 92.2987 | 5.5017 | 63.3539 | 15.1360 | 53.6925 | 19.9426 | 48.8478 | 22.8150 | 45.9266 |
| 7 | 40 | −22.9201 | 101.2277 | 8.1138 | 70.1761 | 18.4491 | 59.8111 | 23.6054 | 54.6134 | 26.6865 | 51.4790 |
| 8 | 43 | −22.6278 | 108.8320 | 10.2339 | 75.9518 | 21.1782 | 64.9767 | 26.6384 | 59.4733 | 29.9013 | 56.1548 |
| 9 | 44 | −22.4486 | 115.3179 | 11.9897 | 80.8607 | 23.4592 | 69.3597 | 29.1818 | 63.5929 | 32.6019 | 60.1160 |
| 10 | 50 | −22.3433 | 120.8964 | 13.4632 | 85.0707 | 25.3887 | 73.1133 | 31.3391 | 67.1181 | 34.8956 | 63.5040 |
| 11 | 51 | −22.2903 | 125.7391 | 14.7136 | 88.7159 | 27.0381 | 76.3591 | 33.1879 | 70.1642 | 36.8639 | 66.4301 |
| 12 | 55 | −22.2751 | 129.9798 | 15.7851 | 91.9001 | 28.4617 | 79.1912 | 34.7875 | 72.8201 | 38.5691 | 68.9802 |

Result for all five cases: Continue.
Table 11. Comparison of SPRT results for Dataset 1 (Cases 6–10).

| T | Data | Case 6 B | Case 6 A | Case 7 B | Case 7 A | Case 8 B | Case 8 A | Case 9 B | Case 9 A | Case 10 B | Case 10 A |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 10 | −6.8025 | 27.0931 | −4.3786 | 24.6624 | −2.5613 | 22.8372 | −1.14839 | 21.41546 | −0.0188 | 20.2760 |
| 2 | 12 | −0.7517 | 24.3033 | 1.0442 | 22.5140 | 2.3929 | 21.1729 | 3.44359 | 20.13081 | 4.2858 | 19.2981 |
| 3 | 16 | 7.0530 | 24.8990 | 8.3312 | 23.6337 | 9.2920 | 22.6877 | 10.04157 | 21.95486 | 10.6435 | 21.3715 |
| 4 | 22 | 13.9757 | 30.6019 | 15.1516 | 29.4019 | 16.0287 | 28.4971 | 16.70621 | 27.78849 | 17.2434 | 27.2169 |
| 5 | 28 | 19.8563 | 37.5968 | 21.1001 | 36.2971 | 22.0223 | 35.3105 | 22.72883 | 34.53135 | 23.2831 | 33.8961 |
| 6 | 36 | 24.7175 | 43.9644 | 26.0635 | 42.5480 | 27.0597 | 41.4705 | 27.82083 | 40.61732 | 28.4158 | 39.9195 |
| 7 | 40 | 28.7271 | 49.3734 | 30.1705 | 47.8530 | 31.2385 | 46.6962 | 32.05422 | 45.77975 | 32.6915 | 45.0298 |
| 8 | 43 | 32.0626 | 53.9257 | 33.5915 | 52.3165 | 34.7230 | 51.0923 | 35.58749 | 50.12279 | 36.2631 | 49.3296 |
| 9 | 44 | 34.8675 | 57.7810 | 36.4707 | 56.0957 | 37.6575 | 54.8140 | 38.56454 | 53.79936 | 39.2739 | 52.9697 |
| 10 | 50 | 37.2520 | 61.0773 | 38.9198 | 59.3262 | 40.1549 | 57.9950 | 41.09925 | 56.94153 | 41.8382 | 56.0806 |
| 11 | 51 | 39.2999 | 63.9232 | 41.0244 | 62.1147 | 42.3019 | 60.7404 | 43.27912 | 59.65319 | 44.0442 | 58.7651 |
| 12 | 55 | 41.0753 | 66.4027 | 42.8500 | 64.5438 | 44.1650 | 63.1314 | 45.17131 | 62.01465 | 45.9596 | 61.1028 |

Result for all five cases: Continue.
Table 12. Comparison of SPRT results for Dataset 1 (Cases 11–15).

| T | Data | Case 11 B | Case 11 A | Case 12 B | Case 12 A | Case 13 B | Case 13 A | Case 14 B | Case 14 A | Case 15 B | Case 15 A |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 10 | 0.9045 | 19.3416 | 1.6731 | 18.5610 | 2.3225 | 17.8985 | 2.8781 | 17.3287 | 3.3586 | 16.8329 |
| 2 | 12 | 4.9765 | 18.6179 | 5.5537 | 18.0521 | 6.0437 | 17.5744 | 6.4652 | 17.1661 | 6.8321 | 16.8132 |
| 3 | 16 | 11.1384 | 20.8971 | 11.5532 | 20.5046 | 11.9066 | 20.1753 | 12.2119 | 19.8959 | 12.4789 | 19.6564 |
| 4 | 22 | 17.6783 | 26.7445 | 18.0360 | 26.3461 | 18.3340 | 26.0045 | 18.5850 | 25.7072 | 18.7981 | 25.4452 |
| 5 | 28 | 23.7255 | 33.3645 | 24.0831 | 32.9097 | 24.3745 | 32.5130 | 24.6132 | 32.1613 | 24.8089 | 31.8448 |
| 6 | 36 | 28.8886 | 39.3332 | 29.2682 | 38.8293 | 29.5751 | 38.3874 | 29.8237 | 37.9933 | 30.0246 | 37.6362 |
| 7 | 40 | 33.1975 | 44.3994 | 33.6035 | 43.8570 | 33.9311 | 43.3811 | 34.1959 | 42.9560 | 34.4093 | 42.5704 |
| 8 | 43 | 36.7997 | 48.6630 | 37.2305 | 48.0897 | 37.5784 | 47.5868 | 37.8598 | 47.1377 | 38.0867 | 46.7305 |
| 9 | 44 | 39.8377 | 52.2728 | 40.2907 | 51.6738 | 40.6570 | 51.1487 | 40.9537 | 50.6801 | 41.1935 | 50.2556 |
| 10 | 50 | 42.4260 | 55.3578 | 42.8987 | 54.7370 | 43.2815 | 54.1932 | 43.5921 | 53.7084 | 43.8436 | 53.2694 |
| 11 | 51 | 44.6532 | 58.0200 | 45.1435 | 57.3804 | 45.5410 | 56.8205 | 45.8640 | 56.3218 | 46.1262 | 55.8706 |
| 12 | 55 | 46.5875 | 60.3381 | 47.0935 | 59.6822 | 47.5042 | 59.1084 | 47.8384 | 58.5976 | 48.1102 | 58.1359 |

Result for all five cases: Continue.
Table 13. Comparison of SPRT results for Dataset 1 (Cases 16–20).

| T | Data | Case 16 B | Case 16 A | Case 17 B | Case 17 A | Case 18 B | Case 18 A | Case 19 B | Case 19 A | Case 20 B | Case 20 A |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 10 | 3.7780 | 16.3971 | 4.1469 | 16.0107 | 4.4737 | 15.6653 | 4.7649 | 15.3544 | 5.0258 | 15.0727 |
| 2 | 12 | 7.1545 | 16.5056 | 7.4406 | 16.2352 | 7.6962 | 15.9959 | 7.9264 | 15.7829 | 8.1349 | 15.5922 |
| 3 | 16 | 12.7150 | 19.4497 | 12.9257 | 19.2699 | 13.1153 | 19.1127 | 13.2874 | 18.9747 | 13.4446 | 18.8529 |
| 4 | 22 | 18.9802 | 25.2116 | 19.1368 | 25.0014 | 19.2718 | 24.8106 | 19.3887 | 24.6360 | 19.4901 | 24.4751 |
| 5 | 28 | 24.9691 | 31.5563 | 25.0995 | 31.2903 | 25.2045 | 31.0426 | 25.2875 | 30.8097 | 25.3516 | 30.5891 |
| 6 | 36 | 30.1858 | 37.3084 | 30.3133 | 37.0037 | 30.4120 | 36.7175 | 30.4856 | 36.4460 | 30.5372 | 36.1863 |
| 7 | 40 | 34.5797 | 42.2158 | 34.7138 | 41.8856 | 34.8165 | 41.5748 | 34.8918 | 41.2794 | 34.9430 | 40.9961 |
| 8 | 43 | 38.2682 | 46.3561 | 38.4112 | 46.0076 | 38.5209 | 45.6795 | 38.6016 | 45.3676 | 38.6567 | 45.0686 |
| 9 | 44 | 41.3858 | 49.8655 | 41.5378 | 49.5027 | 41.6551 | 49.1613 | 41.7421 | 48.8371 | 41.8023 | 48.5264 |
| 10 | 50 | 44.0459 | 52.8664 | 44.2065 | 52.4919 | 44.3311 | 52.1400 | 44.4243 | 51.8059 | 44.4899 | 51.4860 |
| 11 | 51 | 46.3377 | 55.4568 | 46.5061 | 55.0725 | 46.6376 | 54.7117 | 46.7368 | 54.3695 | 46.8075 | 54.0421 |
| 12 | 55 | 48.3300 | 57.7128 | 48.5057 | 57.3202 | 48.6435 | 56.9519 | 48.7483 | 56.6029 | 48.8239 | 56.2693 |

Result for all five cases: Continue.
Table 14. Comparison of SPRT cases for Dataset 1 (Cases 21–25).

| T | Data | Case 21 B | Case 21 A | Case 22 B | Case 22 A | Case 23 B | Case 23 A | Case 24 B | Case 24 A | Case 25 B | Case 25 A |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 10 | 5.2607 | 14.8159 | 5.4729 | 14.5805 | 5.6654 | 14.3638 | 5.8406 | 14.1631 | 6.0004 | 13.9767 |
| 2 | 12 | 8.3249 | 15.4207 | 8.4990 | 15.2658 | 8.6592 | 15.1253 | 8.8072 | 14.9975 | 8.9446 | 14.8808 |
| 3 | 16 | 13.5891 | 18.7453 | 13.7228 | 18.6499 | 13.8470 | 18.5651 | 13.9631 | 18.4897 | 14.0721 | 18.4226 |
| 4 | 22 | 19.5781 | 24.3260 | 19.6547 | 24.1871 | 19.7213 | 24.0570 | 19.7791 | 23.9346 | 19.8292 | 23.8192 |
| 5 | 28 | 25.3990 | 30.3786 | 25.4317 | 30.1766 | 25.4512 | 29.9817 | 25.4591 | 29.7928 | 25.4564 | 29.6089 |
| 6 | 36 | 30.5691 | 35.9361 | 30.5835 | 35.6935 | 30.5821 | 35.4568 | 30.5662 | 35.2248 | 30.5371 | 34.9965 |
| 7 | 40 |  |  |  |  |  |  |  |  |  |  |
| 12 | 55 |  |  |  |  |  |  |  |  |  |  |

Result for all five cases: Reject (at t = 6).
Table 15. Comparison of SPRT cases for Dataset 1 (Cases 26–30).

| T | Data | Case 26 B | Case 26 A | Case 27 B | Case 27 A | Case 28 B | Case 28 A | Case 29 B | Case 29 A | Case 30 B | Case 30 A |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 10 | 6.1466 | 13.8027 | 6.2806 | 13.6396 | 6.4036 | 13.4863 | 6.5167 | 13.3417 | 6.6209 | 13.2047 |
| 2 | 12 | 9.0727 | 14.7739 | 9.1923 | 14.6758 | 9.3045 | 14.5856 | 9.4099 | 14.5023 | 9.5093 | 14.4254 |
| 3 | 16 | 14.1747 | 18.3628 | 14.2718 | 18.3095 | 14.3640 | 18.2621 | 14.4517 | 18.2199 | 14.5353 | 18.1823 |
| 4 | 22 | 19.8727 | 23.7098 | 19.9102 | 23.6060 | 19.9424 | 23.5071 | 19.9701 | 23.4128 | 19.9937 | 23.3227 |
| 5 | 28 | 25.4443 | 29.4293 | 25.4237 | 29.2535 | 25.3953 | 29.0810 | 25.3599 | 28.9113 | 25.3182 | 28.7442 |
| 6 | 36 | 30.4958 | 34.7708 | 30.4433 | 34.5472 | 30.3803 | 34.3251 | 30.3077 | 34.1038 | 30.2260 | 33.8831 |
| 7 | 40 |  |  |  |  |  |  |  |  |  |  |
| 12 | 55 |  |  |  |  |  |  |  |  |  |  |

Result for all five cases: Reject (at t = 6).