Industries are constantly seeking ways to avoid corrective maintenance in order to reduce costs. Performing regular scheduled maintenance can help to mitigate this problem, but not necessarily in the most efficient way. In many real-life applications, one wants to predict the future failure time of equipment or devices that are expensive or have long lifetimes, to save costs and/or time. In this paper, statistical prediction was studied using the classical and Bayesian approaches based on a unified hybrid censoring scheme. Two prediction schemes were used: (1) a one-sample prediction scheme that predicted the unobserved future failure times of devices that did not complete the lifetime experiments; and (2) a two-sample prediction scheme to predict the ordered values of a future independent sample based on past data from a certain distribution. We chose to apply the results of the paper to the Burr-X model, due to the importance of this model in many fields, such as engineering, health, agriculture, and biology. Point and interval predictors of unobserved failure times under one- and two-sample prediction schemes were computed based on simulated data sets and two engineering applications. The results demonstrate the ability to predict the future failure of equipment using statistical prediction based on data collected from an engineering system.
Industries are constantly seeking ways to avoid corrective maintenance in order to reduce costs. Performing regular scheduled maintenance can help to mitigate this problem, but not necessarily in the most efficient way; see [1,2,3]. In condition-based maintenance, the main goal is to treat and transform data from an engineering system so that they can be used to build a data set for making statistical predictions about how the equipment will behave in the future and when it will fail.
In many practical situations, one desires to predict future observations from the same population of previous data. This may be done by constructing an interval that will include future observations with a certain probability.
The accuracy of a predictive interval depends on the sample size; however, complete testing is impractical in real settings, owing to advances in industrial design and technology, which result in very reliable products with long lifespans. Censoring has been implemented in this case for a variety of reasons, including a lack of available resources and the need to save costs. In general, only a small percentage of failure times are recorded when a censoring scheme is employed in a test environment.
Let X_{1:n} ≤ X_{2:n} ≤ … ≤ X_{n:n} be the ordered failure times of n identical units placed on a life-test, from a certain distribution with PDF f(x; Θ) and CDF F(x; Θ), where Θ is the vector of parameters. For fixed integers k, r ∈ {1, 2, …, n} with k < r, fixed time points 0 < T_1 < T_2, and depending on the relation between X_{k:n}, X_{r:n}, T_1 and T_2, a unified hybrid censoring scheme (UHCS) is defined by Balakrishnan with six decisions, as follows:
(1)
Stopping the experiment at T_1 if X_{k:n} < X_{r:n} < T_1;
(2)
Stopping the experiment at X_{r:n} if X_{k:n} < T_1 < X_{r:n} < T_2;
(3)
Stopping the experiment at T_2 if X_{k:n} < T_1 < T_2 < X_{r:n};
(4)
Stopping the experiment at X_{r:n} if T_1 < X_{k:n} < X_{r:n} < T_2;
(5)
Stopping the experiment at T_2 if T_1 < X_{k:n} < T_2 < X_{r:n};
(6)
Stopping the experiment at X_{k:n} if T_1 < T_2 < X_{k:n}.
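The six decisions above can be sketched as a small routine. The sketch below is illustrative (the function and variable names are ours): given the full set of ordered failure times, it returns the case number, the termination time T*, and the number D of observed failures.

```python
import bisect

def uhcs_stop(x, k, r, t1, t2):
    """Apply Balakrishnan's unified hybrid censoring rule.

    x   : sorted list of all n failure times (as if the test ran to completion)
    k<r : the two fixed order indices; t1<t2: the two fixed time points.
    Returns (case, t_star, d): the case number 1..6, the termination time,
    and the number of failures observed up to t_star.
    """
    xk, xr = x[k - 1], x[r - 1]
    d1 = bisect.bisect_left(x, t1)  # failures occurring before t1
    d2 = bisect.bisect_left(x, t2)  # failures occurring before t2
    if xk < t1:                     # k-th failure before t1
        if xr < t1:
            return 1, t1, d1        # case (1): stop at t1
        if xr < t2:
            return 2, xr, r         # case (2): stop at the r-th failure
        return 3, t2, d2            # case (3): stop at t2
    if xk < t2:                     # k-th failure between t1 and t2
        if xr < t2:
            return 4, xr, r         # case (4): stop at the r-th failure
        return 5, t2, d2            # case (5): stop at t2
    return 6, xk, k                 # case (6): stop at the k-th failure
```

Under this rule, the experiment always yields at least k observed failures, which is the flexibility advantage noted below.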
Let D denote the number of failures observed up to the termination point T* determined by the six cases above. Then, the likelihood function of this censored sample is as follows:

L(Θ | x) ∝ [∏_{i=1}^{D} f(x_{i:n}; Θ)] [1 − F(T*; Θ)]^{n−D}.
Many well-known censoring schemes can be considered as special cases of the studied UHCS for particular choices of k, r, T_1 and T_2: the generalized type-I and generalized type-II hybrid censoring schemes, see [4]; the type-I and type-II hybrid censoring schemes, see [5]; and conventional type-I and type-II censoring, see [6].
Among the advantages of the UHCS is that it is more flexible than the generalized type-I and generalized type-II hybrid censoring schemes; moreover, it guarantees more observations (at least k failures), which increases the accuracy of the predictive intervals.
Ref. [7] proposes the Burr-X distribution as a member of the Burr distribution family. This model is extremely useful in the fields of statistics and operations research. Engineering, health, agriculture, and biology are just some of the fields where it can be used to great effect.
A random variable X is said to have a Burr-X distribution with vector of parameters Θ = (α, λ) if the CDF is given by

F(x; α, λ) = (1 − e^{−(λx)^2})^α, x > 0, α, λ > 0.

The corresponding PDF and RF are given, respectively, as:

f(x; α, λ) = 2αλ^2 x e^{−(λx)^2} (1 − e^{−(λx)^2})^{α−1}, x > 0,

R(x; α, λ) = 1 − (1 − e^{−(λx)^2})^α, x > 0.
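Under the two-parameter form assumed here, F(x; α, λ) = (1 − e^{−(λx)^2})^α, the quantile function is available in closed form, so ordered Burr-X samples can be drawn by inverse transform. A minimal sketch (the function names are ours):

```python
import math
import random

def burr_x_cdf(x, alpha, lam):
    # CDF of the assumed two-parameter Burr-X (generalized Rayleigh) form.
    return (1.0 - math.exp(-(lam * x) ** 2)) ** alpha

def burr_x_quantile(u, alpha, lam):
    # Solve F(x) = u in closed form.
    return math.sqrt(-math.log(1.0 - u ** (1.0 / alpha))) / lam

def burr_x_ordered_sample(n, alpha, lam, seed=0):
    # Inverse-transform sampling, returned in ascending order.
    rng = random.Random(seed)
    return sorted(burr_x_quantile(rng.random(), alpha, lam) for _ in range(n))
```

An ordered sample like this plays the role of the complete life-test from which the UHCS informative sample is extracted.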
For more details about some Burr models with related inferences using classical and Bayesian approaches, see [8,9,10,11,12,13,14,15,16,17,18].
This paper makes several contributions: it studies the prediction problem under a unified hybrid censoring scheme using the classical and Bayesian approaches and compares the two approaches, analyzes two real engineering data sets using the Burr-X distribution, and applies the obtained results to these real data sets as illustrative examples.
This paper is organized as follows: the point and interval prediction problems under one- and two-sample prediction schemes are studied using the classical and Bayesian approaches in Section 2 and Section 3, respectively. In Section 4, the obtained results are applied to simulated and real data sets. Our conclusions are summarized in Section 5.
2. One-Sample Prediction
Assume that n items are placed in a life-time experiment, that this experiment will be terminated at a fixed point T*, and that the number of failures observed until this time is D. The previously observed ordered failures, denoted by X_{1:n} ≤ X_{2:n} ≤ … ≤ X_{D:n} and written for simplicity as x = (x_1, x_2, …, x_D), are called the informative sample. In Balakrishnan's UHCS, T* will equal T_1 in the first case, X_{r:n} in the second case, T_2 in the third case, X_{r:n} in the fourth case, T_2 in the fifth case and X_{k:n} in the sixth case. Moreover, D will equal D_1 in the first case, r in the second case, D_2 in the third case, r in the fourth case, D_2 in the fifth case, and k in the sixth case, where D_1 and D_2 denote the numbers of failures observed up to T_1 and T_2, respectively. In the one-sample prediction scheme, the future failure time X_{D+s:n}, s = 1, 2, …, n − D, will be predicted based on the informative sample.
In this section, the point and interval predictors of the future unknown failure time will be computed using classical and Bayesian methods.
First, the conditional PDF of the future failure time, given the vector of parameters Θ, should be derived as follows:
Based on the informative sample x, the conditional PDF of the future failure time Y = X_{D+s:n} given Θ is the PDF of the s-th ordered value from the n − D ordered values remaining after x_D, which can be written as (see [15,19,20,21]):

f(y | x, Θ) = [(n − D)! / ((s − 1)!(n − D − s)!)] [F(y; Θ) − F(x_D; Θ)]^{s−1} [1 − F(y; Θ)]^{n−D−s} f(y; Θ) [1 − F(x_D; Θ)]^{−(n−D)}, y > x_D.

Using this PDF, and substituting the appropriate T* and D for each case, the conditional PDF of the future failure time given x follows for all six cases of Balakrishnan's UHCS.
In this subsection, the maximum likelihood point and interval predictors of the future failure time were obtained using the following predictive likelihood function (see [22]):
Substituting from (1) and (6) in (7), we have
Substituting from (2)–(4) in (8), we have
2.1.1. Point Predictor
In this subsection, the point predictor of the future failure time will be obtained using two methods.
Method (1):
obtaining the values of the parameters and of y that maximize the logarithm of the predictive likelihood function; the maximizing values of the parameters are called the predictive maximum likelihood estimates, and the maximizing value of y is called the maximum likelihood predictor of the future failure time.
To maximize the logarithm of the predictive likelihood function, we differentiate it with respect to the parameters and y, set the resulting derivatives to zero, and solve the resulting nonlinear equations.
Method (2):
first, the maximum likelihood estimates (MLEs) of the parameters are obtained; the parameters in the conditional PDF of the future failure time are then replaced by their MLEs to obtain the maximum likelihood predictive function, and the point predictor of the future failure time is taken as the mathematical expectation of the random variable under this predictive function. To obtain the MLEs of the parameters, we differentiate the logarithm of the likelihood function, set the resulting equations to zero, and solve the resulting nonlinear equations.
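The MLE step in Method (2) can be sketched numerically for the assumed two-parameter Burr-X form. The log-likelihood below is the standard censored-data likelihood (D observed failures up to the termination time, with n − D right-censored survivors); the optimizer is a deliberately crude alternating grid search rather than the Newton-type solution of the score equations described above, and all names are ours.

```python
import math

def neg_log_lik(alpha, lam, data, t_star, n):
    """Negative log-likelihood of a censored Burr-X sample: `data` holds the
    D observed failure times; n - len(data) units survived past t_star."""
    if alpha <= 0 or lam <= 0:
        return float("inf")
    d = len(data)
    ll = d * math.log(2.0 * alpha * lam * lam)
    for x in data:
        z = (lam * x) ** 2
        ll += math.log(x) - z + (alpha - 1.0) * math.log(1.0 - math.exp(-z))
    # survival contribution of the n - d censored units
    surv = 1.0 - (1.0 - math.exp(-(lam * t_star) ** 2)) ** alpha
    if surv <= 0.0:                 # numerical underflow guard
        return float("inf")
    ll += (n - d) * math.log(surv)
    return -ll

def mle_grid(data, t_star, n, lo=0.05, hi=5.0, steps=200, sweeps=30):
    # Alternating one-dimensional grid minimization over (alpha, lambda).
    grid = [lo + i * (hi - lo) / steps for i in range(steps + 1)]
    a, b = 1.0, 1.0
    for _ in range(sweeps):
        a = min(grid, key=lambda v: neg_log_lik(v, b, data, t_star, n))
        b = min(grid, key=lambda v: neg_log_lik(a, v, data, t_star, n))
    return a, b
```

In practice one would solve the score equations or use a quasi-Newton routine; the grid search is only meant to make the plug-in idea concrete.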
Based on the studied UHCS, the maximum likelihood predictive function can be written in the form:
where A is a normalizing constant and has the value
So, the maximum likelihood predictor of the future failure time will be
where
2.1.2. Interval Predictor
A maximum likelihood predictive interval of the future failure time can be obtained by solving the following two nonlinear equations:
From (10) and (11) in (14), the two nonlinear equations in (14) can be rewritten to be of the form
By solving the previous system, the maximum likelihood predictive interval of the future failure time can be computed.
2.2. Bayesian Method (Bayesian Prediction)
Using the following bivariate prior suggested by [23,24]:
where the constants in the prior are the prior parameters (also known as hyperparameters). Replacing the corresponding functions in (1) by their definitions from (2) and (4), the posterior of the parameters can be written as:
Using the previous posterior and the conditional of given and , (6), after using the definition of and from (2) and (4), the Bayesian predictive of given will be as follows (see [22]):
and the Bayesian predictive interval of the future failure time can be obtained by solving the following two nonlinear equations:
It is clear that the previous system involves a double integration over the two parameters, which makes finding a solution for this system very complicated. In this situation, the Gibbs sampler and the Metropolis–Hastings algorithm were used to generate a random sample from the posterior; then, the system (21) takes the form
By solving this system, the Bayesian predictive interval of the future failure time will be obtained.
For more details about the Gibbs sampler and Metropolis–Hastings algorithms, see, for example [25,26,27,28].
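As a concrete illustration, the sketch below runs a random-walk Metropolis sampler on the posterior of the two Burr-X parameters. The independent gamma priors are an assumed stand-in for the paper's bivariate prior (16), and all names and tuning constants are ours.

```python
import math
import random

def log_posterior(alpha, lam, data, t_star, n, a=2.0, b=1.0):
    # Assumed independent Gamma(a, b) priors on alpha and lam (a stand-in
    # for the bivariate prior), plus the censored Burr-X log-likelihood,
    # up to an additive constant.
    if alpha <= 0 or lam <= 0:
        return -math.inf
    lp = (a - 1) * math.log(alpha) - b * alpha
    lp += (a - 1) * math.log(lam) - b * lam
    d = len(data)
    lp += d * math.log(2.0 * alpha * lam * lam)
    for x in data:
        z = (lam * x) ** 2
        lp += math.log(x) - z + (alpha - 1) * math.log(1.0 - math.exp(-z))
    surv = 1.0 - (1.0 - math.exp(-(lam * t_star) ** 2)) ** alpha
    if surv <= 0.0:
        return -math.inf
    return lp + (n - d) * math.log(surv)

def metropolis(data, t_star, n, steps=2000, scale=0.2, seed=1):
    # Random-walk Metropolis on (alpha, lambda); second half kept as draws.
    rng = random.Random(seed)
    cur = (1.0, 1.0)
    cur_lp = log_posterior(cur[0], cur[1], data, t_star, n)
    draws = []
    for _ in range(steps):
        prop = (cur[0] + rng.gauss(0.0, scale), cur[1] + rng.gauss(0.0, scale))
        lp = log_posterior(prop[0], prop[1], data, t_star, n)
        if rng.random() < math.exp(min(0.0, lp - cur_lp)):
            cur, cur_lp = prop, lp   # accept the proposal
        draws.append(cur)
    return draws[steps // 2:]
```

The Bayesian point predictor and predictive interval bounds are then approximated by Monte Carlo averages over these posterior draws, in the spirit of the system above.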
3. Two-Sample Prediction
Assume that x represents the informative sample from the studied UHCS and that y represents a future ordered sample of size m. It is assumed that the two samples are independent.
In this section, point and interval predictors of the future observation will be obtained using the classical and Bayesian methods. The conditional PDF of the s-th future observation Y_s given the vector of parameters Θ is the PDF of the s-th ordered value from the m ordered values, which can be written as (see [15,22]):

f(y_s | Θ) = [m! / ((s − 1)!(m − s)!)] [F(y_s; Θ)]^{s−1} [1 − F(y_s; Θ)]^{m−s} f(y_s; Θ), y_s > 0.
Using the definitions of the PDF and CDF from (2) and (4) in (23), the conditional PDF of the future observation given the parameters will be:
Based on the two-sample scheme and the same prior (16), the point and interval predictors of the future observation are derived in the following subsections.
3.1. Maximum Likelihood Prediction (Point and Interval Predictors)
The maximum likelihood predictive function can be obtained from (24) after replacing each parameter by its MLE, to be of the form
where B is a normalizing constant with the value
So, the maximum likelihood predictor of the future observation will be
A maximum likelihood predictive interval of the future observation can be obtained by solving the following two nonlinear equations:
From (25) and (26) in (28), the two nonlinear equations in (28) can be rewritten, to be of the form
By solving the previous system, the maximum likelihood predictive interval of the future observation can be computed.
3.2. Bayesian Prediction (Point and Interval Predictors)
The Bayesian predictive density of the future observation will be as follows:
where
where are normalizing constants.
The Bayesian point predictor of the future observation will equal
and the Bayesian predictive interval of the future observation can be obtained by solving the following two nonlinear equations:
Using the draws generated from the posterior, the system (33) will be of the form
By solving this system, the Bayesian predictive interval of the future observation will be obtained.
From the results of the second and third sections, it is clear that the classical (maximum likelihood) method of prediction, and of inference in general, depends only on an informative sample from the studied distribution under a suggested censoring scheme and does not use any additional information about the population parameters. The Bayes method, by contrast, depends on the same informative sample together with additional information about the population parameters, represented by the prior distribution of the parameters. This naturally leads to better results, a fact that the results obtained from the samples in the next section verify.
In case of absence of information on the population parameters, we have two choices. The first is to use the Bayes approach under a vague prior and the second is to use the classical method.
4. Results
In this section, one- and two-sample point and interval predictors were obtained using the classical and Bayesian approaches, based on simulated and real data sets.
4.1. Simulated Results
The predictive process takes in historical data to predict which areas and parts of an asset will fail and at what time. The technician can receive relevant and accurate data points remotely. The collected data are then analyzed, and predictive algorithms determine which parts are more likely to fail. This information is communicated to workers via collaboration tools and data visualization, with which they can perform maintenance work only on the parts that require it. By implementing a predictive maintenance solution (Figure 1), organizations will know when to schedule a specific part replacement and will be alerted to future degradation due to faulty parts.
In this section, the point and interval predictors of future failure times are computed, in one- and two-sample schemes, using the classical and Bayesian methods, based on a generated informative sample for different values of the censoring parameters, as follows:
For a given set of prior parameters, the population parameters are generated from the joint prior (16).
Making use of the parameter values obtained in step 1, a sample of size n of upper ordered values from the Burr-X distribution is generated.
For different values of the censoring parameters, an informative sample is generated from the complete sample in step 2.
For different values of the censoring parameters, the point and interval predictors of the future failure times are computed using the classical and Bayes methods in a one-sample scheme, as explained in Section 2.
The same is done in a two-sample scheme, as explained in Section 3.
For each future failure time, the point predictor, the predictive interval, the length of the interval, and the coverage probability are computed.
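The interval computation in the last step can be sketched most simply in the two-sample scheme, where the predictive CDF of the s-th of m future observations is a binomial sum in the plug-in CDF. The sketch below finds an equal-tail interval by bisection; it assumes the two-parameter Burr-X form with plug-in (ML) parameter values, and all names are ours.

```python
import math
from math import comb

def burr_x_cdf(x, alpha, lam):
    # Assumed two-parameter Burr-X CDF.
    return (1.0 - math.exp(-(lam * x) ** 2)) ** alpha

def order_stat_cdf(y, s, m, alpha, lam):
    # P(Y_{s:m} <= y): at least s of the m future units fail by time y.
    p = burr_x_cdf(y, alpha, lam)
    return sum(comb(m, j) * p ** j * (1.0 - p) ** (m - j) for j in range(s, m + 1))

def solve(f, lo, hi, tol=1e-10):
    # Bisection for the unique root of the increasing function f.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

def two_sample_interval(s, m, alpha, lam, gamma=0.95):
    # Equal-tail 100*gamma% plug-in predictive interval for Y_{s:m}.
    lo_q, hi_q = (1.0 - gamma) / 2.0, (1.0 + gamma) / 2.0
    lower = solve(lambda y: order_stat_cdf(y, s, m, alpha, lam) - lo_q, 1e-9, 50.0)
    upper = solve(lambda y: order_stat_cdf(y, s, m, alpha, lam) - hi_q, 1e-9, 50.0)
    return lower, upper
```

As s grows, the interval shifts to the right, which matches the behavior of the predictive intervals observed in the simulation results.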
(a)
For fixed values of the censoring parameters, the length and the coverage probability of the predictive intervals increase by increasing s, because the element to be predicted will be larger, which widens its predictive interval and, therefore, increases its coverage probability.
(b)
In all six cases of the studied UHCS:
The length and the coverage probability of the predictive intervals decrease by increasing the ratio of observed failures, which means that the results improve as the available information increases.
In the cases with a constant ratio and fixed time points, the length and the coverage probability of the predictive intervals decrease by increasing k, which shows that the results improve by increasing k.
(c)
In all cases, the lengths of the predictive intervals are shorter for the Bayesian method than for the classical method, which means that the Bayesian method performs better than the classical method.
(d)
In all cases, the corresponding Bayesian quantities are smaller than those computed by the classical method, which is a further criterion indicating that the results obtained using the Bayes method are better than those obtained using the classical method.
(e)
The values of the censoring parameters have been chosen so as to produce all six cases of the studied UHCS.
4.2. Data Analysis
In this section, two real data sets are introduced; they were analyzed using the Burr-X distribution. The studied real data sets are from [8]. The first data set represents the failure times, in hours, of 15 devices, and the second represents the first failure times, in months, of 20 electronic cards. These real data sets are:
In Table 3, the MLEs of the parameters and the corresponding Kolmogorov–Smirnov (K–S) test statistics were computed under the Burr-X model.
Under the chosen significance level and using the Kolmogorov–Smirnov table, the critical value for the test statistic is greater than the computed test statistics for the two real data sets under the Burr-X model. This means that the studied model fits the two real data sets well.
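This goodness-of-fit check can be reproduced in a few lines: the Kolmogorov–Smirnov statistic is the largest gap between the empirical CDF and the fitted CDF, evaluated on both sides of each jump. The sketch assumes the two-parameter Burr-X form; any parameter values plugged in would be the fitted MLEs, which are not reproduced here.

```python
import math

def burr_x_cdf(x, alpha, lam):
    # Assumed two-parameter Burr-X CDF.
    return (1.0 - math.exp(-(lam * x) ** 2)) ** alpha

def ks_statistic(data, alpha, lam):
    # D_n = sup_x |F_n(x) - F(x)|, attained at a data point from one side.
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        f = burr_x_cdf(x, alpha, lam)
        d = max(d, abs(i / n - f), abs((i - 1) / n - f))
    return d
```

The fit is accepted when the computed statistic falls below the tabulated critical value for the given sample size and significance level, as done above for the two data sets.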
Point and interval predictors of the remaining future failure times and of the first four observations from an independent ordered sample, based on a generated Balakrishnan informative sample from the given real data sets, were computed; they are summarized in Table 4, Table 5, Table 6 and Table 7.
From the previous tables and figures, we can observe the following (for fixed values of the censoring parameters):
1. The length of the predictive intervals increases with s because, as mentioned previously, the element to be predicted will be larger, which widens its predictive interval.
2. The length of the predictive intervals computed by the Bayesian method is less than that computed by the classical method, which means that the Bayes technique is better than the classical technique.
3. For the Bayes and classical approaches, and for all values of s, the exact value of the future observation lies in its predictive interval.
From the figures:
(a)
The red broken refracted line, which represents the true value of the observation to be predicted, is located between the two broken lines that represent the lower and upper bounds of the predictive intervals, which agrees with observation 3.
The length of the predictive interval increases with s, which agrees with observation 1.
(b)
The lengths of the predictive intervals obtained using the Bayes approach are less than those obtained using the classical approach, which agrees with observation 2.
5. Conclusions
In this paper, point and interval predictors of the future failure times from the Burr-X distribution were computed, based on an informative sample from the unified hybrid censoring scheme suggested by Balakrishnan et al. (2008), using different values of the censoring parameters and using the classical and Bayesian approaches, with comparisons made between the two approaches. Two real engineering data sets were introduced and analyzed using the Burr-X model to confirm that the studied model fits the given real data sets well. Based on a generated informative sample from the given real data sets, point and interval predictors of the future failure times under one- and two-sample schemes were computed using the classical and Bayesian approaches; it was found that the predictive intervals obtained using the Bayesian approach were shorter than those computed by the classical approach, which means that the Bayesian approach performs better. In addition to the tabular description of the results related to the real data sets, graphical descriptions were also introduced. The results of this work confirm that statistical prediction can be used to perform predictive tasks concerning the condition of industrial equipment.
Author Contributions
Data curation, A.S.A.; Formal analysis, S.F.A. and A.A.A.M.; Funding acquisition, S.F.A. and A.S.A.; Investigation, S.F.A. and A.A.A.M.; Project administration, A.S.A.; Resources, S.F.A. and A.S.A.; Software, S.F.A.; Supervision, A.A.A.M. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Acknowledgments
The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education, Saudi Arabia, for funding this research work through project number IFPRP:373-662-1442 and King Abdulaziz University, DSR, Jeddah, Saudi Arabia.
Conflicts of Interest
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
BP
Bayesian predictor
BPI
Bayesian predictive interval
CS
Censoring scheme
CDF
Cumulative distribution function
CP
Coverage probability
HCS
Hybrid censoring scheme
IPs
Interval predictors
LF
Likelihood function
ML
Maximum likelihood
MLEs
Maximum likelihood estimates
MLP
Maximum likelihood predictor
MLPF
Maximum likelihood predictive function
MLPI
Maximum likelihood predictive interval
PDF
Probability density function
PLF
Predictive likelihood function
PMLEs
Predictive maximum likelihood estimates
PPs
Point predictors
RF
Reliability function
UHCS
Unified hybrid censoring scheme
References
Calabria, R.; Guida, M.; Pulcini, G. Point estimation of future failure times of a repairable system. Reliab. Eng. Syst. Saf. 1990, 28, 23–34.
Scalabrini Sampaio, G.; de Aguiar Vallim Filho, A.R.; da Silva, L.S.; da Silva, L.A. Prediction of Motor Failure Time Using An Artificial Neural Network. Sensors 2019, 19, 4342.
Alghamdi, A.S. Partially accelerated model for analyzing competing risks data from Gompertz population under type-I generalized hybrid censoring scheme. Complexity 2021, 2021, 9925094.
Chandrasekar, B.; Childs, A.; Balakrishnan, N. Exact likelihood inference for the exponential distribution under generalized Type-I and Type-II hybrid censoring. Nav. Res. Logist. 2004, 51, 994–1004.
Childs, A.; Chandrasekar, B.; Balakrishnan, N.; Kundu, D. Exact likelihood inference based on Type-I and Type-II hybrid censored samples from the exponential distribution. Ann. Inst. Stat. Math. 2003, 55, 319–330.
Lawless, J.F. Statistical Models and Methods for Lifetime Data; John Wiley and Sons: New York, NY, USA, 1982.
Burr, I.W. Cumulative frequency functions. Ann. Math. Stat. 1942, 13, 215–232.
Zimmer, W.J.; Keats, J.B.; Wang, F.K. The Burr-XII distribution in reliability analysis. J. Qual. Technol. 1998, 30, 386–394.
Kim, C.; Chung, Y. Bayesian estimation of P(Y < X) from Burr-type X model containing spurious observations. Stat. Pap. 2006, 47, 643–651.
Rastogi, M.K.; Tripathi, Y.M. Inference on unknown parameters of a Burr distribution under hybrid censoring. Stat. Pap. 2013, 54, 619–643.
Abd EL-Baset, A.A.; Magdy, E.E.; Tahani, A.A. Estimation under Burr type X distribution based on doubly type II censored sample of dual generalized order statistics. J. Egyp. Math. Soc. 2015, 23, 391–396.
Arabi Belaghi, R.; Noori Asl, M. Estimation based on progressively type-I hybrid censored data from the Burr XII distribution. Stat. Pap. 2019, 60, 761–803.
Mohammadi, M.; Reza, A.; Behzadi, M.H.; Singh, S. Estimation and prediction based on type-I hybrid censored data from the Poisson-Exponential distribution. Commun. Stat. Simul. Comput. 2019, 1–26.
Rabie, A.; Li, J. Inferences for Burr-X Model Based on Unified Hybrid Censored Data. Int. J. Appl. Math. 2019, 49, 1–7.
Ateya, S.F.; Amein, M.M.; Mohammed, H.S. Prediction under an adaptive progressive type-II censoring scheme for Burr Type-XII distribution. Commun. Stat.-Theory Methods 2020, 1–13.
Balakrishnan, N.; Rasouli, A.; Sanjari Farsipour, N. Exact likelihood inference based on an unified hybrid censored sample from the exponential distribution. J. Stat. Comput. Simul. 2008, 78, 475–488.
Osatohanmwen, F.; Oyegue, F.O.; Ogbonmwan, S.M. The Weibull-Burr XII {log logistic} Poisson lifetime model. J. Stat. Manag. Syst. 2021, 1–36.
Aslam, M.; Usman, R.M.; Raqab, M.Z. A new generalized Burr XII distribution with real life applications. J. Stat. Manag. Syst. 2021, 24, 521–543.
David, H.A. Order Statistics, 2nd ed.; John Wiley and Sons, Inc.: New York, NY, USA, 1981.
Balakrishnan, N.; Shafay, A.R. One- and Two-Sample Bayesian Prediction Intervals Based on Type-II Hybrid Censored Data. Commun. Stat.-Theory Methods 2012, 41, 1511–1531.
Ateya, S.F.; Mohammed, H.S. Prediction Under Burr-XII Distribution Based on Generalized Type-II Progressive Hybrid Censoring Scheme. JOEMS 2018, 26, 491–508.
Ateya, S.F. Estimation under modified Weibull distribution based on right censored generalized order statistics. J. Appl. Stat. 2013, 40, 2720–2734.
Ateya, S.F. Estimation under Inverse Weibull Distribution based on Balakrishnan’s Unified Hybrid Censored Scheme. Commun. Stat. Simul. Comput. 2017, 46, 3645–3666.
Jaheen, Z.F.; Al Harbi, M.M. Bayesian estimation for the exponentiated Weibull model via Markov chain Monte Carlo simulation. Commun. Stat. Simul. Comput. 2011, 40, 532–543.
Press, S.J. Subjective and Objective Bayesian Statistics: Principles, Models and Applications; John Wiley and Sons: New York, NY, USA, 2003.
Upadhyaya, S.K.; Gupta, A. A Bayes analysis of modified Weibull distribution via Markov chain Monte Carlo simulation. J. Stat. Comput. Simul. 2010, 80, 241–254.
Upadhyaya, S.K.; Vasishta, N.; Smith, A.F.M. Bayes inference in life testing and reliability via Markov chain Monte Carlo simulation. Sankhya A 2001, 63, 15–40.
Figure 1.
The four-stage engineering maintenance process: reactive, periodic, proactive, and predictive.
Figure 2.
(A) ML one-sample predictive intervals based on sample I; (B) Bayesian one-sample predictive intervals based on sample I; (C) lengths of the one-sample predictive intervals based on sample I.
Figure 3.
(A) ML one-sample predictive intervals based on sample II; (B) Bayesian one-sample predictive intervals based on sample II; (C) lengths of one-sample predictive intervals based on sample II.
Figure 4.
(A) ML two-sample predictive intervals based on sample I; (B) Bayesian two-sample predictive intervals based on sample I; (C) lengths of two-sample predictive intervals based on sample I.
Figure 5.
(A) ML two-sample predictive intervals based on sample II; (B) Bayesian two-sample predictive intervals based on sample II; (C) lengths of two-sample predictive intervals based on sample II.
Table 1.
Point and interval predictors of the future failure times, based on the generated Balakrishnan UHCS informative sample.
Values of
of
of
of
of
of
of
of
of
1.03809
1.11608
1.57305
1.65624
0.91676
1.0914
1.4711
1.6674
(0.87340,1.12306)
(0.82512,1.13621)
(1.10036,1.61367)
(1.42708,1.97736)
0.24966
0.31109
0.51331
0.55028
0.9176
0.92177
0.93001
0.93816
1.03809
1.11608
1.57305
1.65624
0.91053
1.1537
1.5018
1.6214
(0.87103,0.98636)
(1.02104,1.22278)
(1.16172,1.47264)
(1.29082,1.70564)
0.11523
0.20174
0.31092
0.41482
0.9037
0.9165
0.9275
0.9310
1.03809
1.11608
1.57305
1.65624
1.18077
1.32147
1.59917
1.79101
(1.09166,1.22885)
(1.21106,1.42664)
(1.38102,1.68120)
(1.49016,1.88918 )
0.13719
0.21558
0.30018
0.39902
0.9001
0.91130
0.92719
0.92991
1.03809
1.11608
1.57305
1.65624
1.17914
1.34221
1.62017
1.77082
(1.11057,1.23958)
(1.20184,1.4052)
(1.42061,1.69377)
(1.58062,1.94235)
0.12901
0.20336
0.27316
0.36173
0.8997
0.91055
0.9206
0.9227
Values of
0.66209
0.89038
0.91005
1.75215
0.58144
0.9102
0.88152
1.70119
(0.40068,0.79311)
(0.60106,1.09158)
(0.74061,1.27075)
(1.12105,1.85211)
0.39243
0.49052
0.53014
0.73106
0.95161
0.96173
0.96106
0.97015
0.662089
0.890378
0.910053
1.75215
0.64825
0.86239
0.94931
1.69046
(0.50151,0.86203)
(0.76175,1.16289)
(0.95276,1.46868)
(1.39173,1.90765)
0.36052
0.40114
0.51592
0.61058
0.9483
0.9553
0.9581
0.98813
0.662089
0.890378
0.910053
1.75215
0.66131
0.86561
0.94072
1.78334
(0.49106,0.78122)
(0.73083,1.11254)
(0.807016,1.26818)
(1.24608,1.94780)
0.29016
0.38171
0.46117
0.70172
0.94814
0.95231
0.95618
0.96131
0.662089
0.890378
0.910053
1.75215
0.67013
0.88172
0.90157
1.74105
(0.58043,0.79278)
(0.72063,1.07077)
(0.83803,1.24985)
(1.14473,1.82615)
0.21225
0.35041
0.41182
0.68142
0.9381
0.9481
0.9511
0.9591
1.42077
1.50944
1.54494
1.63888
1.39354
1.48593
1.51207
1.65472
(1.31075,1.56622)
(1.32194,1.86571)
(1.34528,1.70186)
(1.45619,1.84651)
0.25547
0.27377
0.35658
0.45032
0.9695
0.9726
0.9799
0.9801
1.42077
1.50944
1.54494
1.63888
1.43406
1.49225
1.53821
1.64152
(1.40089,1.52250)
(1.40916,1.59487)
(1.42534,1.67038)
(1.50294,1.77342)
0.12161
0.18571
0.24504
0.27048
0.9217
0.9317
0.9502
0.9573
Values of
1.42077
1.50944
1.54494
1.63888
1.40593
1.49593
1.57207
1.61472
(1.32194,1.56571)
(1.32194,1.58239)
(1.34528,1.68186))
(1.25619,1.70651)
0.24377
0.26045
0.33658
0.45032
0.9504
0.9551
0.9623
0.9708
1.42077
1.50944
1.54494
1.63888
1.41948
1.51021
1.54843
1.63451
(1.35285,1.47209)
(1.40421,1.65515)
(1.48797,1.81620)
(1.52675,1.95603)
0.11924
0.25094
0.32823
0.42928
0.9495
0.9525
0.9605
0.9693
Values of
1.42061
1.52062
1.63815
1.64518
1.41092
1.49332
1.61337
1.66319
(1.32901,1.59942)
(1.40162,1.69674)
(1.48054,1.78922)
(1.50512,2.14725)
0.27041
0.29512
0.34116
0.64213
0.9601
0.9664
0.9718
0.9804
1.42061
1.52062
1.63815
1.64518
1.41941
1.51804
1.64183
1.65184
(1.34162,1.56357)
(1.44076,1.69407)
(1.52184,1.82301)
(1.59042,2.00790)
0.22195
0.25331
0.30117
0.41748
0.9594
0.9614
0.9695
0.9748
Values of
1.51436
1.60734
1.71813
1.72479
1.49201
1.58801
1.77419
1.75118
(1.40162,1.663278)
(1.45281,1.74086)
(1.59042,1.91318)
(1.66492,2.25535)
0.23116
0.28805
0.32276
0.59043
0.9584
0.9615
0.9697
0.978
1.51436
1.60734
1.71813
1.72479
1.50184
1.58294
1.70174
1.73182
(1.41062,1.62090)
(1.50372,1.74565)
(1.60152,1.88221)
(1.64107,1.98290)
0.21028
0.24193
0.28069
0.34183
0.9533
0.9557
0.9614
0.9736
Table 2.
Point and interval predictors of the future failure times, based on the generated Balakrishnan UHCS informative sample.
Values of
of
of
of
of
of
of
of
of
0.41303
0.78654
1.09878
1.31352
0.37017
0.69154
0.95124
1.24012
(0.25175,0.46227)
(0.43129,0.80322)
(0.71182,1.30162)
(1.00273,1.59152)
0.21052
0.37193
0.4898
0.58879
0.88153
0.9152
0.9205
0.9317
0.41303
0.78654
1.09878
1.31352
0.38718
0.71032
0.97182
1.27104
(0.33152,0.53268)
(0.51037,0.82988)
(0.81094,1.20606)
(1.17213,1.65357)
0.20116
0.31951
0.39512
0.48144
0.8781
0.9013
0.9114
0.9226
0.41303
0.78654
1.09878
1.31352
0.38053
0.71108
0.98155
1.30153
(0.35102,0.54184)
(0.51005,0.82108)
(0.81106,1.22128)
(1.10924,1.63025)
0.19082
0.31103
0.41022
0.52101
0.8771
0.9010
0.9113
0.9215
0.41303
0.78654
1.09878
1.31352
0.39012
0.73318
1.1192
1.34417
(0.38065,0.56341)
(0.54194,0.83207)
(0.79168,1.16883)
(1.01845,1.50039)
0.18276
0.29013
0.37715
0.48194
0.8771
0.8917
0.9016
0.9106
Values of
0.41303
0.78654
1.09878
1.31352
0.38026
0.82165
1.16271
1.41152
(0.15243,0.42406)
(0.40072,0.90224)
(0.87932,1.44074)
(1.27194,1.88367)
0.27163
0.50152
0.56152
0.61173
0.9201
0.9332
0.9441
0.9505
0.41303
0.78654
1.09878
1.31352
0.39012
0.73515
0.92166
1.28061
(0.27251,0.52427)
(0.50041,0.96225)
(0.88015,1.40234)
(1.18145,1.78310)
0.25176
0.46184
0.52219
0.60165
0.9115
0.9271
0.9396
0.9471
0.41303
0.78654
1.09878
1.31352
0.40712
0.81026
0.99172
1.28017
(0.35143,0.59310)
(0.53183,0.99795)
(0.80384,1.29498)
(0.93317,1.51480)
0.24167
0.46612
0.49114
0.58163
0.9195
0.9307
0.9421
0.9497
0.41303
0.78654
1.09878
1.31352
0.41005
0.79015
1.01823
1.31561
(0.37041,0.60357)
(0.55272,0.97563)
(0.79043,1.24077)
(0.93962,1.49109)
0.23316
0.42291
0.45037
0.55147
0.9061
0.9298
0.9402
0.9488
0.41303
0.78654
1.09878
1.31352
0.40815
0.77804
1.10926
1.28915
(0.31629,0.57135)
(0.56052,0.86964)
(0.80057,1.24265)
(1.01748,1.55254)
0.25506
0.30912
0.44208
0.53506
0.9479
0.95514
0.9609
0.9716
0.41303
0.78654
1.09878
1.31352
0.41105
0.78052
1.08805
1.30615
(0.30225,0.547738)
(0.52817,0.82633)
(0.93183,1.3335)
(1.13052,1.64131)
0.24513
0.29816
0.40167
0.51079
0.9397
0.9520
0.9593
0.9675
0.41303
0.78654
1.09878
1.31352
0.41201
0.80152
1.11026
1.28961
(0.43172,0.65335)
(0.60332,0.85448)
(0.82184,1.20366)
(1.10286,1.56910)
0.22163
0.25116
0.38182
0.46624
0.9418
0.9536
0.9592
0.9663
0.41303
0.78654
1.09878
1.31352
0.41052
0.79316
1.12052
1.29164
(0.45132,0.66194)
(0.65293,0.89786)
(0.87281,1.23444)
(1.17148,1.60341)
0.21062
0.24493
0.36163
0.43193
0.9406
0.9513
0.9554
0.9614
0.41303
0.78654
1.09878
1.31352
0.41903
0.79016
0.99173
1.3201
(0.32183,0.58324)
(0.53148,0.83165)
(0.81064,1.22247)
(1.10573,1.68725)
0.26141
0.30017
0.41183
0.58152
0.9726
0.9775
0.9802
0.9892
0.41303
0.78654
1.09878
1.31352
0.41525
0.77812
1.03124
1.31902
(0.35149,0.59301)
(0.56028,0.84226)
(0.79104,1.18118)
(0.97823,1.51955)
0.24152
0.28198
0.39014
0.54132
0.9618
0.9693
0.9715
0.9801
0.41303
0.78654
1.09878
1.31352
0.4111
0.7902
1.1058
1.3111
(0.31047,0.56566)
(0.55061,0.84878)
(0.81718,1.20735)
(1.03081,1.59187)
0.25519
0.29817
0.39017
0.56106
0.9594
0.9615
0.9694
0.9772
0.41303
0.78654
1.09878
1.31352
0.40183
0.79163
1.10815
1.29284
(0.28071,0.52488)
(0.51148,0.80667)
(0.79208,1.15819)
(1.10273,1.62478)
0.24417
0.29519
0.36611
0.52205
0.9523
0.9594
0.9611
0.9731
Table 3.
Estimates of the parameters and the associated values, based on real data sets I and II.
Data Set No.
I
,
0.193849
II
,
0.247625
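Table 3 reports fitted parameter values for the two real data sets. As a hedged illustration of this kind of fit (the function names below are ours, not the paper's), the one-parameter Burr-X (Burr Type X) distribution has CDF F(x) = (1 − e^{−x²})^θ, which admits a closed-form maximum-likelihood estimate of θ for a complete sample; under the paper's unified hybrid censoring scheme the censored likelihood would instead have to be maximized numerically. A minimal sketch with inverse-CDF sampling:

```python
import math
import random

def burr_x_mle(data):
    """Closed-form MLE of the Burr-X shape parameter theta for a
    complete (uncensored) sample.  Setting d/dtheta log L = 0 gives
    theta_hat = -n / sum(log(1 - exp(-x_i^2)))."""
    s = sum(math.log1p(-math.exp(-x * x)) for x in data)
    return -len(data) / s

def burr_x_sample(theta, n, rng=random):
    """Inverse-CDF sampling from Burr-X, F(x) = (1 - exp(-x^2))**theta:
    solving F(x) = u yields x = sqrt(-log(1 - u**(1/theta)))."""
    return [math.sqrt(-math.log1p(-rng.random() ** (1.0 / theta)))
            for _ in range(n)]
```

For instance, fitting `burr_x_mle(burr_x_sample(2.0, 5000, random.Random(0)))` should recover a value close to the true θ = 2.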
Table 4.
Point and interval predictors of the future failure time, based on a generated Balakrishnan informative sample from real data set I.
True
8.01
8.27
12.06
31.75
7.5715
8.1129
11.3816
33.8818
5
(6.77191,9.97962)
(7.17305,10.60674)
(10.48435,14.55529)
(25.66194,39.64581)
3.20771
3.43369
4.07094
13.98387
8.01
8.27
12.06
31.75
7.70925
8.19027
11.69016
32.20172
(7.21062,9.51244)
(7.50592,10.42419)
(10.88017,14.53119)
(27.44018,38.34811)
2.30182
2.91827
3.65102
10.90793
Table 5.
Point and interval predictors of the future failure time, based on a generated Balakrishnan informative sample from real data set II.
True
5.0
6.2
7.5
8.3
4.6827
6.0196
7.3891
8.9017
6
(3.51081,6.01779)
(4.89192,7.91478)
(6.09047,9.82138)
(6.12081,10.55201)
2.50698
3.02286
3.73091
4.43120
5.0
6.2
7.5
8.3
4.7908
6.3005
7.4213
8.5105
(3.78207,5.66372)
(5.03927,7.07429)
(6.24718,8.57770)
(6.74039,10.55102)
1.88165
2.03502
2.33052
3.81063
Table 6.
Point and interval predictors of the future failure time, based on a generated Balakrishnan informative sample from real data set I.
Generated
0.38819
0.49016
0.52015
0.60823
0.42115
0.50284
0.51082
0.58807
5
(0.34905,0.51106)
(0.36373,0.54428)
(0.39017,0.60024)
(0.48165,0.7626)
0.16201
0.18055
0.21007
0.28095
0.38819
0.49016
0.52015
0.60823
0.41066
0.48107
0.52713
0.60153
(0.36105,0.49267)
(0.40552,0.56075)
(0.48174,0.65461)
(0.52315,0.76499)
0.13162
0.15523
0.17287
0.24184
Table 7.
Point and interval predictors of the future failure time, based on a generated Balakrishnan informative sample from real data set II.
Generated
0.28003
0.41052
0.48105
0.50185
0.30119
0.38826
0.41918
0.46817
6
(0.20275,0.37427)
(0.25594,0.4890)
(0.30142,0.71214)
(0.35107,0.82210)
0.171752
0.23306
0.41072
0.47103
0.28003
0.41052
0.48105
0.50185
0.29014
0.39082
0.45119
0.48017
(0.26023,0.42308)
(0.27684,0.48890)
(0.33206,0.61621)
(0.39082,0.70595)
0.16285
0.21206
0.28415
0.31513
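Tables 6 and 7 concern the two-sample scheme, in which ordered values of a future independent sample are predicted from past data. As a rough illustration only — a frequentist plug-in Monte Carlo approximation, not the paper's Bayesian predictor derived under unified hybrid censoring, and with a function name of our own choosing — one can simulate many future Burr-X samples from a fitted θ and average each order statistic across replications:

```python
import math
import random

def predict_future_order_stats(theta_hat, m, reps=20000, rng=None):
    """Plug-in Monte Carlo sketch of two-sample prediction for Burr-X
    (CDF F(x) = (1 - exp(-x^2))**theta): simulate `reps` future samples
    of size m from the fitted distribution and average each order
    statistic.  Illustrative approximation, not the paper's predictor."""
    rng = rng or random.Random()
    sums = [0.0] * m
    for _ in range(reps):
        # Inverse-CDF draw, then sort to obtain the order statistics.
        sample = sorted(
            math.sqrt(-math.log1p(-rng.random() ** (1.0 / theta_hat)))
            for _ in range(m)
        )
        for i, x in enumerate(sample):
            sums[i] += x
    return [s / reps for s in sums]
```

The returned list is increasing by construction, since each replication contributes its sorted sample; interval predictors would analogously come from empirical quantiles of the simulated order statistics.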
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Ateya, S.F.; Alghamdi, A.S.; Mousa, A.A.A.
Future Failure Time Prediction Based on a Unified Hybrid Censoring Scheme for the Burr-X Model with Engineering Applications. Mathematics 2022, 10, 1450.
https://doi.org/10.3390/math10091450
AMA Style
Ateya SF, Alghamdi AS, Mousa AAA.
Future Failure Time Prediction Based on a Unified Hybrid Censoring Scheme for the Burr-X Model with Engineering Applications. Mathematics. 2022; 10(9):1450.
https://doi.org/10.3390/math10091450
Chicago/Turabian Style
Ateya, Saieed F., Abdulaziz S. Alghamdi, and Abd Allah A. Mousa.
2022. "Future Failure Time Prediction Based on a Unified Hybrid Censoring Scheme for the Burr-X Model with Engineering Applications" Mathematics 10, no. 9: 1450.
https://doi.org/10.3390/math10091450
APA Style
Ateya, S. F., Alghamdi, A. S., & Mousa, A. A. A.
(2022). Future Failure Time Prediction Based on a Unified Hybrid Censoring Scheme for the Burr-X Model with Engineering Applications. Mathematics, 10(9), 1450.
https://doi.org/10.3390/math10091450
Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.