Article

Estimation of Weighted Extropy with Focus on Its Use in Reliability Modeling

by Muhammed Rasheed Irshad 1, Krishnakumar Archana 1, Radhakumari Maya 2 and Maria Longobardi 3,*

1 Department of Statistics, Cochin University of Science and Technology, Cochin 682 022, Kerala, India
2 Department of Statistics, Government College for Women, Thiruvananthapuram 695 014, Kerala, India
3 Dipartimento di Biologia, Università degli Studi di Napoli Federico II, 80126 Naples, Italy
* Author to whom correspondence should be addressed.
Entropy 2024, 26(2), 160; https://doi.org/10.3390/e26020160
Submission received: 30 December 2023 / Revised: 6 February 2024 / Accepted: 9 February 2024 / Published: 11 February 2024
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract:
In the literature, the estimation of weighted extropy has received little attention. In this paper, some non-parametric estimators of weighted extropy are proposed. The estimators are validated and compared through a simulation study, and their usefulness is demonstrated using real data sets.

1. Introduction

The concept of extropy and its uses have been explored rapidly in recent years. It measures the uncertainty contained in a probability distribution and is considered the complementary dual of the entropy introduced in [1]. The entropy measure is shift-independent, that is, it is the same for both X and X + b, and therefore it cannot be applied in some fields such as neurology. Thus, in [2], the notion of a weighted entropy measure was introduced. The authors pointed out that the occurrence of an event affects uncertainty in two ways, conveying both quantitative and qualitative information: it first reveals the probability of the event occurring and then demonstrates its efficacy in achieving the qualitative features of a goal. It is important to note that the information obtained when a device fails to operate, or a neuron fails to release spikes, in a specific time interval differs significantly from the information obtained when such events occur in other equally wide intervals. This is why there is a need, in some cases, to employ a shift-dependent information measure that assigns different values to such distributions. The importance of weighted measures of uncertainty was exhibited in [3].
The concept of extropy for a continuous random variable (rv) X has been presented and discussed across numerous works in the literature. The differential extropy defined by [4] is

J(X) = -\frac{1}{2} \int_0^{+\infty} f_X^2(x) \, dx. \quad (1)
One can refer to [5] for the extropy properties of order statistics and record values. Applications of extropy in automatic speech recognition can be found in [6]. Various sources in the literature have presented a range of extropy measures and their extensions. Analogous to weighted entropy, the concept of weighted extropy (WE) was introduced in [7]. It is given as

J^w(X) = -\frac{1}{2} \int_0^{+\infty} x f_X^2(x) \, dx. \quad (2)
The variable x in the integral acts as a weight related to the occurrence of the event X = x; it assigns more significance to large values of X. In the literature, extropy, its different versions and their applications have been studied by several authors (see, for instance, [8,9,10]). In particular, a unified version of extropy in classical theory and in Dempster–Shafer theory was studied in [11].
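As a quick numerical illustration of the definition above (our own sketch, not part of the paper; the function name is ours and SciPy is assumed available), the weighted extropy of an exponential density can be computed by direct quadrature. Interestingly, for an exponential rv the value works out to −1/8 regardless of the rate:

```python
import numpy as np
from scipy.integrate import quad

def weighted_extropy(pdf):
    """J^w(X) = -(1/2) * integral_0^inf of x * f(x)^2 dx."""
    val, _ = quad(lambda x: x * pdf(x) ** 2, 0, np.inf)
    return -0.5 * val

# Exponential(rate lam): f(x) = lam * exp(-lam * x);
# here J^w = -(lam^2 / 2) * (1 / (2 lam))^2 = -1/8 for every rate.
for lam in (1.0, 0.640):
    jw = weighted_extropy(lambda x, l=lam: l * np.exp(-l * x))
    print(lam, jw)  # -0.125 in both cases
```

This invariance explains why the weighted-extropy target value for the standard exponential used later in the simulations equals −0.125.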
Several papers in the literature delve into the estimation of extropy and its various versions. Kernel estimation of functionals of the density function was proposed in [12]. The optimal bandwidth for kernel density functionals is provided in [13]. In [14], optimal bandwidth estimators of kernel density functionals for contaminated data were established. In [15], estimators of extropy were proposed and applied to testing uniformity. In [16], length-biased sampling was used in estimating extropy. Non-parametric estimation using dependent data is also well explored in the literature: [17] studied the recursive and non-recursive kernel estimation of negative cumulative residual extropy under the α-mixing dependence condition, and recently, in [18], the kernel estimation of the extropy function was discussed using α-mixing dependent data. Moreover, in [19], the log kernel estimation of extropy was introduced.
Although several works related to the estimation of extropy are available in the literature, little has been published on WE and its estimation until now. There are situations in which we are forced to use WE instead of extropy since, unlike extropy, it also represents the qualitative characteristics of information. In [20], the significance of employing WE as opposed to regular extropy in certain scenarios was demonstrated. There are instances where distinct distributions possess identical extropy values but exhibit distinct WE values; in such situations, it becomes necessary to opt for WE. The estimators of WE can also be used in model selection in reliability analysis. Here, we propose some estimators of WE and validate them using a simulation study and data analysis.
The paper is organized as follows: In Section 2, we introduce the log kernel estimation of WE. In Section 3, an empirical kernel smoothed estimator of WE is given. In Section 4, a simulation study is conducted to evaluate the estimators, and the log kernel and kernel estimators of WE are compared. Section 5 is devoted to real data analysis examining the proposed estimators. Finally, we conclude the study in Section 6.

2. Log Kernel Estimation of Weighted Extropy

In this section, we introduce the concept of log kernel-based estimation of WE.
Let X be an rv with unknown probability density function (pdf) f_X(x). We assume that X is defined on R and that f_X(x) is continuously differentiable. Suppose {X_i; 1 ≤ i ≤ n} is a sequence of identically distributed rvs. The most commonly used estimator of f_X(x) is the kernel density estimator (KDE), given by [21,22] as

\hat{f}_X(x) = \frac{1}{nh} \sum_{i=1}^{n} K\left(\frac{x - X_i}{h}\right), \quad (3)

where K(x) is the kernel function, which satisfies the following conditions:
  • \int_{\mathbb{R}} K(x) \, dx = 1,
  • \int_{\mathbb{R}} x K(x) \, dx = 0,
  • \int_{\mathbb{R}} x^2 K(x) \, dx = 1,
  • \int_{\mathbb{R}} K^2(x) \, dx < +\infty.
Here, the bandwidth parameter satisfies h → 0 and nh → +∞ as n → +∞.
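For concreteness, the KDE above with a Gaussian kernel can be sketched in a few lines of Python (an illustration of ours; the function name and the fixed bandwidth value are arbitrary choices, not from the paper):

```python
import numpy as np

def kde(x, sample, h):
    """Gaussian-kernel density estimate: (1/(n h)) * sum K((x - X_i)/h)."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    u = (x[:, None] - sample) / h                    # (m, n) scaled distances
    k = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)   # Gaussian kernel values
    return k.mean(axis=1) / h

rng = np.random.default_rng(1)
sample = rng.exponential(size=1000)
print(kde([0.5, 1.0, 2.0], sample, h=0.3))  # close to the Exp(1) density e^{-x}
```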
When probability density functions are estimated in a non-parametric way, the standard KDE is frequently used. However, for data from distributions with heavy tails, multiple modes, or skewness, particularly positive-valued data, these estimators may lose their effectiveness. In all of these scenarios, applying a transformation can yield more consistent results. One such transformation is a logarithmic transformation applied within a non-parametric KDE; an important aspect of the logarithmic transformation is its ability to compress the right tail of the distribution. The resulting KDE is called the logarithmic KDE (denoted LKDE) (refer to [23]). Let us define Y = log(X), Y_i = log(X_i), i = 1, 2, …, n, and let f_Y(y) be the pdf of Y. The LKDE is defined as

\hat{f}_{\log}(x) = \frac{1}{nh} \sum_{i=1}^{n} \frac{1}{x} K\left(\frac{\log x - \log X_i}{h}\right) = \frac{1}{n} \sum_{i=1}^{n} L(x, X_i, h), \quad (4)

where L(x, z, h) = \frac{1}{xh} K\left(\frac{\log(x/z)}{h}\right) is the log kernel function with bandwidth h > 0 at location parameter z. For any z, h ∈ (0, +∞), L(x, z, h) satisfies L(x, z, h) ≥ 0 for all x ∈ (0, +∞) and \int_0^{+\infty} L(x, z, h) \, dx = 1.
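A matching sketch of the LKDE with a Gaussian kernel (again our own illustration; names and the bandwidth value are arbitrary) evaluates the log kernel directly:

```python
import numpy as np

def log_kde(x, sample, h):
    """Log-transformed Gaussian KDE: (1/(n h x)) * sum K((log x - log X_i)/h)."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    u = (np.log(x)[:, None] - np.log(sample)) / h
    k = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    return k.mean(axis=1) / (h * x)   # the 1/x factor maps the density back to X

rng = np.random.default_rng(2)
sample = rng.lognormal(mean=0.0, sigma=1.0, size=1000)
est = log_kde([0.5, 1.0, 2.0], sample, h=0.3)
print(est)  # compare with the lognormal(0, 1) density at those points
```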
For any x ∈ (0, +∞),

\mathrm{Bias}(\hat{f}_{\log}(x)) = \frac{h^2}{2} \left[ f_X(x) + 3x f_X^{(1)}(x) + x^2 f_X^{(2)}(x) \right] + o(h^2), \quad (5)

\mathrm{Var}(\hat{f}_{\log}(x)) = \frac{C_k}{nh} \cdot \frac{f_X(x)}{x} + o\left(\frac{1}{nh}\right), \quad (6)

where C_k = \int_{\mathbb{R}} K^2(z) \, dz.
Let (X_1, X_2, …, X_n) be a sample of identically distributed observations. We obtain the LKDE of WE by using the estimator defined in Equation (4).
The LKDE of the WE function is

\hat{J}_n^w(X) = -\frac{1}{2} \int_0^{+\infty} x \hat{f}_{\log}^2(x) \, dx, \quad (7)

which can alternatively be expressed as

\hat{J}_n^w(X) = -\frac{1}{2} \int_0^{+\infty} \int_y^{+\infty} \hat{f}_{\log}^2(x) \, dx \, dy. \quad (8)
The following theorem gives the expressions for the bias and variance of the LKDE of WE.
Theorem 1.
Assume that the conditions given in Section 2 are satisfied for the log kernel function L(x) and bandwidth h. Then, the bias and variance of the LKDE \hat{J}_n^w(X) are given, respectively, as

\mathrm{Bias}(\hat{J}_n^w(X)) \simeq -\int_0^{+\infty} \int_y^{+\infty} \frac{h^2}{2} \left[ f_X(x) + 3x f_X^{(1)}(x) + x^2 f_X^{(2)}(x) \right] f_X(x) \, dx \, dy + o(h^2), \quad (9)

\mathrm{Var}(\hat{J}_n^w(X)) \simeq \frac{C_k}{nh} \int_0^{+\infty} \int_y^{+\infty} \frac{f_X^3(x)}{x} \, dx \, dy + o\left(\frac{1}{nh}\right), \quad (10)

where C_k = \int_{\mathbb{R}} K^2(z) \, dz.
Proof. 
The proof is omitted as it is similar to [19]. □
The following theorem shows that the proposed estimator is consistent.
Theorem 2.
\hat{J}_n^w(X) is a consistent estimator of J^w(X), where \hat{J}_n^w(X) and J^w(X) are defined in Equations (7) and (2), respectively. Let L(x) be the log kernel function and h be the bandwidth satisfying the conditions given in Section 2. Then, as n tends to +∞,

\hat{J}_n^w(X) = -\frac{1}{2} \int_0^{+\infty} \int_y^{+\infty} \hat{f}_{\log}^2(x) \, dx \, dy \;\xrightarrow{p}\; -\frac{1}{2} \int_0^{+\infty} \int_y^{+\infty} f_X^2(x) \, dx \, dy = J^w(X).
Proof. 
Since the proof is similar to that of [19], it is omitted. □
The theorem below shows that the LKDE of WE is an integratedly uniformly consistent in quadratic mean estimator of J^w(X).
Theorem 3.
Consider a log kernel function L(x) and bandwidth parameter h that fulfill the conditions outlined in Section 2. If \hat{J}_n^w(X) is the LKDE given by Equation (7), then \hat{J}_n^w(X) is integratedly uniformly consistent in quadratic mean as an estimator of J^w(X).
Proof. 
As the proof resembles that of [19], it is omitted here. □
Next, we provide the expression for the optimal bandwidth of \hat{J}_n^w(X).

Optimal Bandwidth

Here, we derive the expression for the optimal bandwidth using the mean integrated square error (MISE). The MISE of \hat{J}_n^w(X) is given as

\mathrm{MISE}(\hat{J}_n^w(X)) = E\left[ \int_0^{+\infty} \left( \hat{J}_n^w(X) - J^w(X) \right)^2 dx \right].

Using the expressions for the bias and variance given in Equations (9) and (10), the MISE of \hat{J}_n^w(X) is given as

\mathrm{MISE}(\hat{J}_n^w(X)) \simeq \int_0^{+\infty} \Bigg[ \left( \frac{h^2}{2} \int_0^{+\infty} \int_y^{+\infty} \left[ f_X(x) + 3x f_X^{(1)}(x) + x^2 f_X^{(2)}(x) \right] f_X(x) \, dx \, dy \right)^2 + \frac{C_k}{nh} \int_0^{+\infty} \int_y^{+\infty} \frac{f_X^3(x)}{x} \, dx \, dy \Bigg] dx + o(h^4) + o\left(\frac{1}{nh}\right).

The asymptotic MISE (AMISE) is obtained by ignoring the higher-order terms:

\mathrm{AMISE} = \frac{h^4}{4} \int_0^{+\infty} \left( \int_0^{+\infty} \int_y^{+\infty} \left[ f_X(x) + 3x f_X^{(1)}(x) + x^2 f_X^{(2)}(x) \right] f_X(x) \, dx \, dy \right)^2 dx + \frac{1}{nh} \int_0^{+\infty} C_k \int_0^{+\infty} \int_y^{+\infty} \frac{f_X^3(x)}{x} \, dx \, dy \, dx.

The optimal bandwidth is attained by minimizing the AMISE with respect to h, and is given by

h = \left[ \frac{ \int_0^{+\infty} C_k \int_0^{+\infty} \int_y^{+\infty} \frac{f_X^3(x)}{x} \, dx \, dy \, dx }{ \int_0^{+\infty} \left( \int_0^{+\infty} \int_y^{+\infty} \left[ f_X(x) + 3x f_X^{(1)}(x) + x^2 f_X^{(2)}(x) \right] f_X(x) \, dx \, dy \right)^2 dx } \right]^{1/5} n^{-1/5} = O(n^{-1/5}).
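The AMISE above has the generic form (A/4)h⁴ + B/(nh) for distribution-dependent constants A and B, and minimizing it over h gives h = (B/(An))^{1/5}, hence the n^{-1/5} rate. A small numeric check of this minimization (the constants A, B and n below are purely illustrative, not taken from the paper):

```python
import numpy as np

# AMISE(h) = (A/4) h^4 + B/(n h); setting the derivative to zero gives
# A h^3 = B/(n h^2), i.e. h* = (B/(A n))^{1/5}.
A, B, n = 2.0, 0.7, 500          # illustrative constants
h_star = (B / (A * n)) ** 0.2
hs = np.linspace(0.01, 1.0, 100_000)
amise = (A / 4.0) * hs**4 + B / (n * hs)
h_num = hs[np.argmin(amise)]
print(h_star, h_num)             # the closed form matches the grid minimum
```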

3. Empirical Estimation of Weighted Extropy

Non-parametric estimation is widely employed for estimating extropy and its associated measures. One common approach is kernel density estimation, a popular method for obtaining smoothed estimates.
In this section, we introduce an empirical method for estimating WE through the pdf. The estimation is achieved through the use of a non-parametric KDE (see [24,25]). The empirical kernel smoothed estimator of WE is

\hat{J}_{n1}^w(X) = -\frac{1}{2} \int_0^{+\infty} x \hat{f}_X^2(x) \, dx = -\frac{1}{2} \sum_{i=1}^{n-1} \int_{X_{i:n}}^{X_{i+1:n}} x \hat{f}_X^2(x) \, dx = -\frac{1}{2} \sum_{i=1}^{n-1} \frac{X_{i+1:n}^2 - X_{i:n}^2}{2} \hat{f}_X^2(X_{i:n}) = -\frac{1}{4} \sum_{i=1}^{n-1} \left( X_{i+1:n}^2 - X_{i:n}^2 \right) \hat{f}_X^2(X_{i:n}),

where \hat{f}_X(\cdot) is the KDE given by [21] and X_{i:n} is the i-th order statistic of the random sample.
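The closed form above is straightforward to code. A sketch of ours, plugging a Gaussian KDE into the order-statistic sum (the fixed bandwidth is an illustrative choice, not a plug-in value):

```python
import numpy as np

def gauss_kde(x, sample, h):
    """Gaussian KDE evaluated at the points x."""
    u = (np.asarray(x, dtype=float)[:, None] - sample) / h
    return (np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)).mean(axis=1) / h

def empirical_weighted_extropy(sample, h):
    """-(1/4) * sum over i of (X_{i+1:n}^2 - X_{i:n}^2) * f_hat(X_{i:n})^2."""
    x = np.sort(np.asarray(sample, dtype=float))
    f = gauss_kde(x[:-1], sample, h)
    return -0.25 * np.sum((x[1:] ** 2 - x[:-1] ** 2) * f**2)

rng = np.random.default_rng(3)
s = rng.exponential(size=500)
print(empirical_weighted_extropy(s, h=0.3))  # Exp(1) has J^w = -1/8
```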
Example 1.
Let the samples X_i be from the distribution with pdf f(x) = 2x, 0 < x < 1. Then X^2 follows the standard uniform distribution. Moreover, Z_{i+1} = (X_{i+1:n}^2 - X_{i:n}^2)/2 is half of a Beta(1, n)-distributed uniform spacing, with mean and variance, respectively, \frac{1}{2(n+1)} and \frac{n}{4(n+1)^2(n+2)}. Then, the mean and variance of \hat{J}_{n1}^w(X) are given by

E\left[\hat{J}_{n1}^w(X)\right] = -\frac{1}{4(n+1)} \sum_{i=1}^{n-1} \hat{f}_X^2(X_{i:n}),

and

V\left[\hat{J}_{n1}^w(X)\right] = \frac{n}{16(n+1)^2(n+2)} \sum_{i=1}^{n-1} \hat{f}_X^4(X_{i:n}),

where \hat{f}_X(\cdot) is defined in Equation (3).
Table 1 shows the mean and variance of \hat{J}_{n1}^w(X) for the samples of Example 1. The mean changes and the variance tends to zero as the sample size increases; the mean and variance of the empirical estimator are therefore influenced by the size of the sample.
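The spacing distribution used in Example 1 can be checked by simulation (an illustration of ours): drawing X from the pdf 2x on (0, 1) via X = √U with U uniform, the average of the Z_{i+1} should be close to 1/(2(n+1)).

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps = 20, 5000
# X has pdf 2x on (0, 1), so X = sqrt(U) with U ~ Uniform(0, 1)
x = np.sort(np.sqrt(rng.uniform(size=(reps, n))), axis=1)
z = (x[:, 1:] ** 2 - x[:, :-1] ** 2) / 2.0    # Z_{i+1} from Example 1
print(z.mean(), 1.0 / (2 * (n + 1)))          # both close to 0.0238
```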
Example 2.
Suppose X follows a Rayleigh distribution with parameter 1. Then X^2 follows an exponential distribution, and Z_{i+1} = (X_{i+1:n}^2 - X_{i:n}^2)/2 is exponentially distributed with mean \frac{1}{2(n-i)}, for i = 1, 2, …, n−1. The mean and variance of \hat{J}_{n1}^w(X) are

E\left[\hat{J}_{n1}^w(X)\right] = -\frac{1}{4} \sum_{i=1}^{n-1} \frac{\hat{f}_X^2(X_{i:n})}{n-i},

V\left[\hat{J}_{n1}^w(X)\right] = \frac{1}{16} \sum_{i=1}^{n-1} \frac{\hat{f}_X^4(X_{i:n})}{(n-i)^2}.

From Table 2, it is clear that, for the Rayleigh distribution with parameter one, the variance decreases to zero and the mean increases with the sample size, which again indicates the dependence of the empirical estimator on the sample size.

4. Simulation Study

We conduct a simulation study to evaluate the performance of the presented estimators. Random samples of different sizes are generated from some standard distributions, and the bias and root mean square error (RMSE) are calculated over 10,000 samples. The bandwidth parameter h is determined using the plug-in method proposed in [26].
To enable a comparison between the LKDE and KDE of WE, we also propose a KDE of WE based on Equation (3). The estimator is given by

\hat{J}_{nk}^w(X) = -\frac{1}{2} \int_0^{+\infty} x \hat{f}_X^2(x) \, dx = -\frac{1}{2} \int_0^{+\infty} \int_y^{+\infty} \hat{f}_X^2(x) \, dx \, dy, \quad (18)

where \hat{f}_X(x) is the KDE given in [21]. By the consistency of the KDE, the proposed estimator in Equation (18) for WE is also consistent. To lay the groundwork for the comparison, we generate samples from the exponential distribution, the log-normal distribution (a heavy-tailed distribution) and the uniform distribution. The Gaussian log-transformed kernel and the Gaussian kernel are the kernel functions used for the simulations.
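A scaled-down sketch of this simulation design (ours: far fewer replications than the 10,000 used in the paper, a fixed bandwidth instead of the plug-in choice of [26], and a simple grid approximation of the integral) for the KDE-based estimator under the standard exponential:

```python
import numpy as np

def gauss_kde(x, sample, h):
    u = (x[:, None] - sample) / h
    return (np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)).mean(axis=1) / h

def kde_weighted_extropy(sample, h, m=1000):
    """-(1/2) * int x f_hat(x)^2 dx, approximated by a Riemann sum on a grid."""
    grid = np.linspace(1e-6, sample.max() * 1.5, m)
    f = gauss_kde(grid, sample, h)
    return -0.5 * np.sum(grid * f**2) * (grid[1] - grid[0])

true_jw = -0.125  # weighted extropy of the standard exponential
rng = np.random.default_rng(4)
est = np.array([kde_weighted_extropy(rng.exponential(size=200), h=0.3)
                for _ in range(200)])
print(est.mean() - true_jw, np.sqrt(np.mean((est - true_jw) ** 2)))  # bias, RMSE
```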
From Table 3, Table 4 and Table 5, it is clear that the RMSE and bias of the estimators decrease with the sample size. The decreasing RMSE indicates that the estimates approach the true values as the sample size grows, demonstrating enhanced accuracy and efficiency in estimation; the decreasing bias likewise shows the accuracy of the estimators.
The comparison of bias and RMSE between the presented estimators of WE reveals that the LKDE slightly outperforms the KDE in certain scenarios, particularly when dealing with heavy-tailed and skewed distributions.

5. Data Analysis

In this section, we perform a comparison study and validate the accuracy of the proposed estimators using real data. In each of the three scenarios, the bandwidth parameter employed for estimation was the plug-in bandwidth proposed in [26].

5.1. Data 1

The comparison between the LKDE and KDE of WE is demonstrated using the data given in [27]. The data give the number of thousands of cycles to failure for electrical appliances in a life test.
The graphical representation in Figure 1 indicates slight skewness in the dataset. We fit an exponential distribution with parameter 0.640 to the data. The Q-Q plot in Figure 2 shows that the exponential distribution is a suitable model for the observed data. The Kolmogorov–Smirnov statistic of 0.124, with a p-value of 0.390, confirms that the exponential distribution is a good fit to the data. The estimate of WE obtained using maximum likelihood estimation is −0.125.
The estimates of WE obtained using log kernel, kernel and empirical estimation are \hat{J}_n^w(X) = −0.127, \hat{J}_{nk}^w(X) = −0.144 and \hat{J}_{n1}^w(X) = −0.148. From the closeness of the estimates to the maximum likelihood estimate of WE, it is clear that the estimator \hat{J}_n^w(X) performs better than the other two estimators.

5.2. Data 2 (Heavy-Tailed Data)

Again, we illustrate the comparison between the three estimators using the data from [28]. The data represent the remission times (in months) of 137 cancer patients. The kurtosis of 15.195 is exceptionally high and suggests a very heavy-tailed (leptokurtic) distribution. Hence, a log-normal distribution is fitted to the data, and the parameters obtained are

\hat{\mu} = 1.756, \quad \hat{\sigma} = 1.066.

Figure 3 indicates right-skewed, heavy-tailed data. Upon examination of the Q-Q plot presented in Figure 4, it is clear that the data align well with the characteristics of the log-normal distribution, indicating that the log-normal model is an appropriate fit for the observed dataset. The Kolmogorov–Smirnov test, with a statistic of 0.06 and a p-value of 0.591, confirms that the log-normal distribution fits the data well.
The estimates of WE using the proposed estimators and by maximum likelihood estimation are calculated for these data. We obtain \hat{J}_n^w(X) = −0.1346, \hat{J}_{nk}^w(X) = −0.1418, and \hat{J}_{n1}^w(X) = −18.952. The estimate of WE using maximum likelihood estimation is −0.1323, which signifies that the LKDE of WE performs better than the standard kernel estimation methods when dealing with heavy-tailed data.

5.3. Data 3 (The Time until Failure of the Three Systems)

The data are obtained from [29]. The observations represent three repairable systems observed until the time of their 12th failure. The source clarifies that the three identically designed systems exhibit distinct behaviors: their repair rates demonstrate a decreasing trend, indicative of improvement, in one system; a stable linear trend in another; and an increasing trend, signifying deterioration, in the third. Figure 5 shows the density plots of the three systems. Table 6 shows the values of the suggested estimators of WE for these systems.
According to [30], a system or component with high uncertainty is less reliable. In accordance with this concept, we can infer that System 3 is less reliable than System 1 and System 2 with regard to all three proposed estimators. Using repair rates, System 3 was also identified as the deteriorating system in [29]. This example vividly demonstrates how the estimation of WE is useful in choosing a reliable system among several available competing models.

6. Conclusions

In this article, we considered the non-parametric estimation of WE. The LKDE and the empirical kernel smoothed estimator of WE were presented. The bias, variance, optimal bandwidth and some properties of the LKDE of the WE function were also established. A KDE was also proposed to enable a comparison with the proposed LKDE. We assessed the accuracy of the three estimators by evaluating their performance using measures such as bias and RMSE. We determined that in some situations, for example, when dealing with heavy-tailed or skewed data sets, the LKDE of WE performs slightly better than the other two estimators. The real data analyses also involved an assessment of the performance of the estimators and their utility in reliability modeling. We also demonstrated how WE is beneficial when choosing a reliable system from various competing models, highlighting its practicality in the selection process.

Author Contributions

Conceptualization, M.R.I., K.A. and R.M.; methodology, M.R.I., K.A. and R.M.; software, K.A.; validation, M.R.I., K.A., R.M. and M.L.; formal analysis, M.R.I., R.M. and M.L.; investigation, M.R.I., K.A. and R.M.; resources, M.R.I. and R.M.; data curation, K.A.; writing-original draft preparation, K.A.; writing-review and editing, M.R.I., K.A., R.M. and M.L.; visualization, M.R.I., K.A., R.M. and M.L.; supervision, M.R.I., R.M. and M.L.; project administration, M.R.I., K.A., R.M. and M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data utilized in this study are taken from published articles, appropriately cited and listed in the references.

Acknowledgments

Maria Longobardi is partially supported by the GNAMPA research group of INDAM (Istituto Nazionale di Alta Matematica) and MIUR-PRIN 2022 PNRR Statistical Mechanics of Learning Machines: from algorithmic and information-theoretical limits to new biologically inspired paradigms.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  2. Belis, M.; Guiasu, S. A quantitative-qualitative measure of information in cybernatic systems. IEEE Trans. Inf. Theory 1968, 14, 593–594. [Google Scholar] [CrossRef]
  3. Di Crescenzo, A.; Longobardi, M. On weighted residual and past entropies. Sci. Math. Jpn. 2006, 64, 255–266. [Google Scholar]
  4. Lad, F.; Sanfilippo, G.; Agro, G. Extropy: Complementary dual of entropy. Stat. Sci. 2015, 30, 40–58. [Google Scholar] [CrossRef]
  5. Qiu, G. Extropy of order statistics and record values. Stat. Probab. Lett. 2017, 120, 52–60. [Google Scholar] [CrossRef]
  6. Becerra, A.; de la Rosa, J.I.; Gonzalez, E.; Pedroza, A.D.; Escalante, N.I. Training deep neural networks with non-uniform frame-level cost function for automatic speech recognition. Multimed. Tools Appl. 2018, 77, 27231–27267. [Google Scholar] [CrossRef]
  7. Balakrishnan, N.; Buono, F.; Longobardi, M. On weighted extropies. Commun. Stat.-Theory Methods 2022, 51, 6250–6267. [Google Scholar] [CrossRef]
  8. Balakrishnan, N.; Buono, F.; Longobardi, M. On Tsallis extropy with an application to pattern recognition. Stat. Probab. Lett. 2020, 180, 109241. [Google Scholar] [CrossRef]
  9. Buono, F.; Kamari, O.; Longobardi, M. Interval extropy and weighted interval extropy. Ric. Mat. 2023, 72, 283–298. [Google Scholar] [CrossRef]
  10. Kazemi, R.; Tahmasebi, S.; Calì, C.; Longobardi, M. Cumulative residual extropy of minimum ranked set sampling with unequal samples. Results Appl. Math. 2021, 10, 100156. [Google Scholar] [CrossRef]
  11. Buono, F.; Deng, Y.; Longobardi, M. The unified extropy and its versions in classical and Dempster-Shafer theories. J. Appl. Probab. 2023. [Google Scholar] [CrossRef]
  12. Wand, M.P.; Jones, M.C. Kernel Smoothing; Chapman and Hall: London, UK, 1995. [Google Scholar]
  13. Chen, S. Optimal bandwidth selection for kernel density functionals estimation. J. Probab. Stat. 2015, 2015, 242683. [Google Scholar] [CrossRef]
  14. Gündüz, N.; Aydın, C. Optimal bandwidth estimators of kernel density functionals for contaminated data. J. Appl. Stat. 2021, 48, 2239–2258. [Google Scholar] [CrossRef]
  15. Qiu, G.; Jia, K. Extropy estimators with applications in testing uniformity. J. Nonparametr. Stat. 2018, 30, 182–196. [Google Scholar] [CrossRef]
  16. Rajesh, R.; Rajesh, G.; Sunoj, S. Kernel estimation of extropy function under length-biased sampling. Stat. Probab. Lett. 2022, 181, 109290. [Google Scholar] [CrossRef]
  17. Maya, R.; Irshad, M.R.; Archana, K. Recursive and non-recursive kernel estimation of negative cumulative residual extropy under α-mixing dependence condition. Ric. Mat. 2021, 55, 119–139. [Google Scholar] [CrossRef]
  18. Maya, R.; Irshad, M.R.; Bakouch, H.; Krishnakumar, A.; Qarmalah, N. Kernel Estimation of the Extropy Function under α-Mixing Dependent Data. Symmetry 2023, 15, 796. [Google Scholar] [CrossRef]
  19. Irshad, M.R.; Maya, R. Non-parametric log kernel estimation of extropy function. Chil. J. Stat. 2022, 13, 155–163. [Google Scholar]
  20. Sathar, E.A.; Nair, R.D. On dynamic weighted extropy. J. Comput. Appl. Math. 2021, 393, 113507. [Google Scholar] [CrossRef]
  21. Parzen, E. On estimation of a probability density function and mode. Ann. Math. Stat. 1962, 33, 1065–1076. [Google Scholar] [CrossRef]
  22. Rosenblatt, M. Remarks on some nonparametric estimates of a density function. Ann. Math. Stat. 1956, 27, 832–837. [Google Scholar] [CrossRef]
  23. Charpentier, A.; Flachaire, E. Log-transform kernel density estimation of income distribution. L’Actual. Econ. Rev. Anal. Econ. 2015, 91, 141–159. [Google Scholar] [CrossRef]
  24. Jahanshahi, S.M.A.; Zarei, H.; Khammar, A.H. On Cumulative Residual Extropy. Probab. Eng. Inf. Sci. 2019. [Google Scholar] [CrossRef]
  25. Noughabi, H.A.; Jarrahiferriz, J. On estimation of extropy. J. Nonparametr. Stat. 2019, 31, 88–99. [Google Scholar] [CrossRef]
  26. Sheather, S.J.; Jones, M.C. A reliable data-based bandwidth selection method for kernel density estimation. J. R. Stat. Soc. Ser. B 1991, 53, 683–690. [Google Scholar] [CrossRef]
  27. Lawless, J.F. Statistical Models and Methods for Lifetime Data; Wiley: Hoboken, NJ, USA, 2011; Volume 362. [Google Scholar]
  28. Lee, E.T.; Wang, J.W. Statistical Methods for Survival Data Analysis, 3rd ed.; Wiley and Sons: Hoboken, NJ, USA, 2003. [Google Scholar]
  29. Rai, R.N.; Chaturvedi, S.K.; Bolia, N. Repairable Systems Reliability Analysis: A Comprehensive Framework; John Wiley and Sons: Hoboken, NJ, USA, 2020. [Google Scholar]
  30. Ebrahimi, N. How to measure uncertainty in the residual life time distribution. Sankhya Indian J. Stat. Ser. A 1996, 58, 48–56. [Google Scholar]
Figure 1. Histogram for “Failure time of Electrical Appliances” data.
Figure 2. The Q-Q plot depicting the goodness of fit for an exponential distribution.
Figure 3. Histogram for “Remission time of cancer patients” data.
Figure 4. The Q-Q plot depicting the goodness of fit for log normal distribution.
Figure 5. Density plot of “Failure time of three systems”.
Table 1. Mean and variance of \hat{J}_{n1}^w(X) for the distribution with pdf f(x) = 2x, 0 < x < 1.

n     Mean       Variance
10    −0.58602   0.03039
20    −0.57820   0.02182
30    −0.55622   0.01888
40    −0.56181   0.01083
50    −0.52369   0.00777
100   −0.52269   0.00697
500   −0.49937   0.00347
Table 2. Mean and variance of \hat{J}_{n1}^w(X) for the Rayleigh distribution with parameter 1.

n     Mean       Variance
10    −0.46220   0.02453
20    −0.33956   0.01626
30    −0.29213   0.00173
40    −0.28919   0.00121
50    −0.26677   0.00094
100   −0.25975   0.00024
500   −0.22331   0.00008
Table 3. Estimated value (H), |bias| and RMSE of \hat{J}_n^w(X), \hat{J}_{nk}^w(X) and \hat{J}_{n1}^w(X) from the standard exponential distribution with J^w(X) = −0.125.

\hat{J}_n^w(X)
n        50         100        150        200        250
H        −0.11830   −0.11886   −0.11904   −0.11909   −0.11978
|bias|   0.00670    0.00614    0.00596    0.00596    0.00591
RMSE     0.01760    0.01303    0.01048    0.00948    0.00836

n        300        350        400        450        500
H        −0.11952   −0.11975   −0.12025   −0.12028   −0.12061
|bias|   0.00548    0.00525    0.00475    0.00472    0.00439
RMSE     0.00836    0.00774    0.00707    0.00707    0.00632

\hat{J}_{nk}^w(X)
n        50         100        150        200        250
H        −0.13364   −0.13254   −0.13088   −0.13002   −0.12998
|bias|   0.00864    0.00754    0.00588    0.00502    0.00498
RMSE     0.01732    0.01140    0.00948    0.00874    0.00855

n        300        350        400        450        500
H        −0.12989   −0.12976   −0.12965   −0.12946   −0.12955
|bias|   0.00489    0.00476    0.00465    0.00446    0.00455
RMSE     0.00852    0.00832    0.00827    0.00817    0.00807

\hat{J}_{n1}^w(X)
n        50         100        150        200        250
H        −0.16379   −0.14418   −0.13857   −0.13496   −0.13285
|bias|   0.03879    0.01918    0.01357    0.00996    0.00785
RMSE     0.05059    0.02280    0.01643    0.01224    0.01000

n        300        350        400        450        500
H        −0.13234   −0.13111   −0.13043   −0.13005   −0.12976
|bias|   0.00734    0.00611    0.00543    0.00505    0.00476
RMSE     0.00948    0.00836    0.00824    0.00807    0.00807
Table 4. Estimated value (H), |bias| and RMSE of \hat{J}_n^w(X), \hat{J}_{nk}^w(X) and \hat{J}_{n1}^w(X) from the log-normal distribution with J^w(X) = −0.14105.

\hat{J}_n^w(X)
n        50         100        150        200        250
H        −0.14370   −0.14243   −0.14199   −0.14199   −0.14189
|bias|   0.00265    0.00139    0.00095    0.00095    0.00084
RMSE     0.01581    0.01095    0.00894    0.00894    0.00707

n        300        350        400        450        500
H        −0.14175   −0.14121   −0.14127   −0.14155   −0.14140
|bias|   0.00070    0.00016    0.00012    0.00011    0.00006
RMSE     0.00632    0.00450    0.00447    0.00423    0.00411

\hat{J}_{nk}^w(X)
n        50         100        150        200        250
H        −0.14621   −0.14375   −0.14241   −0.14207   −0.14144
|bias|   0.00517    0.00271    0.00136    0.00103    0.00039
RMSE     0.01612    0.01140    0.00836    0.00707    0.00632

n        300        350        400        450        500
H        −0.14139   −0.14138   −0.14127   −0.14126   −0.14093
|bias|   0.00037    0.00034    0.00023    0.00022    0.00012
RMSE     0.00632    0.00547    0.00547    0.00547    0.00547

\hat{J}_{n1}^w(X)
n        50         100        150        200        250
H        −0.22300   −0.17574   −0.16491   −0.15942   −0.15744
|bias|   0.08195    0.03469    0.02386    0.01837    0.01639
RMSE     0.06103    0.05049    0.03286    0.02738    0.02645

n        300        350        400        450        500
H        −0.15401   −0.15218   −0.15072   −0.15015   −0.14888
|bias|   0.01296    0.01113    0.00967    0.00911    0.00783
RMSE     0.02000    0.01581    0.01414    0.01449    0.01183
Table 5. Estimated value (H), |bias| and RMSE of \hat{J}_n^w(X), \hat{J}_{nk}^w(X) and \hat{J}_{n1}^w(X) from the standard uniform distribution with J^w(X) = −0.25.

\hat{J}_n^w(X)
n        50         100        150        200        250
H        −0.20970   −0.21562   −0.21826   −0.22059   −0.22285
|bias|   0.04030    0.03438    0.03174    0.02941    0.02715
RMSE     0.05347    0.04289    0.03781    0.03464    0.03146

n        300        350        400        450        500
H        −0.22277   −0.22426   −0.22511   −0.22575   −0.22668
|bias|   0.02723    0.02574    0.02489    0.02425    0.02332
RMSE     0.03114    0.02966    0.02810    0.02756    0.02607

\hat{J}_{nk}^w(X)
n        50         100        150        200        250
H        −0.22576   −0.22786   −0.22829   −0.23056   −0.23045
|bias|   0.02424    0.02214    0.02171    0.01955    0.01944
RMSE     0.04123    0.03162    0.02828    0.02588    0.02569

n        300        350        400        450        500
H        −0.23201   −0.23276   −0.23325   −0.23399   −0.23379
|bias|   0.01799    0.01724    0.01675    0.01621    0.01601
RMSE     0.02302    0.02167    0.02097    0.01974    0.01974

\hat{J}_{n1}^w(X)
n        50         100        150        200        250
H        −0.22669   −0.22713   −0.22892   −0.23084   −0.23194
|bias|   0.02331    0.02287    0.02108    0.01916    0.01806
RMSE     0.04147    0.03271    0.02915    0.02569    0.02366

n        300        350        400        450        500
H        −0.23168   −0.23195   −0.23317   −0.23335   −0.23361
|bias|   0.01832    0.01805    0.01683    0.01665    0.01639
RMSE     0.02345    0.02213    0.02121    0.02024    0.01974
Table 6. Values of \hat{J}_n^w(X), \hat{J}_{nk}^w(X) and \hat{J}_{n1}^w(X) for the three systems.

          \hat{J}_n^w(X)   \hat{J}_{nk}^w(X)   \hat{J}_{n1}^w(X)
System 1  −0.09638         −0.12426            −0.14345
System 2  −0.19953         −0.20431            −0.19666
System 3  −0.39227         −0.41138            −0.30690