Article

Kernel Estimation of the Extropy Function under α-Mixing Dependent Data

by Radhakumari Maya 1, Muhammed Rasheed Irshad 2, Hassan Bakouch 3,4, Archana Krishnakumar 2 and Najla Qarmalah 5,*

1 Department of Statistics, University College, Trivandrum 695 034, Kerala, India
2 Department of Statistics, Cochin University of Science and Technology, Cochin 682 022, Kerala, India
3 Department of Mathematics, College of Science, Qassim University, Buraydah 51452, Saudi Arabia
4 Department of Mathematics, Faculty of Science, Tanta University, Tanta 31111, Egypt
5 Department of Mathematical Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
* Author to whom correspondence should be addressed.
Symmetry 2023, 15(4), 796; https://doi.org/10.3390/sym15040796
Submission received: 16 February 2023 / Revised: 18 March 2023 / Accepted: 21 March 2023 / Published: 24 March 2023

Abstract:
Shannon developed the idea of entropy in 1948, which relates to the measure of uncertainty associated with a random variable X. The contribution of the extropy function as a dual complement of entropy is one of the key modern results based on Shannon’s work. In order to develop the inferential aspects of the extropy function, this paper proposes a non-parametric kernel-type estimator as a new method of measuring uncertainty. Here, the observations exhibit α-mixing dependence. Asymptotic properties of the estimator are proved under appropriate regularity conditions. For comparison’s sake, a simple non-parametric estimator is also proposed, and the performance of both estimators is investigated using a Monte Carlo simulation study based on the mean-squared error and using two real-life data sets.

1. Introduction

Reference [1] made a significant contribution to statistics by coining the term “entropy”, which refers to the measurement of uncertainty in a probability distribution. If X is a non-negative random variable (rv) that admits an absolutely continuous cumulative distribution function (cdf) $F(x)$ with the corresponding probability density function (pdf) $f(x)$, then the Shannon entropy associated with X can be defined as follows:
$$R(X) = -\int_0^{+\infty} f(x)\,\log f(x)\,dx.$$
Previous research explores numerous extended and generalized versions of Shannon’s entropy function. Indeed, an excellent review of various developments on Shannon’s entropy function and its inferential aspects is covered by [2].
One of the main contemporary findings based on Shannon’s work is the contribution of a study by [3], which suggests the extropy function as a dual complement of entropy. It is defined for an absolutely continuous and non-negative rv X with pdf $f(x)$ as follows:
$$J(X) = -\frac{1}{2}\int_0^{+\infty} f^2(x)\,dx.$$
It is evident from Equation (2) that $J(X) < 0$, and hence extropy, in contrast to entropy, is always negative. The research by [3] notes that a binary distribution’s entropy and extropy are equal and that, as in the case of entropy, the maximum extropy distribution is the uniform distribution. Following on from the work of [3], the study of extropy has increased considerably from both a theoretical and an applied point of view. Reference [4] compared extropy with some existing measures of uncertainty available in the literature and showed that there are situations where extropy delivers more information than those measures. One of the main statistical applications of extropy is the total log scoring rule, which is used to score forecasting distributions. From an applied point of view, the use of extropy in automatic speech recognition was given by [5]. One can refer to [6] for the application of extropy in thermodynamics and statistical mechanics. Reference [7] explored some of its properties, including characterization results using order statistics and record values. Several works are available in the literature on the inferential aspects of the extropy function based on independent observations. Reference [8] proposed extropy estimators with applications to testing uniformity and also used extropy to compare the uncertainties of two rvs. A study by [9] provided kernel estimation of the extropy function under length-biased sampling. More recently, Reference [10] developed a non-parametric log-kernel estimator of the extropy function.
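As a quick numerical illustration of Equation (2), the following R sketch (not from the paper) evaluates the extropy of an Exp(λ) density, whose closed form is J(X) = −λ/4; with λ = 1.5 this gives −0.375, the true value used later in the simulation study.

lambda <- 1.5
f <- function(x) dexp(x, rate = lambda)   # Exp(lambda) density
# numerical evaluation of J(X) = -(1/2) * integral of f^2 over (0, Inf)
J <- -0.5 * integrate(function(x) f(x)^2, 0, Inf)$value
J   # -0.375, matching the closed form -lambda/4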
In practice, it seems more realistic to drop the assumption of independence and replace it with some mode of dependence. However, in the case of the extropy function, no inferential aspects have been proposed in previous research based on dependent data. With this in mind, the goal of the current research is to propose a non-parametric estimator of the extropy function that relies on recursive kernel-type estimation based on dependent data. Although various mixing conditions are available in the literature, α-mixing is among the weakest and most widely applicable of them (see Reference [11]), which motivates us to estimate the extropy function under the α-mixing dependence condition. Unlike other research relating to recursive estimation, this paper develops a detailed analysis of recursive kernel estimation using simulation and time series data.
Let $(\Omega, \mathcal{A}, P)$ be a probability space and $\mathcal{A}_i^m$ be the σ-algebra of events generated by the rvs $\{X_j;\, i \le j \le m\}$. The stationary process $\{X_j\}$ is said to satisfy the α-mixing (strong mixing) condition if
$$\sup_{E \in \mathcal{A}_1^{i},\, F \in \mathcal{A}_{i+m}^{+\infty}} \left| P(E \cap F) - P(E)\,P(F) \right| = \alpha(m) \to 0$$
as $m \to +\infty$. This implies that, as m approaches infinity, the rvs $X_i$ and $X_{i+m}$ become asymptotically independent. Here, the coefficient $\alpha(m)$ is known as the mixing coefficient.
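For example, stationary AR(1) processes with |ρ| < 1 whose innovations have a continuous density, such as those used in the simulation study of Section 4, are known to be α-mixing with geometrically decaying mixing coefficients under mild moment conditions.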
Previous researchers in the field have studied non-parametric estimation for dependent data. These investigations include non-parametric kernel-type estimation of the past extropy and the residual extropy under the α-mixing dependence condition (see References [12,13]). Reference [14] developed non-parametric kernel-type estimators for the Mathai-Haubold entropy and its residual version based on α-mixing dependent data. Furthermore, a study by Reference [15] explores the recursive and non-recursive kernel estimation of the negative cumulative residual extropy under the α-mixing dependence condition.
This paper adopts the following structure: Section 2 develops a non-parametric recursive kernel estimator of the extropy function, while Section 3 presents the asymptotic properties of the proposed estimator. Section 4 outlines a simulation study illustrating the performance of the proposed estimator. Section 5 discusses the application of the estimator to real-life data. Finally, Section 6 concludes with some directions for future work.

2. Estimation of Extropy Function

In this section, a non-parametric recursive kernel estimator for the extropy function is proposed. The main feature of recursive density estimators, in comparison with non-recursive ones, is that they can be updated with each additional observation, whereas non-recursive estimators must be entirely recomputed.
Let $\{X_i;\, 1 \le i \le n\}$ be a sequence of identically distributed rvs representing the life-times of n components. Here, the life-times are assumed to be α-mixing. The assumptions follow the study by [16], and Reference [17] is used for deriving the asymptotic properties of the estimator. For the purpose of comparison, a simple non-parametric estimator of $J(X)$ is defined as follows:
$$J_n^*(X) = -\frac{1}{2}\,\frac{1}{n}\sum_{i=1}^{n} f_n^2(X_i),$$
where
$$f_n(X_i) = \frac{1}{n-1}\sum_{\substack{j=1 \\ j \ne i}}^{n} \frac{1}{\psi_j}\,\kappa\!\left(\frac{X_i - X_j}{\psi_j}\right).$$
This represents the kernel estimator obtained from the sample without $X_i$ (see Reference [18]).
Following on from this, the non-parametric recursive kernel estimator for $J(X)$ is:
$$\hat{J}_n(X) = -\frac{1}{2}\int_0^{+\infty} \hat{f}_n^2(x)\,dx,$$
where $\hat{f}_n(x)$ is a non-parametric estimator of the density function.
The most widely used non-parametric recursive estimator of $f(x)$ is the kernel estimator (see Reference [19]), which is given by:
$$\hat{f}_n(x) = \frac{1}{n}\sum_{j=1}^{n} \frac{1}{\psi_j}\,\kappa\!\left(\frac{x - X_j}{\psi_j}\right),$$
where $\kappa(x)$ satisfies the following conditions: $\kappa(x)$ is bounded, non-negative and symmetric, $\kappa_i(x) = (1/\psi_i)\,\kappa(x/\psi_i)$, $\int_{-\infty}^{+\infty} \kappa(x)\,dx = 1$ and $\int_{-\infty}^{+\infty} x\,\kappa(x)\,dx = 0$, and $\{\psi_n\}$ is a sequence of positive bandwidths such that $\psi_n \to 0$ and $n\psi_n \to +\infty$ as $n \to +\infty$, $\lim_{n\to+\infty} (1/n)\sum_{j=1}^{n} \left(\psi_j/\psi_n\right)^{l} = \beta_l < +\infty$ for $l = 1, 2, \ldots, s+1$, and $\lim_{n\to+\infty} (1/n)\sum_{j=1}^{n} \left(\psi_n/\psi_j\right)^{l} = \theta_l < +\infty$ for $1 \le l < 2$.
Under the α-mixing dependence condition, the expressions for the bias and variance of $\hat{f}_n(x)$ are given by (see Reference [16]):
$$\mathrm{Bias}\big(\hat{f}_n(x)\big) \approx \frac{\psi_n^{s}\,\zeta_s}{s!}\, f^{(s)}(x)\,\beta_s$$
and
$$\mathrm{Var}\big(\hat{f}_n(x)\big) \approx \frac{\theta_1\, f(x)}{n\,\psi_n}\,\zeta_\kappa,$$
where $\zeta_s = \int_{-\infty}^{+\infty} u^{s}\,\kappa(u)\,du$, $f^{(s)}(x)$ is the $s$th derivative of the pdf and $\zeta_\kappa = \int_{-\infty}^{+\infty} \kappa^{2}(u)\,du$.
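The estimator in Equations (6) and (7) can be sketched in a few lines of R. The following is a minimal illustration rather than the authors' code, assuming a Gaussian kernel and the bandwidth sequence $\psi_j = j^{-1/2}$ used in Appendix A:

# Recursive kernel density estimator of Equation (7), Gaussian kernel,
# assumed bandwidths psi_j = j^(-1/2)
fn_hat <- function(x, X) {
  psi <- seq_along(X)^(-0.5)
  sapply(x, function(xx) mean(dnorm((xx - X) / psi) / psi))
}
# Plug-in extropy estimator of Equation (6)
Jn_hat <- function(X) -0.5 * integrate(function(x) fn_hat(x, X)^2, 0, Inf)$value

set.seed(1)
X <- rexp(50, rate = 1.5)   # i.i.d. stand-in for an alpha-mixing sample
Jn_hat(X)                   # should lie near the true extropy -0.375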

3. Recursive Property and Asymptotic Results

This section shows that the proposed estimator $\hat{J}_n(X)$ exhibits a recursive property and establishes the asymptotic results for the estimator.
Theorem 1.
Let $\hat{J}_n(X)$ be a non-parametric estimator of $J(X)$ as defined in Equation (6). Then, it satisfies the recursive property:
$$\hat{J}_n(X) = \frac{(n-1)^2}{n^2}\,\hat{J}_{n-1}(X) - \frac{1}{n^2\,\psi_n}\int_0^{+\infty} \kappa\!\left(\frac{x - X_n}{\psi_n}\right)\left[(n-1)\,\hat{f}_{n-1}(x) + \frac{1}{2\,\psi_n}\,\kappa\!\left(\frac{x - X_n}{\psi_n}\right)\right] dx.$$
Proof of Theorem 1.
We have,
$$\hat{J}_n(X) = -\frac{1}{2}\int_0^{+\infty} \hat{f}_n^2(x)\,dx,$$
$$\hat{J}_{n-1}(X) = -\frac{1}{2}\int_0^{+\infty} \hat{f}_{n-1}^2(x)\,dx,$$
and
$$\hat{f}_n(x) = \frac{n-1}{n}\,\hat{f}_{n-1}(x) + \frac{1}{n\,\psi_n}\,\kappa\!\left(\frac{x - X_n}{\psi_n}\right).$$
Here,
$$-2\,\hat{J}_n(X) = \int_0^{+\infty} \hat{f}_n^2(x)\,dx = \int_0^{+\infty} \left[\frac{n-1}{n}\,\hat{f}_{n-1}(x) + \frac{1}{n\,\psi_n}\,\kappa\!\left(\frac{x - X_n}{\psi_n}\right)\right]^2 dx.$$
Then, we get
$$-2\,\hat{J}_n(X) = \frac{(n-1)^2}{n^2}\int_0^{+\infty} \hat{f}_{n-1}^2(x)\,dx + \frac{1}{(n\,\psi_n)^2}\int_0^{+\infty} \kappa^2\!\left(\frac{x - X_n}{\psi_n}\right) dx + \frac{2(n-1)}{n^2\,\psi_n}\int_0^{+\infty} \hat{f}_{n-1}(x)\,\kappa\!\left(\frac{x - X_n}{\psi_n}\right) dx = -\frac{(n-1)^2}{n^2}\,2\,\hat{J}_{n-1}(X) + \frac{1}{(n\,\psi_n)^2}\int_0^{+\infty} \kappa^2\!\left(\frac{x - X_n}{\psi_n}\right) dx + \frac{2(n-1)}{n^2\,\psi_n}\int_0^{+\infty} \hat{f}_{n-1}(x)\,\kappa\!\left(\frac{x - X_n}{\psi_n}\right) dx.$$
The result of rearranging the equation above is Equation (10). Hence, the theorem is proved.    □
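Equation (10) can also be checked numerically. The sketch below (again an illustration, assuming a Gaussian kernel and bandwidths $\psi_j = j^{-1/2}$) compares a direct evaluation of $\hat{J}_n(X)$ with the recursive update:

set.seed(2)
X   <- rexp(30, rate = 1.5)
n   <- length(X)
psi <- (1:n)^(-0.5)
fm  <- function(x, m)   # density estimate built from the first m observations
  sapply(x, function(xx) mean(dnorm((xx - X[1:m]) / psi[1:m]) / psi[1:m]))
Jm  <- function(m) -0.5 * integrate(function(x) fm(x, m)^2, 0, Inf)$value
kn  <- function(x) dnorm((x - X[n]) / psi[n])   # kappa((x - X_n) / psi_n)
rec <- (n - 1)^2 / n^2 * Jm(n - 1) -
  1 / (n^2 * psi[n]) * integrate(function(x)
    kn(x) * ((n - 1) * fm(x, n - 1) + kn(x) / (2 * psi[n])), 0, Inf)$value
all.equal(Jm(n), rec)   # TRUE, up to numerical integration error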
Theorem 2.
Let $\kappa(x)$ be a kernel of order s and let $\{\psi_n\}$ be a sequence of numbers that satisfies the conditions given in Section 2. Then, $\hat{J}_n(X)$ is a consistent estimator of $J(X)$; that is,
$$\hat{J}_n(X) \xrightarrow{\;p\;} J(X).$$
Proof of Theorem 2.
By using Taylor’s series expansion, we have
$$\int_0^{+\infty} \hat{f}_n^2(x)\,dx \approx \int_0^{+\infty} f^2(x)\,dx + 2\int_0^{+\infty} \big(\hat{f}_n(x) - f(x)\big)\,f(x)\,dx.$$
Hence, the expressions for the bias, variance and mean-squared error ($MSE$) of $\int_0^{+\infty} \hat{f}_n^2(x)\,dx$ are, respectively,
$$\mathrm{Bias}\left(\int_0^{+\infty} \hat{f}_n^2(x)\,dx\right) \approx 2\int_0^{+\infty} \mathrm{Bias}\big(\hat{f}_n(x)\big)\,f(x)\,dx = \frac{2\,\psi_n^{s}\,\beta_s\,\zeta_s}{s!}\int_0^{+\infty} f^{(s)}(x)\,f(x)\,dx$$
and
$$\mathrm{Var}\left(\int_0^{+\infty} \hat{f}_n^2(x)\,dx\right) \approx 4\int_0^{+\infty} \mathrm{Var}\big(\hat{f}_n(x)\big)\,f^2(x)\,dx = \frac{4\,\theta_1}{n\,\psi_n}\,\zeta_\kappa \int_0^{+\infty} f^3(x)\,dx.$$
$$MSE\left(\int_0^{+\infty} \hat{f}_n^2(x)\,dx\right) \approx \left(\frac{2\,\psi_n^{s}\,\beta_s\,\zeta_s}{s!}\int_0^{+\infty} f^{(s)}(x)\,f(x)\,dx\right)^{2} + \frac{4\,\theta_1}{n\,\psi_n}\,\zeta_\kappa \int_0^{+\infty} f^3(x)\,dx.$$
From Equation (19), as $n \to +\infty$, $MSE\left(\int_0^{+\infty} \hat{f}_n^2(x)\,dx\right) \to 0$.
Therefore,
$$\hat{J}_n(X) \xrightarrow{\;p\;} J(X).$$
Thus, the theorem is proved.    □
Remark 1.
The bias and variance of the estimator $\hat{J}_n(X)$ are obtained as
$$\mathrm{Bias}\big(\hat{J}_n(X)\big) \approx -\frac{\psi_n^{s}\,\beta_s\,\zeta_s}{s!}\int_0^{+\infty} f^{(s)}(x)\,f(x)\,dx$$
and
$$\mathrm{Var}\big(\hat{J}_n(X)\big) \approx \frac{\theta_1}{n\,\psi_n}\,\zeta_\kappa \int_0^{+\infty} f^3(x)\,dx.$$
Theorem 3.
Suppose $\hat{J}_n(X)$ is a non-parametric estimator of $J(X)$ as defined in Equation (6). Then, $\hat{J}_n(X)$ is an integratedly consistent in quadratic mean estimator of $J(X)$.
Proof of Theorem 3.
Consider the mean integrated square error of $\hat{J}_n(X)$, denoted $\Delta(\hat{J}_n(X))$. Then,
$$\Delta\big(\hat{J}_n(X)\big) = E\int_{-\infty}^{+\infty} \big(\hat{J}_n(X) - J(X)\big)^2\,dx = \int_{-\infty}^{+\infty} \left[\mathrm{Var}\big(\hat{J}_n(X)\big) + \Big(\mathrm{Bias}\big(\hat{J}_n(X)\big)\Big)^{2}\right] dx = \int_{-\infty}^{+\infty} MSE\big(\hat{J}_n(X)\big)\,dx.$$
By using Equations (20) and (21), we get
$$MSE\big(\hat{J}_n(X)\big) \approx \left(\frac{\psi_n^{s}\,\beta_s\,\zeta_s}{s!}\int_0^{+\infty} f^{(s)}(x)\,f(x)\,dx\right)^{2} + \frac{\theta_1}{n\,\psi_n}\,\zeta_\kappa \int_0^{+\infty} f^3(x)\,dx.$$
From Equation (23), as $n \to +\infty$,
$$MSE\big(\hat{J}_n(X)\big) \to 0,$$
and from Equation (22) we get
$$\Delta\big(\hat{J}_n(X)\big) \to 0, \quad n \to +\infty.$$
Hence, the theorem is proved using Equation (24) (see Reference [20]).    □
Theorem 4.
Suppose $\hat{J}_n(X)$ is a non-parametric estimator of $J(X)$ as defined in Equation (6), satisfying the assumptions given in Section 2. Then,
$$(n\,\psi_n)^{\frac{1}{2}}\;\frac{\hat{J}_n(X) - J(X)}{\phi_J}$$
has a standard normal distribution ($N(0,1)$) as $n \to +\infty$, with
$$\phi_J^{2} = \theta_1\,\zeta_\kappa \int_0^{+\infty} f^3(x)\,dx.$$
Proof of Theorem 4.
We have
$$(n\,\psi_n)^{\frac{1}{2}}\,\big(\hat{J}_n(X) - J(X)\big) \approx -(n\,\psi_n)^{\frac{1}{2}}\int_0^{+\infty} \big(\hat{f}_n(x) - f(x)\big)\,f(x)\,dx.$$
By using the asymptotic normality of $\hat{f}_n(x)$ given in Reference [16], we can conclude that
$$(n\,\psi_n)^{\frac{1}{2}}\;\frac{\hat{J}_n(X) - J(X)}{\phi_J}$$
has an $N(0,1)$ limit as $n \to +\infty$, with $\phi_J^{2}$ given in Equation (26). Hence, the theorem follows.    □

4. Monte Carlo Simulation

A simulation study is carried out to compare the kernel estimators $\hat{J}_n(X)$ and $J_n^*(X)$ in terms of the $MSE$. In the first case, $\{X_i\}$ is generated from the exponential AR(1) process with correlation coefficient ρ = 0.2 and parameter λ = 1.5. The Gaussian kernel is used as the kernel function for the estimation. The estimated value, bias and $MSE$ of $\hat{J}_n(X)$ and $J_n^*(X)$ are calculated for sample sizes (n) of 10, 20, 50, 100, 150, 200, 250, 300, 350, 400, 450 and 500 in the case of the exponential AR(1) process, as shown in Table 1 and Table 2.
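The paper does not spell out the exponential AR(1) construction; one standard choice that preserves an Exp(λ) marginal is the Gaver-Lewis process $X_t = \rho X_{t-1} + I_t E_t$ with $I_t \sim \mathrm{Bernoulli}(1-\rho)$ and $E_t \sim \mathrm{Exp}(\lambda)$. A hedged R sketch under that assumption:

n <- 500; rho <- 0.2; lambda <- 1.5
X <- numeric(n)
X[1] <- rexp(1, lambda)
for (t in 2:n) {
  # a fresh Exp(lambda) innovation enters with probability 1 - rho,
  # which keeps the Exp(lambda) marginal distribution
  X[t] <- rho * X[t - 1] + rbinom(1, 1, 1 - rho) * rexp(1, lambda)
}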
From Table 1 and Table 2, it can be seen that the proposed estimate $\hat{J}_n(X)$ of the extropy function performs better than the estimate $J_n^*(X)$ based on the obtained $MSE$.
Furthermore, a Gaussian distribution with parameters μ = 5 and σ = 3 is considered, from which $\{X_i\}$ is generated to construct the Gaussian AR(1) process with correlation coefficient ρ = 0.5. The estimation is conducted using the Gaussian kernel function. The bias and $MSE$ of $\hat{J}_n(X)$ and $J_n^*(X)$ for sample sizes 10, 20, 50, 100, 150, 200, 250, 300, 350, 400, 450 and 500 are calculated for the Gaussian AR(1) process, as shown in Table 3 and Table 4.
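A Gaussian AR(1) sample with the stated marginal can be generated analogously; the sketch below assumes the stationary marginal is N(5, 3^2):

n <- 500; mu <- 5; sigma <- 3; rho <- 0.5
X <- numeric(n)
X[1] <- rnorm(1, mu, sigma)
for (t in 2:n)   # innovation sd sigma * sqrt(1 - rho^2) keeps the N(mu, sigma^2) marginal
  X[t] <- mu + rho * (X[t - 1] - mu) + rnorm(1, 0, sigma * sqrt(1 - rho^2))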
Table 3 and Table 4 show that the proposed estimate $\hat{J}_n(X)$ of the extropy function is better than the estimate $J_n^*(X)$ based on the $MSE$. All simulations were executed using the R programming language. Moreover, the R code used to compute $\hat{J}_n(X)$ by simulation for size n = 50 is given in Appendix A.

5. Data Analysis

5.1. Application 1

For the first application, time series data relating to International Airline Passengers: Monthly Totals from January 1949 to December 1960 (in thousands of passengers) are considered for analysis. These time series data are also used in a study by [21]. The time series plot, sample autocorrelation function (ACF) plot and partial autocorrelation function (PACF) plot of the data are shown in Figure 1, Figure 2 and Figure 3.
From Figure 1, Figure 2 and Figure 3, the data show random, seasonal and trend components. The time series data are decomposed into trend, seasonal and random components using the moving average method. Figure 4 shows this decomposition.
After removing the seasonal and trend components, a time series plot of the remaining random component is produced, as shown in Figure 5.
The AR(1) model is fitted to the data with a correlation coefficient of ϕ = 0.4069 and an intercept of 0.9981. The ACF plot of the residuals of the fitted AR(1) model is shown in Figure 6.
A Gaussian distribution with parameters μ and σ is fitted to the data, and the acquired Kolmogorov-Smirnov (KS) statistic value is 0.083416, with a corresponding p-value of 0.3173. This shows that the Gaussian distribution is an appropriate fit for the data. The maximum likelihood estimates of the parameters are $\hat{\mu}$ = 0.9982 and $\hat{\sigma}$ = 0.0332.
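This pipeline can be reproduced approximately in R using the built-in AirPassengers series; the sketch below is an approximation, since it assumes a multiplicative decomposition (consistent with the random component fluctuating around 1, as the fitted intercept of about 0.998 suggests):

dec  <- decompose(AirPassengers, type = "multiplicative")   # Figure 4
rand <- na.omit(dec$random)                                 # random component (Figure 5)
fit  <- arima(rand, order = c(1, 0, 0))                     # AR(1) fit
ks.test(rand, "pnorm", mean(rand), sd(rand))                # Gaussian goodness of fit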
Using the maximum likelihood estimates, the estimate of extropy is −3.873615. The extropy values are also estimated using the proposed estimators given in Equations (6) and (4). The corresponding estimates are $\hat{J}_n(X)$ = −3.62753 and $J_n^*(X)$ = −42.45683. Thus, it becomes clear that the proposed estimate $\hat{J}_n(X)$ sits close to the value −3.873615, while the estimate $J_n^*(X)$ is far removed from it. As a result, the estimate $\hat{J}_n(X)$ is considered better than $J_n^*(X)$ for estimating the extropy of these data.

5.2. Application 2

For this application, the time series data ‘The Failure of Computer Patterns’ used in [22] are considered. These data comprise 257 observations, where the quantities relate to successive times-to-failure. The time series plot of the data is shown in Figure 7.
From an analysis of this time series plot, it becomes clear that the data are non-stationary. Hence, the data are made stationary by differencing. The time series plot of the stationary data is shown in Figure 8.
For the given observations, eight outliers are present, and these are removed from the data. The time series plot, ACF and PACF of the remaining observations are shown in Figure 9, Figure 10 and Figure 11.
The AR(1) model is fitted to the data and the correlation coefficient ϕ = 0.5854 is obtained. Also, an exponential distribution with rate parameter λ is fitted to the data, yielding KS = 0.07501 and p-value = 0.1121. Hence, it is apparent that the exponential distribution is a satisfactory fit to the data. The maximum likelihood estimate for the data is $\hat{\lambda}$ = 0.00327 and the corresponding estimate of extropy is −0.0008186.
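For an exponential density, Equation (2) reduces to the closed form J(X) = −λ/4, so this value is simply the plug-in estimate $-\hat{\lambda}/4$.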
The estimates of extropy using kernel estimation are $\hat{J}_n(X)$ = −0.0008317 and $J_n^*(X)$ = −0.000002574. Thus, it becomes clear that the proposed estimate is closer to the maximum likelihood estimate of extropy than $J_n^*(X)$ is. Thus, $\hat{J}_n(X)$ is seen to be a better estimator than $J_n^*(X)$ for these data.
From the two applications discussed above, it is clear that the proposed estimator performs well in real-life scenarios.

6. Conclusions and Future Works

This paper has explored non-parametric estimators of the extropy function using kernel-type estimation, where the observations under consideration are α-mixing dependent. Certain asymptotic properties of the proposed estimator are investigated and proved. Furthermore, a simulation study has been conducted to compare the performance of the estimates, and the suitability of the estimator is shown using real-life data applications. It can be concluded that the proposed estimator $\hat{J}_n(X)$ is superior to $J_n^*(X)$.
Several generalizations of the extropy function, such as the weighted extropy and the Tsallis extropy, have been proposed in the literature. Estimation of those generalizations based on dependent observations can be considered as future work. In addition to the α-mixing dependence condition, one can develop inferential aspects of extropy and its various generalizations under the ϕ-mixing and ρ-mixing dependence conditions.

Author Contributions

Conceptualization, R.M., M.R.I. and H.B.; methodology, R.M., M.R.I. and A.K.; writing—original draft preparation, A.K.; review and editing, M.R.I., H.B. and N.Q.; validation, N.Q.; software, A.K. and N.Q.; visualization, A.K. and N.Q.; funding acquisition, N.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R376), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The authors gratefully acknowledge Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R376), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia, for the financial support for this project.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

R code to find $\hat{J}_n(X)$ by simulation for size n = 50:

rm(list = ls())

# Squared recursive kernel density estimate at x, with Gaussian kernel
# and bandwidths psi_i = i^(-1/2); X and n are read from the global environment.
intt <- function(x) {
  h  <- (1:n)^(-0.5)
  fn <- exp((-0.5 / h^2) * (x - X)^2) / (sqrt(2 * pi) * h)
  (sum(fn) / n)^2
}

n  <- 50
Jn <- numeric(1000)
for (s in 1:1000) {
  # Sample generation assumed: the published snippet left X undefined; here an
  # exponential AR(1) sample (Section 4: rho = 0.2, lambda = 1.5) is used.
  X <- numeric(n)
  X[1] <- rexp(1, 1.5)
  for (t in 2:n) X[t] <- 0.2 * X[t - 1] + rbinom(1, 1, 0.8) * rexp(1, 1.5)
  Jn[s] <- (-1 / 2) * integrate(Vectorize(intt), 0, Inf)$value
}

Jncap <- mean(Jn)   # Monte Carlo average of the extropy estimates

References

  1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
  2. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; Wiley Series in Telecommunications and Signal Processing; John Wiley & Sons: Hoboken, NJ, USA, 2006.
  3. Lad, F.; Sanfilippo, G.; Agrò, G. Extropy: Complementary dual of entropy. Stat. Sci. 2015, 30, 40–58.
  4. Jose, J.; Abdul Sathar, E.I. Extropy for past life based on classical records. J. Indian Soc. Probab. Stat. 2021, 22, 27–46.
  5. Becerra, A.; de la Rosa, J.I.; Gonzàlez, E.; Pedroza, A.D.; Escalante, N.I. Training deep neural networks with non-uniform frame-level cost function for automatic speech recognition. Multimed. Tools Appl. 2018, 77, 27231–27267.
  6. Martinas, K.; Frankowicz, M. Extropy-reformulation of the Entropy Principle. Period. Polytech. Chem. Eng. 2000, 44, 29–38.
  7. Qiu, G. The extropy of order statistics and record values. Stat. Probab. Lett. 2017, 120, 52–60.
  8. Qiu, G.; Jia, K. Extropy estimators with applications in testing uniformity. J. Nonparametric Stat. 2018, 30, 182–196.
  9. Rajesh, R.; Rajesh, G.; Sunoj, S. Kernel estimation of extropy function under length-biased sampling. Stat. Probab. Lett. 2022, 181, 109290.
  10. Irshad, M.R.; Maya, R. Non-parametric log kernel estimation of extropy function. Chil. J. Stat. 2022, 13, 155–163.
  11. Rosenblatt, M. A central limit theorem and a strong mixing condition. Proc. Natl. Acad. Sci. USA 1956, 42, 43–47.
  12. Irshad, M.R.; Maya, R. Non-parametric estimation of the past extropy under α-mixing dependence condition. Ric. Mat. 2022, 71, 723–734.
  13. Maya, R.; Irshad, M.R. Kernel estimation of the residual extropy under α-mixing dependence condition. S. Afr. Stat. J. 2019, 53, 65–72.
  14. Maya, R.; Irshad, M.R. Kernel estimation of Mathai-Haubold entropy and residual Mathai-Haubold entropy functions under α-mixing dependence condition. Am. J. Math. Manag. Sci. 2022, 41, 148–159.
  15. Maya, R.; Irshad, M.R.; Archana, K. Recursive and non-recursive kernel estimation of negative cumulative residual extropy under α-mixing dependence condition. Ric. Mat. 2021, 55, 1–21.
  16. Masry, E. Recursive probability density estimation for weakly dependent stationary processes. IEEE Trans. Inf. Theory 1986, 32, 254–267.
  17. Härdle, W.K. Smoothing Techniques: With Implementation in S; Springer Science and Business Media: Berlin/Heidelberg, Germany, 1991.
  18. Hall, P.; Morton, S.C. On estimation of entropy. Ann. Inst. Stat. Math. 1993, 45, 69–88.
  19. Wolverton, C.; Wagner, T. Asymptotically optimal discriminant functions for pattern classification. IEEE Trans. Inf. Theory 1969, 15, 258–265.
  20. Wegman, E.J. Nonparametric probability density estimation: I. A summary of available methods. Technometrics 1972, 14, 533–546.
  21. Box, G.E.; Jenkins, G.M.; Reinsel, G.C.; Ljung, G.M. Time Series Analysis: Forecasting and Control; John Wiley and Sons: Hoboken, NJ, USA, 2015.
  22. Lewis, P.A.W. A branching Poisson process model for the analysis of computer failure patterns. J. R. Stat. Soc. Ser. B (Methodol.) 1964, 26, 398–456.
Figure 1. Time series plot of monthly totals of International Airline Passengers data.
Figure 2. ACF plot of monthly totals of International Airline Passengers data.
Figure 3. PACF plot of monthly totals of International Airline Passengers data.
Figure 4. Decomposition plot of the data.
Figure 5. Time series plot of random component.
Figure 6. Sample ACF of residuals of fitted AR(1) model.
Figure 7. Time series plot of the data “Failure of computer patterns”.
Figure 8. Time series plot of stationary data.
Figure 9. Time series plot of computer failure data patterns without outliers.
Figure 10. ACF plot of computer failure data patterns without outliers.
Figure 11. PACF plot of computer failure data patterns without outliers.
Table 1. Exponential AR(1): estimated value (E), bias and $MSE$ of $\hat{J}_n(X)$ with J(X) = −0.375.

n     E         Bias     MSE
10    −0.1756   0.1994   0.0406
20    −0.2089   0.1661   0.0284
50    −0.2476   0.1274   0.0169
100   −0.2731   0.1019   0.0110
150   −0.2870   0.0880   0.0083
200   −0.2971   0.0779   0.0065
250   −0.3037   0.0713   0.0054
300   −0.3081   0.0669   0.0048
350   −0.3119   0.0631   0.0043
400   −0.3155   0.0595   0.0038
450   −0.3189   0.0561   0.0034
500   −0.3211   0.0539   0.0031
Table 2. Exponential AR(1): estimated value (E), bias and $MSE$ of $J_n^*(X)$ with J(X) = −0.375.

n     E         Bias     MSE
10    −0.1254   0.2496   0.0657
20    −0.1573   0.2177   0.0509
50    −0.2026   0.1724   0.0331
100   −0.2361   0.1389   0.0220
150   −0.2518   0.1232   0.0177
200   −0.2643   0.1107   0.0141
250   −0.2746   0.1004   0.0117
300   −0.2797   0.0953   0.0107
350   −0.2855   0.0895   0.0096
400   −0.2897   0.0853   0.0087
450   −0.2946   0.0804   0.0077
500   −0.2979   0.0771   0.0071
Table 3. Gaussian AR(1): estimated value (E), bias and $MSE$ of $\hat{J}_n(X)$ with J(X) = −0.0399.

n     E         Bias      MSE
10    −0.0534   −0.0135   0.0015
20    −0.0516   −0.0117   0.0010
50    −0.0461   −0.0063   0.0005
100   −0.0435   −0.0036   0.0003
150   −0.0431   −0.0032   0.0002
200   −0.0427   −0.0028   0.0001
250   −0.0424   −0.0025   0.0001
300   −0.0422   −0.0023   0.0001
350   −0.0413   −0.0014   0.0001
400   −0.0413   −0.0014   0.0001
450   −0.0414   −0.0015   0.0001
500   −0.0416   −0.0017   0.0001
Table 4. Gaussian AR(1): estimated value (E), bias and $MSE$ of $J_n^*(X)$ with J(X) = −0.0399.

n     E         Bias     MSE
10    −0.0337   0.0062   0.0020
20    −0.0279   0.0120   0.0012
50    −0.0233   0.0166   0.0006
100   −0.0219   0.0180   0.0005
150   −0.0212   0.0187   0.0005
200   −0.0210   0.0189   0.0005
250   −0.0213   0.0186   0.0004
300   −0.0213   0.0186   0.0004
350   −0.0202   0.0196   0.0004
400   −0.0207   0.0192   0.0004
450   −0.0207   0.0192   0.0004
500   −0.0208   0.0191   0.0004
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
