Weighted Cumulative Past Extropy and Its Inference

This paper introduces and studies a new generalization of the cumulative past extropy, called the weighted cumulative past extropy (WCPJ), for continuous random variables. We show that if the WCPJs of the last order statistic are equal for two distributions, then the two distributions are identical. We examine some properties of the WCPJ, and a number of inequalities involving bounds for the WCPJ are obtained. Applications to reliability theory are discussed. Finally, the empirical version of the WCPJ is considered, and a test statistic is proposed. The critical cutoff points of the test statistic are computed numerically, and the power of the test is compared to that of several alternative approaches. In some settings its power is superior to the rest, while in others it is somewhat weaker. The simulation study shows that this test statistic can be satisfactory, given its simple form and the rich information content behind it.


Introduction
In recent years, there has been strong interest in measuring the uncertainty of probability distributions, which is quantified by entropy. The probabilistic concept of entropy was developed by [1]. For an absolutely continuous random variable X with probability density function (pdf) f(x), the Shannon entropy is defined as

H(X) = − ∫ f(x) log f(x) dx,

where "log" denotes the natural logarithm. Several applications of entropy in information theory, economics, communication theory, and physics are well developed in the literature (see Cover and Thomas [2]). Belis and Guiasu [3] and Guiasu [4] considered a weighted entropy measure,

H^w(X) = − ∫ x f(x) log f(x) dx, (1)

where, by assigning greater importance to larger values of X, the weight x in (1) emphasizes the occurrence of the event X = x. Reference [5] argued for the necessity of weighted measures of uncertainty. The Shannon entropy H(X) depends only on the pdf of the random variable X. Moreover, this information measure is shift-independent, in the sense that the information content of a random variable X is equal to that of X + b. However, some applied fields, such as neurobiology, call for shift-dependent rather than shift-independent measures.

Further research generalized the concept of entropy; for example, by replacing the pdf f(x) with the survival function F̄(x), reference [6] introduced the cumulative residual entropy (CRE)

E(X) = − ∫ F̄(x) log F̄(x) dx. (2)

A more tractable measure of information, dual to the entropy and called extropy, was introduced by [7] and has the form

J(X) = −(1/2) ∫ f²(x) dx. (3)

Since then, a number of researchers have studied the behavior of this concept in more complex schemes. Both entropy and extropy quantify the information content associated with the random variable X. As stated before, the extropy is a measure of uncertainty dual to the entropy, and its most important advantage is that it is easy to compute.
References [8][9][10] characterized the behavior of extropy and its generalizations for record values, order statistics, and mixed systems, respectively. The extropy properties of ranked set sampling were given in [11]. In addition to the work of [12], in which the concept of extropy was generalized to the cumulative residual extropy, reference [13] investigated the properties of this measure, in both theoretical and applied aspects, based on a version of ranked set sampling. Moreover, Vaselabadi et al. [14], Buono and Longobardi [15], and Kazemi et al. [16] considered the varextropy, the Deng extropy, and the fractional Deng extropy, respectively, as generalizations of extropy. Furthermore, references [17][18][19] considered the dynamic weighted extropy, the extropy of the past lifetime distribution, and the extropy of k-records, respectively. For the estimation and inference of the extropy, see, for example, [20,21].

An alternative measure of the uncertainty of a random variable X, called the cumulative residual extropy (CRJ), was introduced by [12] as

CRJ(X) = −(1/2) ∫ F̄²(x) dx. (4)

This measure generalizes the extropy of [7], with the survival function F̄(x) playing the role of the pdf f(x) in (3). The cdf is more regular than the pdf, since the pdf is obtained as the derivative of the cumulative distribution function F(x) (cdf); because of this convenience, some researchers prefer to work with the CRJ rather than the extropy. Following the same idea of replacing the pdf with the cdf in the extropy definition (3), we can define the dual measure for a random variable X, the cumulative past extropy (CPJ), as

Ē J(X) = −(1/2) ∫ F²(x) dx. (5)

The CPJ Ē J(X) is suitable for measuring information when the uncertainty is related to the past, and its empirical version is more easily obtained than the empirical version of the extropy itself.
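Since the extropy and the CPJ are both simple integrals of the pdf and the cdf, their values for a given model are easy to check numerically. The sketch below is our own illustration (the function names are not from the paper); it evaluates both measures for the standard uniform distribution, whose exact values are J(X) = −1/2 and Ē J(X) = −1/6.

```python
def trapezoid(g, a, b, n=20000):
    """Composite trapezoidal rule for the integral of g over [a, b]."""
    h = (b - a) / n
    return h * (0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n)))

def extropy(pdf, a, b):
    """Extropy J(X) = -(1/2) * integral of f(x)^2 over the support [a, b]."""
    return -0.5 * trapezoid(lambda x: pdf(x) ** 2, a, b)

def cpj(cdf, a, b):
    """Cumulative past extropy: -(1/2) * integral of F(x)^2 over [a, b]."""
    return -0.5 * trapezoid(lambda x: cdf(x) ** 2, a, b)

# Standard uniform on (0, 1): f(x) = 1 and F(x) = x.
j_unif = extropy(lambda x: 1.0, 0.0, 1.0)   # exact value: -1/2
cpj_unif = cpj(lambda x: x, 0.0, 1.0)       # exact value: -1/6
```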
Thus, one can explore the applications of the CPJ in developing inferential methods. It is reasonable to define the CPJ only for random variables with bounded support, since this measure equals −∞ for all random variables with unbounded support. The rest of this paper is organized as follows. In Section 2, we introduce the weighted cumulative past extropy, analyze some of its properties, and present some examples. Section 3 considers the WCPJ of order statistics; furthermore, we show that when the WCPJs of the last order statistic are equal for two distributions, the two distributions are identical. In Section 4, some bounds and inequalities are obtained. Section 5 focuses on connections to reliability theory. Finally, in Section 6, an empirical version of the WCPJ is provided, and a hypothesis testing problem is considered for a goodness of fit test of the standard uniform distribution.

Weighted Cumulative Past Extropy
In this section, we introduce a new information measure called the weighted cumulative past extropy (WCPJ), a generalization of the cumulative past extropy. The main objective of the study is to extend the weighted extropy to random variables with continuous distributions.

Definition 1. Let X be a nonnegative absolutely continuous random variable with cdf F(x). We define the WCPJ of X by

Ē^w J(X) = −(1/2) ∫ x F²(x) dx.

As stated in the Introduction, similar to the CPJ, the value of the WCPJ is −∞ for all random variables with unbounded support, so the definition of the WCPJ should be restricted to random variables with bounded support. Let X be a nonnegative random variable with bounded support S; then, the WCPJ of X is defined as

Ē^w J(X) = −(1/2) ∫_S x F²(x) dx.

Now, we evaluate the WCPJ of some distributions.

Example 2.
Let X be a uniform random variable, X ∼ U(a, b), so that F(x) = (x − a)/(b − a) for a ≤ x ≤ b. Then, direct integration gives

Ē^w J(X) = −(1/2) ∫_a^b x ((x − a)/(b − a))² dx = −(b − a)(a + 3b)/24.

In the following, the effect of a linear transformation on the WCPJ will be studied.
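For X ∼ U(a, b), direct integration of the definition −(1/2) ∫ x F²(x) dx yields the closed form −(b − a)(a + 3b)/24. The sketch below (our own check, not part of the paper) compares this closed form against numerical integration.

```python
def wcpj_uniform_closed(a, b):
    """Closed form of the WCPJ for U(a, b): -(b - a)(a + 3b)/24."""
    return -(b - a) * (a + 3.0 * b) / 24.0

def wcpj_uniform_numeric(a, b, n=20000):
    """Trapezoidal evaluation of -(1/2) * integral_a^b x * F(x)^2 dx."""
    h = (b - a) / n
    def g(x):
        F = (x - a) / (b - a)   # cdf of U(a, b)
        return x * F * F
    return -0.5 * h * (0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n)))
```

For instance, the standard uniform case (a, b) = (0, 1) gives −1/8.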
Theorem 1. Let X be a nonnegative continuous random variable with bounded support S, and let Y = aX + b with a > 0 and b ≥ 0. Then,

Ē^w J(Y) = a² Ē^w J(X) + ab Ē J(X).

Proof. Since F_Y(y) = F_X((y − b)/a), the change of variable y = ax + b gives

Ē^w J(Y) = −(1/2) ∫ (ax + b) F_X²(x) a dx = a² Ē^w J(X) + ab Ē J(X),

which completes the proof.
In the following, we express an upper bound of the WCPJ in terms of the extropy.

Theorem 2.
Let J(X) be the extropy of the random variable X, and suppose that f(x) ≤ 1 for all x. Then, Ē^w J(X) is bounded above in terms of J(X). Proof. The proof is similar to that of Theorem 2.3 in [22].

Remark 1.
For a nonnegative and absolutely continuous random variable X with bounded support S, the weighted cumulative past extropy is nonpositive.

Some Characterization Results Based on the Order Statistics
In this section, for some characterization results, the following lemma is needed.
In the following, we provide the WCPJ of the last and first order statistics. As before, we assume that the random variable X has bounded support S. Since F_{X_{n:n}}(x) = F_X^n(x), the WCPJ of the last order statistic is

Ē^w J(X_{n:n}) = −(1/2) ∫_S x F_X^{2n}(x) dx.

With a change of variable, u = F_X(x), this can be rewritten in terms of the quantile function of X. Moreover, by using F̄_{X_{1:n}}(x) = F̄_X^n(x) and another change of variable, u = F̄(x), an analogous expression is obtained for the first order statistic. Since Λ* > 0, the uncertainty of X_{n:n} is greater than that of X for all n. If n = 1, then Ē^w J(X_{n:n}) = Ē^w J(X).
Now, we evaluate the WCPJ of X n:n for some distributions.

Example 4.
Assume that X has a uniform distribution with support on (a, b). Then,

Ē^w J(X_{n:n}) = −(1/2) ∫_a^b x ((x − a)/(b − a))^{2n} dx = −(b − a)²/(4(n + 1)) − a(b − a)/(2(2n + 1)).

Theorem 3. Let X_1, ..., X_n and Y_1, ..., Y_n be random samples from nonnegative continuous distributions with cdfs F(x) and G(x) and pdfs f(x) and g(x), respectively, with a common bounded support. Then, F(x) = G(x) if and only if Ē^w J(X_{n:n}) = Ē^w J(Y_{n:n}), for all n.
Proof. The necessity is trivial; therefore, it remains to prove the sufficiency part. If Ē^w J(X_{n:n}) = Ē^w J(Y_{n:n}) for all n, then

∫_S x F^{2n}(x) dx = ∫_S x G^{2n}(x) dx, for all n.

By using Lemma 1, we obtain F(x) = G(x), which completes the proof. In the following, a conditional analogue of this characterization is considered.
Proof. Suppose Ē^w J(X_{j:n} | X_{j+1:n} = x*) = Ē^w J(Y_{j:n} | Y_{j+1:n} = x*); that is, the WCPJs of the last order statistic for the two distributions F(x) and G(x) truncated at x* are equal. Then, by Theorem 3, these two truncated distributions are equal; that is, F(x) and G(x) truncated at x* coincide for x < x*. Conversely, the distribution of X_{j:n}, given that X_{j+1:n} = x*, is the same as the distribution of the last order statistic obtained from a sample of size n − j − 1 from a population whose distribution F(x) is truncated at x*; for more details, see [23]. Hence, if the truncated distributions are equal, then by Theorem 3 we conclude that Ē^w J(X_{j:n} | X_{j+1:n} = x*) = Ē^w J(Y_{j:n} | Y_{j+1:n} = x*).

Some Inequalities
In this section, we obtain some upper and lower bounds for the WCPJ.
Proposition 2. Let X be a nonnegative continuous random variable with the cdf F_X(x) and bounded support S = [k, sup S). Then, we obtain Ē^w J(X) ≤ k Ē J(X).
Corollary 1. Let X be a continuous random variable with the cdf F(x) and support [0, k]. Then, (i) k Ē J(X) ≤ Ē^w J(X).
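The bounds of Proposition 2 and Corollary 1 are easy to verify numerically for uniform models. The sketch below is our own illustration, assuming the definitions Ē J(X) = −(1/2) ∫ F²(x) dx and Ē^w J(X) = −(1/2) ∫ x F²(x) dx.

```python
def trapezoid(g, a, b, n=20000):
    """Composite trapezoidal rule for the integral of g over [a, b]."""
    h = (b - a) / n
    return h * (0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n)))

def cpj(cdf, a, b):
    """Cumulative past extropy -(1/2) * integral of F(x)^2 over [a, b]."""
    return -0.5 * trapezoid(lambda x: cdf(x) ** 2, a, b)

def wcpj(cdf, a, b):
    """Weighted cumulative past extropy -(1/2) * integral of x * F(x)^2."""
    return -0.5 * trapezoid(lambda x: x * cdf(x) ** 2, a, b)

# Proposition 2 with X ~ U(1, 2): support starts at k = 1.
F12 = lambda x: x - 1.0
prop2_holds = wcpj(F12, 1.0, 2.0) <= 1.0 * cpj(F12, 1.0, 2.0)

# Corollary 1 with X ~ U(0, 1): support [0, k] with k = 1.
F01 = lambda x: x
cor1_holds = 1.0 * cpj(F01, 0.0, 1.0) <= wcpj(F01, 0.0, 1.0)
```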
In the following, stochastic orderings of two distributions are considered; for more details, one can see [24]. In the sequel, we show that the usual stochastic order implies the ordering of the WCPJ.

Definition 2.
A random variable X_1 is said to be smaller than X_2 in the usual stochastic order, denoted by X_1 ≤_st X_2, if F_1(x) ≥ F_2(x) for all x.

Definition 3.
A random variable X_1 is said to be smaller than X_2 in the WCPJ order, denoted by X_1 ≤_wcpj X_2, if Ē^w J(X_1) ≤ Ē^w J(X_2).

Proposition 3. Let X_1 and X_2 be nonnegative and continuous random variables. If X_1 ≤_st X_2, then X_1 ≤_wcpj X_2.
Example 5. Let X and Y be two random variables with the cdfs F X (x) = x, x ∈ [0, 1] and F Y (x) = x 2 , x ∈ [0, 1], respectively. It is seen that X ≤ st Y, and X ≤ wcpj Y.
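For the cdfs of Example 5, both weighted measures can be computed in closed form: Ē^w J(X) = −1/8 and Ē^w J(Y) = −1/12, so the WCPJ ordering does hold. A quick numerical check (our own sketch, not part of the paper):

```python
def wcpj_from_cdf(cdf, a, b, n=20000):
    """WCPJ as -(1/2) * integral_a^b x * F(x)^2 dx (trapezoidal rule)."""
    h = (b - a) / n
    g = lambda x: x * cdf(x) ** 2
    return -0.5 * h * (0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n)))

wcpj_x = wcpj_from_cdf(lambda x: x, 0.0, 1.0)       # F_X(x) = x,   exact: -1/8
wcpj_y = wcpj_from_cdf(lambda x: x * x, 0.0, 1.0)   # F_Y(x) = x^2, exact: -1/12
```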
In the following, we find a lower bound forĒ w J(X).

Remark 3. Let X be a nonnegative random variable with the cdf F(x) and bounded support S. Then,

Ē^w J(X) ≥ K, where K = −(1/2) ∫_0^{sup S} x F(x) dx.

Proof. Using F²(x) ≤ F(x), we obtain ∫_S x F²(x) dx ≤ ∫_0^{sup S} x F(x) dx. By multiplying both sides of this inequality by −1/2, we have Ē^w J(X) ≥ −(1/2) ∫_0^{sup S} x F(x) dx = K, which completes the proof.

Connections to Reliability Theory
In this section, the connection between the WCPJ and reliability theory is considered. The inactivity time function is of interest in many fields, such as survival analysis, actuarial studies, economics, and reliability. If X is the lifetime of a system, the inactivity time is the duration of the time between the inspection time t and the failure time X, given that at time t the system was found to be down; it is denoted by (t − X | X ≤ t), t ≥ 0. Let X be a nonnegative continuous random variable with the cdf F(x), such that E(X) is finite. The mean inactivity time (MIT) function of X is defined as

m(t) = E(t − X | X ≤ t) = ∫_0^t F(x) dx / F(t), t > 0.

This function has been used in various contexts of survival analysis and reliability theory involving characterizations and stochastic orders of random lifetimes; for more details, see [25][26][27][28][29][30]. In the following theorem, we prove that the WCPJ is related to the second moment of the inactivity time (SMIT) function.

Definition 4.
Let X be a nonnegative continuous random variable. Then, for all t ≥ 0, we define the second moment of the inactivity time (SMIT) as

M(t) = E((t − X)² | X ≤ t).

It can be easily seen, by integration by parts, that

M(t) = 2 ∫_0^t (t − x) F(x) dx / F(t).

Theorem 5. Let X be a nonnegative continuous random variable with bounded support S, reversed hazard rate function rh(x) = f(x)/F(x), SMIT function M(t), and weighted cumulative past extropy Ē^w J(X). Then, Ē^w J(X) admits a representation in terms of M(t) and rh(x).

Proof. Substituting the above representation of M(t) into the definition of Ē^w J(X) and rearranging the resulting double integral, the stated expression follows, and the proof is complete.
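The SMIT of Definition 4 can be checked numerically. For X ∼ U(0, 1), conditioning on X ≤ t makes X uniform on (0, t), so E((t − X)² | X ≤ t) = t²/3; this should match the representation 2 ∫_0^t (t − x) F(x) dx / F(t), which follows by integration by parts. A sketch of ours:

```python
def smit_uniform(t, n=20000):
    """SMIT via the identity M(t) = 2 * integral_0^t (t - x) F(x) dx / F(t)
    for X ~ U(0, 1), where F(x) = x on (0, 1)."""
    h = t / n
    g = lambda x: (t - x) * x
    integral = h * (0.5 * (g(0.0) + g(t)) + sum(g(i * h) for i in range(1, n)))
    return 2.0 * integral / t
```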
Equation (32) is useful when we have some information about the SMIT or its behavior. An alternative expression to (32) can be given in terms of the hazard rate function, defined for a random variable X with pdf f(x) and survival function F̄(x) as h(x) = f(x)/F̄(x).

Empirical WCPJ
In this part, an estimator of the WCPJ is constructed by means of the empirical WCPJ. Suppose that X_1, ..., X_n is a nonnegative, continuous, independent, and identically distributed random sample from a population with the cdf F(x). By using the plug-in method, we define the empirical weighted cumulative past extropy as

Ē^w_n J(X) = −(1/2) ∫ x F_n²(x) dx, (33)

where F_n(x) is the empirical distribution function. Let X_(1), X_(2), ..., X_(n) be the order statistics corresponding to the underlying random sample. Since F_n(x) = i/n for X_(i) ≤ x < X_(i+1), Ē^w_n J(X) can be rewritten in terms of the order statistics as

Ē^w_n J(X) = −(1/4) Σ_{i=1}^{n−1} (i/n)² (X_(i+1)² − X_(i)²). (34)

In the following, we use Ē^w_n J(X) in (34) for testing the uniformity of the random sample X_1, ..., X_n. Before dealing with a test statistic, we give the following nice property of the uniform distribution among all distributions defined on the interval (0, 1). For a random variable X with the cdf F and for p ∈ (0, 1), let ψ_p J(F) be defined as

ψ_p J(F) = −(1/2) ∫_0^{F^{−1}(p)} x F²(x) dx.

It is trivial that for the uniform random variable X on the interval (0, 1), with the cdf F_0(x) = x, ψ_p J(F_0) = −p⁴/8. Suppose that for a cdf F in the class of cdfs defined on the interval (0, 1), ψ_p J(F) = −p⁴/8 for all p ∈ (0, 1). This means that F and F_0 have the same measure based on ψ_p J(·), i.e., ψ_p J(F) = ψ_p J(F_0). It is known that the intervals (0, p) generate the Borel σ-algebra of Ω = (0, 1]. Therefore, the two measures agree on a generating class, the corresponding integrands coincide, and F(x) = F_0(x) almost everywhere is obtained. In this sense, ψ_p J(F) characterizes the uniform distribution: some cdfs defined on (0, 1) take a value lower than −p⁴/8 and some take a value higher, and only for the standard uniform distribution do we have ψ_p J(F_0) = −p⁴/8.
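Since the empirical cdf F_n equals i/n between consecutive order statistics, the plug-in integral −(1/2) ∫ x F_n²(x) dx reduces to a finite sum over the ordered sample. A minimal implementation of this estimator (our own sketch, assuming that plug-in form):

```python
def empirical_wcpj(sample):
    """Plug-in estimator -(1/2) * integral x * F_n(x)^2 dx. Since F_n = i/n
    on [X_(i), X_(i+1)), the integral reduces to
    -(1/4) * sum_{i=1}^{n-1} (i/n)^2 * (X_(i+1)^2 - X_(i)^2)."""
    x = sorted(sample)
    n = len(x)
    return -0.25 * sum((i / n) ** 2 * (x[i] ** 2 - x[i - 1] ** 2)
                       for i in range(1, n))

# For an evenly spread sample on (0, 1), the estimate approaches the
# population WCPJ of U(0, 1), which is -1/8.
approx = empirical_wcpj([i / 1000.0 for i in range(1, 1001)])
```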

Uniform Goodness of Fit Test
Based on this last property, a test statistic can be designed for the uniform goodness of fit test. One can construct a test statistic based on Ē^w_n J(X) in (34), which is the sampling counterpart of the WCPJ measure. For this goodness of fit problem, we want to test whether the given random sample X_1, ..., X_n comes from the standard uniform distribution; in other words, we test H_0 : F = F_0 against the alternative H_1 : F ≠ F_0, where F_0 is the cdf of the standard uniform distribution. A simple nonparametric test statistic is Ē^w_n J(X) itself.

In the next stage of our hypothesis testing, we should provide the critical region. The critical region is determined by two values, K_1(α) and K_2(α), where α is a prespecified type I error rate: whenever Ē^w_n J(X) < K_1(α) or Ē^w_n J(X) > K_2(α), the null hypothesis of a standard uniform distribution is rejected in favor of the alternative. Since the distribution of Ē^w_n J(X) is not easy to derive, K_1(α) and K_2(α) can be estimated using the empirical quantiles of the test statistic under the standard uniform distribution. For a given type I error rate α and a large number of runs N, we generate a random sample X_1, ..., X_n from the standard uniform distribution and compute the value of Ē^w_n J(X); we repeat this step for a large number of runs, i.e., N = 100,000, and sort the resulting values. Then, K_1(α) and K_2(α) are estimated by the α/2-th and (1 − α/2)-th quantiles of the empirical distribution of Ē^w_n J(X), respectively. Table 1 reports the values of K_1(α) and K_2(α) for several sample sizes n.
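The Monte Carlo recipe above can be sketched as follows. This is our own illustration, with far fewer runs than the paper's N = 100,000, and it assumes the plug-in form of the empirical WCPJ.

```python
import random

def empirical_wcpj(sample):
    """Plug-in empirical WCPJ: -(1/4) * sum (i/n)^2 * (X_(i+1)^2 - X_(i)^2)."""
    x = sorted(sample)
    n = len(x)
    return -0.25 * sum((i / n) ** 2 * (x[i] ** 2 - x[i - 1] ** 2)
                       for i in range(1, n))

def critical_values(n, alpha=0.05, runs=10000, seed=1):
    """Estimate K1(alpha) and K2(alpha) as the alpha/2 and 1 - alpha/2
    empirical quantiles of the statistic under the standard uniform null."""
    rng = random.Random(seed)
    stats = sorted(empirical_wcpj([rng.random() for _ in range(n)])
                   for _ in range(runs))
    return stats[int(runs * alpha / 2)], stats[int(runs * (1 - alpha / 2))]

k1, k2 = critical_values(n=20)   # reject H0 if the statistic leaves (k1, k2)
```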
In this part, the power of the proposed test statistic is compared with that of some competitors: the one-sample Kolmogorov-Smirnov, Quesenberry-Miller, and Frosini tests [31,32]. To compute the p-values of these tests, the package "uniftest" in R software version 4.0.5 was used. The test statistics of our proposed Ē^w_n J(X), the Kolmogorov-Smirnov, the Quesenberry-Miller, and the Frosini tests are denoted by WCPJ, K-S, Q-M, and FRO, respectively.
To compute the power of the tests, random samples taking values in the interval (0, 1) were generated from nonuniform alternatives, such as the beta and Kumaraswamy distributions (see, for example, [33]), whose supports lie between 0 and 1. The powers were then estimated empirically. We considered the following alternative distributions: (1) the beta distribution, e.g., Beta(10, 1); (2) the Kumaraswamy distribution, e.g., Kuma(0.5, 5) and Kuma(10, 10); and (3) a piecewise-defined distribution on (0, 1). The results are depicted in Figure 1 for sample sizes n = 20, 30, 40, and 50. Figure 1 shows that the power of the proposed WCPJ-based test was comparable to that of the others for the beta and Kumaraswamy distributions; in some of these cases, its power was even superior to the other tests. For the third alternative distribution, the power of the WCPJ-based test was weaker than that of the rest; however, as the sample size n became larger, its power improved, and the test discriminated better between observations from the standard uniform distribution and those generated from nonuniform distributions. This test statistic can thus be satisfactory, given its simple form and the rich information content behind it. Note that the power curves for Beta(10, 1), Kuma(0.5, 5), and Kuma(10, 10) are not shown in Figure 1, because the powers of all the tests were equal to 1.
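A power experiment of the same flavor can be sketched as follows. This is our own illustration, not the paper's exact setup: the alternative here is Beta(2, 1), i.e., cdf x² on (0, 1), sampled by the inverse transform X = √U, and the run counts are modest.

```python
import random

def empirical_wcpj(sample):
    """Plug-in empirical WCPJ: -(1/4) * sum (i/n)^2 * (X_(i+1)^2 - X_(i)^2)."""
    x = sorted(sample)
    n = len(x)
    return -0.25 * sum((i / n) ** 2 * (x[i] ** 2 - x[i - 1] ** 2)
                       for i in range(1, n))

def power_vs_beta21(n=30, alpha=0.05, runs=2000, seed=2):
    """Empirical power against Beta(2, 1): estimate the null cutoffs by
    simulation under U(0, 1), then count rejections for samples drawn
    from the cdf x^2 via the inverse transform sqrt(U)."""
    rng = random.Random(seed)
    null = sorted(empirical_wcpj([rng.random() for _ in range(n)])
                  for _ in range(runs))
    k1 = null[int(runs * alpha / 2)]
    k2 = null[int(runs * (1 - alpha / 2))]
    rejections = 0
    for _ in range(runs):
        stat = empirical_wcpj([rng.random() ** 0.5 for _ in range(n)])
        rejections += (stat < k1) or (stat > k2)
    return rejections / runs

power = power_vs_beta21()
```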

Conclusions
The use of the extropy measure and its generalizations has become widespread across scientific fields. One recent generalization of this measure is the weighted extropy. In this paper, we introduced a new measure of uncertainty related to the cumulative extropy, named the weighted cumulative past extropy (WCPJ). The properties of the WCPJ and a number of results, including inequalities and various bounds for the WCPJ, were considered, and applications to reliability theory were discussed. A topic that may attract the attention of researchers is the dynamic version of the extropy, in which the uncertainty of the system depends on the time t; further research should investigate uncertainty measures based on the weighted dynamic cumulative past or residual extropy. As an application of the proposed method, the empirical WCPJ was proposed to estimate this new information measure, and a test statistic was provided for the goodness of fit test of the standard uniform distribution based on the proposed WCPJ. Several applications of the extropy and its generalizations, for instance in information theory, economics, communication theory, and physics, can be found in the literature. Ref. [34] studied the stock markets of OECD countries based on a generalization of extropy known as the negative cumulative extropy. Ref. [35] applied another version of extropy, the Tsallis extropy, to a pattern recognition problem. Ref. [16] explored an application of the fractional Deng extropy to a classification problem. Ref. [36] used some extropy measures for the problem of compressive sensing.
Author Contributions: All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding: This work was carried out within the activities of the project 000009_ALTRI_CDA_75_2021_FRA_LINEA_B funded by the "Programma per il finanziamento della ricerca di Ateneo-Linea B" of the University of Naples Federico II.

Conflicts of Interest:
The authors declare no conflict of interest.