Article

Cumulative Measure of Inaccuracy and Mutual Information in k-th Lower Record Values

by Maryam Eskandarzadeh 1, Antonio Di Crescenzo 2,* and Saeid Tahmasebi 1
1 Department of Statistics, Persian Gulf University, Bushehr 7516913817, Iran
2 Dipartimento di Matematica, Università degli Studi di Salerno, Via Giovanni Paolo II n. 132, 84084 Fisciano (SA), Italy
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(2), 175; https://doi.org/10.3390/math7020175
Submission received: 17 December 2018 / Revised: 31 January 2019 / Accepted: 5 February 2019 / Published: 14 February 2019
(This article belongs to the Special Issue Stochastic Processes: Theory and Applications)

Abstract

In this paper, we discuss the cumulative measure of inaccuracy in k-lower record values and study characterization results of the dynamic cumulative inaccuracy. We also present some properties of the proposed measures, as well as the empirical cumulative measure of inaccuracy in k-lower record values. We prove a central limit theorem for the empirical cumulative measure of inaccuracy under exponentially distributed populations. Finally, we analyze the mutual information for measuring the degree of dependency between lower record values, and we show that it is distribution-free.

1. Introduction and Background

Information theory provides various concepts of broad use in probability and statistics, which are aimed at measuring the information content of stochastic models. Apart from the classical differential entropy, which constitutes a useful tool for the analysis of absolutely continuous random variables, some information measures based on cumulative notions have been attracting increasing attention in the recent literature. Among such measures, in this paper we focus on the cumulative (past) inaccuracy of bivariate random lifetimes, which is a suitable extension of the cumulative entropy. We also deal with the mutual information, which is strictly related to the Shannon entropy and is one of the most commonly adopted notions for bivariate random variables. Indeed, the mutual information is a measure of the mutual dependence of two random variables, and can be evaluated by means of their joint (and marginal) distributions.
We recall some recent papers dealing with stochastic models and information measures of interest in reliability theory. Navarro et al. [1] presented some stochastic ordering and aging-class properties of the dynamic cumulative residual entropy, whereas Psarrakos and Navarro [2] generalized the concept of cumulative residual entropy by relating it to the mean time between record values, and also considered the dynamic version of this new measure. Moreover, Tahmasebi and Eskandarzadeh [3] proposed a new extension of the cumulative entropy based on k-th lower record values. Sordo and Psarrakos [4] provided comparison results for the cumulative residual entropy of systems and their dynamic versions.
Motivated by some of the articles mentioned above, in this paper we aim to investigate some applications of the previously mentioned information measures to the k-lower record values. Record values are widely studied in the literature as a suitable tool to convey essential information in stochastic models. More recently, they have also been attracting attention in applied contexts related to high-dimensional data, where it is computationally more convenient to determine the rank rather than the specific values of the observations under investigation.
More specifically, within the scope of this paper, we propose to study the cumulative measure of inaccuracy between the k-lower record values and the parent random variable of a random sample. The context of dynamic observations will also be considered, by analyzing the dynamic cumulative inaccuracy and related characterization results. We present some properties of the proposed measures, as well as the empirical cumulative measure of inaccuracy in k-lower record values. Moreover, the investigation also focuses on the mutual information measure, used to assess the degree of dependency between lower record values.
In the remaining part of this section, we recall the relevant notions that will be used in the following, and provide the plan of the paper. Specifically, we discuss the basic definitions and properties of the information measures mentioned above, i.e., the cumulative inaccuracy and the mutual information. Then, we mention the essential results on lower record values and recall certain useful stochastic orders. We shall deal with nonnegative random variables, the case of truncated supports being treatable in a similar way.
Throughout the paper, “log” means natural logarithm, the prime denotes differentiation, and the terms “increasing” and “decreasing” are used in a non-strict sense. Finally, as is customary, we assume that 0 log 0 = 0.

1.1. Notions of Information Theory

Consider an absolutely continuous random vector (X, Y) having nonnegative components. We denote by f(x, y) the joint probability density function (PDF), by f(x) and g(x) the marginal PDFs, and by F(x) and G(x) the cumulative distribution functions (CDFs) of X and Y, respectively.
Bearing in mind the applications in reliability theory, we assume that X and Y denote the random lifetimes of suitable systems having support (0, +∞). Let
$$ H(X)=E[-\log f(X)]=-\int_0^{+\infty}f(x)\log f(x)\,dx $$
be the (Shannon) differential entropy of X. H(Y) is defined similarly, and the bivariate entropy of (X, Y) is given by H(X, Y) = E[−log f(X, Y)] = −∫_0^{+∞} dx ∫_0^{+∞} f(x, y) log f(x, y) dy. Moreover, the conditional entropy of Y given X is expressed by:
$$ H(Y|X)=-\int_0^{+\infty}dx\int_0^{+\infty}f(x,y)\,\log\frac{f(x,y)}{f(x)}\,dy, $$
whereas the mutual information of (X, Y) is defined as:
$$ M_{X,Y}=\int_0^{+\infty}dx\int_0^{+\infty}f(x,y)\,\log\frac{f(x,y)}{f(x)\,g(y)}\,dy. $$
We recall that M_{X,Y} is a measure of dependence between X and Y, with M_{X,Y} = 0 if and only if X and Y are independent. Furthermore, M_{X,Y} is largely adopted to assess the information content in a variety of applied fields, such as signal processing and pattern recognition. Due to (3), the mutual information can be expressed in terms of suitable entropies as follows (see, for example, Ebrahimi et al. [5]):
$$ M_{X,Y}=H(X)+H(Y)-H(X,Y)=H(Y)-H(Y|X). $$
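As a numerical illustration of the identity in (4) (our addition, not part of the original development), the following Python sketch grid-integrates the entropies of a Farlie-Gumbel-Morgenstern (FGM) bivariate exponential density, a hypothetical example chosen only because its marginals are standard exponentials, and checks that the mutual information computed from (3) matches H(X) + H(Y) − H(X, Y) up to discretization error.

```python
import math

theta = 0.5   # FGM dependence parameter, |theta| <= 1 (illustrative choice)

def f(x, y):
    # FGM bivariate density with standard exponential marginals
    return math.exp(-x - y) * (1.0 + theta*(2*math.exp(-x) - 1)*(2*math.exp(-y) - 1))

def fx(x):
    # common marginal PDF of X and Y (standard exponential)
    return math.exp(-x)

h, L = 0.02, 15.0                       # grid step and truncation of the support
xs = [h*(i + 0.5) for i in range(int(L/h))]

HX  = -sum(fx(x)*math.log(fx(x))*h for x in xs)                     # H(X) = H(Y) = 1
HXY = -sum(f(x, y)*math.log(f(x, y))*h*h for x in xs for y in xs)   # joint entropy
M   =  sum(f(x, y)*math.log(f(x, y)/(fx(x)*fx(y)))*h*h for x in xs for y in xs)

print(M, 2*HX - HXY)   # the two values agree up to O(h) discretization error
```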
See also Ebrahimi et al. [6] and Ahmadi et al. [7] for various results of interest in the reliability theory involving dynamic measures for multivariate distributions based on the mutual information.
Among the notions involving cumulative versions of information measures, we now recall the cumulative (past) inaccuracy of (X, Y), given by:
$$ I(X,Y)=-\int_0^{+\infty}F(x)\log G(x)\,dx. $$
As specified in Section 1 of Kundu et al. [8], the measure given in (5) can be viewed as the cumulative analogue of the Kerridge inaccuracy measure of X and Y, which is expressed as −∫_0^{+∞} f(x) log g(x) dx (cf. Kerridge [9]). The relevant difference is that Equation (5) involves the CDFs instead of the PDFs. In many real situations, it is more convenient to deal with distribution functions, which carry information about whether an event occurs before or after the current time. Moreover, the measure given in (5) provides the information content when using G(x), the CDF asserted by the experimenter due to missing or incorrect information in experiments, instead of the true distribution F(x). Clearly, if F and G are identical, then I(X, Y) coincides with the cumulative entropy studied by Di Crescenzo and Longobardi [10] and by Navarro et al. [1]. See Section 5 of Kumar and Taneja [11] for various results involving (5) and the related dynamic version, i.e., the dynamic cumulative past inaccuracy measure. We finally recall that the cumulative inaccuracy (5) and the cumulative entropy are also involved in the definition of other information measures of interest (see, for instance, Park et al. [12], and Di Crescenzo and Longobardi [13] for the cumulative Kullback-Leibler information).

1.2. Lower Record Values

Record values are often studied in various fields due to their relevance in specific applications. If statistical observations are difficult to obtain, or if the experimental units are destroyed and access to them is no longer available, then researchers are forced to base their inference about the underlying distribution on the observed record values. Suppose, for example, that it is required to estimate the water level of a river solely on the basis of the available records of previous floodings. Similarly, consider variables such as record rainfall, record temperature, record wind speed, and other quantities of interest in meteorology. In such cases, the analysis of statistical observations is performed by resorting to lower or upper record values.
Let us now recall some basic notions about lower record values that will be used in this paper. Let X be an absolutely continuous nonnegative random variable with CDF F(x) and PDF f(x), and let {X_n, n ≥ 1} be a sequence of independent random variables, identically distributed as X. An observation X_j, j ≥ 1, will be called a lower record value if its value is less than the values of all previous observations, i.e., if X_j < X_i for every i < j. For a fixed positive integer k, similarly to Dziubdziela and Kopociński [14], we define the sequence {T_n^(k), n ≥ 1} of k-th lower record times for the sequence {X_n, n ≥ 1} as follows:
$$ T_1^{(k)}=1,\qquad T_{n+1}^{(k)}=\min\big\{j>T_n^{(k)}:\ X_{k:T_n^{(k)}+k-1}>X_{k:k+j-1}\big\}, $$
where X_{j:m} denotes the j-th order statistic in a sample of size m (see also Malinowska and Szynal [15]). Then,
$$ L_n^{(k)}:=X_{k:T_n^{(k)}+k-1},\qquad n\ge 1, $$
is called the sequence of k-th lower record values of {X_n, n ≥ 1}. Since the ordinary record values are contained in the k-records, the results for usual records can be obtained as a special case by setting k = 1. The PDF of L_n^(k), for n ≥ 1 and k ≥ 1, is given by:
$$ f_n^{(k)}(x)=\frac{k^n}{(n-1)!}\,[F(x)]^{k-1}\,[\tilde\Lambda(x)]^{n-1}\,f(x), $$
and the joint PDF of (L_m^(k), L_n^(k)), for 1 ≤ m < n and k ≥ 1, is:
$$ f_{m(k),n(k)}(x,y)=\frac{k^n}{(m-1)!\,(n-m-1)!}\,[\tilde\Lambda(y)-\tilde\Lambda(x)]^{n-m-1}\,[\tilde\Lambda(x)]^{m-1}\,h(x)\,[F(y)]^{k-1}\,f(y),\qquad x>y, $$
where
$$ \tilde\Lambda(x)=-\log F(x)\qquad\text{and}\qquad h(x)=-\tilde\Lambda'(x)=\frac{f(x)}{F(x)},\qquad x>0, $$
are the cumulative reversed hazard rate and the reversed hazard rate of X, respectively. Hence, the conditional PDF of L_n^(k) given L_m^(k), for 1 ≤ m < n, is given by:
$$ f_{n|m}(y|x)=\frac{k^{n-m}\,[\tilde\Lambda(y)-\tilde\Lambda(x)]^{n-m-1}\,f(y)\,[F(y)]^{k-1}}{(n-m-1)!\,[F(x)]^{k}},\qquad x>y. $$
By using the well-known relation
$$ \int_z^{+\infty}\frac{\lambda^n}{(n-1)!}\,x^{n-1}e^{-\lambda x}\,dx=\sum_{i=0}^{n-1}\frac{(\lambda z)^i}{i!}\,e^{-\lambda z}, $$
we see that the CDF corresponding to Equation (7) can be obtained as:
$$ F_n^{(k)}(x)=\int_0^x \frac{k^n}{(n-1)!}\,[F(y)]^{k-1}\,[\tilde\Lambda(y)]^{n-1}\,f(y)\,dy=[F(x)]^k\sum_{i=0}^{n-1}\frac{[k\tilde\Lambda(x)]^i}{i!}. $$
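For concreteness, the following minimal Python sketch (our illustration, not part of the paper) evaluates the PDF (7) and the closed-form CDF (11) for a uniform parent distribution on (0, 1), and confirms numerically that integrating (7) reproduces (11).

```python
from math import log, factorial

# parent: uniform on (0,1), so F(x) = x, f(x) = 1, Lambda~(x) = -log x
F = lambda x: x
f = lambda x: 1.0
Lam = lambda x: -log(x)

def pdf_record(x, n, k):
    """PDF (7) of the n-th k-lower record value L_n^(k)."""
    return k**n / factorial(n-1) * F(x)**(k-1) * Lam(x)**(n-1) * f(x)

def cdf_record(x, n, k):
    """Closed-form CDF (11): [F(x)]^k * sum_{i<n} [k*Lambda~(x)]^i / i!."""
    return F(x)**k * sum((k*Lam(x))**i / factorial(i) for i in range(n))

n, k, x0 = 3, 2, 0.4
h = 1e-5
# midpoint rule on (0, x0): integral of (7) should match (11)
num = sum(pdf_record(h*(j + 0.5), n, k) * h for j in range(int(x0/h)))
print(num, cdf_record(x0, n, k))   # the two values agree to ~5 decimal places
```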
We note that the sequence of k-th upper record times, denoted by {U_n^(k), n ≥ 1}, is defined similarly to T_n^(k) by reversing the last inequality in (6) (see, for instance, Dziubdziela and Kopociński [14] or Tahmasebi et al. [16]). Record values arise in problems such as industrial stress testing, meteorological analysis, hydrology, sports, and economics. In reliability theory, record values are used to study, for instance, technical systems subject to shocks, such as peaks of voltage. For more details about records and their applications, one may refer to Arnold et al. [17]. Several authors have investigated measures of inaccuracy for ordered random variables. Thapliyal and Taneja [18] proposed a measure of inaccuracy between the i-th order statistic and the parent random variable. Moreover, Thapliyal and Taneja [19] developed measures of dynamic cumulative residual and past inaccuracy, and studied characterization results for these dynamic measures under the proportional hazard and the proportional reversed hazard models. The same authors introduced the measure of residual inaccuracy of order statistics and proved a related characterization result (cf. Thapliyal and Taneja [20]). Equality of Rényi entropies of upper and lower k-records is also known to provide a characteristic property of symmetric distributions (see Fashandi and Ahmadi [21]). Furthermore, it is worth mentioning that the analysis of lower record values is related to the generalized cumulative entropy; for instance, its role in the study of a new measure of association based on the log-odds rate has recently been pinpointed by Asadi [22]. Finally, recent contributions on a measure of past entropy for the n-th upper k-record values can be found in Goel et al. [23].

1.3. Stochastic Orders and Related Notions

Aiming to use stochastic orders to perform suitable comparisons, here we recall some relevant definitions. Let X and Y be random variables with PDFs f and g, respectively; X is said to be smaller than Y in the
-
usual stochastic ordering (denoted by X ≤_st Y) if P(X > x) ≤ P(Y > x) for all x ∈ R; it is known that X ≤_st Y if and only if E[φ(X)] ≤ E[φ(Y)] for all increasing functions φ for which the expectations exist;
-
likelihood ratio ordering (denoted by X ≤_lr Y) if g(x)/f(x) is increasing in x;
-
decreasing convex order (denoted by X ≤_dcx Y) if E[φ(X)] ≤ E[φ(Y)] for all decreasing convex functions φ such that the expectations exist.
Moreover, we say that X has a decreasing reversed hazard rate (DRHR) if h(x) = f(x)/F(x) is decreasing in x. For specific details on these notions see, for instance, Shaked and Shanthikumar [24]; for applications of the decreasing convex order, see Ma [25].

1.4. Plan of the Paper

In this investigation, we propose the cumulative measure of inaccuracy and study characterization results for a dynamic cumulative inaccuracy measure. We also study the degree of dependency among the sequence of k-th lower record values through the mutual information of record values.
The paper is organized as follows: In Section 2, we consider a measure of inaccuracy associated with L n ( k ) and X. We provide some results and properties of such a measure, including an application to the proportional reversed hazards model. In Section 3, we propose the dynamic version of inaccuracy associated with L n ( k ) and X, and provide a characterization result. In Section 4, we study the problem of estimating the cumulative measure of inaccuracy by means of the empirical cumulative inaccuracy in k-lower record values. The rest of the section is devoted to a simple application to real data, with the discussion of some special cases, and a central limit theorem for the empirical cumulative measure of inaccuracy in the case of exponentially distributed random samples. Finally, in Section 5 we investigate the mutual information between sequences of lower record values, aiming to measure their degree of dependency. Specifically, we show that this measure is distribution-free and can be computed by using the distribution of the k-th lower record values of the sequence from the uniform distribution.

2. Cumulative Measure of Inaccuracy

Let us now consider the cumulative measure of inaccuracy between L_n^(k) and the parent non-negative random variable, say X. Recalling (5) and (11), we have:
$$ I(L_n^{(k)},X)=-\int_0^{+\infty}F_n^{(k)}(x)\log F(x)\,dx=\sum_{i=0}^{n-1}\frac{k^i}{i!}\int_0^{+\infty}[F(x)]^k\,[\tilde\Lambda(x)]^{i+1}\,dx, $$
with Λ̃(x) given in (9). According to the comments given in Section 1.1, I(L_n^(k), X) can be used to gain information concerning an experiment in which the distribution of the k-lower record values is compared with the parent distribution. Noting that, due to (7), L_{i+2}^(k) is a random variable with density function
$$ f_{i+2}^{(k)}(x)=\frac{k^{i+2}}{(i+1)!}\,[F(x)]^{k-1}\,[\tilde\Lambda(x)]^{i+1}\,f(x), $$
and recalling that h(x) is the reversed hazard rate of X (see (9)), from (12) we obtain:
$$ I(L_n^{(k)},X)=\sum_{i=0}^{n-1}\frac{i+1}{k^2}\int_0^{+\infty}\frac{k^{i+2}}{(i+1)!}\,[F(x)]^k\,[\tilde\Lambda(x)]^{i+1}\,dx=\sum_{i=0}^{n-1}\frac{i+1}{k^2}\,E\!\left[\frac{1}{h(L_{i+2}^{(k)})}\right]. $$
Remark 1.
Making use of the generalized cumulative entropy introduced in Definition 1.1 of Tahmasebi and Eskandarzadeh [3], given by:
$$ \mathcal{CE}_{i+1,k}(X)=\int_0^{+\infty}\frac{k^{i+2}}{(i+1)!}\,[F(x)]^k\,[\tilde\Lambda(x)]^{i+1}\,dx=E\!\left[\frac{1}{h(L_{i+2}^{(k)})}\right], $$
from (13) one has that the cumulative measure of inaccuracy between L_n^(k) and X can be expressed as a size-biased combination of generalized cumulative entropies, namely as the following weighted sum with linearly increasing weights:
$$ I(L_n^{(k)},X)=\frac{1}{k^2}\sum_{i=1}^{n}i\,\mathcal{CE}_{i,k}(X). $$
Furthermore, from (11) and (12) we obtain an alternative expression, that is:
$$ I(L_n^{(k)},X)=\sum_{i=0}^{n-1}\int_0^{+\infty}\tilde\Lambda(x)\,\big[F_{i+1}^{(k)}(x)-F_i^{(k)}(x)\big]\,dx, $$
with the convention F_0^(k)(x) ≡ 0.
In the following proposition we provide another form of I ( L n ( k ) , X ) .
Proposition 1.
Let X be a nonnegative random variable with CDF F; for the cumulative measure of inaccuracy between L_n^(k) and X, we have:
$$ I(L_n^{(k)},X)=\sum_{i=0}^{n-1}\frac{k^i}{i!}\int_0^{+\infty}h(z)\int_0^z [F(x)]^k\,[\tilde\Lambda(x)]^{i}\,dx\,dz. $$
Proof. 
By (12) and the relation −log F(x) = ∫_x^{+∞} h(z) dz, we have:
$$ I(L_n^{(k)},X)=\sum_{i=0}^{n-1}\frac{k^i}{i!}\int_0^{+\infty}\!\!\int_x^{+\infty}h(z)\,[F(x)]^k\,[\tilde\Lambda(x)]^{i}\,dz\,dx. $$
By using Fubini’s theorem, we get:
$$ I(L_n^{(k)},X)=\sum_{i=0}^{n-1}\frac{k^i}{i!}\int_0^{+\infty}\!\!\int_0^z h(z)\,[F(x)]^k\,[\tilde\Lambda(x)]^{i}\,dx\,dz, $$
and the result thus follows. □
Hereafter, we present some examples and properties of I(L_n^(k), X).
Example 1.
(i) 
If X is uniformly distributed in [0, θ], then:
$$ I(L_n^{(k)},X)=\frac{\theta}{k^2}\sum_{i=0}^{n-1}(i+1)\left(\frac{k}{k+1}\right)^{i+2}. $$
(ii) 
If X is exponentially distributed with mean 1/λ, then:
$$ I(L_n^{(k)},X)=\frac{1}{\lambda}\sum_{i=0}^{n-1}(i+1)\,k^i\sum_{j=0}^{\infty}\frac{1}{(j+k+1)^{i+2}}. $$
(iii) 
If X has an inverse Weibull distribution with CDF F(x) = exp{−(α/x)^β}, x > 0, with α > 0 and β > 1, then:
$$ I(L_n^{(k)},X)=\frac{\alpha}{\beta}\,k^{\frac{1}{\beta}-1}\sum_{i=0}^{n-1}\frac{1}{i!}\,\Gamma\!\left(\frac{(i+1)\beta-1}{\beta}\right). $$
For suitable choices of n and k, such inaccuracy measures are plotted in Figure 1, where the parameters are chosen so that the considered distributions have unit mean, i.e., (i) θ = 2, (ii) λ = 1, and (iii) α = 1/Γ(1 − β^{−1}), recalling that the mean of the inverse Weibull distribution is E(X) = α Γ(1 − β^{−1}) (see de Gusmão et al. [26]). In all cases, I(L_n^(k), X) is decreasing in k and increasing in n.
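The closed forms above can be validated numerically. The sketch below (ours, not from the paper) compares case (ii) with a direct midpoint-rule evaluation of (12) for an exponential parent, truncating the inner series of (ii) at a large index.

```python
from math import exp, log, factorial

lam, n, k = 1.0, 5, 3

# closed form of case (ii), inner series truncated at 20000 terms
closed = sum((i+1) * k**i * sum(1.0/(j+k+1)**(i+2) for j in range(20000))
             for i in range(n)) / lam

# direct evaluation of (12): sum_i (k^i/i!) * int_0^inf F^k * Lambda~^{i+1} dx
h, L = 1e-3, 40.0
direct = 0.0
for i in range(n):
    s = 0.0
    for j in range(int(L/h)):
        x = h*(j + 0.5)
        Fx = 1.0 - exp(-lam*x)
        s += Fx**k * (-log(Fx))**(i+1) * h
    direct += k**i / factorial(i) * s

print(closed, direct)   # agreement to a few decimal places
```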
Let us now discuss the effect of identical linear transformations on I(L_n^(k), X).
Proposition 2.
Let a > 0 and b ≥ 0; for n ∈ N, it holds that:
$$ I(aL_n^{(k)}+b,\ aX+b)=a\,I(L_n^{(k)},X). $$
Proof. 
From (15), we have:
$$ I(aL_n^{(k)}+b,\ aX+b)=\frac{1}{k^2}\sum_{i=1}^{n}i\,\mathcal{CE}_{i,k}(aX+b). $$
Recalling (14), it is not hard to see that CE_{i,k}(aX + b) = a CE_{i,k}(X), so that the proof immediately follows. □
Remark 2.
Let X be a random variable symmetric with respect to its finite mean μ = E(X), i.e., such that F(x + μ) = 1 − F(μ − x) for all x ∈ R. Then the following relation holds:
$$ I(L_n^{(k)},X)=\bar I(R_n^{(k)},X):=-\int_0^{+\infty}\bar F_n^{(k)}(x)\log\bar F(x)\,dx, $$
where, similarly to (12), the latter term defines the cumulative residual measure of inaccuracy between the k-th upper record value and X. Here, as usual, F̄(x) = 1 − F(x) denotes the survival function of X, and F̄_n^(k)(x) denotes the survival function of the k-th upper record value R_n^(k).
Hereafter, we obtain an upper bound for I(L_n^(k), X).
Proposition 3.
Let X be an absolutely continuous non-negative random variable with cumulative reversed hazard rate Λ̃(x), such that the following function is finite:
$$ h_{i+1}(t)=\int_t^{+\infty}[\tilde\Lambda(x)]^{i+1}\,dx,\qquad t>0. $$
Then, for n ∈ N we have:
$$ I(L_n^{(k)},X)\le\sum_{i=0}^{n-1}\frac{k^i}{i!}\,E\big[h_{i+1}(X)\big]. $$
Proof. 
By (12), from [F(x)]^k ≤ F(x) and Fubini’s theorem, we obtain:
$$ I(L_n^{(k)},X)\le\sum_{i=0}^{n-1}\frac{k^i}{i!}\int_0^{+\infty}[\tilde\Lambda(x)]^{i+1}\int_0^x f(t)\,dt\,dx =\sum_{i=0}^{n-1}\frac{k^i}{i!}\int_0^{+\infty}f(t)\int_t^{+\infty}[\tilde\Lambda(x)]^{i+1}\,dx\,dt =\sum_{i=0}^{n-1}\frac{k^i}{i!}\,E\big[h_{i+1}(X)\big], $$
thus completing the proof. □
Now we can prove a property of the considered inaccuracy measure by means of stochastic orderings. For that, we use the notions recalled in Section 1.3.
Theorem 1.
Suppose that the non-negative random variable X is DRHR. Then, for n ∈ N, one has:
$$ I(L_{n+1}^{(k)},X)\le I(L_n^{(k)},X)+\frac{1}{k^2}\sum_{m=1}^{n+1}E\!\left[\frac{1}{h(L_m^{(k)})}\right]. $$
Proof. 
Recalling that f_n^(k)(x) is the PDF of L_n^(k), given in Equation (7), the ratio f_n^(k)(x)/f_{n+1}^(k)(x) = −n/(k log F(x)) is increasing in x. Therefore, L_{n+1}^(k) ≤_lr L_n^(k), and this implies that L_{n+1}^(k) ≤_st L_n^(k), i.e., E[φ(L_{n+1}^(k))] ≤ E[φ(L_n^(k))] for all increasing functions φ such that these expectations exist (for more details, see Shaked and Shanthikumar [24]). Moreover, since X is DRHR and h(x) is its reversed hazard rate, 1/h(x) is increasing in x. As a consequence, from (13) we have:
$$ I(L_{n+1}^{(k)},X)=\sum_{i=0}^{n}\frac{i+1}{k^2}\,E\!\left[\frac{1}{h(L_{i+2}^{(k)})}\right] \le\sum_{i=0}^{n}\frac{i+1}{k^2}\,E\!\left[\frac{1}{h(L_{i+1}^{(k)})}\right] =\sum_{j=-1}^{n-1}\frac{j+2}{k^2}\,E\!\left[\frac{1}{h(L_{j+2}^{(k)})}\right] =\sum_{j=0}^{n-1}\frac{j+2}{k^2}\,E\!\left[\frac{1}{h(L_{j+2}^{(k)})}\right]+\frac{1}{k^2}\,E\!\left[\frac{1}{h(L_1^{(k)})}\right] =I(L_n^{(k)},X)+\frac{1}{k^2}\sum_{i=1}^{n+1}E\!\left[\frac{1}{h(L_i^{(k)})}\right], $$
where I(L_n^(k), X) is expressed as in (13). The proof is thus completed. □
Theorem 2.
Let X and Y be two non-negative random variables such that X ≤_dcx Y; then we have:
$$ I(L_n^{(k)},X)\le I(L_n^{(k),G},Y):=-\int_0^{+\infty}G_n^{(k)}(y)\log G(y)\,dy. $$
Here, similarly to (6), L_n^{(k),G} denotes the k-th lower record value of the sequence {Y_n, n ≥ 1}, having distribution function G_n^(k)(y), where G(y) is the distribution function of Y.
Proof. 
Due to (18), h_{i+1}(x) is a decreasing convex function of x. The proof then immediately follows from Proposition 3. □
Let us now investigate the cumulative measure of inaccuracy within the proportional reversed hazards model (PRHM). We recall that two random variables X and X*_θ satisfy the PRHM if their distribution functions are related by the following identity, for θ > 0:
$$ F_\theta^*(x)=[F(x)]^{\theta},\qquad x\in\mathbb{R}. $$
For some properties of such a model associated with aging notions and the reversed relevation transform, see Gupta and Gupta [27] and Di Crescenzo and Toomaj [28], respectively.
In this case, we assume that X and X*_θ are non-negative, absolutely continuous random variables. Due to Equation (20), making use of (5) and (11), and noting that Λ̃*_θ(x) = θ Λ̃(x), for θ > 0 we obtain the cumulative measure of inaccuracy between L_n^{(k)*} and X*_θ as follows:
$$ I(L_n^{(k)*},X_\theta^*)=-\int_0^{+\infty}F_n^{(k)*}(x)\log F_\theta^*(x)\,dx =\sum_{i=0}^{n-1}k^i\,\theta^{i+1}\int_0^{+\infty}\frac{[\tilde\Lambda(x)]^{i+1}}{i!}\,[F(x)]^{k\theta}\,dx, $$
with Λ̃(x) expressed in (9). Moreover, if θ is a positive integer, then the last expression can be rewritten in terms of the generalized cumulative entropy (14) as follows:
$$ I(L_n^{(k)*},X_\theta^*)=\frac{1}{k^2\theta}\sum_{i=0}^{n-1}(i+1)\,\mathcal{CE}_{i+1,k\theta}(X). $$
We recall that in this case, i.e., when θ ∈ N, the PRHM expresses that X*_θ is distributed as the maximum of a random sample of size θ taken from the distribution of X.
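For a uniform parent on (0, 1) (our worked check, not part of the paper), one has ∫_0^1 x^{kθ} (−log x)^{i+1} dx = (i+1)!/(kθ+1)^{i+2}, so (21) reduces to the closed form Σ_{i=0}^{n−1} k^i θ^{i+1} (i+1)/(kθ+1)^{i+2}. The sketch below compares this with direct numerical integration of (21).

```python
from math import log, factorial

k, theta, n = 2, 1.5, 4

# closed form of (21) for F(x) = x on (0,1), using
# int_0^1 x^(k*theta) * (-log x)^(i+1) dx = (i+1)! / (k*theta + 1)^(i+2)
closed = sum(k**i * theta**(i+1) * (i+1) / (k*theta + 1)**(i+2) for i in range(n))

# direct midpoint-rule evaluation of (21)
h = 1e-5
direct = sum(k**i * theta**(i+1) / factorial(i)
             * sum((h*(j+0.5))**(k*theta) * (-log(h*(j+0.5)))**(i+1) * h
                   for j in range(int(1/h)))
             for i in range(n))
print(closed, direct)   # the two values agree to ~5 decimals
```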
Let us now obtain suitable bounds under the PRHM.
Proposition 4.
Let X and X*_θ be non-negative, absolutely continuous random variables satisfying the PRHM as specified in (20), with θ > 0. If θ ≥ (≤) 1, then for any n ∈ N we have:
$$ I(L_n^{(k)*},X_\theta^*)\ \le\ (\ge)\ \sum_{i=0}^{n-1}\frac{(i+1)\,\theta^{i+1}}{k^2}\,\mathcal{CE}_{i+1,k}(X). $$
Proof. 
Clearly, for θ ≥ (≤) 1 one has [F(x)]^{kθ} ≤ (≥) [F(x)]^k for all x ≥ 0, and then the thesis immediately follows from (14) and (21). □
We conclude this section with a remark on the cumulative measure of inaccuracy for bivariate first lower record values.
Remark 3.
Consider two identically distributed sequences of random variables {X_n, n ≥ 1} and {Y_m, m ≥ 1}, and denote by L_{n,X} = L_n^{(1),X} and L_{m,Y} = L_m^{(1),Y} the corresponding ordinary (k = 1) lower record values. Then, making use of (11) with k = 1, it is not hard to see that the cumulative measure of inaccuracy between L_{n,X} and L_{m,Y} is given by:
$$ I(L_{n,X},L_{m,Y})=-\int_0^{+\infty}F_n(x)\log F_m(x)\,dx =\sum_{i=0}^{n-1}(i+1)\,E\!\left[\frac{1}{h(L_{i+2}^{(1)})}\right] -\sum_{i=0}^{n-1}\int_0^{+\infty}F(x)\,\frac{[\tilde\Lambda(x)]^{i}}{i!}\,\log\!\left(\sum_{j=0}^{m-1}\frac{[\tilde\Lambda(x)]^{j}}{j!}\right)dx, $$
where h(x) is the reversed hazard rate of the underlying distribution.

3. Dynamic Cumulative Measure of Inaccuracy

In this section, we study the dynamic version of the inaccuracy measure I(L_n^(k), X). Let X be the random lifetime of a brand new system that begins to work at time 0 and is observed only at deterministic inspection times. Clearly, if the system is found failed at time t, then the conditional distribution function of the past lifetime [X | X ≤ t] is F(x)/F(t), for 0 ≤ x ≤ t. In a sequence of i.i.d. failure times having the same distribution as X, if the information about the k-th lower failure times is available, then F_n^(k)(x)/F_n^(k)(t) is the conditional probability that the k-th lower failure time is smaller than x, given that it is smaller than t, for 0 ≤ x ≤ t, where F_n^(k)(x) is the CDF given in (11). Hence, the dynamic cumulative measure of inaccuracy between L_n^(k) and X is expressed by the inaccuracy measure between the corresponding past lifetimes, i.e.:
$$ I(L_n^{(k)},X;t)=-\int_0^t\frac{F_n^{(k)}(x)}{F_n^{(k)}(t)}\,\log\frac{F(x)}{F(t)}\,dx =\mu_n^{(k)}(t)\log F(t)-\frac{1}{F_n^{(k)}(t)}\int_0^t F_n^{(k)}(x)\log F(x)\,dx =\mu_n^{(k)}(t)\log F(t)+\frac{1}{F_n^{(k)}(t)}\sum_{i=0}^{n-1}\frac{k^i}{i!}\int_0^t [F(x)]^k\,[\tilde\Lambda(x)]^{i+1}\,dx, $$
where
$$ \mu_n^{(k)}(t)=\int_0^t\frac{F_n^{(k)}(x)}{F_n^{(k)}(t)}\,dx $$
is the mean inactivity time of the random variable [t − L_n^(k) | L_n^(k) < t]. Clearly, recalling (12) and assuming that X is a bona fide random variable, from (22) we have lim_{t→∞} I(L_n^(k), X; t) = I(L_n^(k), X). Moreover, if F(t) > 0 for all t > 0, since log F(t) ≤ 0 we immediately have:
$$ I(L_n^{(k)},X;t)\le\frac{I(L_n^{(k)},X)}{F_n^{(k)}(t)}. $$
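These expressions are straightforward to evaluate numerically. The following sketch (ours, for a uniform parent on (0, 1)) computes (22) by the midpoint rule, checks the bound displayed above, and illustrates that I(L_n^(k), X; t) approaches I(L_n^(k), X) as t approaches the right endpoint of the support.

```python
from math import log, factorial

F = lambda x: x                        # uniform(0,1) parent distribution
Lam = lambda x: -log(x)                # cumulative reversed hazard rate

def cdf_record(x, n, k):
    """CDF (11) of the n-th k-lower record value."""
    return F(x)**k * sum((k*Lam(x))**i / factorial(i) for i in range(n))

def I_dynamic(t, n, k, steps=20000):
    """Dynamic cumulative inaccuracy (22), midpoint rule on (0, t)."""
    h, Ft, Fnt = t/steps, F(t), cdf_record(t, n, k)
    return sum(-cdf_record(h*(j+0.5), n, k)/Fnt * log(F(h*(j+0.5))/Ft) * h
               for j in range(steps))

n, k = 3, 2
I_full = I_dynamic(1.0, n, k)          # at the right endpoint, recovers I(L_n^(k), X)
for t in (0.3, 0.6, 0.9, 0.99):
    It = I_dynamic(t, n, k)
    # upper bound displayed above, and convergence to I(L_n^(k), X) as t grows
    assert It <= I_full / cdf_record(t, n, k) + 1e-9
    print(t, It, I_full)
```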
We can now obtain a characterization result for I ( L n ( k ) , X ; t ) .
Theorem 3.
Let X be a non-negative, absolutely continuous random variable with distribution function F(x). If the dynamic cumulative inaccuracy (22) is finite for all t > 0, then I(L_n^(k), X; t) characterizes the distribution function of X.
Proof. 
Differentiating both sides of (22) with respect to t, we obtain:
$$ \frac{d}{dt}I(L_n^{(k)},X;t)=h(t)\,\mu_n^{(k)}(t)-h_n^{(k)}(t)\,I(L_n^{(k)},X;t) =h(t)\Big[\mu_n^{(k)}(t)-c(t)\,I(L_n^{(k)},X;t)\Big], $$
where h(t) = f(t)/F(t) and h_n^(k)(t) = f_n^(k)(t)/F_n^(k)(t) = c(t) h(t) are the reversed hazard rates of X and of L_n^(k), respectively, and where we have set:
$$ c(t)=\frac{k^n\,[\tilde\Lambda(t)]^{n-1}}{(n-1)!\,\displaystyle\sum_{i=0}^{n-1}\frac{k^i}{i!}\,[\tilde\Lambda(t)]^{i}}. $$
Taking again the derivative with respect to t, and solving for h′(t), we get:
$$ h'(t)=\frac{h(t)\,\dfrac{d^2}{dt^2}I(L_n^{(k)},X;t)+(h(t))^2\Big[c'(t)\,I(L_n^{(k)},X;t)+c(t)\,\dfrac{d}{dt}I(L_n^{(k)},X;t)-1+c(t)\,h(t)\,\mu_n^{(k)}(t)\Big]}{\dfrac{d}{dt}I(L_n^{(k)},X;t)}. $$
Suppose that there are two CDFs, F and F̂, with reversed hazard rates h(t) and h_F̂(t), respectively, such that, for all t,
$$ I(L_n^{(k)},X;t)=I(\hat L_n^{(k)},\hat X;t)=z(t). $$
Then, from (23) we get, for all t:
$$ h'(t)=\varphi(t,h(t)),\qquad h_{\hat F}'(t)=\varphi(t,h_{\hat F}(t)), $$
where
$$ \varphi(t,y):=\frac{y\,z''(t)+y^2\big[c'(t)\,z(t)+c(t)\,z'(t)-1+c(t)\,y\,w(t)\big]}{z'(t)}, $$
with w(t) := μ_n^(k)(t) and y := h(t). By using Theorem 2.1 and Lemma 2.2 of Gupta and Kirmani [29], we obtain h(t) = h_F̂(t) for all t. Since the reversed hazard rate function uniquely characterizes the distribution function, the proof is completed. □

4. Empirical Cumulative Measure of Inaccuracy

In this section, we address the problem of estimating the cumulative measure of inaccuracy by means of the empirical cumulative inaccuracy in lower record values. Let X_1, X_2, …, X_m be a random sample of size m from an absolutely continuous CDF F(x). Then, according to (12), the empirical cumulative measure of inaccuracy is defined as:
$$ \hat I(L_n^{(k)},X)=\sum_{i=0}^{n-1}\frac{k^i}{i!}\int_0^{+\infty}[\hat F_m(x)]^k\,\big[-\log\hat F_m(x)\big]^{i+1}\,dx, $$
where
$$ \hat F_m(x)=\frac{1}{m}\sum_{i=1}^{m}\mathbf{1}_{\{X_i\le x\}},\qquad x\in\mathbb{R}, $$
is the empirical distribution function of the sample, 1_{·} denoting the indicator function. Let X_(1) ≤ X_(2) ≤ ⋯ ≤ X_(m) denote the order statistics of the sample. Then, (24) can be written as:
$$ \hat I(L_n^{(k)},X)=\sum_{i=0}^{n-1}\frac{k^i}{i!}\sum_{j=1}^{m-1}\int_{X_{(j)}}^{X_{(j+1)}}[\hat F_m(x)]^k\,\big[-\log\hat F_m(x)\big]^{i+1}\,dx. $$
Finally, recalling that:
$$ \hat F_m(x)=\begin{cases}0, & x<X_{(1)},\\[4pt] \dfrac{j}{m}, & X_{(j)}\le x<X_{(j+1)},\quad j=1,2,\ldots,m-1,\\[4pt] 1, & x\ge X_{(m)},\end{cases} $$
from Equation (25) we see that the empirical cumulative measure of inaccuracy can be expressed as:
$$ \hat I(L_n^{(k)},X)=\sum_{i=0}^{n-1}\frac{k^i}{i!}\sum_{j=1}^{m-1}U_{j+1}\left(\frac{j}{m}\right)^k\left[-\log\frac{j}{m}\right]^{i+1}, $$
where
$$ U_{j+1}=X_{(j+1)}-X_{(j)},\qquad j=1,2,\ldots,m-1, $$
are the sample spacings.
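A direct implementation of estimator (26) is immediate; the following minimal Python sketch (ours, run here on synthetic data rather than on the real data of Example 2 below) sorts the sample, forms the spacings (27), and accumulates the weighted sum.

```python
from math import log, factorial
import random

def empirical_inaccuracy(sample, n, k):
    """Empirical cumulative inaccuracy (26) in k-lower record values."""
    xs = sorted(sample)
    m = len(xs)
    total = 0.0
    for j in range(1, m):                      # j = 1, ..., m-1
        spacing = xs[j] - xs[j-1]              # U_{j+1} = X_(j+1) - X_(j)
        p = j / m                              # value of the empirical CDF on the spacing
        total += spacing * sum(k**i / factorial(i) * p**k * (-log(p))**(i+1)
                               for i in range(n))
    return total

random.seed(1)
data = [random.expovariate(1.0) for _ in range(40)]   # synthetic stand-in sample
print(empirical_inaccuracy(data, n=3, k=2))
```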
The following example provides an application of the empirical cumulative measure of inaccuracy to real data.
Example 2.
Consider the sample data by Abouammoh and Abdulghani [30] concerning the lifetimes (in days) of m = 40 patients suffering from blood cancer. The evaluation of the corresponding empirical cumulative measure of inaccuracy, obtained by means of Equation (26), shows that the values of Î(L_n^(k), X) are decreasing in k and increasing in n (see Figure 2).
Let us now discuss two special cases concerning populations from the uniform distribution and the exponential distribution.
Example 3.
Consider the random sample X_1, X_2, …, X_m from a population uniformly distributed in [0, 1]. In this case, the sample spacings (27) are independent and follow the beta distribution with parameters 1 and m (for more details, see Pyke [31]). Hence, making use of (26), the mean and the variance of the empirical cumulative measure of inaccuracy are, respectively:
$$ E\big[\hat I(L_n^{(k)},X)\big]=\sum_{i=0}^{n-1}\sum_{j=1}^{m-1}\frac{k^i}{i!\,(m+1)}\left(\frac{j}{m}\right)^k\left[-\log\frac{j}{m}\right]^{i+1} $$
and
$$ Var\big[\hat I(L_n^{(k)},X)\big]=\frac{m}{m+2}\sum_{j=1}^{m-1}\left[\sum_{i=0}^{n-1}\frac{k^i}{i!\,(m+1)}\left(\frac{j}{m}\right)^k\left(-\log\frac{j}{m}\right)^{i+1}\right]^2. $$
Table 1 shows the values of the mean (28) and the variance (29) for k = 2 and sample sizes m = 10, 15, 20, with n = 2, 3, 4, 5. We note that E[Î(L_n^(2), X)] is increasing in m and n.
Example 4.
Let X_1, X_2, …, X_m be a random sample drawn from the exponential distribution with parameter λ. Then, from (26) we see that the empirical cumulative measure of inaccuracy can be expressed as the following sum of independent and exponentially distributed random variables:
$$ \hat I(L_n^{(k)},X)=\sum_{j=1}^{m-1}Y_j,\qquad\text{where}\quad Y_j:=U_{j+1}\sum_{i=0}^{n-1}\frac{k^i}{i!}\left(\frac{j}{m}\right)^k\left[-\log\frac{j}{m}\right]^{i+1}. $$
Indeed, in this case the sample spacings U_{j+1} defined in (27) are independent and exponentially distributed with mean 1/(λ(m − j)) (for more details, see Pyke [31]), so that the mean and the variance of Î(L_n^(k), X) are given by:
$$ E\big[\hat I(L_n^{(k)},X)\big]=\sum_{j=1}^{m-1}\mu_j,\qquad Var\big[\hat I(L_n^{(k)},X)\big]=\sum_{j=1}^{m-1}s_j^2, $$
where
$$ \mu_j:=E[Y_j]=\frac{1}{\lambda(m-j)}\sum_{i=0}^{n-1}\frac{k^i}{i!}\left(\frac{j}{m}\right)^k\left[-\log\frac{j}{m}\right]^{i+1} $$
and
$$ s_j^2:=Var[Y_j]=\frac{1}{\lambda^2(m-j)^2}\left[\sum_{i=0}^{n-1}\frac{k^i}{i!}\left(\frac{j}{m}\right)^k\left(-\log\frac{j}{m}\right)^{i+1}\right]^2. $$
In Table 2, for k = 2, we show the values of the mean and the variance (31) for sample sizes m = 10, 15, 20, with λ = 0.5, 1, 2 and n = 2, 3, 4, 5. One can easily see that E[Î(L_n^(2), X)] is increasing in m, whereas Var[Î(L_n^(2), X)] is decreasing in m.
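The exact mean in (31) can be cross-checked by simulation; the sketch below (ours, under the same exponential assumptions) averages many replications of estimator (26) and compares the result with the formula.

```python
from math import log, factorial
import random

def mean_formula(m, n, k, lam):
    """Exact mean of the empirical inaccuracy, first formula in (31)."""
    return sum(k**i / (factorial(i) * lam * (m-j)) * (j/m)**k * (-log(j/m))**(i+1)
               for j in range(1, m) for i in range(n))

def estimate_once(m, n, k, lam):
    """One replication of estimator (26) on an exponential sample."""
    xs = sorted(random.expovariate(lam) for _ in range(m))
    return sum((xs[j] - xs[j-1]) * k**i / factorial(i)
               * (j/m)**k * (-log(j/m))**(i+1)
               for j in range(1, m) for i in range(n))

random.seed(2)
m, n, k, lam = 20, 3, 2, 1.0
mc = sum(estimate_once(m, n, k, lam) for _ in range(10000)) / 10000
print(mean_formula(m, n, k, lam), mc)   # compare also with the entries of Table 2
```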
Hereafter, we show a central limit theorem for the empirical cumulative measure of inaccuracy in the same case as Example 4.
Theorem 4.
If X_1, X_2, …, X_m is a random sample drawn from the exponential distribution with parameter λ, then:
$$ \frac{\hat I(L_n^{(k)},X)-E[\hat I(L_n^{(k)},X)]}{\big(Var[\hat I(L_n^{(k)},X)]\big)^{1/2}} =\frac{\sum_{j=1}^{m-1}(Y_j-\mu_j)}{\big(\sum_{j=1}^{m-1}s_j^2\big)^{1/2}} \ \xrightarrow{\ d\ }\ N(0,1),\qquad\text{as }m\to\infty. $$
Proof. 
With reference to the notation adopted in Example 4, setting α_{j,r} = E[|Y_j − E(Y_j)|^r], r = 2, 3, for large m we have:
$$ \sum_{j=1}^{m-1}\alpha_{j,2}=\sum_{j=1}^{m-1}s_j^2 \approx\frac{1}{\lambda^2}\sum_{i=0}^{n-1}\left(\frac{k^i}{i!}\right)^2\frac{1}{m^2}\sum_{j=1}^{m-1}\left[\frac{1}{1-j/m}\left(\frac{j}{m}\right)^k\left(-\log\frac{j}{m}\right)^{i+1}\right]^2 \approx\frac{1}{\lambda^2 m}\sum_{i=0}^{n-1}\left(\frac{k^i}{i!}\right)^2 c_2(i,k), $$
and, recalling that E[|Y_j − E(Y_j)|³] = 2(6 − e)[E(Y_j)]³/e for the exponential distribution:
$$ \sum_{j=1}^{m-1}\alpha_{j,3}=\frac{2(6-e)}{e}\sum_{j=1}^{m-1}\mu_j^3 =\frac{2(6-e)}{e}\sum_{j=1}^{m-1}\left[\frac{1}{\lambda(m-j)}\sum_{i=0}^{n-1}\frac{k^i}{i!}\left(\frac{j}{m}\right)^k\left(-\log\frac{j}{m}\right)^{i+1}\right]^3 \approx\frac{2(6-e)}{e\,\lambda^3 m^2}\left[\sum_{i=0}^{n-1}\frac{k^i}{i!}\,c_1(i,k)\right]^3, $$
where
$$ c_h(i,k):=\int_0^1\left[\frac{x^k}{1-x}\,(-\log x)^{i+1}\right]^h dx,\qquad h=1,2. $$
Hence, for a suitable constant C(λ, n, k), for large m it holds that:
$$ \frac{\Big(\sum_{j=1}^{m-1}\alpha_{j,3}\Big)^{1/3}}{\Big(\sum_{j=1}^{m-1}\alpha_{j,2}\Big)^{1/2}}\approx C(\lambda,n,k)\,m^{-1/6}\to 0\qquad\text{as }m\to\infty. $$
Lyapunov’s condition of the central limit theorem is thus fulfilled, and this gives the proof. □
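Theorem 4 can also be visualized by simulation (our sketch, under the same exponential assumptions): standardizing replicated values of the estimator by the exact mean and variance (31) should produce approximately standard normal draws.

```python
from math import log, factorial, sqrt
import random

def coeff(j, m, n, k):
    """Weight multiplying the spacing U_{j+1} in (30)."""
    return sum(k**i / factorial(i) * (j/m)**k * (-log(j/m))**(i+1) for i in range(n))

m, n, k, lam = 50, 2, 2, 1.0
mu  = sum(coeff(j, m, n, k) / (lam*(m-j)) for j in range(1, m))       # mean, Eq. (31)
var = sum((coeff(j, m, n, k) / (lam*(m-j)))**2 for j in range(1, m))  # variance, Eq. (31)

random.seed(3)
zs = []
for _ in range(5000):
    xs = sorted(random.expovariate(lam) for _ in range(m))
    i_hat = sum((xs[j] - xs[j-1]) * coeff(j, m, n, k) for j in range(1, m))
    zs.append((i_hat - mu) / sqrt(var))

mean_z = sum(zs) / len(zs)
var_z = sum(z*z for z in zs) / len(zs) - mean_z**2
print(mean_z, var_z)   # close to 0 and 1, as Theorem 4 predicts for large m
```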

5. Mutual Information of Lower Record Values

In this section, we study the degree of dependency between the sequences of lower record values by means of the mutual information. Recall the basic relations given in Equations (3) and (4).
First, with reference to (1), in the following theorem we obtain the entropy of k-th lower record values.
Theorem 5.
Let {X_n, n ≥ 1} be a sequence of i.i.d. random variables having finite entropy. The entropy of L_n^(k), for all n ≥ 2, is given by:
$$ H(L_n^{(k)})=-\log k-(n-1)\,\psi(n)+\log[(n-1)!]+n\left(1-\frac{1}{k}\right)-k^n\,\varphi_f(n-1), $$
where ψ(n) = Γ′(n)/Γ(n) is the digamma function, and:
$$ \varphi_f(n-1):=\int_0^{+\infty}\frac{z^{n-1}}{(n-1)!}\,e^{-zk}\,\log f\big(F^{-1}(e^{-z})\big)\,dz. $$
Proof. 
As customary, we denote by f(x) and F(x) the PDF and the CDF of X_1. By (7), we have:
$$ H(L_n^{(k)})=-\int_{-\infty}^{+\infty}f_n^{(k)}(x)\,\log f_n^{(k)}(x)\,dx =-\int_{-\infty}^{+\infty}\frac{k^n}{(n-1)!}[F(x)]^{k-1}[\tilde\Lambda(x)]^{n-1}f(x)\, \log\!\left(\frac{k^n}{(n-1)!}[F(x)]^{k-1}[\tilde\Lambda(x)]^{n-1}f(x)\right)dx $$
$$ =-\frac{k^n}{(n-1)!}\int_{-\infty}^{+\infty}\Big\{n\,[\tilde\Lambda(x)]^{n-1}[F(x)]^{k-1}f(x)\log k +(n-1)\,[\tilde\Lambda(x)]^{n-1}[F(x)]^{k-1}f(x)\log\tilde\Lambda(x) +(k-1)\,[\tilde\Lambda(x)]^{n-1}[F(x)]^{k-1}f(x)\log F(x) +[\tilde\Lambda(x)]^{n-1}[F(x)]^{k-1}f(x)\log f(x) -[\tilde\Lambda(x)]^{n-1}[F(x)]^{k-1}f(x)\log((n-1)!)\Big\}\,dx. $$
By taking z = Λ̃(x), we get:
$$ H(L_n^{(k)})=-k^n\int_0^{+\infty}\Big[n\,\frac{z^{n-1}}{(n-1)!}\,e^{-zk}\log k +\frac{z^{n-1}}{(n-2)!}\,e^{-zk}\log z -(k-1)\,\frac{z^{n}}{(n-1)!}\,e^{-zk} +\frac{z^{n-1}}{(n-1)!}\,e^{-zk}\log f\big(F^{-1}(e^{-z})\big) -\frac{z^{n-1}}{(n-1)!}\,e^{-zk}\log((n-1)!)\Big]\,dz. $$
Hence, making use of Equation (A.8) of Zahedi and Shakil [32], after some calculations we finally obtain Equation (32). □
It is worth noting the analogies between Equation (33) and the function ϕ f ( n ) considered by Baratpour et al. [33] for the analysis of the upper record values.
In the following lemma, we express the mutual information between Z_m^(k) and Z_n^(k) in terms of the digamma function.
Lemma 1.
Let Z_n^(k) denote the k-th lower record values from the uniform distribution over the interval (0, 1). Then, the mutual information between Z_m^(k) and Z_n^(k), for n > m, is given by:
$$ M(Z_m^{(k)},Z_n^{(k)})=m+\log[(n-1)!]-\log[(n-m-1)!]+(n-m-1)\,\psi(n-m)-(n-1)\,\psi(n). $$
Proof. 
In this case, recalling (9), we have Λ̃(x) = −log x, 0 < x < 1. Hence, making use of Equations (2), (4), (8) and (10), for n > m we get:
$$ H(Z_n^{(k)}\,|\,Z_m^{(k)})=-\int_0^1\!\!\int_0^x \frac{k^n\,[\log x-\log y]^{n-m-1}\,[-\log x]^{m-1}\,y^{k-1}}{(m-1)!\,(n-m-1)!\;x}\, \log\!\left(\frac{k^{n-m}\,y^{k-1}\,[\log x-\log y]^{n-m-1}}{(n-m-1)!\;x^{k}}\right)dy\,dx=\sum_{r=1}^{5}I_r, $$
where
$$ I_1:=-(n-m)\log k\int_0^1\!\!\int_0^x f_{m(k),n(k)}(x,y)\,dy\,dx=-(n-m)\log k, $$
$$ I_2:=\log[(n-m-1)!]\int_0^1\!\!\int_0^x f_{m(k),n(k)}(x,y)\,dy\,dx=\log[(n-m-1)!], $$
$$ I_3:=\int_0^1\!\!\int_0^x \frac{k^{n+1}\,[\log x-\log y]^{n-m-1}\,[-\log x]^{m-1}\,y^{k-1}}{(m-1)!\,(n-m-1)!\;x}\,\log x\,dy\,dx=-m, $$
$$ I_4:=-\int_0^1\!\!\int_0^x \frac{k^{n}\,[\log x-\log y]^{n-m-1}\,[-\log x]^{m-1}\,y^{k-1}}{(m-1)!\,(n-m-1)!\;x}\,\log\big([\log x-\log y]^{n-m-1}\big)\,dy\,dx=-(n-m-1)\,[\psi(n-m)-\log k], $$
$$ I_5:=-(k-1)\int_0^1\!\!\int_0^x \frac{k^{n}\,[\log x-\log y]^{n-m-1}\,[-\log x]^{m-1}\,y^{k-1}}{(m-1)!\,(n-m-1)!\;x}\,\log y\,dy\,dx=n\left(1-\frac{1}{k}\right). $$
Hence, by straightforward calculations, we obtain:
$$ H(Z_n^{(k)}\,|\,Z_m^{(k)})=\log[(n-m-1)!]-(n-m)\log k-(n-m-1)\,[\psi(n-m)-\log k]+n\left(1-\frac{1}{k}\right)-m, $$
where ψ(n − m) = (1/(n − m − 1)!) ∫_0^{+∞} t^{n−m−1} e^{−t} log t dt. Similarly, by (32) we have:
$$ H(Z_n^{(k)})=-\log k-(n-1)\,\psi(n)+\log[(n-1)!]+n\left(1-\frac{1}{k}\right). $$
Recalling (4), the thesis thus follows from (35) and (36). □
Let us now come to the main result of this section. Recall that the mutual information of ( X , Y ) is defined in (3).
Theorem 6.
Under the assumptions of Theorem 5, the following results hold:
(i) 
The mutual information between the m-th and the n-th k-lower records is distribution-free, and is given by:
$$ M(L_m^{(k)},L_n^{(k)})=m+\log[(n-1)!]-\log[(n-m-1)!]+(n-m-1)\,\psi(n-m)-(n-1)\,\psi(n). $$
(ii) 
The mutual information M(L_m^(k), L_{m+1}^(k)) is increasing in m.
Proof. 
(i)
Let Z_m^(k) = F(L_m^(k)) and Z_n^(k) = F(L_n^(k)), where F is the CDF of X. Then, Z_m^(k) and Z_n^(k) are the m-th and the n-th k-lower records of the uniform distribution over the interval (0, 1). By the invariance property of the mutual information under one-to-one transformations, we have M(L_m^(k), L_n^(k)) = M(Z_m^(k), Z_n^(k)), and thus the result follows from Lemma 1.
(ii)
By taking n = m + 1 in (37), we get:
$$ M(L_m^{(k)},L_{m+1}^{(k)})=m+\log(m!)-m\,\psi(m+1), $$
so that
$$ M(L_{m+1}^{(k)},L_{m+2}^{(k)})-M(L_m^{(k)},L_{m+1}^{(k)})=\log[(m+1)!]-\log(m!)+m\,\psi(m+1)-(m+1)\,\psi(m+2)+1. $$
Since ψ(m + 2) = ψ(m + 1) + 1/(m + 1), the right-hand side of (39) reduces to log(m + 1) − ψ(m + 1), which is positive; the proof thus follows. □
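Since formula (37) involves the digamma function only at integer arguments, where ψ(n) = −γ + Σ_{j=1}^{n−1} 1/j, it can be evaluated exactly with standard library calls. The following sketch (ours) does so and reproduces the monotonicity patterns discussed below.

```python
from math import lgamma

EULER_GAMMA = 0.5772156649015329

def psi(n):
    """Digamma at a positive integer: psi(n) = -gamma + sum_{j<n} 1/j."""
    return -EULER_GAMMA + sum(1.0/j for j in range(1, n))

def mutual_info(m, n):
    """Equation (37); log((n-1)!) = lgamma(n). Free of k and of the parent law."""
    return (m + lgamma(n) - lgamma(n - m)
            + (n - m - 1)*psi(n - m) - (n - 1)*psi(n))

# decreasing in n for fixed m; increasing in m for consecutive records (Theorem 6(ii))
print([round(mutual_info(1, n), 4) for n in range(2, 8)])
print([round(mutual_info(m, m + 1), 4) for m in range(1, 8)])
```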
It is useful to assess the mutual information between the k-th lower record values, for instance when such values correspond to successive failures of a repairable system. In this respect, the information provided in Theorem 6 may help in constructing suitable replacement criteria for components, in order to avoid failures and to improve system availability.
The mutual information between the m-th and the n-th k-lower records, determined in Equation (37), is shown in Figure 3 for different choices of m and n. For fixed values of m, this information measure is decreasing in n, whereas for fixed n it is increasing in m.
Finally, Figure 4 shows that M(L_m^(k), L_{m+1}^(k)) is increasing in m, in agreement with point (ii) of Theorem 6.

6. Conclusions

In this paper, we discussed the concept of inaccuracy between L_n^(k) and X in a random sample generated by X. We proposed a dynamic version of the cumulative inaccuracy and studied a related characterization result, proving that I(L_n^(k), X; t) uniquely determines the parent distribution F. Moreover, we constructed bounds for I(L_n^(k), X). We also estimated the cumulative measure of inaccuracy by means of the empirical cumulative inaccuracy in lower record values. These concepts can be applied to measure the inaccuracy contained in the associated past lifetime. Finally, we studied the degree of dependency among the sequence of k-th lower record values in terms of the mutual information, showing that M(L_m^(k), L_n^(k)) is distribution-free and can be computed by using the distribution of the k-th lower record values of the sequence from the uniform distribution.

Author Contributions

All the authors contributed equally to this work.

Funding

This research received no external funding.

Acknowledgments

The authors thank the referees for useful comments that improved the paper. Part of this research was performed during a visit of Maryam Eskandarzadeh at Salerno University. A.D.C. is a member of the group GNCS of INdAM.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Navarro, J.; del Aguila, Y.; Asadi, M. Some new results on the cumulative residual entropy. J. Stat. Plan. Inference 2010, 140, 310–322. [Google Scholar] [CrossRef]
  2. Psarrakos, G.; Navarro, J. Generalized cumulative residual entropy and record values. Metrika 2013, 76, 623–640. [Google Scholar] [CrossRef]
  3. Tahmasebi, S.; Eskandarzadeh, M. Generalized cumulative entropy based on kth lower record values. Stat. Prob. Lett. 2017, 126, 164–172. [Google Scholar] [CrossRef]
  4. Sordo, M.A.; Psarrakos, G. Stochastic comparisons of interfailure times under a relevation replacement policy. J. Appl. Prob. 2017, 54, 134–145. [Google Scholar] [CrossRef]
  5. Ebrahimi, N.; Soofi, E.S.; Soyer, R. Information measures in perspective. Int. Stat. Rev. 2010, 78, 383–412. [Google Scholar] [CrossRef]
  6. Ebrahimi, N.; Kirmani, S.N.U.A.; Soofi, E.S. Multivariate dynamic information. J. Multivar. Anal. 2007, 98, 328–349. [Google Scholar] [CrossRef]
  7. Ahmadi, J.; Di Crescenzo, A.; Longobardi, M. On dynamic mutual information for bivariate lifetimes. Adv. Appl. Prob. 2015, 47, 1157–1174. [Google Scholar] [CrossRef]
  8. Kundu, C.; Di Crescenzo, A.; Longobardi, M. On cumulative residual (past) inaccuracy for truncated random variables. Metrika 2016, 79, 335–356. [Google Scholar] [CrossRef]
  9. Kerridge, D.F. Inaccuracy and inference. J. R. Stat. Soc. Ser. B Stat. Methodol. 1961, 23, 184–194. [Google Scholar] [CrossRef]
  10. Di Crescenzo, A.; Longobardi, M. On cumulative entropies. J. Stat. Plan. Inference 2009, 139, 4072–4087. [Google Scholar] [CrossRef]
  11. Kumar, V.; Taneja, H.C. Dynamic cumulative residual and past inaccuracy measures. J. Stat. Theory Appl. 2015, 14, 399–412. [Google Scholar] [CrossRef]
  12. Park, S.; Rao, M.; Shin, D.W. On cumulative residual Kullback-Leibler information. Stat. Prob. Lett. 2012, 82, 2025–2032. [Google Scholar] [CrossRef]
  13. Di Crescenzo, A.; Longobardi, M. Some properties and applications of cumulative Kullback-Leibler information. Appl. Stoch. Models Bus. Ind. 2015, 31, 875–891. [Google Scholar] [CrossRef]
  14. Dziubdziela, W.; Kopociński, B. Limiting properties of the k-th record values. Appl. Math. 1976, 15, 187–190. [Google Scholar] [CrossRef]
  15. Malinowska, I.; Szynal, D. On characterization of certain distributions of kth lower (upper) record values. Appl. Math. Comput. 2008, 202, 338–347. [Google Scholar] [CrossRef]
  16. Tahmasebi, S.; Eskandarzadeh, M.; Jafari, A.A. An extension of generalized cumulative residual entropy. J. Stat. Theory Appl. 2017, 16, 165–177. [Google Scholar] [CrossRef]
  17. Arnold, B.C.; Balakrishnan, N.; Nagaraja, H.N. A First Course in Order Statistics; John Wiley and Sons: New York, NY, USA, 1992. [Google Scholar]
  18. Thapliyal, R.; Taneja, H.C. A measure of inaccuracy in order statistics. J. Stat. Theory Appl. 2013, 12, 200–207. [Google Scholar] [CrossRef]
  19. Thapliyal, R.; Taneja, H.C. On residual inaccuracy of order statistics. Stat. Prob. Lett. 2015, 97, 125–131. [Google Scholar] [CrossRef]
  20. Thapliyal, R.; Taneja, H.C. On Rényi entropies of order statistics. Int. J. Biomath. 2015, 8, 1550080. [Google Scholar] [CrossRef]
  21. Fashandi, M.; Ahmadi, J. Characterizations of symmetric distributions based on Rényi entropy. Stat. Prob. Lett. 2012, 82, 798–804. [Google Scholar] [CrossRef]
  22. Asadi, M. A new measure of association between random variables. Metrika 2017, 80, 649–661. [Google Scholar] [CrossRef]
  23. Goel, R.; Taneja, H.C.; Kumar, V. Measure of entropy for past lifetime and k-record statistics. Physica A 2018, 503, 623–631. [Google Scholar] [CrossRef]
  24. Shaked, M.; Shanthikumar, J.G. Stochastic Orders; Springer: New York, NY, USA, 2007. [Google Scholar]
  25. Ma, C. Convex orders for linear combinations of random variables. J. Stat. Plan. Inference 2000, 84, 11–25. [Google Scholar] [CrossRef]
  26. De Gusmão, F.R.S.; Ortega, E.M.M.; Cordeiro, G.M. The generalized inverse Weibull distribution. Stat. Pap. 2011, 52, 591–619. [Google Scholar] [CrossRef]
  27. Gupta, R.C.; Gupta, R.D. Proportional reversed hazard rate model and its applications. J. Stat. Plan. Inference 2007, 137, 3525–3536. [Google Scholar] [CrossRef]
  28. Di Crescenzo, A.; Toomaj, A. Extension of the past lifetime and its connection to the cumulative entropy. J. Appl. Prob. 2015, 52, 1156–1174. [Google Scholar] [CrossRef]
  29. Gupta, R.C.; Kirmani, S.N.U.A. Characterizations based on convex conditional mean function. J. Stat. Plan. Inference 2008, 138, 964–970. [Google Scholar] [CrossRef]
  30. Abouammoh, A.M.; Abdulghani, S.A. On partial orderings and testing of new better than renewal used classes. Reliab. Eng. Syst. Saf. 1994, 43, 37–41. [Google Scholar] [CrossRef]
  31. Pyke, R. Spacings. J. R. Stat. Soc. Ser. B 1965, 27, 395–449. [Google Scholar] [CrossRef]
  32. Zahedi, H.; Shakil, M. Properties of entropies of record values in reliability and life testing context. Commun. Stat. Theory Meth. 2006, 35, 997–1010. [Google Scholar] [CrossRef]
  33. Baratpour, S.; Ahmadi, J.; Arghami, N.R. Entropy properties of record statistics. Stat. Pap. 2007, 48, 197–213. [Google Scholar] [CrossRef]
Figure 1. The values of I(L_n^(k), X) related to Example 1; (a) for 1 ≤ k ≤ 30 and n = 10, from top to bottom near the origin: cases (i), (ii), (iii) with (α, β) = (0.816049, 4), and (iii) with (α, β) = (0.935779, 10); (b) for 1 ≤ n ≤ 60 and k = 10, from top to bottom for large n, in the same cases as (a).
Figure 2. The values of Î(L_n^(k), X) concerning Example 2, (a) for 1 ≤ k ≤ 15 and n = 1, 2, 3, 4, 5 (from bottom to top), and (b) for 1 ≤ n ≤ 15 and k = 1, 2, 3, 4, 5 (from top to bottom).
Figure 3. The values of M(L_m^(k), L_n^(k)) are shown (a) for m = 1, 2, 3, 4, 5 (from bottom to top) and m < n ≤ 20, and (b) for n = 20, 25, 30, 35, 40 (from top to bottom, near the origin) and 1 ≤ m < n.
Figure 4. Values of M(L_m^(k), L_{m+1}^(k)) for 1 ≤ m ≤ 200.
Table 1. Computed values of E[Î(L_n^(2), X)] and Var[Î(L_n^(2), X)] for the uniform distribution.

      E[Î(L_n^(2), X)]                Var[Î(L_n^(2), X)]
m     n = 2  n = 3  n = 4  n = 5      n = 2  n = 3  n = 4  n = 5
10    0.23   0.37   0.48   0.57       0.003  0.006  0.008  0.010
15    0.24   0.38   0.50   0.59       0.002  0.004  0.006  0.008
20    0.25   0.39   0.51   0.61       0.002  0.003  0.005  0.006
Table 2. Computed values of E[Î(L_n^(2), X)] and Var[Î(L_n^(2), X)] for the exponential distribution.

E[Î(L_n^(2), X)]
      n = 2                 n = 3                 n = 4                 n = 5
λ     0.5    1      2       0.5    1      2       0.5    1      2       0.5    1      2
m
10    1.30   0.65   0.33    1.77   0.89   0.44    2.12   1.063  0.53    2.37   1.19   0.59
15    1.33   0.67   0.33    1.81   0.91   0.45    2.17   1.086  0.54    2.43   1.22   0.61
20    1.35   0.68   0.34    1.83   0.92   0.46    2.19   1.096  0.55    2.46   1.23   0.62

Var[Î(L_n^(2), X)]
      n = 2                 n = 3                 n = 4                 n = 5
λ     0.5    1      2       0.5    1      2       0.5    1      2       0.5    1      2
m
10    0.13   0.032  0.008   0.16   0.040  0.009   0.18   0.046  0.011   0.20   0.050  0.012
15    0.09   0.022  0.005   0.11   0.028  0.007   0.12   0.031  0.008   0.14   0.035  0.008
20    0.07   0.017  0.004   0.08   0.021  0.005   0.09   0.024  0.006   0.11   0.026  0.007

