Article

Some Results on Cumulative Residual Inaccuracy Measure of k-Record Values

1 School of Engineering and Technology, Vivekananda Institute of Professional Studies-Technical Campus, Pitampura, Delhi 110034, India
2 Department of Applied Sciences, UIET, M.D. University, Rohtak 124001, India
3 School of Cyber Security & Digital Forensics, National Forensic Sciences University, Delhi 110085, India
4 Institut für Physikalische Chemie, RWTH Aachen University, 52056 Aachen, Germany
* Author to whom correspondence should be addressed.
Entropy 2026, 28(1), 17; https://doi.org/10.3390/e28010017
Submission received: 5 November 2025 / Revised: 18 December 2025 / Accepted: 22 December 2025 / Published: 24 December 2025
(This article belongs to the Special Issue Insight into Entropy)

Abstract

Herein, we consider the significance of cumulative residual entropy (CRE) and its numerous generalizations. This article extends the cumulative residual inaccuracy measure to k-record values. We examine several properties of this measure, investigate stochastic ordering results, and evaluate the proposed measure for several distributions that frequently arise in realistic scenarios and have applications across many fields of science and engineering.

1. Introduction

The pivotal event that established the field of information theory was the 1948 seminal paper by Claude E. Shannon [1], in which he presented the concept of uncertainty for a continuous random variable X with probability density function $f(x)$ as follows:
$$H(f) = -\int_0^{\infty} f(x)\ln f(x)\,dx. \tag{1}$$
After Shannon's work there was enormous development in this field, resulting in a vast literature introducing numerous information-theoretic measures. Kerridge [2] proposed the inaccuracy measure, a generalization of Shannon entropy, as follows:
$$H(f,g) = -\int_0^{\infty} f(x)\ln g(x)\,dx. \tag{2}$$
Here, $f(x)$ is the actual distribution, whereas $g(x)$ is the predicted one. In order to address certain limitations associated with Shannon's entropy measure, Rao et al. [3] introduced a novel measure of uncertainty, referred to as the cumulative residual entropy, in which the probability density function $f(x)$ is replaced by the survival function $\bar F(x)$ of the random variable X (the function giving the probability that a patient, device, or other object of interest survives past a certain time):
$$H(\bar F) = -\int_0^{\infty}\bar F(x)\ln\bar F(x)\,dx. \tag{3}$$
This measure of uncertainty is especially relevant for characterizing information in problems related to the aging characteristics of reliability theory, which are based on the mean residual life function. Following the introduction of the cumulative residual entropy by Rao et al. [3], it has garnered significant attention from researchers. Asadi and Zohrevand [4] examined its dynamic variant to elucidate the effect of age on the information pertaining to the residual lifetime of a system or component. In a similar vein to the cumulative residual entropy (3), Di Crescenzo and Longobardi [5] presented cumulative entropy as particularly applicable to problems associated with inactivity time. Sunoj and Linu [6] introduced the cumulative version of Rényi's entropy. Taneja and Kumar [7] expanded the concept of cumulative residual entropy to the cumulative residual inaccuracy (CRI) and, subsequently, the dynamic cumulative inaccuracy (DCI), while also investigating some of their properties. The measure proposed by Taneja and Kumar [7] is defined as follows:
$$H(\bar F,\bar G) = -\int_0^{\infty}\bar F(x)\ln\bar G(x)\,dx. \tag{4}$$
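To make the definitions concrete, the following minimal Python sketch evaluates the CRE (3) and the CRI (4) by numerical quadrature; the exponential rates and the truncation point `upper` are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not from the paper): evaluate the CRE (3) and the CRI (4)
# by numerical quadrature for exponential survival functions.
import numpy as np
from scipy.integrate import quad

def cre(sbar, upper=50.0):
    """H(Fbar) = -int_0^inf Fbar(x) ln Fbar(x) dx, truncated at `upper`."""
    return quad(lambda x: -sbar(x) * np.log(sbar(x)), 0, upper)[0]

def cri(fbar, gbar, upper=50.0):
    """H(Fbar, Gbar) = -int_0^inf Fbar(x) ln Gbar(x) dx, truncated at `upper`."""
    return quad(lambda x: -fbar(x) * np.log(gbar(x)), 0, upper)[0]

lam, lam0 = 1.0, 2.0
Fbar = lambda x: np.exp(-lam * x)    # actual survival function
Gbar = lambda x: np.exp(-lam0 * x)   # predicted survival function

print(cre(Fbar))        # exponential CRE: 1/lam = 1.0
print(cri(Fbar, Gbar))  # lam0/lam^2 = 2.0
```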
Consider a sequence of independent and identically distributed continuous random variables $\{X_i,\ i \geq 1\}$ with distribution function $F(x)$ and probability density function $f(x)$. An observation $X_j$ is referred to as an upper record value if its value exceeds that of all preceding observations. A similar definition applies to a lower record value. However, in various scenarios where the expected waiting time between two record values is significantly large, this model of record values becomes inadequate. To address such situations, a model of k-records was introduced by Dziubdziela and Kopocinski [8]. They define the n-th upper k-record as follows. For a positive integer k, define $T_{1,k} = k$, and for $n \geq 2$,
$$T_{n,k} = \min\left\{\, j : j > T_{n-1,k},\ X_{j-k+1:j} > X_{T_{n-1,k}-k+1\,:\,T_{n-1,k}} \,\right\}.$$
Let $Y_{n,k} = X_{T_{n,k}-k+1:T_{n,k}}$, $n \geq 1$, where $X_{i:n}$ denotes the i-th order statistic in a sample of size n. The sequence $\{Y_{n,k},\ n \geq 1\}$ is referred to as the sequence of (upper) k-record values. An analogous definition can be given for lower k-record values. If we put $k = 1$, the results for usual records are obtained as a special case.
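The construction of $T_{n,k}$ can be made concrete with a short sketch: under this definition, a new upper k-record occurs exactly when the k-th largest observation seen so far strictly increases. The following is an illustrative implementation of that reading, not code from the paper.

```python
# Sketch: extract the upper k-record values Y_{n,k} from a sample path,
# following the Dziubdziela-Kopocinski construction described above.
import numpy as np

def upper_k_records(x, k):
    """Return the sequence of upper k-record values of the array x.

    Y_{1,k} is the k-th largest of the first k observations (their minimum);
    afterwards a new k-record occurs whenever the k-th largest value seen
    so far strictly increases.
    """
    records = []
    kth_largest = None
    for j in range(k, len(x) + 1):
        cur = np.sort(x[:j])[j - k]      # X_{j-k+1:j}, the k-th largest of x[:j]
        if kth_largest is None or cur > kth_largest:
            kth_largest = cur
            records.append(cur)
    return records

rng = np.random.default_rng(0)
sample = rng.exponential(scale=1.0, size=200)
print(upper_k_records(sample, k=1)[:5])  # k = 1 recovers the usual records
print(upper_k_records(sample, k=3)[:5])
```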
In the existing literature, several authors have studied cumulative inaccuracy measures for record values. For instance, Eskandarzadeh, Di Crescenzo, and Tahmasebi [9] introduced a cumulative measure of inaccuracy involving the distribution function, defined as follows:
$$I(F_{n,k},F) = -\int_0^{\infty} F_{n,k}(x)\ln F(x)\,dx,$$
where $F_{n,k}(x)$ and $F(x)$ are the cumulative distribution functions of the n-th lower k-record value and the parent variable, respectively.
Setting $k = 2$ or $k = 3$ in the upper k-record model means that, instead of observing the largest value, we observe the second or third largest value. The value of k can thus be chosen according to its importance in the context of a particular problem; claims in some classes of non-life insurance provide an example (see Kamps [10]). The concept of k-record values has therefore been widely studied in the literature (refer to Berred [11] and Fashandi and Ahmadi [12]). The probability density function (pdf) and the survival function of the n-th upper k-record are given, respectively, as follows:
$$f_{n,k}(x) = \frac{k^{n}}{\Gamma(n)}\left(-\ln\bar F(x)\right)^{n-1}\left(\bar F(x)\right)^{k-1} f(x) \tag{5}$$
and
$$\bar F_{n,k}(x) = \sum_{j=0}^{n-1}\frac{1}{j!}\left(\bar F(x)\right)^{k}\left(-k\ln\bar F(x)\right)^{j}, \tag{6}$$
where Γ is the (complete) gamma function (e.g., see Arnold et al. [13]). Psarrakos and Navarro [14] expanded the notion of CRE by connecting it to the average duration between record values and to the relevation transform [15,16]; the latter describes the total lifetime of a component that, upon failure, is replaced by a new one of the same age. Tahmasebi and Eskandarzadeh [17] suggested a generalization of cumulative entropy based on the k-th lower record values, while also examining its dynamic variant incorporating the past lifetime. Goel et al. [18] investigated the measure of inaccuracy between the record distribution and the parent distribution, as well as between the k-record distribution and the parent distribution.
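Both (5) and (6) transcribe directly into code. The sketch below implements them for a generic parent and checks, for an assumed exponential parent, that the record density integrates to one; it is an illustration, not part of the formal development.

```python
# Sketch of the n-th upper k-record density (5) and survival function (6)
# for a generic parent; the exponential parent below is an assumed test case.
import numpy as np
from math import factorial, gamma
from scipy.integrate import quad

def record_pdf(x, n, k, f, Fbar):
    """f_{n,k}(x) = k^n/Gamma(n) * (-ln Fbar(x))^(n-1) * Fbar(x)^(k-1) * f(x)."""
    u = -np.log(Fbar(x))
    return k**n / gamma(n) * u**(n - 1) * Fbar(x)**(k - 1) * f(x)

def record_sf(x, n, k, Fbar):
    """Fbar_{n,k}(x) = Fbar(x)^k * sum_{j<n} (-k ln Fbar(x))^j / j!  (Eq. 6)."""
    u = -k * np.log(Fbar(x))
    return Fbar(x)**k * sum(u**j / factorial(j) for j in range(n))

f = lambda x: np.exp(-x)      # exponential parent, rate 1
Fbar = lambda x: np.exp(-x)

# The record density must integrate to one; the finite upper limit just
# avoids floating-point underflow in the far tail.
print(quad(lambda x: record_pdf(x, 3, 2, f, Fbar), 0, 50)[0])  # ~1.0
```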
In this article, we study the cumulative residual inaccuracy contained in the sequence of k-record values. The structure of this paper is outlined as follows. In Section 2, we introduce the extension of the cumulative residual inaccuracy measure to k-record values. In Section 3, we examine several properties of the proposed measure and establish some bounds for it. In Section 4, we investigate certain aspects of stochastic ordering. Section 5 presents a simplified expression for the cumulative residual inaccuracy to facilitate computations. Section 6 presents an application to extremal quantum uncertainty. Finally, Section 7 offers concluding remarks.

2. Cumulative Residual Inaccuracy Measure

Corresponding to the measure (4), we introduce the cumulative residual inaccuracy measure between the k-record survival function $\bar F_{n,k}(x)$ and the parent survival function $\bar F(x)$ as follows:
$$H(\bar F_{n,k},\bar F) = -\int_0^{\infty}\bar F_{n,k}(x)\ln\bar F(x)\,dx. \tag{7}$$
Using (6), we have
$$H(\bar F_{n,k},\bar F) = -\int_0^{\infty}\sum_{j=0}^{n-1}\frac{1}{j!}\left(\bar F(x)\right)^{k}\left(-k\ln\bar F(x)\right)^{j}\ln\bar F(x)\,dx = \sum_{j=0}^{n-1}\int_0^{\infty}\frac{k^{j}}{j!}\left(\bar F(x)\right)^{k}\left(-\ln\bar F(x)\right)^{j+1}\,dx. \tag{8}$$
Here and throughout we write $f(x) = F'(x)$ for the density associated with the distribution function $F(x)$ and
$$\bar F(x) = 1 - F(x)$$
for the survival (tail) function. The hazard (failure rate) function corresponding to F is defined by
$$\lambda_F(x) = \frac{f(x)}{\bar F(x)}, \quad x \geq 0,$$
so that $f(x) = \lambda_F(x)\bar F(x)$. It will be convenient to identify the quantity
$$f_{j+2,k}(x) := \frac{k^{j+2}}{\Gamma(j+2)}\left(-\ln\bar F(x)\right)^{j+1}\left(\bar F(x)\right)^{k}\lambda_F(x),$$
which (upon using $f(x) = \lambda_F(x)\bar F(x)$) equals
$$\frac{k^{j+2}\left(-\ln\bar F(x)\right)^{j+1}\left(\bar F(x)\right)^{k-1} f(x)}{\Gamma(j+2)}.$$
After some rearrangement, we get
$$H(\bar F_{n,k},\bar F) = \sum_{j=0}^{n-1}\int_0^{\infty}\frac{(j+1)}{k^{2}\,\lambda_F(x)}\cdot\frac{k^{j+2}\left(-\ln\bar F(x)\right)^{j+1}\left(\bar F(x)\right)^{k-1} f(x)}{\Gamma(j+2)}\,dx = \sum_{j=0}^{n-1}\int_0^{\infty}\frac{(j+1)}{k^{2}}\,\frac{f_{j+2,k}(x)}{\lambda_F(x)}\,dx = \sum_{j=0}^{n-1}\frac{(j+1)}{k^{2}}\,E_{f_{j+2,k}}\!\left[\frac{1}{\lambda_F(X)}\right]. \tag{9}$$
Here, $E_{f_{j+2,k}}$ denotes expectation with respect to the density $f_{j+2,k}$, and $\lambda_F(x)$ is the hazard function corresponding to $F(x)$. The hazard function (also known as the failure rate, hazard rate, or force of mortality) measures the likelihood that, for example, death or failure occurs at a specific time, given that the event has not occurred before that time [19,20].
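The chain (7)–(9) is easy to verify numerically. The following sketch does so for an assumed exponential parent, for which the hazard is constant and the expectation form collapses to a closed expression (parameters are illustrative; cf. Example 4 in Section 5).

```python
# Cross-check of (7), (8) and (9) for an exponential parent with rate alpha,
# where -ln Fbar(x) = alpha*x; a quick numerical sketch, not from the paper.
import numpy as np
from math import factorial
from scipy.integrate import quad

alpha, n, k = 1.5, 3, 2

def record_sf(x):
    # Fbar_{n,k}(x) from (6), with Fbar(x) = exp(-alpha x).
    u = k * alpha * x
    return np.exp(-u) * sum(u**j / factorial(j) for j in range(n))

# (7): H = -int Fbar_{n,k} ln Fbar dx = int Fbar_{n,k}(x) * alpha*x dx.
h_def = quad(lambda x: record_sf(x) * alpha * x, 0, np.inf)[0]

# (8): term-by-term expansion, using Fbar^k (-ln Fbar)^{j+1} = e^{-k a x}(a x)^{j+1}.
h_sum = sum(quad(lambda x, j=j: k**j / factorial(j)
                 * np.exp(-k * alpha * x) * (alpha * x)**(j + 1), 0, np.inf)[0]
            for j in range(n))

# (9): the exponential hazard is constant (lambda_F = alpha), so
# E[1/lambda_F] = 1/alpha and (9) collapses to n(n+1)/(2 k^2 alpha).
h_exp = n * (n + 1) / (2 * k**2 * alpha)

print(h_def, h_sum, h_exp)   # all three agree (~1.0 here)
```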

3. Properties and the Bounds of the Measure

In the present section, we study some of the properties of the proposed measure of cumulative inaccuracy, as follows; a numerical illustration of two of these properties is given at the end of the section.
• $H(\bar F_{n,k},\bar F) = \sum_{j=0}^{n-1}\frac{(j+1)}{k}\left(\mu_{j+2,k}-\mu_{j+1,k}\right)$, where $\mu_{n,k} = \int_0^{\infty}\bar F_{n,k}(x)\,dx$.
Proof.
From (6), we can write
$$\bar F_{j+2,k}(x) - \bar F_{j+1,k}(x) = \frac{\left(\bar F(x)\right)^{k}\left(-k\ln\bar F(x)\right)^{j+1}}{(j+1)!}. \tag{10}$$
Therefore, from (8) and (10), we get
$$H(\bar F_{n,k},\bar F) = \sum_{j=0}^{n-1}\frac{(j+1)}{k}\int_0^{\infty}\left[\bar F_{j+2,k}(x)-\bar F_{j+1,k}(x)\right]dx = \sum_{j=0}^{n-1}\frac{(j+1)}{k}\left(\mu_{j+2,k}-\mu_{j+1,k}\right). \qquad \square$$
Remark 1.
For fixed k, (6) shows that $\bar F_{n,k}(x)$ is increasing in n (equivalently, $F_{n,k}(x)$ is decreasing in n), so $\bar F_{j+2,k} \geq \bar F_{j+1,k}$. From the previous result, we see that $H(\bar F_{n,k},\bar F)$ is an increasing function of n.
• Consider two random variables X and Y with survival functions $\bar F(x)$ and $\bar G(y)$, respectively, such that $Y = \phi(X)$, where $\phi$ is strictly increasing and differentiable almost everywhere with $\phi(0) = 0$. Then
$$H(\bar G_{n,k},\bar G) = \sum_{j=0}^{n-1}\int_0^{\infty}\frac{k^{j}}{j!}\left(\bar F(x)\right)^{k}\left(-\ln\bar F(x)\right)^{j+1}\phi'(x)\,dx. \tag{11}$$
Proof.
From (8), we can write
$$H(\bar G_{n,k},\bar G) = \sum_{j=0}^{n-1}\int_0^{\infty}\frac{k^{j}}{j!}\left(\bar G(y)\right)^{k}\left(-\ln\bar G(y)\right)^{j+1}\,dy. \tag{12}$$
Now $Y = \phi(X)$ implies $\bar G(y) = \bar F(x)$ and $\bar G_{n,k}(y) = \bar F_{n,k}(x)$; moreover, $dy = \phi'(x)\,dx$.
Substituting these values into (12), the result follows. □
Remark 2.
In particular, for $Y = \phi(X) = aX$ with $a > 0$, we have $\phi'(x) = a$, and (11) becomes
$$H(\bar G_{n,k},\bar G) = a\sum_{j=0}^{n-1}\int_0^{\infty}\frac{k^{j}}{j!}\left(\bar F(x)\right)^{k}\left(-\ln\bar F(x)\right)^{j+1}\,dx = a\,H(\bar F_{n,k},\bar F).$$
• If $\bar G(x) = \left(\bar F(x)\right)^{\beta}$, where β is an integer greater than 1 and $\bar F(x)$ and $\bar G(x)$ are the survival functions of X and Y, respectively, then
$$H(\bar G_{n,k},\bar G) = \beta\,H(\bar F_{n,k\beta},\bar F). \tag{13}$$
Proof.
We know that
$$H(\bar G_{n,k},\bar G) = \sum_{j=0}^{n-1}\int_0^{\infty}\frac{k^{j}}{j!}\left(\bar G(x)\right)^{k}\left(-\ln\bar G(x)\right)^{j+1}\,dx = \sum_{j=0}^{n-1}\int_0^{\infty}\frac{k^{j}}{j!}\left(\bar F(x)\right)^{k\beta}\left(-\beta\ln\bar F(x)\right)^{j+1}\,dx = \beta\sum_{j=0}^{n-1}\int_0^{\infty}\frac{(k\beta)^{j}}{j!}\left(\bar F(x)\right)^{k\beta}\left(-\ln\bar F(x)\right)^{j+1}\,dx = \beta\,H(\bar F_{n,k\beta},\bar F). \qquad \square$$
• Consider
$$\eta(X) = -\int_0^{\infty}\left(\bar F(x)\right)^{k}\ln\bar F(x)\,dx.$$
Then
$$H(\bar F_{n,k},\bar F) \;\geq\; \sum_{j=0}^{n-1}\frac{k^{j}}{j!}\,\frac{\left(\eta(X)\right)^{j+1}}{C_{k}^{\,j}}, \quad \text{where } C_{k} = \int_0^{\infty}\left(\bar F(x)\right)^{k}\,dx. \tag{14}$$
Proof.
From (8),
$$H(\bar F_{n,k},\bar F) = \sum_{j=0}^{n-1}\int_0^{\infty}\frac{k^{j}}{j!}\left(\bar F(x)\right)^{k}\left(-\ln\bar F(x)\right)^{j+1}\,dx. \tag{15}$$
Since $0 \leq \bar F(x) \leq 1$ for a survival function, each integrand in (15) is nonnegative, and each term of (15) can be bounded from below via Jensen's inequality [21]. To apply Jensen's inequality, define
$$C_{k} = \int_0^{\infty}\left(\bar F(x)\right)^{k}\,dx > 0, \qquad Z(x) = -\ln\bar F(x) \geq 0,$$
and the probability measure
$$dP_{k}(x) = \frac{\left(\bar F(x)\right)^{k}}{C_{k}}\,dx.$$
Let $\phi(t) = t^{j+1}$, which is convex on $[0,\infty)$ for all $j \geq 0$. Jensen's inequality then gives
$$\int_0^{\infty}\left(Z(x)\right)^{j+1}\,dP_{k}(x) \;\geq\; \left(\int_0^{\infty} Z(x)\,dP_{k}(x)\right)^{j+1}.$$
Multiplying both sides by $C_{k}$ and using
$$\eta(X) = \int_0^{\infty}\left(\bar F(x)\right)^{k} Z(x)\,dx$$
yields
$$\int_0^{\infty}\left(\bar F(x)\right)^{k}\left(-\ln\bar F(x)\right)^{j+1}\,dx \;\geq\; \frac{\left(\eta(X)\right)^{j+1}}{C_{k}^{\,j}}.$$
Substituting this bound into (15) gives (14).
This completes the proof. □
• Let X denote an absolutely continuous non-negative random variable; then
$$H(\bar F_{n,1},\bar F) \;\geq\; \sum_{j=0}^{n-1}\frac{\left(H(\bar F)\right)^{j+1}}{j!\left(E(X)\right)^{j}},$$
where $E(X)$ denotes the expectation of X and $H(\bar F)$ is defined in (3).
Proof.
This result follows directly from (14) by taking $k = 1$. For $k = 1$, we have
$$C_{1} = \int_0^{\infty}\bar F(x)\,dx = E(X)$$
and
$$\eta(X) = -\int_0^{\infty}\bar F(x)\ln\bar F(x)\,dx,$$
which is the cumulative residual entropy. Hence,
$$\eta(X) = H(\bar F) = -\int_0^{\infty}\bar F(x)\ln\bar F(x)\,dx.$$
Substituting these expressions into (14) completes the proof. □
• Let X be an absolutely continuous non-negative random variable with
$$E(X) = 1.$$
Then
$$H(\bar F_{n,1},\bar F) \;\geq\; \sum_{j=0}^{n-1}\frac{\left(H(\bar F)\right)^{j+1}}{j!},$$
where $H(\bar F)$ is given by (3).
Proof.
Setting $k = 1$ in (14), we obtain
$$C_{1} = \int_0^{\infty}\bar F(x)\,dx = E(X) = 1.$$
Moreover,
$$\eta(X) = -\int_0^{\infty}\bar F(x)\ln\bar F(x)\,dx = H(\bar F),$$
which is the cumulative residual entropy. Substituting these values into (14) yields the desired result. □
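As promised above, the following sketch numerically illustrates the first property and the Jensen-type bound (14) (in its k = 1 form) for an assumed exponential parent; it is an illustration with assumed parameters, not part of the formal development.

```python
# Numerical illustration (assumed exponential parent, rate alpha) of the
# first property above and of the Jensen-type bound (14) with k = 1.
import numpy as np
from math import factorial
from scipy.integrate import quad

alpha, n, k = 1.0, 3, 2

def record_sf(x, n, k):
    u = k * alpha * x                      # -k ln Fbar(x) for the exponential
    return np.exp(-u) * sum(u**j / factorial(j) for j in range(n))

mu = lambda n, k: quad(lambda x: record_sf(x, n, k), 0, np.inf)[0]

# First property: H = sum_j (j+1)/k * (mu_{j+2,k} - mu_{j+1,k}).
h_mu = sum((j + 1) / k * (mu(j + 2, k) - mu(j + 1, k)) for j in range(n))
print(h_mu, n * (n + 1) / (2 * k**2 * alpha))   # agrees with Example 4: 1.5

# Bound (14) with k = 1: H(Fbar_{n,1}, Fbar) >= sum_j H(Fbar)^{j+1} / (j! E(X)^j).
cre, EX = 1 / alpha, 1 / alpha                  # exponential CRE and mean
lhs = n * (n + 1) / (2 * alpha)                 # H(Fbar_{n,1}, Fbar), Example 4
rhs = sum(cre**(j + 1) / (factorial(j) * EX**j) for j in range(n))
print(lhs >= rhs, lhs, rhs)                     # True, 6.0, 2.5
```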

4. Some Results on Stochastic Ordering

In this section, we prove some order properties of the cumulative inaccuracy measure for k-record values. First we provide the following definitions.
Definition 1.
A random variable X is said to be smaller than Y in the usual stochastic order, denoted by $X \leq_{st} Y$, if $\bar F(x) \leq \bar G(x)$ for all x, where $\bar F(x)$ and $\bar G(x)$ are the survival functions of X and Y, respectively.
Definition 2.
A random variable X is said to be smaller than Y in the likelihood ratio order, denoted by $X \leq_{lr} Y$, if $f_X(x)/g_Y(x)$ is non-increasing in x, where $f_X(x)$ and $g_Y(x)$ are the pdfs of X and Y, respectively.
Proposition 1.
If $E(X_{n,k})$ and $E(X)$ denote the expected values of the n-th k-record value and of the parent variable, respectively, and $X_{n,k} \leq_{st} X$, then
$$(i)\quad H(\bar F_{n,k}) \;\leq\; H(\bar F_{n,k},\bar F) - E(X_{n,k})\ln\frac{E(X_{n,k})}{E(X)};$$
$$(ii)\quad H(\bar F_{n,k}) \;\leq\; H(\bar F) - E(X_{n,k})\ln\frac{E(X_{n,k})}{E(X)}.$$
Here $H(\bar F_{n,k})$ and $H(\bar F)$ denote the cumulative residual entropy of the random variables $X_{n,k}$ and X, respectively.
Proof.
By the log-sum inequality, we have
$$\int_0^{\infty}\bar F_{n,k}(x)\ln\frac{\bar F_{n,k}(x)}{\bar F(x)}\,dx \;\geq\; \left(\int_0^{\infty}\bar F_{n,k}(x)\,dx\right)\ln\frac{\int_0^{\infty}\bar F_{n,k}(x)\,dx}{\int_0^{\infty}\bar F(x)\,dx} = E(X_{n,k})\ln\frac{E(X_{n,k})}{E(X)}.$$
Hence, using the previous inequality, we obtain
$$H(\bar F_{n,k}) = -\int_0^{\infty}\bar F_{n,k}(x)\ln\bar F_{n,k}(x)\,dx \;\leq\; -\int_0^{\infty}\bar F_{n,k}(x)\ln\bar F(x)\,dx - E(X_{n,k})\ln\frac{E(X_{n,k})}{E(X)} = H(\bar F_{n,k},\bar F) - E(X_{n,k})\ln\frac{E(X_{n,k})}{E(X)}.$$
Now, using $X_{n,k} \leq_{st} X$ (so that $\bar F_{n,k}(x) \leq \bar F(x)$) in the above inequality, we get
$$H(\bar F_{n,k}) \;\leq\; -\int_0^{\infty}\bar F_{n,k}(x)\ln\bar F(x)\,dx - E(X_{n,k})\ln\frac{E(X_{n,k})}{E(X)} \;\leq\; -\int_0^{\infty}\bar F(x)\ln\bar F(x)\,dx - E(X_{n,k})\ln\frac{E(X_{n,k})}{E(X)} = H(\bar F) - E(X_{n,k})\ln\frac{E(X_{n,k})}{E(X)}.$$
This proves the result. □
Proposition 2.
Let $X > 0$ have density $f(x)$ and cumulative distribution function $F(x)$. If $X_{n,k} \geq_{st} X$, then
$$H(\bar F_{n,k},\bar F) \;\geq\; C\,e^{H(f_{n,k},\,f)}.$$
Here $H(f_{n,k},f) = -\int_0^{\infty} f_{n,k}(x)\ln f(x)\,dx$ denotes the measure of inaccuracy between the n-th k-record value and the parent distribution (see Goel et al. [22]).
Proof.
Consider
$$\int_0^{\infty} f_{n,k}(x)\ln\frac{f(x)}{-\bar F_{n,k}(x)\ln\bar F(x)}\,dx \;\geq\; \ln\frac{1}{-\int_0^{\infty}\bar F_{n,k}(x)\ln\bar F(x)\,dx} = \ln\frac{1}{H(\bar F_{n,k},\bar F)}.$$
The inequality above results from the log-sum inequality [23]. Continuing, we get
$$\int_0^{\infty} f_{n,k}(x)\ln f(x)\,dx - \int_0^{\infty} f_{n,k}(x)\ln\left(-\bar F_{n,k}(x)\ln\bar F(x)\right)dx \;\geq\; -\ln H(\bar F_{n,k},\bar F),$$
or
$$H(f_{n,k},f) + \int_0^{\infty} f_{n,k}(x)\ln\left(-\bar F_{n,k}(x)\ln\bar F(x)\right)dx \;\leq\; \ln\left(H(\bar F_{n,k},\bar F)\right).$$
Using $X_{n,k} \geq_{st} X$ (so that $-\ln\bar F_{n,k}(x) \leq -\ln\bar F(x)$), we get
$$H(f_{n,k},f) + \int_0^{\infty} f_{n,k}(x)\ln\left(-\bar F_{n,k}(x)\ln\bar F_{n,k}(x)\right)dx \;\leq\; \ln\left(H(\bar F_{n,k},\bar F)\right).$$
Using the substitution $\bar F_{n,k}(x) = u$, we get
$$\int_0^{\infty} f_{n,k}(x)\ln\left(-\bar F_{n,k}(x)\ln\bar F_{n,k}(x)\right)dx = \int_0^{1}\ln(-u\ln u)\,du = c \ \text{(say)}.$$
Therefore, by inserting this value into the previous inequality, we get
$$H(f_{n,k},f) + c \;\leq\; \ln\left(H(\bar F_{n,k},\bar F)\right),$$
or
$$H(\bar F_{n,k},\bar F) \;\geq\; C\,e^{H(f_{n,k},\,f)},$$
where $C = e^{c}$ is a constant. □
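The constant $c$ admits a closed form: since $\int_0^1 \ln u\,du = -1$ and $\int_0^1 \ln(-\ln u)\,du = -\gamma$, with γ the Euler–Mascheroni constant, we get $c = -(1+\gamma)$ and hence $C = e^{-(1+\gamma)} \approx 0.2065$. A quick numerical check (a sketch; not part of the paper):

```python
# Verify c = int_0^1 ln(-u ln u) du = -(1 + gamma) numerically.
import numpy as np
from scipy.integrate import quad

c = quad(lambda u: np.log(-u * np.log(u)), 0, 1)[0]
print(c, -(1 + np.euler_gamma))   # both ~ -1.57722
print(np.exp(c))                  # C = e^c ~ 0.2065
```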
Proposition 3.
Let X be a non-negative random variable obeying the Weibull distribution; then
$$H(\bar F_{n,k}) \;\leq\; \frac{\Gamma^{\beta}\!\left(1+\frac{1}{\beta}\right)E_{f_{n,k}}\!\left(X^{\beta+1}\right)}{(\beta+1)\,E^{\beta}(X_{n,k})}.$$
Proof.
Let X follow the Weibull distribution [24] with reliability function $\bar F(x) = e^{-(\lambda x)^{\beta}}$.
From Proposition 1(i),
$$H(\bar F_{n,k}) \;\leq\; -\int_0^{\infty}\bar F_{n,k}(x)\ln\bar F(x)\,dx - E(X_{n,k})\ln\frac{E(X_{n,k})}{E(X)}.$$
For the Weibull distribution, this becomes
$$H(\bar F_{n,k}) \;\leq\; -E(X_{n,k})\ln\frac{E(X_{n,k})}{E(X)} + \int_0^{\infty}\bar F_{n,k}(x)(\lambda x)^{\beta}\,dx = -E(X_{n,k})\ln\frac{E(X_{n,k})}{E(X)} + \frac{\lambda^{\beta}}{\beta+1}\,E_{f_{n,k}}\!\left(X^{\beta+1}\right).$$
Let $E(X) = \mu = \int_0^{\infty}\bar F(x)\,dx = \Gamma\!\left(1+\frac{1}{\beta}\right)/\lambda$. Hence,
$$H(\bar F_{n,k}) \;\leq\; -E(X_{n,k})\ln\frac{E(X_{n,k})}{\mu} + \frac{E_{f_{n,k}}\!\left(X^{\beta+1}\right)\Gamma^{\beta}\!\left(1+\frac{1}{\beta}\right)}{(\beta+1)\,\mu^{\beta}}.$$
For fixed β, the right-hand side of the above inequality is minimized at
$$\mu^{\beta} = \frac{\beta\,E_{f_{n,k}}\!\left(X^{\beta+1}\right)\Gamma^{\beta}\!\left(1+\frac{1}{\beta}\right)}{(\beta+1)\,E(X_{n,k})}.$$
Inserting this value of μ into the preceding bound, we get
$$H(\bar F_{n,k}) \;\leq\; -E(X_{n,k})\ln\frac{E(X_{n,k})}{\mu} + \frac{E_{f_{n,k}}\!\left(X^{\beta+1}\right)\Gamma^{\beta}\!\left(1+\frac{1}{\beta}\right)}{(\beta+1)\,\mu^{\beta}} = \frac{E(X_{n,k})}{\beta}\ln\!\left(\frac{\beta\,E_{f_{n,k}}\!\left(X^{\beta+1}\right)\Gamma^{\beta}\!\left(1+\frac{1}{\beta}\right)}{(\beta+1)\,E^{\beta+1}(X_{n,k})}\right) + \frac{E(X_{n,k})}{\beta}.$$
Now we know that $\ln x \leq x - 1$; using this, we get
$$H(\bar F_{n,k}) \;\leq\; \frac{E(X_{n,k})}{\beta}\left(\frac{\beta\,E_{f_{n,k}}\!\left(X^{\beta+1}\right)\Gamma^{\beta}\!\left(1+\frac{1}{\beta}\right)}{(\beta+1)\,E^{\beta+1}(X_{n,k})} - 1\right) + \frac{E(X_{n,k})}{\beta} = \frac{\Gamma^{\beta}\!\left(1+\frac{1}{\beta}\right)E_{f_{n,k}}\!\left(X^{\beta+1}\right)}{(\beta+1)\,E^{\beta}(X_{n,k})}.$$
This completes the proof. □
Remark 3.
If we set $\beta = 1$ and $n = k = 1$ in the previous result, it reduces to
$$H(\bar F) \;\leq\; \frac{E(X^{2})}{2E(X)},$$
a bound obtained by Rao et al. [3].
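The bound in Remark 3 is easy to test numerically. The sketch below checks it for an assumed Weibull parent, using the standard Weibull moments $E(X^{m}) = \Gamma(1+m/\beta)/\lambda^{m}$; the parameter values are illustrative.

```python
# Check of the bound in Remark 3, H(Fbar) <= E(X^2)/(2 E(X)), for an assumed
# Weibull parent with survival function Fbar(x) = exp(-(lam*x)^beta).
import numpy as np
from math import gamma
from scipy.integrate import quad

lam, beta = 1.0, 2.0
Fbar = lambda x: np.exp(-(lam * x)**beta)

# CRE by quadrature: -Fbar ln Fbar = Fbar * (lam*x)^beta; the finite upper
# limit avoids floating-point underflow in the tail.
h = quad(lambda x: Fbar(x) * (lam * x)**beta, 0, 25)[0]

EX = gamma(1 + 1 / beta) / lam
EX2 = gamma(1 + 2 / beta) / lam**2
print(h, EX2 / (2 * EX), h <= EX2 / (2 * EX))   # 0.443 <= 0.564 -> True
```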
Proposition 4.
Suppose that the non-negative random variable X has a decreasing hazard rate; then
$$H(\bar F_{n+1,k},\bar F) \;\geq\; H(\bar F_{n,k},\bar F).$$
Proof.
Consider the pdfs of two consecutive k-record values, $f_{n,k}(x)$ and $f_{n+1,k}(x)$. Using (5), we get
$$\frac{f_{n,k}(x)}{f_{n+1,k}(x)} = \frac{n}{-k\ln\bar F(x)},$$
which is a decreasing function of x. This implies that $X_{n,k} \leq_{lr} X_{n+1,k}$, and therefore $X_{n,k} \leq_{st} X_{n+1,k}$; that is, $\bar F_{n,k}(x) \leq \bar F_{n+1,k}(x)$ (for more details, one can refer to Shaked and Shanthikumar [25]). Equivalently, $E(\psi(X_{n,k})) \leq E(\psi(X_{n+1,k}))$ for all increasing functions ψ for which these expectations exist.
Now, if X has a decreasing hazard rate $\lambda_F(x)$, then $1/\lambda_F(x)$ is an increasing function. Therefore, by the above,
$$E\!\left[\frac{1}{\lambda_F(X_{n,k})}\right] \;\leq\; E\!\left[\frac{1}{\lambda_F(X_{n+1,k})}\right].$$
From (9), we can then see that
$$H(\bar F_{n,k},\bar F) \;\leq\; H(\bar F_{n+1,k},\bar F).$$
Here $\lambda_F(X_{n,k})$ and $\lambda_F(X_{n+1,k})$ denote the hazard rate evaluated at $X_{n,k}$ and $X_{n+1,k}$, respectively. This completes the proof. □

5. Cumulative Inaccuracy for Some Specific Distributions

In this section, we give a lemma providing a simplified expression for finding the cumulative inaccuracy measure for various distributions, and then we give some examples based on it.
The symbolic expressions and closed-form derivations of the inaccuracy measures $H(\bar F_{n,k},\bar F)$, including Equations (29)–(32), were performed using the Maple computer algebra system (Maplesoft, Version 2025.1). Maple's symbolic integration, summation, and simplification capabilities were used to collapse the sums into closed-form expressions, which were cross-checked via numerical substitution. The procedures closely follow the recommendations documented in the Maple Programming Guide [26].
Lemma 1.
Consider a random variable X with survival function $\bar F(x)$ such that $F^{-1}$ exists; then the cumulative inaccuracy measure between the k-record values and the parent distribution is given by
$$H(\bar F_{n,k},\bar F) = \sum_{j=0}^{n-1}\frac{k^{j}}{j!}\int_0^{\infty}\frac{u^{j+1}\,e^{-u(k+1)}}{f\!\left(F^{-1}\!\left(1-e^{-u}\right)\right)}\,du. \tag{28}$$
Proof.
From (8),
$$H(\bar F_{n,k},\bar F) = \sum_{j=0}^{n-1}\int_0^{\infty}\frac{k^{j}}{j!}\left(\bar F(x)\right)^{k}\left(-\ln\bar F(x)\right)^{j+1}\,dx.$$
Substituting $-\ln\bar F(x) = u$, so that $dx = e^{-u}\,du\,/\,f\!\left(F^{-1}\!\left(1-e^{-u}\right)\right)$, into the previous equation, we get
$$H(\bar F_{n,k},\bar F) = \sum_{j=0}^{n-1}\frac{k^{j}}{j!}\int_0^{\infty}\frac{u^{j+1}\,e^{-ku}\,e^{-u}}{f\!\left(F^{-1}\!\left(1-e^{-u}\right)\right)}\,du = \sum_{j=0}^{n-1}\frac{k^{j}}{j!}\int_0^{\infty}\frac{u^{j+1}\,e^{-u(k+1)}}{f\!\left(F^{-1}\!\left(1-e^{-u}\right)\right)}\,du. \qquad \square$$
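Lemma 1 translates directly into code. The sketch below implements (28) for a user-supplied density f and quantile function $F^{-1}$, and reproduces the exponential closed form of Example 4 below; the truncation of the u-integral is a numerical convenience, not part of the lemma.

```python
# Implementation sketch of the Lemma 1 representation (28) for a
# user-supplied density f and quantile function Finv (= F^{-1}).
import numpy as np
from math import factorial
from scipy.integrate import quad

def cri_k_records(n, k, f, Finv, upper=35.0):
    """H(Fbar_{n,k}, Fbar) via (28); `upper` truncates the u-integral,
    where e^{-u(k+1)} makes the tail negligible."""
    total = 0.0
    for j in range(n):
        integrand = lambda u, j=j: (u**(j + 1) * np.exp(-u * (k + 1))
                                    / f(Finv(1 - np.exp(-u))))
        total += k**j / factorial(j) * quad(integrand, 0, upper)[0]
    return total

# Exponential parent, rate alpha: Example 4 gives the closed form n(n+1)/(2 k^2 alpha).
alpha = 2.0
f = lambda x: alpha * np.exp(-alpha * x)
Finv = lambda p: -np.log(1 - p) / alpha
print(cri_k_records(3, 2, f, Finv), 3 * 4 / (2 * 2**2 * alpha))  # both 0.75
```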
Example 1.
Consider the finite range distribution with pdf $f(x) = \frac{a}{b}\left(1-\frac{x}{b}\right)^{a-1}$, $a > 1$, $0 \leq x \leq b$, and distribution function $F(x) = 1-\left(1-\frac{x}{b}\right)^{a}$.
• Then $F^{-1}\!\left(1-e^{-u}\right) = b\left(1-e^{-u/a}\right)$, which gives $f\!\left(F^{-1}\!\left(1-e^{-u}\right)\right) = \frac{a}{b}\,e^{-u(a-1)/a}$. Therefore, using (28), we get
$$H(\bar F_{n,k},\bar F) = \sum_{j=0}^{n-1}\frac{k^{j}}{j!}\int_0^{\infty}\frac{u^{j+1}\,e^{-u(k+1)}}{f\!\left(F^{-1}\!\left(1-e^{-u}\right)\right)}\,du = \sum_{j=0}^{n-1}\frac{k^{j}\,b}{j!\,a}\int_0^{\infty} u^{j+1}\,e^{-u(k+1)}\,e^{u(a-1)/a}\,du = \sum_{j=0}^{n-1}\frac{k^{j}\,b}{j!\,a}\int_0^{\infty} u^{j+1}\,e^{-u\left(k+\frac{1}{a}\right)}\,du.$$
Now using the substitution $u\left(k+\frac{1}{a}\right) = t$, we have
$$H(\bar F_{n,k},\bar F) = \sum_{j=0}^{n-1}\frac{k^{j}\,b}{j!\,a\left(k+\frac{1}{a}\right)^{j+2}}\int_0^{\infty} t^{j+1}e^{-t}\,dt = \frac{b}{a}\sum_{j=0}^{n-1}\frac{k^{j}\,(j+1)}{\left(k+\frac{1}{a}\right)^{j+2}} = ab\left[1-\frac{ka+n+1}{ka+1}\left(\frac{ka}{ka+1}\right)^{n}\right]. \tag{29}$$
Here we used $\int_0^{\infty} t^{j+1}e^{-t}\,dt = \Gamma(j+2) = (j+1)!$.
In particular, if $n = 2$ and $k = 1$, we obtain the inaccuracy measure between the second record value and the parent distribution as follows:
$$H(\bar F_{2,1},\bar F) = \frac{b}{a}\left[\frac{1}{\left(1+\frac{1}{a}\right)^{2}}+\frac{2}{\left(1+\frac{1}{a}\right)^{3}}\right] = \frac{ab\,(3a+1)}{(a+1)^{3}}.$$
Example 2.
For the uniform distribution, putting $a = 1$ in (29) yields the corresponding inaccuracy measure:
$$H(\bar F_{n,k},\bar F) = b\sum_{j=0}^{n-1}\frac{k^{j}\,(j+1)}{(k+1)^{j+2}} = b\left[1-\frac{k+n+1}{k+1}\left(\frac{k}{k+1}\right)^{n}\right]. \tag{30}$$
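A small numerical sketch (with illustrative parameter values) confirming that (30) agrees with the finite range closed form (29) at $a = 1$ and with the term-by-term sum:

```python
# Sanity check of the uniform-distribution expression (30) against the
# general finite-range form (29) with a = 1; parameters are illustrative.
b, n, k = 2.0, 4, 3
a = 1.0

# (29): closed form for the finite range distribution.
r = k * a / (k * a + 1)
h_finite_range = a * b * (1 - (k * a + n + 1) / (k * a + 1) * r**n)

# (30): uniform closed form.
h_uniform = b * (1 - (k + n + 1) / (k + 1) * (k / (k + 1))**n)

# Term-by-term sum from (30) as an extra check.
h_sum = b * sum(k**j * (j + 1) / (k + 1)**(j + 2) for j in range(n))
print(h_finite_range, h_uniform, h_sum)   # all three equal (0.734375 here)
```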
Example 3.
If X is a random variable following a Weibull distribution with pdf $f(x) = \alpha\beta x^{\beta-1}e^{-\alpha x^{\beta}}$ for $x > 0$, $\alpha > 0$, $\beta > 0$, and survival function $\bar F(x) = e^{-\alpha x^{\beta}}$, then $F^{-1}\!\left(1-e^{-u}\right) = \left(\frac{u}{\alpha}\right)^{1/\beta}$. Therefore, putting these values into (28), the inaccuracy measure comes out as
$$H(\bar F_{n,k},\bar F) = \sum_{j=0}^{n-1}\frac{k^{j}\,\Gamma\!\left(j+1+\frac{1}{\beta}\right)}{j!\left(\alpha\beta^{\beta}\right)^{1/\beta}\,k^{\,j+1+\frac{1}{\beta}}} = \frac{\beta\,\Gamma\!\left(\frac{n\beta+\beta+1}{\beta}\right)}{(\beta+1)\,(n-1)!\left(\alpha\beta^{\beta}\right)^{1/\beta}\,k^{\frac{\beta+1}{\beta}}}. \tag{31}$$
Example 4.
If X is an exponentially distributed random variable, then putting $\beta = 1$ in (31) gives the inaccuracy measure corresponding to the exponential distribution:
$$H(\bar F_{n,k},\bar F) = \sum_{j=0}^{n-1}\frac{j+1}{k^{2}\,\alpha} = \frac{n(n+1)}{2k^{2}\,\alpha}. \tag{32}$$
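The two forms in (31), and the reduction to (32) at $\beta = 1$, can be checked in a few lines (parameter values are illustrative):

```python
# Check that the partial-sum and closed-form versions of the Weibull
# expression (31) agree, and that beta = 1 recovers (32).
from math import gamma, factorial

alpha, beta, n, k = 1.5, 2.0, 3, 2
scale = (alpha * beta**beta)**(1 / beta)   # (alpha * beta^beta)^(1/beta)

h_sum = sum(gamma(j + 1 + 1 / beta) / (factorial(j) * scale * k**(1 + 1 / beta))
            for j in range(n))
h_closed = (beta * gamma(n + 1 + 1 / beta)
            / ((beta + 1) * factorial(n - 1) * scale * k**((beta + 1) / beta)))
print(h_sum, h_closed)     # equal (~0.5596 here)

# beta = 1: the exponential special case (32), n(n+1)/(2 k^2 alpha).
h_exp = sum((j + 1) / (alpha * k**2) for j in range(n))
print(h_exp, n * (n + 1) / (2 * k**2 * alpha))   # both 1.0
```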
Remark 4.
If we put $k = 1$, then $H(\bar F_{n,k},\bar F)$ becomes $H(\bar F_{n},\bar F)$, the cumulative residual inaccuracy measure between the n-th record value and the parent distribution; if, in addition, $n = 1$, it reduces to the cumulative residual entropy given by Rao et al. [3].

6. Application to Extremal Quantum Uncertainty

The framework introduced for cumulative residual inaccuracy (CRI) under k-record statistics admits a direct extension to quantify informational divergence in extremal quantum phenomena. Quantum information processing systems frequently exhibit rare, high-impact deviations such as maximum gate errors, decoherence spikes, or syndrome bursts during error-corrected qubit operations [27,28,29]. Such deviations, structurally analogous to k-record values, possess disproportionate informational weight within the noise ensemble [30].
Let $\bar F_{n,k}(x)$ denote the survival function of the n-th k-record in the empirical quantum error distribution and $\bar F(x)$ the reference survival function under the baseline decoherence model. The residual inaccuracy functional
$$H(\bar F_{n,k},\bar F) = -\int_0^{\infty}\bar F_{n,k}(x)\ln\bar F(x)\,dx \tag{33}$$
acts as a scalar quantifier of the discord between the statistical extremality observed experimentally and the theoretically predicted uncertainty field. The value of $H(\bar F_{n,k},\bar F)$ increases as the observed tail deviates from the expected exponential or sub-exponential structure characteristic of nominal quantum noise processes [31,32].
In practice, the empirical parameters constituting $\bar F_{n,k}$ may be derived from extremal quantum error syndromes aggregated across multiple correction cycles or tomography datasets [33]. The reference model $\bar F(x)$, frequently exponential or Weibull, aligns with conventional assumptions for decoherence and relaxation dynamics in open quantum systems [34,35]. Evaluating (33) thus measures the incremental uncertainty generated by discrepancies between the empirical and modeled error extremality.
For a simple exponential case, where the empirical and theoretical densities are defined as $f(x) = \lambda e^{-\lambda x}$ and $g(x) = \lambda_0 e^{-\lambda_0 x}$, respectively, the CRI reduces to
$$H(f,g) = -\log\lambda_0 + \frac{\lambda_0}{\lambda}, \tag{34}$$
illustrating that greater extremal variability ( λ < λ 0 ) produces heightened uncertainty, indicating deviation from the designed fault-tolerant threshold [27,28].
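As an illustration of how (34) might be used as a diagnostic, the sketch below fits an exponential rate to synthetic "error magnitude" data and compares it against an assumed design rate; all names, rates, and data are hypothetical.

```python
# Illustrative sketch of the exponential diagnostic (34): fit a rate lam to
# synthetic error-magnitude data and compare with an assumed design rate lam0.
import numpy as np

rng = np.random.default_rng(42)
errors = rng.exponential(scale=1.0 / 0.8, size=500)  # observed magnitudes, true rate 0.8

lam = 1.0 / errors.mean()         # maximum-likelihood rate of the empirical model
lam0 = 2.0                        # baseline (design) rate

cri = -np.log(lam0) + lam0 / lam  # closed form (34)
print(lam, cri)                   # lam < lam0: heavier empirical tail inflates the CRI
```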
The established formulation reframes the record-based uncertainty as an entropic performance indicator for quantum devices. Unlike mean-based entropies that track average uncertainty, the CRI integrates residual deviation structures, capturing both the persistence and asymmetry of extremal quantum events. This transition from expectation-based to tail-based informational analysis enables refined assessment of quantum reliability, measurement-chain stability, and the adequacy of noise models in practical implementations [36,37,38].
Thus, the cumulative residual inaccuracy measure derived here establishes a pragmatic statistical–entropic bridge between record value information theory and quantum uncertainty quantification [37,39]. It provides a continuous, interpretable diagnostic of noise asymmetry and extremal behavior in quantum devices—an attractive analytical tool for performance characterization, fault-tolerant design, and risk-aware quantum control.

Relation to Entropic Uncertainty and the Logarithmic Schrödinger Equation

The CRI measure complements the broader framework of entropic uncertainty relations (EURs) in quantum mechanics, which quantify intrinsic limits on the joint knowledge of non-commuting observables through entropy sums or bounds [40,41,42]. The EUR, or Hirschman uncertainty, is defined as the sum of the temporal and spectral Shannon entropies; Heisenberg's uncertainty principle can be expressed as a lower bound on this sum [43,44]. A related and insightful formulation arises from the logarithmic Schrödinger equation [45], which introduces a nonlinear quantum evolution incorporating the Shannon entropy functional into the dynamics. This equation underpins the Everett–Hirschman entropic uncertainty relation [42,44,46,47], which bounds the sum of the entropies of a quantum state's position and momentum probability distributions, providing a stronger uncertainty bound than classical variance-based principles. This has invaluable applications in quantum memory and quantum information [40,47,48]. Moreover, the logarithmic Schrödinger equation has applications in fundamental physics ranging from superfluids to modeling the potential of the Higgs boson [49,50,51].
While traditional entropic uncertainty relations operate in the phase space domain, assessing the incompatibility of observables, the CRI extends uncertainty quantification to residual tail distributions manifested in record statistics from sequences of quantum measurements or error syndromes. This links the entropic cost of observed extremal deviations to fundamental quantum limits framed by logarithmic nonlinearities in wavefunction evolution, thereby establishing a bridge from entropic uncertainty at the foundational level to operational uncertainty in quantum noise and error landscapes.
Integrating these perspectives enriches the interpretation of the CRI as an entropy-informed measure capturing layered quantum uncertainty from intrinsic measurement indeterminacy characterized by the Everett–Hirschman framework to emergent deviations evidenced in the cumulative residual inaccuracy of extremal events. This multiscale entropic viewpoint informs both the physical and statistical analysis of next-generation quantum information devices.

7. Conclusions

Record values and k-record values frequently arise in numerous practical scenarios. Examples include athletic competitions [52,53], hydrology [54,55], and weather forecasts [56]. Furthermore, Kerridge’s measure of inaccuracy [2] serves as a valuable tool for assessing the discrepancy between two distributions [2,57]. Therefore, in this communication, we present the cumulative residual inaccuracy between the distributions of the k-record values and the parent random variable. Furthermore, we derive several properties of this measure, including stochastic ordering. To reduce the computational effort required to determine the inaccuracy for various distributions, we offer a simplified expression for the proposed inaccuracy measure and demonstrate its application to several standard distributions.

Author Contributions

Conceptualization, methodology, and writing, R.G.—the manuscript is largely based on her thesis work; validation, V.K.; writing—original draft preparation, S.V.; writing—review and editing, T.C.S.; software, T.C.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CRE  Cumulative Residual Entropy
CRI  Cumulative Residual Inaccuracy
DCI  Dynamic Cumulative Inaccuracy
pdf  Probability Density Function

References

1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–432.
2. Kerridge, D.F. Inaccuracy and inference. J. R. Stat. Soc. Ser. B 1961, 23, 184–194.
3. Rao, M.; Chen, Y.; Vemuri, B.C.; Wang, F. Cumulative residual entropy: A new measure of information. IEEE Trans. Inf. Theory 2004, 50, 1220–1228.
4. Asadi, M.; Zohrevand, Y. On the dynamic cumulative residual entropy. J. Stat. Plann. Inference 2007, 137, 1931–1941.
5. Di Crescenzo, A.; Longobardi, M. On cumulative entropies. J. Stat. Plann. Inference 2009, 139, 4072–4087.
6. Sunoj, S.M.; Linu, M.N. Dynamic cumulative residual Rényi's entropy. Statistics 2012, 46, 41–56.
7. Taneja, H.C.; Kumar, V. On dynamic cumulative residual inaccuracy measure. In Proceedings of the World Congress of Engineering (WCE), London, UK, 4–6 July 2012; Volume I.
8. Dziubdziela, W.; Kopocinski, B. Limiting properties of the kth record values. Appl. Math. 1976, 15, 187–190.
9. Eskandarzadeh, M.; Di Crescenzo, A.; Tahmasebi, S. Cumulative Measure of Inaccuracy and Mutual Information in k-th Lower Record Values. Mathematics 2019, 7, 175.
10. Kamps, U. A concept of generalized order statistics. J. Stat. Plann. Inference 1995, 48, 1–23.
11. Berred, M. k-Record values and the extreme-value index. J. Stat. Plann. Inference 1995, 45, 49–63.
12. Fashandi, M.; Ahmadi, J. Characterizations of symmetric distributions based on Rényi entropy. Stat. Probab. Lett. 2012, 82, 798–804.
13. Arnold, B.C.; Balakrishnan, N.; Nagaraja, H.N. Records; Wiley: New York, NY, USA, 1998.
14. Psarrakos, G.; Navarro, J. Generalized cumulative residual entropy and record values. Metrika 2013, 76, 623–640.
15. Krakowski, M. The relevation transform and a generalization of the gamma distribution function. RAIRO-Oper. Res.-Rech. Opérationnelle 1973, 7, 107–120.
16. Sankaran, P.G.; Dileep Kumar, M. Reliability properties of proportional hazards relevation transform. Metrika 2019, 82, 441–456.
17. Tahmasebi, S.; Eskandarzadeh, M. Generalized cumulative entropy based on kth lower record values. Stat. Probab. Lett. 2017, 126, 164–172.
18. Goel, R.; Taneja, H.C.; Kumar, V. Measure of entropy for past lifetime and record statistics. Phys. A 2018, 503, 623–631.
19. Weisstein, E.W. Hazard Function. From MathWorld—A Wolfram Web Resource. 2025. Available online: https://mathworld.wolfram.com/HazardFunction.html (accessed on 6 December 2025).
20. Liberto, D. Hazard Rate: Definition, How to Calculate, and Example. 2025. Available online: https://www.investopedia.com/terms/h/hazard-rate.asp (accessed on 29 November 2025).
21. Jensen, J.L.W.V. Sur les fonctions convexes et les inégalités entre les valeurs moyennes. Acta Math. 1906, 30, 175–193.
22. Goel, R.; Taneja, H.C.; Kumar, V. Measure of inaccuracy and k-record statistics. Bull. Calcutta Math. Soc. 2018, 110, 151–166.
23. Dannan, F.M.; Neff, P.; Thiel, C. On the sum of squared logarithms inequality and related inequalities. J. Math. Inequal. 2016, 10, 1–17.
24. Cameron, A.C.; Trivedi, P.K. Preface. In Microeconometrics; Cambridge University Press: Cambridge, UK, 2005; pp. xxi–xxii.
25. Shaked, M.; Shanthikumar, J.G. Stochastic Orders; Springer: New York, NY, USA, 2007.
26. Bernardin, L.; Chin, P.; DeMarco, P.; Geddes, K.O.; Hare, D.E.G.; Heal, K.M.; Labahn, G.; May, J.P.; McCarron, J.; Monagan, M.B.; et al. Maple Programming Guide; Maplesoft, Inc.: Toronto, ON, Canada, 2012.
27. Kribs, D.W.; Pasieka, A.; Życzkowski, K. Entropy of a Quantum Error Correction Code; World Scientific: Singapore, 2008.
28. Nielsen, M.A. Information-Theoretic Approach to Quantum Error Correction. Proc. R. Soc. A 1998, 454, 277–304.
29. Cafaro, C.; van Loock, P. An Entropic Analysis of Approximate Quantum Error Correction. Phys. A 2014, 404, 34–46.
30. Bueno, V.D.C.; Balakrishnan, N. A Cumulative Residual Inaccuracy Measure for Coherent Systems; Cambridge University Press: Cambridge, UK, 2020.
31. Choi, S.; Bao, Y.; Qi, X.; Altman, E. Quantum Error Correction in Scrambling Dynamics and Open Systems. Phys. Rev. Lett. 2020, 125, 030505.
32. Li, Y. Statistical Mechanics of Quantum Error Correcting Codes. Phys. Rev. B 2021, 103, 104306.
33. Onorati, E. Fitting Quantum Noise Models to Tomography Data. Quantum J. 2023, 7, 1197.
34. Barrios, R. Exponentiated Weibull Fading Channel Model in Free-Space Optics. IEEE Trans. Commun. 2013, 61, 2594–2602.
35. Preskill, J. Quantum Error Correction. Caltech Lecture Notes for Physics 229: Quantum Information and Computation, Chapter 7. Available online: https://www.preskill.caltech.edu/ph229/notes/chap7.pdf (accessed on 21 December 2025).
36. Golse, F.; Jin, S.; Liu, N. Quantum Algorithms for Uncertainty Quantification. arXiv 2022, arXiv:2209.11220.
37. Wirsching, G. Quantum-Inspired Uncertainty Quantification. Front. Comput. Sci. 2022, 4, 662632.
38. Engineering and Physical Sciences Research Council. Quantum Uncertainty Quantification (QUQ) Project Overview; University of Exeter: Exeter, UK, 2025.
39. Halpern, N.; Hayden, P. Entropic uncertainty relations in quantum information theory. arXiv 2019, arXiv:1909.08438.
40. Berta, M.; Christandl, M.; Colbeck, R.; Renes, J.M.; Renner, R. The uncertainty principle in the presence of quantum memory. Nat. Phys. 2010, 6, 659–662.
41. Wehner, S.; Winter, A. Entropic uncertainty relations—A survey. New J. Phys. 2010, 12, 025009.
42. Hirschman, I.I. A Note on Entropy. Am. J. Math. 1957, 79, 152–156.
43. Beckner, W. Inequalities in Fourier Analysis. Ann. Math. 1975, 102, 159–182.
44. Białynicki-Birula, I.; Mycielski, J. Uncertainty relations for information entropy in wave mechanics. Commun. Math. Phys. 1975, 44, 129–132.
45. Białynicki-Birula, I.; Mycielski, J. Nonlinear wave mechanics. Ann. Phys. 1976, 100, 62–93.
46. Everett, H. Generalized Lagrange Multipliers in the Problem of Boltzmann's Entropy. Ann. Math. 1957, 66, 591–600.
47. Maassen, H.; Uffink, J.B.M. Generalized entropic uncertainty relations. Phys. Rev. Lett. 1988, 60, 1103–1106.
48. Ding, Z.Y.; Yang, H.; Wang, D.; Yuan, H.; Yang, J.; Ye, L. Experimental investigation of entropic uncertainty relations and coherence uncertainty relations. Phys. Rev. A 2020, 101, 032101.
49. Zloshchastiev, K.G. Spontaneous symmetry breaking and mass generation as built-in phenomena in logarithmic nonlinear quantum theory. Acta Phys. Pol. B 2011, 42, 261.
50. Zloshchastiev, K.G. Origins of logarithmic nonlinearity in the theory of fluids and beyond. Int. J. Mod. Phys. B 2025, 39, 2530008.
51. Scott, T.C.; Glasser, M.L. Kink Soliton Solutions in the Logarithmic Schrödinger Equation. Mathematics 2025, 13, 827.
52. Athletics Australia. Australian Para Athletics Records, 2025. Available online: https://www.athletics.com.au/results-records-toplists/records/ (accessed on 21 December 2025).
53. USA Track & Field. Record Applications. Available online: https://usatf.org/resources/statistics/record-applications (accessed on 21 December 2025).
54. Teegavarapu, R. Statistical Analysis of Hydrologic Variables; Technical Report; Tufts University: Medford, MA, USA, 2019.
55. Vogel, R.M. The value of streamflow record augmentation procedures for estimating low-flow statistics. J. Hydrol. 1991, 127, 15–23.
56. Dey, R. Weather forecasting using Convex Hull & K-Means clustering. arXiv 2015, arXiv:1501.06456.
57. Balakrishnan, N.; Buono, F.; Cali, C.; Longobardi, M. Dispersion indices based on Kerridge inaccuracy measure and applications. Comput. Stat. 2023, 53, 5574–5592.
