Article

Impact of Measurement Error on Residual Extropy Estimation

by Radhakumari Maya 1, Muhammed Rasheed Irshad 1, Febin Sulthana 1 and Maria Longobardi 2,*

1 Department of Statistics, Cochin University of Science and Technology, Cochin 682 022, Kerala, India
2 Dipartimento di Matematica e Applicazioni, Università degli Studi di Napoli Federico II, 80126 Naples, Italy
* Author to whom correspondence should be addressed.
Axioms 2025, 14(9), 672; https://doi.org/10.3390/axioms14090672
Submission received: 28 July 2025 / Revised: 25 August 2025 / Accepted: 26 August 2025 / Published: 31 August 2025

Abstract

In scientific analyses, measurement errors in data can significantly distort statistical inferences, and ignoring them may lead to biased and invalid results. This study focuses on the estimation of the residual extropy function in the presence of measurement errors. We develop an estimator of the residual extropy function and establish its asymptotic properties. A comprehensive simulation study evaluates the performance of the proposed estimator under various error scenarios, while its practical utility and precision are demonstrated through an application to a real-world data set.

1. Introduction

Entropy, as introduced by [1], is frequently used to measure the uncertainty in the probability distribution of a random variable (rv). The concept of entropy plays a fundamental role in statistical theory and its applications (see [2,3]). A well-known application of entropy in statistics is the test for normality, based on the property that the normal distribution has the highest entropy among all continuous distributions with a given variance (see [4]). A recent review of Shannon entropy and related measures such as the Rényi and Tsallis entropies can be found in [5]. The differential entropy of a non-negative continuous rv X with probability density function (pdf) f_X(x) is defined as
D(X) = -\int_0^{+\infty} f_X(x) \log f_X(x)\,dx.
When a unit has survived up to a specific age t, understanding the distribution of its remaining lifetime is especially important in reliability and survival analysis. To deal with this, Ref. [6] introduced the notion of residual entropy. Later, Ref. [7] suggested a way to describe the lifetime distribution by using conditional Shannon entropy. Building on these ideas, Refs. [6,7] studied certain ordering and aging characteristics of lifetime distributions. Ref. [8] expanded some results presented by [9]. Ref. [10] characterized a distribution using the functional connections between residual entropy and the hazard rate function. Refs. [11,12,13] studied different generalizations of Ebrahimi's measure. For a non-negative rv X representing the lifetime of a component, the residual entropy function is defined as
D_t(X) = -\int_t^{+\infty} \frac{f_X(x)}{\bar{F}_X(t)} \ln\!\left(\frac{f_X(x)}{\bar{F}_X(t)}\right) dx,
where \bar{F}_X(t) is the survival function (sf).
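For instance (a standard computation added here for illustration), if X is exponential with rate \lambda, the memoryless property makes the residual entropy constant in t:
D_t(X) = -\int_t^{+\infty} \lambda e^{-\lambda (x-t)} \ln\!\left(\lambda e^{-\lambda (x-t)}\right) dx = 1 - \ln \lambda.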
The measure called extropy is considered the complementary dual of entropy. For a non-negative and absolutely continuous rv X, which represents the lifetime of a component with pdf f_X(x), the differential extropy of X, as introduced by [14], is defined as
M(X) = -\frac{1}{2}\int_0^{+\infty} f_X^2(x)\,dx.
This measure provides an alternative way to assess the concentration or spread of the pdf over the domain of X, offering a different perspective on the uncertainty or distribution associated with the rv’s lifetime. By utilizing extropy, researchers can quantify the relative uncertainty of one variable compared to another, which is particularly beneficial in areas such as reliability engineering, information theory, and decision-making processes.
Extropy serves as a complementary measure to entropy for quantifying uncertainty in rvs, and it is particularly useful for comparing the uncertainties of two rvs. The uncertainty in an rv X can be quantified by considering the difference between the outcomes of two independent repetitions of the same experiment. Let X_1 and X_2 denote such outcomes; the difference X_1 - X_2 then reflects the uncertainty associated with X, and its pdf is given by
d(u) = \int_{-\infty}^{+\infty} f_X(x)\, f_X(x-u)\,dx.
It follows that the chance of two independent observations of X (nearly) coinciding is governed by d(0) = -2M(X). Therefore, if the extropy of X_1 is smaller than that of another rv X_2, i.e., M(X_1) < M(X_2), then two independent copies of X_1 are more likely to coincide, so X_1 possesses less uncertainty than X_2 (see [15]); further foundational studies on extropy can be found in [16]. As the concept of M(X) is not suitable for an rv that has already persisted for a certain period, Ref. [17] proposed the concept of residual extropy, defined as
M_t(X) = -\frac{1}{2}\int_t^{+\infty} \left(\frac{f_X(x)}{\bar{F}_X(t)}\right)^2 dx,
where \bar{F}_X(\cdot) is the sf.
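Continuing the exponential illustration above (again a standard computation, not from the original), both the extropy and the residual extropy of an exponential lifetime with rate \lambda reduce to the same constant, reflecting memorylessness:
M(X) = -\frac{1}{2}\int_0^{+\infty} \lambda^2 e^{-2\lambda x}\,dx = -\frac{\lambda}{4}, \qquad M_t(X) = -\frac{1}{2}\int_t^{+\infty} \lambda^2 e^{-2\lambda (x-t)}\,dx = -\frac{\lambda}{4}.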
Residual extropy was introduced as a tool to assess the uncertainty associated with a rv. This measure has gained growing importance in areas such as survival analysis, reliability analysis, and actuarial science, as it provides crucial insights into the behavior and dynamics of systems and processes over time. In commercial or scientific fields such as astronomical measurements of heat distribution in galaxies, extropic analysis offers valuable insights worthy of further exploration (see [18,19]). The concept of residual extropy has been studied in various contexts, including the work by [20], which focuses on k-record values. Ref. [21] proposed a kernel-based estimator for the residual extropy function under the α -mixing dependence condition.
Data gathered for statistical analysis in various scientific disciplines often include errors. In meteorology, weather data often contain inaccuracies caused by instrument limitations or unpredictable atmospheric changes. In the manufacturing industry, production data can contain errors due to problems with machines or mistakes made during the inspection process. Similarly, in agriculture, the measurement of crop yields can be unreliable because of differences in how samples are collected or changes in weather and other environmental conditions. These kinds of inaccuracies are known as measurement errors, and they occur when the observed data differ from the actual or true values. This can happen for many reasons, such as faulty equipment, poor measuring tools, changes in the environment, or simple human mistakes during data collection. To deal with these issues, measurement error models are used. These models help us understand and correct the inaccuracies in the data, making it possible to obtain more accurate and meaningful results. They are especially important in fields like manufacturing, agriculture, medicine, and the social sciences, where reliable data are crucial for making decisions and drawing conclusions.
The classical error model is used when we want to determine X but cannot do so directly because of various types of measurement error. For example, measuring systolic blood pressure (SBP) can be affected by daily and seasonal variation, and errors may arise from machine recording issues, how the measurement is taken, and so on. In such cases, it is often reasonable to assume an additive error model. For additional details and examples of classical error models, refer to [22]. Additive measurement error models have been widely studied owing to their significance in handling contaminated data. Consider the case where the variable of interest X cannot be observed directly. Instead, an independent sample Y_1, Y_2, \ldots, Y_n is drawn according to the model
Y_i = X_i + \omega_i, \quad i = 1, 2, \ldots, n,
where the measurement error \omega is independent of X. The primary goal is to estimate f_X(x), the unknown pdf of X. In this model, the distribution of \omega is assumed to be known exactly; however, this may not always be possible. In such cases, the distribution of \omega can be estimated from replicated measurements, as discussed by [23]. When replicates are unavailable, estimating the measurement error distribution becomes more challenging. One approach is to estimate the error distribution from an independent validation data set, in which a subset of the data includes both the contaminated observations and their corresponding error-free measurements; from this, the distribution of the measurement error can be estimated directly (see [22]). This strategy is particularly useful in industrial applications. Additionally, simulation-based deconvolution techniques have been proposed to estimate the error distribution even when it is unknown, such as the SIMEX (simulation-extrapolation)-based approach introduced by [24].
In the context of extropy, Ref. [25] extended the theory of extropy estimation to handle data contaminated by additive measurement error. Building on this, Ref. [26] developed the theory for past extropy estimation under measurement error, offering an estimation framework supported by asymptotic theory and simulation evidence. However, residual extropy, despite its practical relevance, has not yet been studied under the presence of measurement error.
This lack of literature presents a critical gap: while uncertainty about the future is often the focus in predictive tasks, we currently lack statistical tools to estimate residual extropy when the observed data are contaminated. Given the pervasiveness of measurement error in lifetime and reliability data, addressing this issue is essential to ensure accurate and reliable inference.
Therefore, this study is motivated by the need to develop a non-parametric estimation method for residual extropy in the presence of additive measurement error. Our approach leverages kernel-based deconvolution techniques to correct for the error and accurately estimate the residual extropy function. By doing so, we extend the existing body of work on extropy and contribute a novel and practical solution to uncertainty quantification under imperfect data conditions.
The structure of the paper is as follows: Section 2 introduces the estimator for residual extropy in the presence of measurement error in the data. In Section 3, we examine the asymptotic properties of the proposed estimator. Section 4 and Section 5 provide a comparison between the proposed estimator and the empirical estimator based on contaminated data, through simulation and data analysis, respectively. The paper concludes with Section 6.

2. Estimation of Residual Extropy in the Presence of Measurement Error

This section addresses the estimation of the residual extropy function when measurement error is present.

2.1. Deconvolution Estimator

Assume that the rv X has a continuous pdf f_X(x) and that the characteristic function (cf) of \omega, denoted \phi_\omega, is non-vanishing, which ensures the identifiability of the deconvolution procedure:
|\phi_\omega(t)| > 0, \quad \forall\, t \in \mathbb{R}.
This holds true in many relevant cases, particularly in the normal model, where the cf of the normal distribution satisfies the non-vanishing condition (see [27]); the assumption fails, however, for the uniform distribution (see [28]). Assume the kernel K is a symmetric, bounded pdf used in the construction of the deconvolution estimator, with cf \phi_K satisfying the following conditions for any fixed \lambda > 0:
\sup_{t \in \mathbb{R}} \left| \frac{\phi_K(t)}{\phi_\omega(t/\lambda)} \right| < +\infty, \qquad \int_{-\infty}^{+\infty} \left| \frac{\phi_K(t)}{\phi_\omega(t/\lambda)} \right| dt < +\infty.
This condition ensures that \phi_K^2(t)/|\phi_\omega(t/\lambda)|^2, |\phi_K|, and \phi_K^2 are integrable and that \phi_K is invertible, which often requires \phi_K to have compact support. Ref. [29] noted that kernels such as the Gaussian or Laplacian have cfs that are not compactly supported but exhibit rapid decay, and these kernels are used in practice due to their smoothness and computational advantages. That is,
K(x) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{-itx}\,\phi_K(t)\,dt.
The deconvolution kernel estimator of f_X(x), proposed by [30] and later extended by [27], is
\hat{f}_X(x) = \frac{1}{n}\sum_{j=1}^{n} \frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{it(Y_j - x)}\,\frac{\phi_K(t\lambda)}{\phi_\omega(t)}\,dt = \frac{1}{n\lambda}\sum_{j=1}^{n} L\!\left(\frac{x - Y_j}{\lambda}\right),
where K: \mathbb{R} \to \mathbb{R}^{+}, \lambda > 0 is the bandwidth, \phi_\omega and \phi_K are the cfs of \omega and K, respectively, and
L(z) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{-itz}\,\frac{\phi_K(t)}{\phi_\omega(t/\lambda)}\,dt.
The estimator is valid for any non-zero ϕ ω , provided ϕ K has compact support.
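To make the construction concrete, the following sketch (added here; not part of the original paper) evaluates \hat{f}_X for the special case of Laplace(0, b) errors combined with a standard normal kernel K. In that case 1/\phi_\omega(t/\lambda) = 1 + (b/\lambda)^2 t^2, and the Fourier integral defining L has the closed form L(z) = \varphi(z)[1 + (b/\lambda)^2 (1 - z^2)], with \varphi the standard normal pdf; the parameterization Var(\omega) = 2b^2 = \sigma_\omega^2 is an assumption of the sketch.

```python
import numpy as np

def deconv_density_laplace(x, Y, lam, sigma_w):
    """Deconvolution kernel density estimate f_hat_X at the points x,
    assuming Laplace(0, b) measurement error (2*b**2 = sigma_w**2) and a
    standard normal kernel, via the closed form
        L(z) = phi(z) * (1 + (b/lam)**2 * (1 - z**2))."""
    b = sigma_w / np.sqrt(2.0)                          # Laplace scale parameter
    z = (np.asarray(x)[:, None] - np.asarray(Y)[None, :]) / lam
    phi = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)    # standard normal pdf
    L = phi * (1.0 + (b / lam) ** 2 * (1.0 - z**2))     # deconvolution kernel
    return L.mean(axis=1) / lam                          # average over the sample
```

As is typical of deconvolution estimates, \hat{f}_X(x) may dip slightly below zero in the tails; truncating negative values at zero is a common practical adjustment. For normal errors no comparable closed form exists, and the inversion integral must be computed numerically with a compactly supported \phi_K.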

2.2. Estimation of Residual Extropy

The estimator of the residual extropy function is built on the standard deconvolution estimator in order to handle contaminated data. The proposed estimator is expressed as
\hat{M}_{nt}(X) = -\frac{1}{2\,\hat{\bar{F}}_X^2(t)} \int_t^{+\infty} \hat{f}_X^2(x)\,dx.
The function \hat{f}_X(x) is the standard deconvolution estimator, as specified in Equation (9), and
\hat{\bar{F}}_X(t) = \int_t^{\tau_F} \hat{f}_X(x)\,dx,
where \tau_F = \inf\{x : F_X(x) = 1\}.
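A direct plug-in implementation of \hat{M}_{nt}(X) (a minimal sketch added here, not the authors' code; the finite upper limit `upper` standing in for \tau_F is an assumption) is:

```python
import numpy as np

def residual_extropy_estimate(f_hat, t, upper, m=2000):
    """Plug-in residual extropy estimate from any vectorized density
    estimator f_hat (e.g. a deconvolution estimate): integrates f_hat**2
    over [t, upper] and normalizes by the squared survival estimate."""
    xs = np.linspace(t, upper, m)
    fx = f_hat(xs)                       # density estimate on the grid
    sbar = np.trapz(fx, xs)              # survival-function estimate at t
    return -0.5 * np.trapz(fx**2, xs) / sbar**2
```

For example, `f_hat = lambda xs: deconv_density_laplace(xs, Y, lam, sigma_w)` plugs the deconvolution estimate of Section 2.1 into this routine.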
Based on the tail behavior of the characteristic function ϕ ω , the following cases are commonly identified.
  • Ordinary smooth errors: the tails of the characteristic function \phi_\omega decay polynomially, i.e.,
    c_0 |t|^{-\beta} \le |\phi_\omega(t)| \le c_1 |t|^{-\beta}, \quad t \in \mathbb{R},
    for some c_0 > 0, c_1 > 0, and \beta > 0.
  • Super smooth errors: the tails of the characteristic function \phi_\omega decay exponentially, i.e.,
    c_0 \exp\!\left(-\gamma |t|^{\beta}\right) \le |\phi_\omega(t)| \le c_1 \exp\!\left(-\gamma |t|^{\beta}\right), \quad t \in \mathbb{R},
    for some c_0 > 0, c_1 > 0, \gamma > 0, and \beta > 0.
Distributions such as the Laplace, gamma, and double exponential are examples of ordinary smooth errors, while the normal, normal mixture, and Cauchy distributions are examples of super smooth errors.
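For instance, with the zero-mean, standard-deviation-\sigma_\omega parameterizations used later in the simulations, the two error laws considered in this paper sit on opposite sides of this classification:
\phi_\omega(t) = \left(1 + \tfrac{\sigma_\omega^2 t^2}{2}\right)^{-1} \sim \tfrac{2}{\sigma_\omega^2}\,|t|^{-2} \quad \text{(Laplace: ordinary smooth, } \beta = 2\text{)},
\phi_\omega(t) = e^{-\sigma_\omega^2 t^2 / 2} \quad \text{(normal: super smooth, } \beta = 2,\ \gamma = \sigma_\omega^2/2\text{)}.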

3. Asymptotic Results

This section focuses on investigating the asymptotic distribution and consistency of the proposed estimator.
Theorem 1. 
Let \phi_\omega and \phi_K satisfy the conditions given in Equations (6) and (7). Assume that \lambda \to 0 and (n\lambda)^{-1}\int_{-\infty}^{+\infty} \phi_K^2(t)\,/\,|\phi_\omega(t/\lambda)|^2\,dt \to 0 as n \to +\infty. Under these assumptions, as n \to +\infty, the estimator \hat{M}_{nt}(X) converges in probability to M_t(X):
\hat{M}_{nt}(X) \xrightarrow{p} M_t(X),
where \xrightarrow{p} denotes convergence in probability.
Proof. 
The proof is given in Appendix A. □
Corollary 1. 
If the measurement errors follow a normal distribution with mean zero and standard deviation \sigma_\omega, and \lambda \to 0 as n \to +\infty, then Theorem 1 applies.
Corollary 2. 
If the measurement errors follow a Laplace distribution with mean zero, and \lambda \to 0 as n \to +\infty, then Theorem 1 holds.
Theorem 2. 
Assume that \int_{-\infty}^{+\infty} \phi_K(t\lambda)/\phi_\omega(t)\,dt \to c as n \to +\infty for some constant c \ne 0, and that the conditions outlined in Theorem 1 are satisfied. Then, as n \to +\infty, for a fixed t,
\sqrt{n}\left(\hat{M}_{nt}(X) - M_t(X)\right) \xrightarrow{d} N(0, \sigma^2).
Assuming that \phi_\omega and \phi_K meet the conditions outlined in Equations (6) and (7), \sigma^2 is defined as
\sigma^2 = \frac{\tau}{\hat{\bar{F}}_X^4(t)} \int_t^{+\infty} f_X^2(x)\,dx,
where \tau = \mathrm{Var}(L_1(x - Y_1)),
L_1(z) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{-itz}\,\frac{\phi_K(t\lambda)}{\phi_\omega(t)}\,dt,
and \hat{\bar{F}}_X(t) is defined in Equation (12).
Proof. 
We observe that
\sqrt{n}\left(\hat{M}_{nt}(X) - M_t(X)\right) = \sqrt{n}\left[-\frac{1}{2}\int_t^{+\infty}\frac{\hat{f}_X^2(x)}{\hat{\bar{F}}_X^2(t)}\,dx + \frac{1}{2}\int_t^{+\infty}\frac{f_X^2(x)}{\bar{F}_X^2(t)}\,dx\right]
= -\frac{\sqrt{n}}{2\,\hat{\bar{F}}_X^2(t)}\left[\int_t^{+\infty}\hat{f}_X^2(x)\,dx - \frac{\hat{\bar{F}}_X^2(t)}{\bar{F}_X^2(t)}\int_t^{+\infty} f_X^2(x)\,dx\right]
= -\frac{\sqrt{n}}{2\,\hat{\bar{F}}_X^2(t)}\left[\int_t^{+\infty}\hat{f}_X^2(x)\,dx - \int_t^{+\infty} f_X^2(x)\,dx\right] + \frac{\sqrt{n}}{2\,\hat{\bar{F}}_X^2(t)}\,\frac{\hat{\bar{F}}_X^2(t) - \bar{F}_X^2(t)}{\bar{F}_X^2(t)}\int_t^{+\infty} f_X^2(x)\,dx
\approx -\frac{\sqrt{n}}{\hat{\bar{F}}_X^2(t)}\int_t^{+\infty}\left(\hat{f}_X(x) - f_X(x)\right) f_X(x)\,dx - \frac{2\sqrt{n}}{\hat{\bar{F}}_X^2(t)}\,M_t(X)\,\bar{F}_X(t)\left(\hat{\bar{F}}_X(t) - \bar{F}_X(t)\right)
\approx -\frac{\sqrt{n}}{\hat{\bar{F}}_X^2(t)}\int_t^{+\infty}\left(\hat{f}_X(x) - f_X(x)\right)\left[f_X(x) + 2M_t(X)\,\bar{F}_X(t)\right] dx,
where the last two steps use \int_t^{+\infty} f_X^2(x)\,dx = -2M_t(X)\,\bar{F}_X^2(t) and \hat{\bar{F}}_X(t) - \bar{F}_X(t) \approx \int_t^{+\infty}\left(\hat{f}_X(x) - f_X(x)\right) dx.
By the asymptotic normality of the standard deconvolution estimator, as described in [25],
\sqrt{\frac{n}{\tau}}\left(\hat{f}_X(x) - f_X(x)\right) \xrightarrow{d} N(0, 1), \quad n \to +\infty.
Consequently, as n \to +\infty, for a fixed t,
\sqrt{n}\left(\hat{M}_{nt}(X) - M_t(X)\right) \xrightarrow{d} N\!\left(0,\; \frac{\tau}{\hat{\bar{F}}_X^4(t)} \int_t^{+\infty} f_X^2(x)\,dx\right),
where \tau = \mathrm{Var}(L_1(x - Y_1)) and
L_1(z) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{-itz}\,\frac{\phi_K(t\lambda)}{\phi_\omega(t)}\,dt. □
Bandwidth selection in deconvolution problems has been extensively explored in the literature. A theoretical study of the cross-validation (CV) bandwidth selection procedure was conducted by [31]. A bootstrap procedure for estimating the optimal bandwidth was introduced by [32], who also demonstrated its consistency. In subsequent work, Ref. [29] compared several plug-in bandwidth selectors with the CV and bootstrap methods. Furthermore, the plug-in and bootstrap methods were extended to cases with heteroscedastic errors by [33]. In this paper, we use the rule-of-thumb and plug-in methods for bandwidth selection from [34].

4. Simulation

A simulation study was conducted to assess the performance of the proposed estimator, comparing it with the empirical estimator of residual extropy based on the contaminated sample Y_1, \ldots, Y_n, which is defined as
M_t(Y) = -\frac{1}{2n}\,\frac{\sum_{i=1}^{n} \hat{f}_Y(Y_i)\, I(Y_i \ge t)}{G_n^2(t)}.
Here, \hat{f}_Y denotes an estimator of the pdf of Y based on the contaminated sample, G_n(t) is the empirical estimator of the sf \bar{F}_Y(t), and
I(Y_i \ge t) = \begin{cases} 1, & \text{if } Y_i \ge t, \\ 0, & \text{otherwise.} \end{cases}
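A compact sketch of this error-ignorant benchmark (added here for illustration; scipy's gaussian_kde is used as one convenient choice for \hat{f}_Y):

```python
import numpy as np
from scipy.stats import gaussian_kde

def empirical_residual_extropy(Y, t):
    """Empirical residual extropy computed directly from the contaminated
    sample: a kernel density estimate evaluated at the observations,
    truncated at t and normalized by the squared empirical survival
    function."""
    Y = np.asarray(Y)
    f_hat = gaussian_kde(Y)          # standard KDE that ignores the error
    G_n = np.mean(Y >= t)            # empirical survival function at t
    return -0.5 * np.mean(f_hat(Y) * (Y >= t)) / G_n**2
```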
The simulation framework included the following scenarios and conditions, with the unobserved distribution assumed to be as follows:
  • Gamma distribution with shape parameter 2 and scale parameter 3.
  • Weibull distribution with shape parameter 2 and scale parameter 3.
  • Log-normal distribution with shape parameter 0 and scale parameter 1.
The measurement errors are assumed to follow the distributions outlined below:
  • Normal distribution with mean zero and standard deviation σ ω .
  • Laplace distribution with mean zero and standard deviation σ ω .
The parameter \sigma_\omega was chosen to achieve a specific level of contamination, as determined by the signal-to-noise ratio V(X)/\sigma_\omega. We examined contamination levels of 15% and 30%, with the measurement error distributions having mean 0 in both cases. Convergence rates show how quickly an estimator approaches the true parameter or function as the sample size increases; they are important for understanding the efficiency and accuracy of the estimator under different error distributions and smoothness conditions. For the normal and Laplace error distributions, we applied the rule-of-thumb (ROT) bandwidth selection method, which sets the bandwidth proportional to the noise standard deviation and the sample size. Depending on the error distribution,
b_{\mathrm{ROT},N} = \sqrt{2}\,\sigma_\omega\,(\log n)^{-1/2}
and
b_{\mathrm{ROT},L} = \left(\frac{5\,\sigma_\omega^4}{n}\right)^{1/9}
are the optimal bandwidth values for normal and Laplace errors, respectively (see [34]).
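The following sketch (added for illustration) reproduces the data-generating step and the two ROT bandwidths; reading a 15% contamination level as \sigma_\omega^2 = 0.15\,\mathrm{Var}(X) is an assumption made here, not a rule taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Unobserved lifetimes: Weibull with shape 2 and scale 3.
n = 200
X = 3.0 * rng.weibull(2.0, size=n)

# Hypothetical contamination rule: noise variance = 15% of Var(X).
sigma_w = np.sqrt(0.15 * X.var())
Y_normal = X + rng.normal(0.0, sigma_w, size=n)                   # normal errors
Y_laplace = X + rng.laplace(0.0, sigma_w / np.sqrt(2.0), size=n)  # Laplace errors

# Rule-of-thumb bandwidths for the two error models
b_rot_n = np.sqrt(2.0) * sigma_w / np.sqrt(np.log(n))    # normal errors
b_rot_l = (5.0 * sigma_w**4 / n) ** (1.0 / 9.0)          # Laplace errors
```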
In kernel density estimation with error-free data, it is generally accepted that the choice of the kernel function K has little influence on the estimator's accuracy. In deconvolution kernel estimation with contaminated data, however, certain conditions on the structure of the deconvolution estimator must be fulfilled. For normal errors, we applied a second-order kernel whose characteristic function is symmetric and compactly supported (see [28,32]),
K(x) = \frac{48\cos x}{\pi x^4}\left(1 - \frac{15}{x^2}\right) - \frac{144\sin x}{\pi x^5}\left(2 - \frac{5}{x^2}\right),
with characteristic function
\phi_K(t) = (1 - t^2)^3\, I_{[-1,1]}(t).
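Because \phi_K is real, even, and supported on [-1, 1], the kernel can also be evaluated by a direct cosine inversion of its characteristic function, which avoids the numerically delicate cancellations in the closed form above; a minimal sketch (added for illustration):

```python
import numpy as np
from scipy.integrate import quad

def K(x):
    """Second-order deconvolution kernel with characteristic function
    phi_K(t) = (1 - t**2)**3 on [-1, 1], obtained by Fourier inversion;
    since phi_K is real and even, the inversion is a cosine integral."""
    val, _ = quad(lambda t: np.cos(t * x) * (1.0 - t * t) ** 3, 0.0, 1.0)
    return val / np.pi

# Sanity check: the kernel integrates to phi_K(0) = 1 (the 1/x**4 tails
# make the truncation error beyond |x| = 40 negligible).
grid = np.linspace(-40.0, 40.0, 4001)
print(np.trapz([K(x) for x in grid], grid))   # ~ 1.0
```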
In the case of Laplacian errors, the standard normal kernel was used. The estimators defined in Equations (9) and (13) were calculated for different sample sizes n, with t = 1 held fixed for each sample. Repeating this procedure over 1000 samples, the bias and mean squared error (MSE) were evaluated for sample sizes n = 50, 100, 200, 400, and 500.
Examining Tables 1-6, we notice a consistent pattern: both bias and MSE decrease with increasing sample size. Comparing the estimators on the basis of MSE, we conclude that \hat{M}_{nt}(X) performs better than M_t(Y).

5. Data Analysis

We evaluated the performance of the proposed estimator \hat{M}_{nt}(X) and the empirical estimator M_t(Y) using an actual testing data set of piston pumps, taken from [35]. The data set originates from an accelerated testing experiment conducted on a hydraulic piston pump and consists of two parts: a degradation test and a lifetime test. In the degradation test, the return oil flow (measured in L/min) is used as a performance degradation indicator. Three pumps were tested under a constant pressure of 27.6 MPa and a rotation speed of 6000 r/min. The return oil flow was recorded every 10 h for a total duration of 220 h, producing a time series of degradation values for each pump; these data help monitor the gradual performance decay of the pumps. In the lifetime test, eight pumps were tested under the same conditions. The failure threshold was defined as a drop in the return oil flow to 2.15 L/min, at which point a pump was considered to have failed, and the time taken by each pump to reach this threshold is recorded as its lifetime. Both the degradation and lifetime data are used to assess the reliability of the system, where degradation readings capture the progression of wear, fatigue, and ageing, and lifetime measurements represent the total usable life until failure occurs.
The primary function of a hydraulic piston pump is to produce hydraulic energy for the hydraulic system. Powered by an engine, the drive shaft rotates the cylinder block, causing the pistons to move reciprocally within the block, while the valve plate and swash plate remain stationary. With each revolution, the pistons perform a back-and-forth motion within their chambers, transforming mechanical energy into hydraulic energy. In this study, the standard deviation of the measurement error is assumed to be 0.01; the treatment of degradation-testing measurement error is discussed in [35,36].
The proposed estimators were assessed by computing their estimated values and MSE. A comparative study was also carried out to examine the behavior of the kernel estimator of extropy on contaminated data. Let M_n(t) be the kernel estimator of residual extropy, given by
M_n(t) = -\frac{1}{2}\int_t^{+\infty}\left(\frac{f_n(x)}{\bar{F}_n(t)}\right)^2 dx,
where f_n(x) is the standard kernel density estimator introduced by [37] and
F_n(t) = \int_0^t f_n(x)\,dx
is the corresponding kernel estimator of the distribution function, with \bar{F}_n(t) = 1 - F_n(t).
As shown in Table 7, the proposed estimator exhibits better performance than the empirical estimator in terms of MSE. The primary objective here is to show that the performance of the proposed estimator is comparable to or better than that of the empirical estimator when dealing with contaminated data, and the results confirm that this is the case. A comparison with the kernel-based estimator M_n(t) was also conducted, and the proposed estimator again performs better. These findings confirm the effectiveness and reliability of the proposed estimator in challenging data environments.

6. Conclusions

We have estimated the residual extropy function in the presence of measurement error and established the asymptotic properties of the estimator. Its performance was assessed through a simulation study and a data analysis, in comparison with the empirical estimator derived from contaminated samples and with the standard kernel density-based estimator, both of which it outperforms. The simulation study, which shows decreasing bias and MSE as the sample size grows, confirms the effectiveness of the estimator, and the data analysis shows that it also performs better on real contaminated data. Overall, these findings confirm that the proposed estimator is effective in the presence of contaminated or noisy data.

Author Contributions

Conceptualization, M.R.I., R.M., F.S., and M.L.; methodology, M.R.I., R.M., F.S., and M.L.; software, M.R.I., R.M., F.S., and M.L.; validation, M.R.I., R.M., and M.L.; writing—original draft preparation, M.R.I., R.M., and F.S.; writing—review and editing, M.R.I., R.M., and M.L.; visualization, M.R.I., F.S., and M.L. All authors have read and agreed to the published version of the manuscript.

Funding

No funding was received for this research.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Proof of Theorem 1

The following expression represents the estimator of residual extropy:
\hat{M}_{nt}(X) = -\frac{1}{2\,\hat{\bar{F}}_X^2(t)} \int_t^{+\infty} \hat{f}_X^2(x)\,dx.
To demonstrate that \hat{M}_{nt}(X) is consistent under the assumptions provided in Theorem 1, we first express the denominator of Equation (A1) as
\hat{\bar{F}}_X(t) = \int_{-\infty}^{+\infty} \rho(x)\,\hat{f}_X(x)\,dx,
where \rho(x) = I_{[t, \tau_F]}(x) and \hat{f}_X(x) is given in (9).
As stated in [34], we have
\int_{-\infty}^{+\infty} \rho(x)\,\hat{f}_X(x)\,dx \xrightarrow{p} \int_{-\infty}^{+\infty} \rho(x)\,f_X(x)\,dx.
Since convergence in probability is preserved under continuous transformations and f_X(x) is continuous, \left(\int_{-\infty}^{+\infty} \rho(x)\,\hat{f}_X(x)\,dx\right)^2 is also consistent. Therefore,
\hat{\bar{F}}_X^2(t) \xrightarrow{p} \bar{F}_X^2(t).
Having established the convergence of the denominator, to prove the convergence in probability of the numerator it is sufficient to show that
E\left[-\frac{1}{2}\int_t^{+\infty} \hat{f}_X^2(x)\,dx\right] \to -\frac{1}{2}\int_t^{+\infty} f_X^2(x)\,dx \quad \text{and}
\mathrm{Var}\left(-\frac{1}{2}\int_t^{+\infty} \hat{f}_X^2(x)\,dx\right) \to 0.
First, we consider the expectation. Writing Z = \frac{1}{n\lambda}\sum_{j=1}^{n} L\!\left(\frac{x - Y_j}{\lambda}\right) and using E[Z^2] = (E[Z])^2 + \mathrm{Var}(Z),
E\left[-\frac{1}{2}\int_t^{+\infty} \hat{f}_X^2(x)\,dx\right] = -\frac{1}{2}\int_t^{+\infty} E\left[Z^2\right] dx = -\frac{1}{2}\int_t^{+\infty} \left\{\left(E[Z]\right)^2 + \mathrm{Var}(Z)\right\} dx = -\frac{1}{2}\int_t^{+\infty} (\mathrm{I} + \mathrm{II})\,dx.
Now, we take term I:
E[Z] = E\left[\frac{1}{n\lambda}\sum_{j=1}^{n} L\!\left(\frac{x - Y_j}{\lambda}\right)\right] = \frac{1}{n}\sum_{j=1}^{n} E\left[\frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{it(Y_j - x)}\,\frac{\phi_K(t\lambda)}{\phi_\omega(t)}\,dt\right] = \frac{1}{n}\sum_{j=1}^{n} \frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{-itx}\,E\!\left[e^{itY_j}\right]\frac{\phi_K(t\lambda)}{\phi_\omega(t)}\,dt = \frac{1}{n}\sum_{j=1}^{n} \frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{-itx}\,\phi_Y(t)\,\frac{\phi_K(t\lambda)}{\phi_\omega(t)}\,dt.
Based on the additive measurement model and the independence of \omega and X, it follows that \phi_Y(t) = \phi_X(t)\,\phi_\omega(t). Therefore,
E[Z] = \frac{1}{n}\sum_{j=1}^{n} \frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{-itx}\,\phi_X(t)\,\phi_K(t\lambda)\,dt.
Since \phi_K(t\lambda) \to 1 as \lambda \to 0, the above becomes
E[Z] \to \frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{-itx}\,\phi_X(t)\,dt = f_X(x).
Following an approach similar to [27] for the variance, define
A_{\lambda, a} = \frac{\int_{-\infty}^{+\infty} (L(x))^2\, f_X(a + \lambda x)\,dx}{\int_{-\infty}^{+\infty} (L(x))^2\,dx},
which is bounded by B_f = \sup_x f_X(x) (refer to [34] for further steps), and thus
\mathrm{Var}\left(\frac{1}{n\lambda}\sum_{j=1}^{n} L\!\left(\frac{x - Y_j}{\lambda}\right)\right) \le \frac{B_f}{2\pi n\lambda} \int_{-\infty}^{+\infty} \frac{\phi_K^2(t)}{|\phi_\omega(t/\lambda)|^2}\,dt.
Under the assumptions of Theorem 1, the term II \to 0 as n \to +\infty. Therefore, the expectation becomes
E\left[-\frac{1}{2}\int_t^{+\infty} \hat{f}_X^2(x)\,dx\right] \to -\frac{1}{2}\int_t^{+\infty} f_X^2(x)\,dx.
Next, we show that
\mathrm{Var}\left(-\frac{1}{2}\int_t^{+\infty} \hat{f}_X^2(x)\,dx\right) \to 0.
Expanding \hat{f}_X^2(x) around f_X(x) by a first-order Taylor expansion, for large n,
\hat{f}_X^2(x) \approx f_X^2(x) + 2\left(\hat{f}_X(x) - f_X(x)\right) f_X(x).
Thus,
\mathrm{Var}\left(-\frac{1}{2}\int_t^{+\infty} \hat{f}_X^2(x)\,dx\right) \approx \int_t^{+\infty} \mathrm{Var}\!\left(\hat{f}_X(x)\right) f_X^2(x)\,dx.
Using the bound on \mathrm{Var}(\hat{f}_X(x)) from [27],
\mathrm{Var}\left(-\frac{1}{2}\int_t^{+\infty} \hat{f}_X^2(x)\,dx\right) \le \frac{B_g}{n\lambda} \int_{-\infty}^{+\infty} \frac{\phi_K^2(t)}{|\phi_\omega(t/\lambda)|^2}\,dt \int_t^{+\infty} f_X^2(x)\,dx \to 0.
Hence, by the assumptions stated in Theorem 1,
E\left[-\frac{1}{2}\int_t^{+\infty} \hat{f}_X^2(x)\,dx\right] \to -\frac{1}{2}\int_t^{+\infty} f_X^2(x)\,dx
and
\mathrm{Var}\left(-\frac{1}{2}\int_t^{+\infty} \hat{f}_X^2(x)\,dx\right) \to 0.
The sufficient conditions are therefore satisfied, and we can write
-\frac{1}{2}\int_t^{+\infty} \hat{f}_X^2(x)\,dx \xrightarrow{p} -\frac{1}{2}\int_t^{+\infty} f_X^2(x)\,dx.
The result then follows from the property that if a_n \xrightarrow{p} a and b_n \xrightarrow{p} b, then a_n/b_n \xrightarrow{p} a/b, provided that b \ne 0 ([38], p. 174). Using Equations (A2) and (A4), we obtain
\hat{M}_{nt}(X) \xrightarrow{p} M_t(X). □

References

  1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  2. Cover, T.M. Elements of Information Theory; John Wiley & Sons: New York, NY, USA, 1999. [Google Scholar]
  3. Kapur, J.N.; Kesavan, H.K. Entropy optimization principles and their applications. In Entropy and Energy Dissipation in Water Resources; Springer: Berlin/Heidelberg, Germany, 1992; pp. 3–20. [Google Scholar]
  4. Vasicek, O. A test for normality based on sample entropy. J. R. Stat. Soc. Ser. B Stat. Methodol. 1976, 38, 54–59. [Google Scholar] [CrossRef]
  5. Bodnarchuk, I.; Mishura, Y.; Ralchenko, K. Properties of the Shannon, Rényi and other entropies: Dependence in parameters, robustness in distributions and extremes. arXiv 2024, arXiv:2411.15817. [Google Scholar] [CrossRef]
  6. Ebrahimi, N.; Pellerey, F. New partial ordering of survival functions based on the notion of uncertainty. J. Appl. Probab. 1995, 32, 202–211. [Google Scholar] [CrossRef]
  7. Ebrahimi, N. How to measure uncertainty in the residual lifetime distribution. Sankhyā Indian J. Stat. Ser. A 1996, 58, 48–56. [Google Scholar]
  8. Belzunce, F.; Navarro, J.; Ruiz, J.M.; del Águila, Y. Some results on residual entropy function. Metrika 2004, 59, 147–161. [Google Scholar] [CrossRef]
  9. Rajesh, G.; Nair, K.R.M. Residual entropy function in discrete time. Far East J. Theor. Stat. 1998, 2, 1–10. [Google Scholar]
  10. Asadi, M.; Ebrahimi, N. Residual entropy and its characterizations in terms of hazard function and mean residual life function. Stat. Probab. Lett. 2000, 49, 263–269. [Google Scholar] [CrossRef]
  11. Abraham, B.; Sankaran, P.G. Rényi’s entropy for residual lifetime distribution. Stat. Pap. 2006, 47, 17–29. [Google Scholar] [CrossRef]
  12. Nanda, A.K.; Paul, P. Some results on generalized residual entropy. Inf. Sci. 2006, 176, 27–47. [Google Scholar] [CrossRef]
  13. Hooda, D.S.; Kumar, P. Generalized residual entropies in survival analysis. J. Appl. Probab. Stat. 2007, 2, 241–249. [Google Scholar]
  14. Lad, F.; Sanfilippo, G.; Agro, G. Extropy: Complementary dual of entropy. Stat. Sci. 2015, 30, 40–58. [Google Scholar] [CrossRef]
  15. Qiu, G.; Wang, L.; Wang, X. On extropy properties of mixed systems. Probab. Eng. Inf. Sci. 2019, 33, 471–486. [Google Scholar] [CrossRef]
  16. Qiu, G. The extropy of order statistics and record values. Stat. Probab. Lett. 2017, 120, 52–60. [Google Scholar] [CrossRef]
  17. Qiu, G.; Jia, K. Extropy estimators with applications in testing uniformity. J. Nonparametr. Stat. 2018, 30, 182–196. [Google Scholar] [CrossRef]
  18. Furuichi, S.; Mitroi, F.-C. Mathematical inequalities for some divergences. Phys. A Stat. Mech. Its Appl. 2012, 391, 388–400. [Google Scholar] [CrossRef]
  19. Vontobel, P.O. The Bethe permanent of a nonnegative matrix. IEEE Trans. Inf. Theory 2012, 59, 1866–1901. [Google Scholar] [CrossRef]
  20. Jose, J.; Sathar, E.I.A. Residual extropy of k-record values. Stat. Probab. Lett. 2019, 146, 1–6. [Google Scholar] [CrossRef]
  21. Maya, R.; Irshad, M.R. Kernel estimation of residual extropy function under α-mixing dependence condition. S. Afr. Stat. J. 2019, 53, 65–72. [Google Scholar] [CrossRef]
  22. Carroll, R.J.; Ruppert, D.; Stefanski, L.A.; Crainiceanu, C.M. Measurement Error in Nonlinear Models: A Modern Perspective; Chapman and Hall/CRC: Boca Raton, FL, USA, 2006. [Google Scholar]
  23. Meister, A. Density deconvolution. In Deconvolution Problems in Nonparametric Statistics; Springer: Berlin/Heidelberg, Germany, 2009; pp. 5–105. [Google Scholar]
  24. Delaigle, A.; Hall, P. Using SIMEX for smoothing-parameter choice in errors-in-variables problems. J. Am. Stat. Assoc. 2008, 103, 280–287. [Google Scholar] [CrossRef]
  25. Irshad, M.R.; Archana, K.; Maya, R.; Longobardi, M. Estimation of extropy function in the presence of measurement error. Metrika 2024, 1–25. [Google Scholar] [CrossRef]
  26. Archana, K.; Maya, R.; Irshad, M.R.; Longobardi, M. Estimation of past extropy in the presence of measurement error. Ric. Mat. 2025. [Google Scholar] [CrossRef]
  27. Stefanski, L.A.; Carroll, R.J. Deconvolving kernel density estimators. Statistics 1990, 21, 169–184. [Google Scholar] [CrossRef]
  28. Fan, J. Deconvolution with supersmooth distributions. Can. J. Stat. 1992, 20, 155–169. [Google Scholar] [CrossRef]
  29. Delaigle, A.; Gijbels, I. Practical bandwidth selection in deconvolution kernel density estimation. Comput. Stat. Data Anal. 2004, 45, 249–267. [Google Scholar] [CrossRef]
  30. Carroll, R.J.; Hall, P. Optimal rates of convergence for deconvolving a density. J. Am. Stat. Assoc. 1988, 83, 1184–1186. [Google Scholar] [CrossRef]
  31. Hesse, C.H. Data-driven deconvolution. J. Nonparametr. Stat. 1999, 10, 343–373. [Google Scholar] [CrossRef]
  32. Delaigle, A.; Gijbels, I. Bootstrap bandwidth selection in kernel density estimation from a contaminated sample. Ann. Inst. Stat. Math. 2004, 56, 19–47. [Google Scholar] [CrossRef]
  33. Wang, X.; Wang, B. Simultaneous confidence bands and bootstrap bandwidth selection in deconvolution with heteroscedastic error. Comput. Stat. Data Anal. 2010, 54, 25–36. [Google Scholar] [CrossRef]
  34. Pourjafar, H.; Zardasht, V. Estimation of the mean residual life function in the presence of measurement errors. Commun. Stat.—Simul. Comput. 2020, 49, 532–555. [Google Scholar] [CrossRef]
  35. Liu, D.; Wang, S. Reliability estimation from lifetime testing data and degradation testing data with measurement error based on evidential variable and wiener process. Reliab. Eng. Syst. Saf. 2021, 205, 107231. [Google Scholar] [CrossRef]
  36. Ma, Z.; Wang, S.; Ruiz, C.; Zhang, C.; Liao, H.; Pohl, E. Reliability estimation from two types of accelerated testing data considering measurement error. Reliab. Eng. Syst. Saf. 2020, 193, 106610. [Google Scholar] [CrossRef]
  37. Parzen, E. On estimation of a probability density function and mode. Ann. Math. Stat. 1962, 33, 1065–1076. [Google Scholar] [CrossRef]
  38. Resnick, S.I. A Probability Path; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
Table 1. The estimated value (J), absolute bias (|bias|), and the MSE of M̂_nt(X) and M_t(Y), derived from a Weibull(2,3) distribution with normal contamination, where M_t(X) = 0.17385.

| n   | Estimator | J (15%) | Abs. bias (15%) | MSE (15%) | J (30%) | Abs. bias (30%) | MSE (30%) |
|-----|-----------|---------|-----------------|-----------|---------|-----------------|-----------|
| 50  | M̂_nt(X)  | 0.26047 | 0.08662         | 0.00750   | 0.22099 | 0.04714         | 0.00222   |
| 50  | M_t(Y)    | 0.34356 | 0.17151         | 0.02941   | 0.32238 | 0.06542         | 0.00428   |
| 100 | M̂_nt(X)  | 0.22634 | 0.05249         | 0.00276   | 0.20145 | 0.02760         | 0.00076   |
| 100 | M_t(Y)    | 0.32047 | 0.14662         | 0.02150   | 0.30550 | 0.04853         | 0.00236   |
| 200 | M̂_nt(X)  | 0.20694 | 0.03309         | 0.00109   | 0.18915 | 0.01530         | 0.00023   |
| 200 | M_t(Y)    | 0.31912 | 0.14527         | 0.02110   | 0.29507 | 0.03810         | 0.00145   |
| 400 | M̂_nt(X)  | 0.18502 | 0.01117         | 0.00012   | 0.18534 | 0.01149         | 0.00013   |
| 400 | M_t(Y)    | 0.31171 | 0.13786         | 0.01901   | 0.27502 | 0.01805         | 0.00033   |
| 500 | M̂_nt(X)  | 0.18424 | 0.01039         | 0.00011   | 0.17771 | 0.01214         | 0.00000   |
| 500 | M_t(Y)    | 0.28717 | 0.11332         | 0.01284   | 0.27746 | 0.01449         | 0.00021   |
Table 2. The estimated value (J), absolute bias (|bias|), and the MSE of M̂_nt(X) and M_t(Y), derived from a Weibull(2,3) distribution with Laplace contamination, where M_t(X) = 0.17385.

| n   | Estimator | J (15%) | Abs. bias (15%) | MSE (15%) | J (30%) | Abs. bias (30%) | MSE (30%) |
|-----|-----------|---------|-----------------|-----------|---------|-----------------|-----------|
| 50  | M̂_nt(X)  | 0.21742 | 0.04357         | 0.00190   | 0.20427 | 0.03042         | 0.00093   |
| 50  | M_t(Y)    | 0.09855 | 0.04686         | 0.00220   | 0.11949 | 0.06780         | 0.00460   |
| 100 | M̂_nt(X)  | 0.21693 | 0.04308         | 0.00186   | 0.19377 | 0.01992         | 0.00040   |
| 100 | M_t(Y)    | 0.09609 | 0.04440         | 0.00197   | 0.10673 | 0.05504         | 0.00303   |
| 200 | M̂_nt(X)  | 0.20263 | 0.02878         | 0.00083   | 0.15843 | 0.01542         | 0.00024   |
| 200 | M_t(Y)    | 0.09185 | 0.04016         | 0.00161   | 0.09606 | 0.04437         | 0.00197   |
| 400 | M̂_nt(X)  | 0.19958 | 0.02573         | 0.00066   | 0.16784 | 0.00601         | 0.00004   |
| 400 | M_t(Y)    | 0.09045 | 0.03876         | 0.00150   | 0.09355 | 0.04186         | 0.00175   |
| 500 | M̂_nt(X)  | 0.18966 | 0.01581         | 0.00025   | 0.17007 | 0.00378         | 0.00001   |
| 500 | M_t(Y)    | 0.08380 | 0.03211         | 0.00103   | 0.09218 | 0.04049         | 0.00164   |
Table 3. The estimated value (J), absolute bias (|bias|), and the MSE of M̂_nt(X) and M_t(Y), derived from a Gamma(2, 3) distribution with normal contamination, where M_t(X) = 0.05169.

| n   | Estimator | J (15%) | Abs. bias (15%) | MSE (15%) | J (30%) | Abs. bias (30%) | MSE (30%) |
|-----|-----------|---------|-----------------|-----------|---------|-----------------|-----------|
| 50  | M̂_nt(X)  | 0.07385 | 0.02215         | 0.00049   | 0.03169 | 0.02000         | 0.00040   |
| 50  | M_t(Y)    | 0.10962 | 0.05793         | 0.00336   | 0.1072  | 0.05551         | 0.00308   |
| 100 | M̂_nt(X)  | 0.07217 | 0.02048         | 0.00042   | 0.06163 | 0.00994         | 0.00010   |
| 100 | M_t(Y)    | 0.10208 | 0.05039         | 0.00254   | 0.10549 | 0.05380         | 0.00289   |
| 200 | M̂_nt(X)  | 0.07375 | 0.01434         | 0.00021   | 0.04600 | 0.00569         | 0.00003   |
| 200 | M_t(Y)    | 0.09745 | 0.04579         | 0.00210   | 0.10499 | 0.05330         | 0.00284   |
| 400 | M̂_nt(X)  | 0.06508 | 0.01338         | 0.00018   | 0.04355 | 0.00491         | 0.00003   |
| 400 | M_t(Y)    | 0.09311 | 0.04142         | 0.00172   | 0.09729 | 0.04560         | 0.00208   |
| 500 | M̂_nt(X)  | 0.06516 | 0.01347         | 0.00018   | 0.05036 | 0.00133         | 0.00000   |
| 500 | M_t(Y)    | 0.09295 | 0.04126         | 0.00170   | 0.09355 | 0.04186         | 0.00175   |
Table 4. The estimated value (J), absolute bias (|bias|), and the MSE of M̂_nt(X) and M_t(Y), derived from a Gamma(2, 3) distribution with Laplace contamination, where M_t(X) = 0.05169.

| n   | Estimator | J (15%) | Abs. bias (15%) | MSE (15%) | J (30%) | Abs. bias (30%) | MSE (30%) |
|-----|-----------|---------|-----------------|-----------|---------|-----------------|-----------|
| 50  | M̂_nt(X)  | 0.06955 | 0.01786         | 0.00022   | 0.06922 | 0.01753         | 0.00031   |
| 50  | M_t(Y)    | 0.09555 | 0.04686         | 0.0032    | 0.11949 | 0.06780         | 0.00460   |
| 100 | M̂_nt(X)  | 0.06701 | 0.01532         | 0.00023   | 0.06493 | 0.01324         | 0.00018   |
| 100 | M_t(Y)    | 0.09609 | 0.04440         | 0.00197   | 0.10673 | 0.05504         | 0.00303   |
| 200 | M̂_nt(X)  | 0.03748 | 0.01421         | 0.0002    | 0.04826 | 0.00343         | 0.00001   |
| 200 | M_t(Y)    | 0.09185 | 0.04016         | 0.00161   | 0.09606 | 0.04437         | 0.00197   |
| 400 | M̂_nt(X)  | 0.06279 | 0.01110         | 0.00012   | 0.05159 | 0.01110         | 0.0001    |
| 400 | M_t(Y)    | 0.09045 | 0.03876         | 0.00150   | 0.09355 | 0.04186         | 0.00175   |
| 500 | M̂_nt(X)  | 0.05913 | 0.00744         | 0.00006   | 0.0509  | 0.0002          | 0.0000    |
| 500 | M_t(Y)    | 0.0838  | 0.03211         | 0.00103   | 0.09218 | 0.04049         | 0.00164   |
Table 5. The estimated value (J), absolute bias (|bias|), and the MSE of M̂_nt(X) and M_t(Y), derived from a Log-normal(0,1) distribution with normal contamination, where M_t(X) = 0.14053.

| n   | Estimator | J (15%) | Abs. bias (15%) | MSE (15%) | J (30%) | Abs. bias (30%) | MSE (30%) |
|-----|-----------|---------|-----------------|-----------|---------|-----------------|-----------|
| 50  | M̂_nt(X)  | 0.23882 | 0.08266         | 0.00683   | 0.08875 | 0.05178         | 0.00268   |
| 50  | M_t(Y)    | 0.05787 | 0.09829         | 0.00966   | 0.03233 | 0.10820         | 0.01171   |
| 100 | M̂_nt(X)  | 0.18850 | 0.04798         | 0.00230   | 0.09773 | 0.04280         | 0.00183   |
| 100 | M_t(Y)    | 0.07095 | 0.06958         | 0.00484   | 0.03422 | 0.10631         | 0.01130   |
| 200 | M̂_nt(X)  | 0.18014 | 0.03961         | 0.00157   | 0.10520 | 0.03533         | 0.00125   |
| 200 | M_t(Y)    | 0.07266 | 0.06787         | 0.00461   | 0.03651 | 0.10402         | 0.01082   |
| 400 | M̂_nt(X)  | 0.16971 | 0.02918         | 0.00085   | 0.10662 | 0.03391         | 0.00115   |
| 400 | M_t(Y)    | 0.07340 | 0.06712         | 0.00451   | 0.04351 | 0.09702         | 0.00941   |
| 500 | M̂_nt(X)  | 0.15635 | 0.01583         | 0.00025   | 0.12425 | 0.01628         | 0.00026   |
| 500 | M_t(Y)    | 0.07455 | 0.06597         | 0.00435   | 0.05718 | 0.08334         | 0.00695   |
Table 6. The estimated value (J), absolute bias (|bias|), and the MSE of M̂_nt(X) and M_t(Y), derived from a Log-normal(0,1) distribution with Laplace contamination, where M_t(X) = 0.14053.

| n   | Estimator | J (15%) | Abs. bias (15%) | MSE (15%) | J (30%) | Abs. bias (30%) | MSE (30%) |
|-----|-----------|---------|-----------------|-----------|---------|-----------------|-----------|
| 50  | M̂_nt(X)  | 0.21534 | 0.07482         | 0.00560   | 0.07974 | 0.06079         | 0.00370   |
| 50  | M_t(Y)    | 0.05107 | 0.08946         | 0.00800   | 0.03124 | 0.10928         | 0.01194   |
| 100 | M̂_nt(X)  | 0.08929 | 0.05124         | 0.00263   | 0.09416 | 0.04637         | 0.00215   |
| 100 | M_t(Y)    | 0.05623 | 0.08430         | 0.00711   | 0.03458 | 0.10595         | 0.01122   |
| 200 | M̂_nt(X)  | 0.12196 | 0.01856         | 0.00034   | 0.10055 | 0.03998         | 0.00160   |
| 200 | M_t(Y)    | 0.06527 | 0.07526         | 0.00566   | 0.03978 | 0.10074         | 0.01015   |
| 400 | M̂_nt(X)  | 0.15525 | 0.01472         | 0.00022   | 0.10584 | 0.03469         | 0.00120   |
| 400 | M_t(Y)    | 0.06827 | 0.07226         | 0.00522   | 0.04121 | 0.09932         | 0.00987   |
| 500 | M̂_nt(X)  | 0.14123 | 0.00071         | 0.00000   | 0.11417 | 0.02636         | 0.00069   |
| 500 | M_t(Y)    | 0.07066 | 0.06987         | 0.00488   | 0.04418 | 0.09635         | 0.00928   |
Table 7. The estimated value (J) and MSE of M̂_nt(X), M_t(Y), and M_n(t) for the hydraulic piston pump study.

| Estimator | J        | MSE       |
|-----------|----------|-----------|
| M̂_nt(X)  | 0.000002 | 33,452.53 |
| M_t(Y)    | 0.003205 | 33,453.64 |
| M_n(t)    | 0.004779 | 33,454.18 |