Article

Properties of Residual Cumulative Sharma–Taneja–Mittal Model and Its Extensions in Reliability Theory with Applications to Human Health Analysis and Mixed Coherent Mechanisms

by
Mohamed Said Mohamed
1 and
Hanan H. Sakr
2,*
1
Department of Mathematics, College of Science and Humanities, Prince Sattam bin Abdulaziz University, Hawtat Bani Tamim 16511, Saudi Arabia
2
Department of Management Information Systems, College of Business Administration in Hawtat Bani Tamim, Prince Sattam bin Abdulaziz University, Al-Kharj 16273, Saudi Arabia
*
Author to whom correspondence should be addressed.
Entropy 2026, 28(1), 32; https://doi.org/10.3390/e28010032
Submission received: 14 November 2025 / Revised: 18 December 2025 / Accepted: 23 December 2025 / Published: 26 December 2025
(This article belongs to the Special Issue Recent Progress in Uncertainty Measures)

Abstract

The residual cumulative Sharma–Taneja–Mittal entropy is an alternative uncertainty measure to the residual cumulative entropy. This study investigates further theoretical properties of the proposed measure and develops nonparametric estimation procedures for it. The performance of the estimator is evaluated through simulation experiments, and its practical relevance is illustrated using a real-world dataset on malignant tumor cases. We then study the properties of its dynamic version, including stochastic comparisons and its connections with the hazard rate function, the mean residual life function, and equilibrium random variables. Moreover, we introduce an alternative version of the dynamic residual cumulative Sharma–Taneja–Mittal entropy and examine its monotonicity properties. Additionally, we discuss this alternative version and its conditional form in the setting of record values. We derive this alternative expression for the residual lifespan of upper record values from general distributions, characterizing it through a measure of upper record values drawn from a uniform distribution. Since the Sharma–Taneja–Mittal entropy measures uncertainty, we also investigate its use in determining the entropy of the lifetime of mixed and coherent systems whose component lifetimes are independent and identically distributed.

1. Introduction

Numerous disciplines, including information science, physics, statistics, probability, communication theory, and economics, have made extensive use of the entropy measure introduced by Shannon [1]. Assume that Y is the lifespan of a unit with probability density function (pdf) k(y), cumulative distribution function (cdf) K(y), and survival function K̄(y) = 1 − K(y). Provided the expectation exists, the Shannon differential entropy of Y, which measures the uncertainty of a random phenomenon, is defined by
Λ(Y) = −E(ln k(Y)) = −∫₀^∞ k(y) ln k(y) dy.
The literature has presented a number of entropy measures, each appropriate for a particular set of circumstances. The uncertainty measure known as the residual cumulative entropy, proposed by Rao et al. [2], depends on the cdf. The residual cumulative entropy of a non-negative continuous random variable Y with survival function K̄(y) is given by
RΛ(Y) = −∫₀^∞ K̄(y) ln K̄(y) dy.
As noted by Asadi and Ebrahimi [3], Λ(Y) is no longer relevant for evaluating the uncertainty about the unit’s remaining lifespan if Y is taken to be the lifetime of a new unit. In these cases, the residual entropy of a component or system lifespan Y should be considered, given that the system has survived to age l. We are then interested in the dynamic (time-dependent) random variable Y_l = [Y − l | Y > l], with survival function
K̄_l(y) = K̄(y)/K̄(l) if y > l, and K̄_l(y) = 1 otherwise,
where both y and l are positive. The mean residual life function (MHr(l)) is then defined as MHr(l) = E(Y_l) = ∫_l^∞ (K̄(y)/K̄(l)) dy. Asadi and Zohrevand [4] adjusted the concept of residual cumulative entropy to account for the system’s present age by using the survival function of Y_l:
DRΛ(Y; l) = −∫_l^∞ (K̄(y)/K̄(l)) ln(K̄(y)/K̄(l)) dy.
The measure DRΛ(Y; l) is known as the dynamic residual cumulative entropy measure. Evidently, DRΛ(Y; 0) = RΛ(Y). See Rao et al. [2], Asadi and Zohrevand [4], Navarro et al. [5], and the articles cited therein for further characteristics and uses of (2) and (3).
Another alternative form of the dynamic residual cumulative entropy measure can be obtained by considering the pdf of Y_l, denoted k_l(y) = k(y + l)/K̄(l), in which y and l are both positive. Ebrahimi [6] proposed this measure of Y_l as
DRΛ*(Y; l) = −∫_l^∞ (k(y)/K̄(l)) ln(k(y)/K̄(l)) dy.
In addition to other studies cited in their work, several academics, including Asadi et al. [7], Nanda and Paul [8], and Zhang [9], have conducted extensive research on the properties, extensions, and applications of D R Λ * ( Y ; l ) .
Another interesting measure of uncertainty is the Sharma–Taneja–Mittal entropy, independently introduced by Sharma and Taneja [10] and by Mittal [11] and defined as follows:
SΛ_{τ1,τ2}(Y) = 1/(τ1 − τ2) ∫₀^∞ (k^{τ1}(y) − k^{τ2}(y)) dy = 1/(τ1 − τ2) [E(k^{τ1−1}(K^{-1}(V))) − E(k^{τ2−1}(K^{-1}(V)))],
with the condition that τ1 ≠ τ2 > 0, where V is uniformly distributed on [0, 1] and K^{-1}(v) = inf{y : K(y) ≥ v}, v ∈ [0, 1], stands for the quantile function. Moreover, the Sharma–Taneja–Mittal entropy measure provided in (5) can be considered a further development of the negative Tsallis entropy when τ2 = 1; see Tsallis [12].
More recently, Kattumannil et al. [13] proposed definitions for the Sharma–Taneja–Mittal residual cumulative entropy and cumulative entropy, given by
RSΛ_{τ1,τ2}(Y) = 1/(τ1 − τ2) ∫₀^∞ (K̄^{τ1}(y) − K̄^{τ2}(y)) dy,
CSΛ_{τ1,τ2}(Y) = 1/(τ1 − τ2) ∫₀^∞ (K^{τ1}(y) − K^{τ2}(y)) dy,
again under the restriction that τ1 ≠ τ2 > 0. Subsequently, Sudheesh et al. [14] reformulated the measures in the above expressions in terms of probability-weighted moments, thereby paving the way for the development of inferential techniques based on the established properties of these moments. Moreover, Sakr and Mohamed [15] discussed the Sharma–Taneja–Mittal entropy measure in estimation and goodness-of-fit testing. In addition, several works have explored the Sharma–Mittal framework and its related extensions in diverse situations. For instance, Ghaffari et al. [16] examined thermodynamic characteristics of Schwarzschild black holes within the Sharma–Mittal entropy formalism. Koltcov et al. [17] employed the Sharma–Mittal entropy to assess the performance of topic modeling techniques. In the setting of record values, Paul and Thomas [18] investigated key properties associated with the Sharma–Mittal entropy. More recently, Sfetcu et al. [19] introduced an alternative form of the cumulative residual Sharma–Taneja–Mittal entropy and established corresponding upper and lower bounds.
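As a quick numerical sanity check (not part of the original paper), the closed form of RSΛ_{τ1,τ2}(Y) for an exponential distribution, 1/(τ1 − τ2) (1/(θτ1) − 1/(θτ2)), can be compared against direct numerical integration of the definition above. The Python sketch below assumes Expo(θ = 0.5) and (τ1, τ2) = (2, 4); all names are illustrative:

```python
import math

def rs_entropy_num(sf, tau1, tau2, upper=100.0, n=200_000):
    # Midpoint-rule integration of (SF(y)**tau1 - SF(y)**tau2) over [0, upper],
    # then division by (tau1 - tau2), matching the definition of RS-Lambda.
    h = upper / n
    total = 0.0
    for i in range(n):
        s = sf((i + 0.5) * h)
        total += s**tau1 - s**tau2
    return total * h / (tau1 - tau2)

theta = 0.5
sf = lambda y: math.exp(-theta * y)   # survival function of Expo(theta)
tau1, tau2 = 2, 4

numeric = rs_entropy_num(sf, tau1, tau2)
closed = (1 / (tau1 - tau2)) * (1 / (theta * tau1) - 1 / (theta * tau2))
print(numeric, closed)   # both are approximately -0.25
```

The truncation at `upper = 100` is harmless here because the exponential survival function decays rapidly.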
This paper discusses the properties of Sharma–Taneja–Mittal entropy and its extensions. The organization of the rest of the article is as follows: Section 2 presents the proportional hazard framework of the residual cumulative Sharma–Taneja–Mittal entropy and its non-parametric estimation using a U-statistic. Section 3 explores the properties of the dynamic residual cumulative Sharma–Taneja–Mittal entropy, including stochastic comparisons and their connections to hazard rate functions, mean residual functions, and equilibrium random variables. Section 4 introduces an alternative dynamic form of the residual cumulative Sharma–Taneja–Mittal entropy and examines its properties with ordered variables. Finally, Section 5 presents the Sharma–Taneja–Mittal entropy and its properties for coherent and mixed structures in independent and identically distributed situations.

2. Properties of Residual Cumulative Sharma–Taneja–Mittal Entropy

In this section, we discuss some further properties of the residual cumulative Sharma–Taneja–Mittal entropy introduced in (6). In the following property, we show that RSΛ_{τ1,τ2}(Y) is a shift-independent measure.
Theorem 1.
Let Y be a continuous non-negative random variable, and let Z = αY + β, α > 0, β ≥ 0. Then
RSΛ_{τ1,τ2}(Z) = α · RSΛ_{τ1,τ2}(Y).
Proof. 
The result follows since K̄_{αY+β}(y) = K̄_Y((y − β)/α) for every y > β, together with the change of variable v = (y − β)/α in (6). □
Theorem 2.
Assume that two random variables, Y and Z η , admit a proportional hazard framework, provided by
K̄*_η(y) = [K̄(y)]^η, η > 0, y > 0.
Then the following claims are true:
(i)
RSΛ_{τ1,τ2}(Z_η) = RSΛ_{ητ1,ητ2}(Y),
(ii)
RSΛ_{τ1,τ2}(Z_η) = η · RSΛ_{τ1,τ2}(ηY).
Corollary 1.
Suppose Y is a non-negative random variable with an absolutely continuous cdf K. Let Y_{1:p} be the first order statistic based on a random sample Y_1, Y_2, …, Y_p from K. Since K̄_{Y_{1:p}}(y) = (K̄(y))^p, it follows that RSΛ_{τ1,τ2}(Y_{1:p}) = RSΛ_{pτ1,pτ2}(Y).
Example 1.
Suppose the random variable Y follows an exponential distribution with parameter θ, i.e., Y ∼ Expo(θ). Then RSΛ_{τ1,τ2}(Z_η) = 1/(τ1 − τ2) (1/(θητ1) − 1/(θητ2)), RSΛ_{ητ1,ητ2}(Y) = 1/(τ1 − τ2) (1/(θητ1) − 1/(θητ2)), and RSΛ_{τ1,τ2}(Y) = 1/(τ1 − τ2) (1/(θτ1) − 1/(θτ2)). So, we have RSΛ_{τ1,τ2}(Z_η) = RSΛ_{ητ1,ητ2}(Y) and RSΛ_{τ1,τ2}(Y) = η · RSΛ_{ητ1,ητ2}(Y).
We now derive an estimator for RSΛ_{τ1,τ2}(Y). Let τ1 ≠ τ2 > 0, and let Y_{1:p} denote the smallest order statistic of a random sample Y_1, …, Y_p drawn from K. For a non-negative random variable Y, the expectation satisfies μ = E(Y) = ∫₀^∞ K̄(y) dy. Thus, we can express
RSΛ_{τ1,τ2}(Y) = 1/(τ1 − τ2) [∫₀^∞ K̄^{τ1}(y) dy − ∫₀^∞ K̄^{τ2}(y) dy] = 1/(τ1 − τ2) [E(Y_{1:τ1}) − E(Y_{1:τ2})].
This motivates the construction of a U-statistic-based estimator for RSΛ_{τ1,τ2}(Y) as
R̂SΛ_{τ1,τ2}(Y) = 1/(τ1 − τ2) [ (1/C(p, τ1)) Σ_{A_{τ1,p}} min(Y_{j1}, Y_{j2}, …, Y_{jτ1}) − (1/C(p, τ2)) Σ_{A_{τ2,p}} min(Y_{j1}, Y_{j2}, …, Y_{jτ2}) ],
where, for i = 1, 2, the summation extends over the set A_{τi,p} of all C(p, τi) combinations of τi distinct indices {j1, j2, …, jτi} selected from {1, 2, …, p}, and C(p, τi) denotes the binomial coefficient.
Next, we simplify the expression for R̂SΛ_{τ1,τ2}(Y). Denote by Y_{j:p} the j-th order statistic of the sample Y_1, …, Y_p drawn from K. In terms of these order statistics, the following identities hold:
Σ_{1 ≤ s < j ≤ p} min{Y_j, Y_s} = Σ_{j=1}^p (p − j) Y_{j:p},
and
Σ_{1 ≤ r < s < j ≤ p} min{Y_j, Y_s, Y_r} = Σ_{j=1}^p ((p − j)(p − j − 1)/2) Y_{j:p} = Σ_{j=1}^p C(p − j, 2) Y_{j:p}.
Thus, the estimator in (8) can be alternatively written as
R̂SΛ_{τ1,τ2}(Y) = 1/(τ1 − τ2) [ (1/C(p, τ1)) Σ_{j=1}^p C(p − j, τ1 − 1) Y_{j:p} − (1/C(p, τ2)) Σ_{j=1}^p C(p − j, τ2 − 1) Y_{j:p} ].
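The equivalence of the combination form (8) and the order-statistic form (9) can be checked numerically. The Python sketch below (an illustration, not the authors' R code) evaluates both on a small simulated sample:

```python
import itertools, math, random

def est_combinations(sample, t1, t2):
    # Form (8): average the minimum over all subsets of sizes t1 and t2
    def avg_min(t):
        combos = list(itertools.combinations(sample, t))
        return sum(min(c) for c in combos) / len(combos)
    return (avg_min(t1) - avg_min(t2)) / (t1 - t2)

def est_order_stats(sample, t1, t2):
    # Form (9): equivalent closed form via order statistics and binomial weights
    p = len(sample)
    ys = sorted(sample)
    def term(t):
        return sum(math.comb(p - j, t - 1) * ys[j - 1] for j in range(1, p + 1)) / math.comb(p, t)
    return (term(t1) - term(t2)) / (t1 - t2)

random.seed(1)
data = [random.expovariate(0.5) for _ in range(12)]
a = est_combinations(data, 2, 4)
b = est_order_stats(data, 2, 4)
print(a, b)   # identical up to floating-point rounding
```

The order-statistic form needs only a sort and a single pass, which is why it is preferred in practice.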
We now examine the asymptotic properties of R̂SΛ_{τ1,τ2}(Y). It is evident that this estimator is both unbiased and consistent for RSΛ_{τ1,τ2}(Y) (see Lehmann [20]). The following theorem characterizes its asymptotic distribution.
Theorem 3.
As p → ∞, the scaled difference
√p ( R̂SΛ_{τ1,τ2}(Y) − RSΛ_{τ1,τ2}(Y) )
converges in distribution to a normal random variable with mean zero and variance
σ² = 1/(τ1 − τ2)² Var[ ( Y K̄^{τ1−1}(Y) + (τ1 − 1) ∫₀^Y z K̄^{τ1−2}(z) dK(z) ) − ( Y K̄^{τ2−1}(Y) + (τ2 − 1) ∫₀^Y z K̄^{τ2−2}(z) dK(z) ) ].
Proof. 
By employing the central limit theorem for U-statistics, the asymptotic normality of R̂SΛ_{τ1,τ2}(Y) follows. The asymptotic variance, denoted σ1², is provided by Lee [21] as
σ1² = 1/(τ1 − τ2)² Var[ E(min(Y_1, Y_2, …, Y_{τ1}) | Y_1) − E(min(Y_1, Y_2, …, Y_{τ2}) | Y_1) ].
Define Z = min(Y_2, Y_3, …, Y_{τi}); then its survival function is K̄^{τi−1}(y) for i = 1, 2. Notice that
E[min(y, Y_2, Y_3, …, Y_{τi})] = E[y 1(Z > y)] + E[Z 1(Z ≤ y)] = y K̄^{τi−1}(y) + (τi − 1) ∫₀^y z K̄^{τi−2}(z) dK(z).
Substituting this result into (11) yields the variance expression shown in (10), which completes the proof. □
Finally, the finite-sample performance of the estimator in (9) is assessed via Monte Carlo simulations, with the results detailed in the next subsection.

2.1. Simulation Studies

Using the exponential distribution with rate 0.5 and the gamma distribution with shape and scale parameters both equal to 2 (Gamma(2, 2)), we perform a comprehensive Monte Carlo simulation study to evaluate the performance of the estimator of the proposed measure. Using R software (version 4.4.1), the simulation is run ten thousand times with varying sample sizes. Table 1 and Table 2 present the theoretical value of the residual cumulative Sharma–Taneja–Mittal entropy as well as the variance and the root mean squared error (RMSE) of the estimator R̂SΛ_{τ1,τ2}(Y) for different values of τ1 and τ2. The results in Table 1 and Table 2 indicate that, for a fixed pair (τ1, τ2), increasing the sample size p significantly reduces both the variance and the RMSE of the residual cumulative Sharma–Taneja–Mittal entropy estimator, demonstrating enhanced estimation accuracy with larger p. Moreover, as the difference between τ1 and τ2 increases (reflected by the RSΛ_{τ1,τ2}(Y) index becoming less negative), the estimator exhibits further improvements, with lower variance and RMSE observed. This suggests that a larger disparity between the tuning parameters, in addition to an increased p, contributes to a more robust and efficient estimation by better balancing bias and variance in the estimation procedure.
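A minimal Python analogue of this Monte Carlo design illustrates how the variance and RMSE of the estimator shrink as p grows. The paper's study was run in R with 10,000 replications; the sketch below uses fewer replications for speed and is illustrative only:

```python
import math, random

def rs_estimate(sample, t1, t2):
    # Order-statistic form (9) of the U-statistic estimator
    p = len(sample)
    ys = sorted(sample)
    def term(t):
        return sum(math.comb(p - j, t - 1) * ys[j - 1] for j in range(1, p + 1)) / math.comb(p, t)
    return (term(t1) - term(t2)) / (t1 - t2)

def simulate(p, reps=2000, theta=0.5, t1=2, t2=4, seed=7):
    rng = random.Random(seed)
    true_val = (1 / (t1 - t2)) * (1 / (theta * t1) - 1 / (theta * t2))
    ests = [rs_estimate([rng.expovariate(theta) for _ in range(p)], t1, t2)
            for _ in range(reps)]
    mean = sum(ests) / reps
    var = sum((e - mean) ** 2 for e in ests) / reps
    rmse = math.sqrt(sum((e - true_val) ** 2 for e in ests) / reps)
    return var, rmse

v20, r20 = simulate(20)
v100, r100 = simulate(100)
print(v20, r20, v100, r100)   # variance and RMSE both drop as p increases
```

With a fixed seed the run is reproducible, and the monotone improvement with p mirrors the behavior reported in Tables 1 and 2.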
Under τ 1 = 2 , τ 2 = 4 , and p = 100 , Figure 1 and Figure 2 show the performance of the residual cumulative Sharma–Taneja–Mittal entropy estimator computed from 10,000 simulations using an exponential distribution with rate 0.5 and G a m m a ( 2 , 2 ) distribution, respectively. The top left (TL) panel displays the histogram with a kernel density overlay and the theoretical value marked by a dashed line. The top right (TR) panel shows the QQ-plot for assessing normality. The bottom left (BL) panel illustrates the running cumulative average, demonstrating convergence to the theoretical value, while the bottom right (BR) panel presents a boxplot summarizing the estimator’s dispersion.

2.2. Real Data Application

In this subsection, a real dataset is analyzed to illustrate the practical applicability of the proposed methodology. The dataset comprises records of malignant tumor cases collected from King Faisal Specialist Hospital and Research Centre, classified by tumor site and gender for the year 2023 in Saudi Arabia. The analysis is restricted to female patients from the Jeddah region. This dataset is publicly accessible via the Saudi Open Data Portal at https://open.data.gov.sa/ar/datasets/view/190d7539-ba90-47f2-a162-f2b64b3c41d1 (accessed on 5 November 2025).
The observations were modeled using an exponential distribution whose cdf is expressed as G(y) = 1 − e^{−θy}, θ > 0, y ≥ 0. Parameter estimation was carried out using the maximum likelihood method, resulting in an estimated rate parameter of θ̂ = 0.01798561. The goodness-of-fit of the model was assessed through two statistical tests:
  • Kolmogorov–Smirnov test: statistic = 0.29587, with p-value = 0.1168;
  • Anderson–Darling test: statistic = 2.471, with p-value = 0.05213.
Both tests support the adequacy of the exponential model in describing the observed data. Additionally, Figure 3 presents the histogram of the data along with the fitted pdf and a comparison between the empirical and fitted cdfs.
We draw 10,000 bootstrap samples to examine the estimator on the real dataset. Table 3 shows the bootstrap estimates of the proposed estimator for several combinations of (τ1, τ2) based on 10,000 resamples from the observed data. The results indicate that the bootstrap mean values are very close to the original estimates across all combinations, suggesting that the estimator is stable and exhibits only minor bias. The standard deviations and the associated 95% confidence intervals confirm that the variability of the estimator decreases as the values of (τ1, τ2) increase, reflecting improved precision for larger order parameters. Overall, the bootstrap analysis supports the consistency and robustness of the proposed estimator under repeated sampling.
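The bootstrap procedure can be sketched as follows. Since the hospital dataset itself is not reproduced here, the code draws a hypothetical Expo(0.018)-like stand-in sample (the estimated rate above was θ̂ ≈ 0.018), so the numbers are illustrative only:

```python
import math, random

def rs_estimate(sample, t1, t2):
    # Order-statistic form (9) of the U-statistic estimator
    p = len(sample)
    ys = sorted(sample)
    def term(t):
        return sum(math.comb(p - j, t - 1) * ys[j - 1] for j in range(1, p + 1)) / math.comb(p, t)
    return (term(t1) - term(t2)) / (t1 - t2)

rng = random.Random(42)
# Hypothetical stand-in for the tumor dataset (NOT the actual hospital records):
data = [rng.expovariate(0.018) for _ in range(30)]

point = rs_estimate(data, 2, 4)
boots = sorted(rs_estimate([rng.choice(data) for _ in data], 2, 4)
               for _ in range(2000))
mean_b = sum(boots) / len(boots)
lo, hi = boots[int(0.025 * len(boots))], boots[int(0.975 * len(boots))]
print(point, mean_b, (lo, hi))   # point estimate, bootstrap mean, 95% percentile CI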

3. Dynamic Residual Cumulative Sharma–Taneja–Mittal Entropy

In this section, we define the dynamic residual cumulative Sharma–Taneja–Mittal entropy. In analogy with (3), it is given by
DRSΛ_{τ1,τ2}(Y; l) = 1/(τ1 − τ2) [ ∫_l^∞ (K̄(y)/K̄(l))^{τ1} dy − ∫_l^∞ (K̄(y)/K̄(l))^{τ2} dy ],
with the condition that τ1 ≠ τ2 > 0, and we readily recognize that DRSΛ_{τ1,τ2}(Y; 0) = RSΛ_{τ1,τ2}(Y).
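For the exponential distribution, memorylessness gives K̄(y)/K̄(l) = K̄(y − l), so DRSΛ_{τ1,τ2}(Y; l) does not depend on l and equals RSΛ_{τ1,τ2}(Y). A short numerical check (illustrative, under the Expo(0.5) assumption with τ1 = 2, τ2 = 4):

```python
import math

def drs_entropy(sf, t1, t2, l, upper=100.0, n=200_000):
    # Midpoint-rule integration of ((sf(y)/sf(l))**t1 - (sf(y)/sf(l))**t2) over [l, upper]
    h = (upper - l) / n
    sl = sf(l)
    total = 0.0
    for i in range(n):
        r = sf(l + (i + 0.5) * h) / sl
        total += r**t1 - r**t2
    return total * h / (t1 - t2)

theta = 0.5
sf = lambda y: math.exp(-theta * y)
v0 = drs_entropy(sf, 2, 4, 0.0)
v3 = drs_entropy(sf, 2, 4, 3.0)
print(v0, v3)   # both are approximately -0.25, independent of l
```

This constancy in l is a useful baseline when reading the stochastic comparisons below: any departure from it signals aging behavior in the underlying distribution.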
Proposition 1.
D R S Λ τ 1 , τ 2 ( Y ; l ) is always non-positive.
Proof. 
For τ1 ≥ (≤) τ2, we have (K̄(y)/K̄(l))^{τ1} ≤ (≥) (K̄(y)/K̄(l))^{τ2} for y ≥ l, since K̄(y)/K̄(l) ≤ 1; dividing the resulting ordered difference of integrals by τ1 − τ2 proves the result. □
Lemma 1.
If Z = ψ(Y) for an increasing differentiable function ψ, the dynamic residual cumulative Sharma–Taneja–Mittal entropy of the random variable Z can be found using
DRSΛ_{τ1,τ2}(Z; l) = 1/(τ1 − τ2) [ (1/K̄^{τ1}(ψ^{-1}(l))) ∫_{ψ^{-1}(l)}^∞ K̄^{τ1}(y) ψ′(y) dy − (1/K̄^{τ2}(ψ^{-1}(l))) ∫_{ψ^{-1}(l)}^∞ K̄^{τ2}(y) ψ′(y) dy ].
Example 2.
Let Z = αY + β for a non-negative random variable Y, where α > 0 and β ≥ 0. Then, applying Lemma 1 with ψ(y) = αy + β, so that ψ^{-1}(l) = (l − β)/α and ψ′(y) = α, we have
DRSΛ_{τ1,τ2}(Z; l) = α/(τ1 − τ2) [ (1/K̄^{τ1}((l − β)/α)) ∫_{(l−β)/α}^∞ K̄^{τ1}(v) dv − (1/K̄^{τ2}((l − β)/α)) ∫_{(l−β)/α}^∞ K̄^{τ2}(v) dv ] = α · DRSΛ_{τ1,τ2}(Y; (l − β)/α).
Moreover, we can see that
1.
DRSΛ_{τ1,τ2}(αY; l) = α · DRSΛ_{τ1,τ2}(Y; l/α).
2.
DRSΛ_{τ1,τ2}(Y + β; l) = DRSΛ_{τ1,τ2}(Y; l − β).
Let DRSΛ_{τ1,τ2}(Y_1; l) and DRSΛ_{τ1,τ2}(Y_2; l) denote the dynamic residual cumulative Sharma–Taneja–Mittal entropies of the random variables Y_1 and Y_2, where Y_1 = ψ(Y_2). We now determine the conditions under which the transformation raises or lowers the dynamic residual cumulative Sharma–Taneja–Mittal entropy. To this end, we define the dispersive ordering of Y_1 and Y_2.
Definition 1.
A random variable Y_1 is said to be more dispersed than Y_2, denoted by Y_1 ≥_DISP Y_2, if and only if there exists a dilation function ψ(·) such that Y_1 = ψ(Y_2). This means that the function ψ satisfies ψ(y) − ψ(y*) ≥ y − y* for all y ≥ y*.
Theorem 4.
If it holds that Y_1 ≥_DISP Y_2 (or, respectively, Y_1 ≤_DISP Y_2), then the relationship DRSΛ_{τ1,τ2}(Y_1; l) ≤ (≥) DRSΛ_{τ1,τ2}(Y_2; l) is valid.
Proof. 
Since the dilation function ψ satisfies ψ′(y) ≥ 1 for all y, and using the non-positivity property established in Proposition 1, the required inequality follows immediately from Equation (13). Hence, the result is proved. □
Theorem 5.
Let Y be a continuous random variable with hazard rate function Hr(l) = k(l)/K̄(l) and survival function K̄(y). The connection between the dynamic residual cumulative Sharma–Taneja–Mittal entropy and the hazard rate function is provided by
Hr(l) = (d/dl) DRSΛ_{τ1,τ2}(Y; l) / [ τi DRSΛ_{τ1,τ2}(Y; l) + ∫_l^∞ (K̄(y)/K̄(l))^{τj} dy ],
where i ≠ j, i, j = 1, 2.
Proof. 
From (12), we have
(τ1 − τ2) DRSΛ_{τ1,τ2}(Y; l) = ∫_l^∞ (K̄(y)/K̄(l))^{τ1} dy − ∫_l^∞ (K̄(y)/K̄(l))^{τ2} dy.
Differentiating both sides with respect to l, we obtain
(τ1 − τ2) (d/dl) DRSΛ_{τ1,τ2}(Y; l) = ( −1 + τ1 Hr(l) ∫_l^∞ (K̄(y)/K̄(l))^{τ1} dy ) + ( 1 − τ2 Hr(l) ∫_l^∞ (K̄(y)/K̄(l))^{τ2} dy ),
which is equivalent to
(τ1 − τ2) (d/dl) DRSΛ_{τ1,τ2}(Y; l) = Hr(l) [ τ1 (τ1 − τ2) DRSΛ_{τ1,τ2}(Y; l) + (τ1 − τ2) ∫_l^∞ (K̄(y)/K̄(l))^{τ2} dy ].
Then,
Hr(l) = (d/dl) DRSΛ_{τ1,τ2}(Y; l) / [ τ1 DRSΛ_{τ1,τ2}(Y; l) + ∫_l^∞ (K̄(y)/K̄(l))^{τ2} dy ],
which proves the theorem; the case i = 2, j = 1 follows symmetrically. □
Proposition 2.
Assume that Y is a non-negative random variable with hazard rate function Hr(y) and survival function K̄(y). Then the survival function K̄(y) is uniquely determined by DRSΛ_{τ1,τ2}(Y; l).
In the following theorem, we demonstrate that the dynamic residual cumulative Sharma–Taneja–Mittal entropy can be represented as an expectation involving the mean residual life function MHr(l).
Theorem 6.
The dynamic residual cumulative Sharma–Taneja–Mittal entropy can be expressed as
DRSΛ_{τ1,τ2}(Y; l) = 1/(τ1 − τ2) [ (τ2 − 1) K̄^{1−τ2}(l) E( MHr(Y) K̄^{τ2−1}(Y) | Y > l ) − (τ1 − 1) K̄^{1−τ1}(l) E( MHr(Y) K̄^{τ1−1}(Y) | Y > l ) ].
Proof. 
We can write (12) as
DRSΛ_{τ1,τ2}(Y; l) = 1/(τ1 − τ2) [ K̄^{−τ1}(l) ∫_l^∞ K̄^{τ1}(y) dy − K̄^{−τ2}(l) ∫_l^∞ K̄^{τ2}(y) dy ].
By noting that, for i = 1, 2,
∫_l^∞ K̄^{τi}(y) dy = −∫_l^∞ (d/dy)( MHr(y) K̄(y) ) K̄^{τi−1}(y) dy = MHr(l) K̄^{τi}(l) − (τi − 1) ∫_l^∞ MHr(y) K̄^{τi−1}(y) k(y) dy,
and writing each remaining integral as a conditional expectation given Y > l, the result follows. □
The equilibrium random variable and the original random variable are related by the following theorem.
Theorem 7.
Assume that the cdf K has the decreasing mean residual life (DMRL) property. Then the equilibrium random variable Y_Eb obeys the inequality
DRSΛ_{τ1,τ2}(Y_Eb; l) ≥ DRSΛ_{τ1,τ2}(Y; l), τ1 > τ2 > 1.
Proof. 
The survival function of the equilibrium random variable Y_Eb is given by
K̄_{Y_Eb}(l) = MHr(l) K̄(l) / μ, where μ = E(Y).
Hence,
(τ1 − τ2) DRSΛ_{τ1,τ2}(Y_Eb; l) = ∫_l^∞ [ (K̄_{Y_Eb}(y)/K̄_{Y_Eb}(l))^{τ1} − (K̄_{Y_Eb}(y)/K̄_{Y_Eb}(l))^{τ2} ] dy = 1/(MHr(l) K̄(l))^{τ1} ∫_l^∞ (MHr(y) K̄(y))^{τ1} dy − 1/(MHr(l) K̄(l))^{τ2} ∫_l^∞ (MHr(y) K̄(y))^{τ2} dy.
Moreover, by the non-positivity property, the expression above reduces to (16) when the decreasing mean residual life property is applied. □

4. Alternative Dynamic Residual Cumulative Sharma–Taneja–Mittal Entropy

In this section, we discuss an alternative version of the dynamic residual cumulative Sharma–Taneja–Mittal entropy. In analogy with (4), we define it as follows:
DRSΛ*_{τ1,τ2}(Y; l) = 1/(τ1 − τ2) [ ∫₀^∞ k_l^{τ1}(y) dy − ∫₀^∞ k_l^{τ2}(y) dy ] = 1/(τ1 − τ2) [ ∫_l^∞ (k(y)/K̄(l))^{τ1} dy − ∫_l^∞ (k(y)/K̄(l))^{τ2} dy ] = 1/(τ1 − τ2) [ ∫₀^1 k_l^{τ1−1}(K̄_l^{-1}(u)) du − ∫₀^1 k_l^{τ2−1}(K̄_l^{-1}(u)) du ],
with the condition that τ1 ≠ τ2 > 0, where K̄_l^{-1}(u) = inf{y : K̄_l(y) ≤ u} is the quantile function of K̄_l(y) = K̄(y + l)/K̄(l), y, l > 0. We readily recognize that DRSΛ*_{τ1,τ2}(Y; 0) = SΛ_{τ1,τ2}(Y).
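For Expo(θ), direct calculation gives ∫_l^∞ (k(y)/K̄(l))^τ dy = θ^{τ−1}/τ, so DRSΛ*_{τ1,τ2}(Y; l) = (θ^{τ1−1}/τ1 − θ^{τ2−1}/τ2)/(τ1 − τ2), again free of l. The sketch below verifies this numerically (illustrative, with θ = 0.5, τ1 = 3, τ2 = 2):

```python
import math

def drs_star(pdf, sf, t1, t2, l, upper=100.0, n=200_000):
    # Midpoint-rule integration of ((pdf(y)/sf(l))**t1 - (pdf(y)/sf(l))**t2) over [l, upper]
    h = (upper - l) / n
    sl = sf(l)
    total = 0.0
    for i in range(n):
        y = l + (i + 0.5) * h
        r = pdf(y) / sl
        total += r**t1 - r**t2
    return total * h / (t1 - t2)

theta = 0.5
pdf = lambda y: theta * math.exp(-theta * y)
sf = lambda y: math.exp(-theta * y)
t1, t2 = 3.0, 2.0

numeric = drs_star(pdf, sf, t1, t2, 1.0)
closed = (theta**(t1 - 1) / t1 - theta**(t2 - 1) / t2) / (t1 - t2)
print(numeric, closed)   # both approximately -1/6
```

Note that here the ratio k(y)/K̄(l) can exceed one, which is exactly why Remark 1 below observes that this measure need not be non-positive.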
The value of the expression DRSΛ*_{τ1,τ2}(·; l) under an affine transformation is provided by the following lemma.
Lemma 2.
Define Z = αY + β for an absolutely continuous random variable Y, where α > 0 and β ≥ 0 are constants. Then,
DRSΛ*_{τ1,τ2}(Z; l) = 1/(τ1 − τ2) [ (1/α^{τ1−1}) ∫_{(l−β)/α}^∞ (k(y)/K̄((l − β)/α))^{τ1} dy − (1/α^{τ2−1}) ∫_{(l−β)/α}^∞ (k(y)/K̄((l − β)/α))^{τ2} dy ].
Proof. 
Using the transformation for the pdf, k_Z(αy + β) = (1/α) k(y), together with K̄_Z(l) = K̄((l − β)/α), we obtain
( k_Z(αy + β)/K̄_Z(l) )^{τi} = ( (1/α) k(y) / K̄((l − β)/α) )^{τi} = (1/α^{τi}) ( k(y) / K̄((l − β)/α) )^{τi}, i = 1, 2.
Then, the result follows. □
Ebrahimi [6] introduced two non-parametric distribution classes characterized by the function D R Λ * ( Y ; l ) .
Definition 2.
A random variable Y is said to exhibit a decreasing (or increasing) life residual property, denoted as DLR (or ILR), if the function DRΛ*(Y; l) is monotonically decreasing (or increasing) for every l ≥ 0.
Similarly, using the measure D R S Λ τ 1 , τ 2 * ( Y ; l ) , we define another set of nonparametric classes.
Definition 3.
A non-negative random variable Y is classified as having a decreasing (or increasing) life residual, referred to as DLR (or ILR), if the function DRSΛ*_{τ1,τ2}(Y; l) is decreasing (or increasing) for all l ≥ 0.
We give a counterexample below to demonstrate that not every distribution is monotone with respect to DRSΛ*_{τ1,τ2}(Y; l).
Example 3.
(Illustrative Counterexample) Consider a nonnegative random variable Y whose survival function is defined as
K̄(y) = 1 − (1 − e^{−y})(1 − e^{−2y}), y ≥ 0.
Accordingly, the probability density function of Y is given by
k(y) = e^{−y} + 2e^{−2y} − 3e^{−3y}, y > 0.
Now, set τ1 = 3 and τ2 = 2. Then the expression
DRSΛ*_{τ1,τ2}(Y; l) = 1/(τ1 − τ2) [ ∫_l^∞ (k(y)/K̄(l))^{τ1} dy − ∫_l^∞ (k(y)/K̄(l))^{τ2} dy ] = 1/(τ1 − τ2) [ ∫_l^∞ ( (e^{−y} + 2e^{−2y} − 3e^{−3y}) / (e^{−l} + e^{−2l} − e^{−3l}) )^{τ1} dy − ∫_l^∞ ( (e^{−y} + 2e^{−2y} − 3e^{−3y}) / (e^{−l} + e^{−2l} − e^{−3l}) )^{τ2} dy ]
is obtained. As illustrated in Figure 4, the function DRSΛ*_{τ1,τ2}(Y; l) is not monotonic.
Remark 1.
Unlike Proposition 1, we find that DRSΛ*_{τ1,τ2}(Y; l) is not always non-positive, as shown in Figure 4. This is explained by the fact that the ratio k(y)/K̄(l) is not necessarily bounded above by one.

4.1. Record Values

Let us look at a technical mechanism that experiences shocks such as voltage spikes. Given a common continuous cdf K(·), pdf k(·), and survival function K̄(l) = 1 − K(l), the shocks may be treated as a sequence of independent and identically distributed random variables {Y_j, j ≥ 1}. The shocks represent the stresses placed on the mechanism at various points in time. We are interested in the record statistics of this sequence, that is, the values of the greatest stresses recorded thus far. Let Y_{j:p} denote the j-th order statistic from the first p occurrences. We then define the sequences of upper record values R(p) and upper record times L_p, p ≥ 1, as follows:
R(p) = Y_{L_p:L_p}, p = 0, 1, …,
noting that
L_0 = 1, L_p = min{ i : i > L_{p−1}, Y_i > R(p − 1) }, p ≥ 1.
Knowing the (upper) incomplete gamma function,
IΓ(α, y) = ∫_y^∞ v^{α−1} e^{−v} dv, α, y > 0,
we can see that the pdf and the survival function of R(p), denoted k_{R(p)}(y) and K̄_{R(p)}(y), respectively, are given by (see, e.g., [22])
k_{R(p)}(y) = ( [−ln K̄(y)]^{p−1} / (p − 1)! ) k(y), y ≥ 0,
and
K̄_{R(p)}(y) = K̄(y) Σ_{t=0}^{p−1} [−ln K̄(y)]^t / t! = IΓ(p, −ln K̄(y)) / (p − 1)!, y ≥ 0.
In this part, our focus is on analyzing an alternative formulation of the dynamic residual cumulative Sharma–Taneja–Mittal entropy associated with the random variable Y R ( p ) . This measure serves to assess the level of uncertainty arising from the density function of [ Y R ( p ) l | Y R ( p ) > l ] , which characterizes the remaining lifetime of the system and its predictability. To simplify the computational process, we present a lemma establishing the connection between this alternative formulation of dynamic residual cumulative Sharma–Taneja–Mittal entropy for record value from a uniform distribution and the incomplete gamma function. This link is practically significant as it facilitates a more efficient computation of the proposed entropy measure.
Lemma 3.
Consider a sequence { R j , j 1 } of independent and identically distributed random variables following a uniform distribution. Define R ˇ ( p ) as the p-th upper record value within this sequence. Then, the following holds:
DRSΛ*_{τ1,τ2}(Ř(p); l) = 1/(τ1 − τ2) [ IΓ(τ1(p − 1) + 1, −ln(1 − l)) / IΓ^{τ1}(p, −ln(1 − l)) − IΓ(τ2(p − 1) + 1, −ln(1 − l)) / IΓ^{τ2}(p, −ln(1 − l)) ],
for all τ1 ≠ τ2 > 0 and 0 < l < 1.
Proof. 
For the given sequence { R j , j 1 } , the random variables are uniformly distributed on ( 0 , 1 ) , so the survival function is
K̄(y) = 1 − y, 0 < y < 1.
Using (18) and (19), we have
I* = ∫_l^1 ( k_{Ř(p)}(y) / K̄_{Ř(p)}(l) )^{τi} dy = ∫_l^1 ( [−ln(1 − y)]^{p−1} / IΓ(p, −ln(1 − l)) )^{τi} dy, i = 1, 2.
Changing the variable,
v = −ln(1 − y), dv = dy/(1 − y), i.e., dy = e^{−v} dv.
Rewriting the integral in terms of v, where y = 1 − e^{−v}, so that v = −ln(1 − l) when y = l and v → ∞ as y → 1, we obtain
I* = ∫_{−ln(1−l)}^∞ ( v^{p−1} / IΓ(p, −ln(1 − l)) )^{τi} e^{−v} dv = ∫_{−ln(1−l)}^∞ v^{τi(p−1)} e^{−v} / IΓ^{τi}(p, −ln(1 − l)) dv.
Recognizing this integral as an incomplete gamma function,
I* = IΓ(τi(p − 1) + 1, −ln(1 − l)) / IΓ^{τi}(p, −ln(1 − l)),
which completes the proof. □
By utilizing this lemma, practitioners and researchers may easily use the well-known incomplete gamma function to calculate the alternative formulation of the dynamic residual cumulative Sharma–Taneja–Mittal entropy of record values from a uniform distribution. This computational reduction improves the measure’s applicability and usability in a variety of scenarios. For the values τ1 = 0.5, τ2 = 1 and τ1 = 1, τ2 = 2, together with p = 2, …, 5, we plot DRSΛ*_{τ1,τ2}(Ř(p); l) in Figure 5.
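Lemma 3 can also be verified numerically by comparing direct integration of (k_{Ř(p)}(y)/K̄_{Ř(p)}(l))^{τi} over y ∈ (l, 1) with the incomplete gamma expression. The Python sketch below approximates IΓ(α, x) by simple quadrature (an illustration with parameters p = 3, τ1 = 1, τ2 = 2, l = 0.3, not a production implementation):

```python
import math

def upper_inc_gamma(a, x, upper=60.0, n=200_000):
    # IGamma(a, x) = integral of v**(a-1) * exp(-v) over [x, upper];
    # the tail beyond ~60 is negligible for the parameters used here.
    h = (upper - x) / n
    total = 0.0
    for i in range(n):
        v = x + (i + 0.5) * h
        total += v ** (a - 1) * math.exp(-v)
    return total * h

def term_direct(p, t, l, n=200_000):
    # integral over y in (l, 1) of ([-ln(1-y)]**(p-1) / IGamma(p, -ln(1-l)))**t
    c = -math.log1p(-l)
    denom = upper_inc_gamma(p, c) ** t
    h = (1.0 - l) / n
    s = 0.0
    for i in range(n):
        y = l + (i + 0.5) * h
        s += (-math.log1p(-y)) ** ((p - 1) * t)
    return s * h / denom

def term_gamma(p, t, l):
    # same quantity via Lemma 3
    c = -math.log1p(-l)
    return upper_inc_gamma(t * (p - 1) + 1, c) / upper_inc_gamma(p, c) ** t

p, t1, t2, l = 3, 1.0, 2.0, 0.3
a = (term_direct(p, t1, l) - term_direct(p, t2, l)) / (t1 - t2)
b = (term_gamma(p, t1, l) - term_gamma(p, t2, l)) / (t1 - t2)
print(a, b)
```

The two routes integrate entirely different integrands (one in y, one in v), so their agreement confirms the change-of-variables step in the proof.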
We write V ∼ TΓ_l(α, β) to signify that the random variable V follows a truncated gamma distribution, characterized by the pdf
k_V(v) = ( β^α / IΓ(α, βl) ) v^{α−1} e^{−βv}, v > l > 0,
where α and β are both positive parameters. The following theorem establishes a connection between the alternative expression of the dynamic residual cumulative Sharma–Taneja–Mittal entropy for record values R(p) and this entropy measure for record values derived from a uniform distribution.
Theorem 8.
Let {Y_j, j ≥ 1} be a sequence of independent and identically distributed random variables with cdf K and pdf k, and let R(p) denote the p-th upper record value of the sequence. Then, for any τ1 ≠ τ2 > 0, the alternative expression of the dynamic residual cumulative Sharma–Taneja–Mittal entropy of R(p) is given by
DRSΛ*_{τ1,τ2}(R(p); l) = 1/(τ1 − τ2) { (τ1 − τ2) DRSΛ*_{τ1,τ2}(Ř(p); K(l)) E[ k^{τ1−1}(K^{-1}(1 − e^{−V(p)(1)})) ] + ( IΓ(τ2(p − 1) + 1, −ln K̄(l)) / IΓ^{τ2}(p, −ln K̄(l)) ) ( E[ k^{τ1−1}(K^{-1}(1 − e^{−V(p)(1)})) ] − E[ k^{τ2−1}(K^{-1}(1 − e^{−V(p)(2)})) ] ) }, l > 0,
where V(p)(i) ∼ TΓ_{−ln K̄(l)}(τi(p − 1) + 1, 1), i = 1, 2.
Proof. 
Using (18) and (19), we have
∫_l^∞ ( k_{R(p)}(y) / K̄_{R(p)}(l) )^{τi} dy = ∫_l^∞ ( [−ln(1 − K(y))]^{p−1} k(y) / IΓ(p, −ln(1 − K(l))) )^{τi} dy = ( IΓ(τi(p − 1) + 1, −ln(1 − K(l))) / IΓ^{τi}(p, −ln(1 − K(l))) ) ∫_{K(l)}^1 ( (−ln(1 − u))^{τi(p−1)} k^{τi−1}(K^{-1}(u)) / IΓ(τi(p − 1) + 1, −ln(1 − K(l))) ) du = ( IΓ(τi(p − 1) + 1, −ln(1 − K(l))) / IΓ^{τi}(p, −ln(1 − K(l))) ) ∫_{−ln(1−K(l))}^∞ ( z^{τi(p−1)} e^{−z} k^{τi−1}(K^{-1}(1 − e^{−z})) / IΓ(τi(p − 1) + 1, −ln(1 − K(l))) ) dz,
where i = 1 , 2 . Then, by applying Lemma 3, the proof is completed. □
The following example shows the application of the previous theorem.
Example 4.
We assume a sequence {Y_j, j ≥ 1} of independent and identically distributed random variables following a common Weibull distribution, whose cdf is given by
K(y) = 1 − e^{−y³}, y > 0.
The inverse cdf of Y may be found as
K^{-1}(v) = (−ln(1 − v))^{1/3}, 0 < v < 1.
After that, we may compute
E[ k^{τi−1}(K^{-1}(1 − e^{−V(p)(i)})) ] = ( 3^{τi−1} / τi^{τi(p − 1/3) + 1/3} ) IΓ(τi(p − 1/3) + 1/3, τi l³) / IΓ(τi(p − 1) + 1, l³), i = 1, 2.
Thus, utilizing (21), we obtain
$$
DRS\Lambda^{*}_{\tau_1,\tau_2}(R(p); l) = \frac{1}{\tau_1-\tau_2} \left[ \frac{3^{\tau_1-1}\, \mathrm{I}\Gamma\big(\tau_1(p-\tfrac{1}{3})+\tfrac{1}{3},\, \tau_1 l^{3}\big)}{\tau_1^{\,\tau_1(p-\frac{1}{3})+\frac{1}{3}}\, \mathrm{I}\Gamma^{\tau_1}\big(p,\, l^{3}\big)} - \frac{3^{\tau_2-1}\, \mathrm{I}\Gamma\big(\tau_2(p-\tfrac{1}{3})+\tfrac{1}{3},\, \tau_2 l^{3}\big)}{\tau_2^{\,\tau_2(p-\frac{1}{3})+\frac{1}{3}}\, \mathrm{I}\Gamma^{\tau_2}\big(p,\, l^{3}\big)} \right], \quad p \ge 1.
$$
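Example 4's closed form can be checked numerically against direct integration of $\big(k_{R(p)}(y)/\bar{K}_{R(p)}(l)\big)^{\tau_i}$. The sketch below takes $\mathrm{I}\Gamma(a, x)$ to be the (unregularized) upper incomplete gamma function $\int_x^{\infty} t^{a-1} e^{-t}\, dt$, and uses hypothetical values $p = 2$, $l = 0.5$, $\tau_1 = 3$, $\tau_2 = 2$; the quadrature helper is a plain trapezoid rule.

```python
import math

def trap(f, a, b, n):
    """Composite trapezoidal rule for the integral of f over [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

def upper_gamma(a, x):
    """Unregularized upper incomplete gamma IG(a, x) = int_x^inf t^(a-1) e^(-t) dt (tail cut at t = 60)."""
    return trap(lambda t: t ** (a - 1) * math.exp(-t), x, 60.0, 100_000)

p, l, t1, t2 = 2, 0.5, 3.0, 2.0          # record index, time point, STM parameters (hypothetical)
C = upper_gamma(p, l ** 3)               # IG(p, l^3) = Gamma(p) * survival of R(p) at l for this Weibull

def lhs(tau):
    # direct evaluation of int_l^inf ( k_{R(p)}(y) / Kbar_{R(p)}(l) )^tau dy,
    # with k_{R(p)}(y) = y^{3(p-1)} * 3y^2 e^{-y^3} / Gamma(p)
    return trap(lambda y: (3.0 * y ** (3 * p - 1) * math.exp(-y ** 3) / C) ** tau,
                l, 5.0, 50_000)

def rhs(tau):
    # closed form from Example 4: 3^{tau-1} IG(tau(p-1/3)+1/3, tau l^3) / ( tau^{tau(p-1/3)+1/3} IG^tau(p, l^3) )
    a = tau * (p - 1.0 / 3.0) + 1.0 / 3.0
    return 3.0 ** (tau - 1.0) * upper_gamma(a, tau * l ** 3) / (tau ** a * C ** tau)

D_direct = (lhs(t1) - lhs(t2)) / (t1 - t2)
D_closed = (rhs(t1) - rhs(t2)) / (t1 - t2)
print(D_direct, D_closed)
```

The two values agree to within quadrature error, which supports both the closed form and the reading of $\mathrm{I}\Gamma$ as the unregularized upper incomplete gamma function.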

4.2. Conditional Entropy of Record Values

Given that all units have lifetimes exceeding $l > 0$, we are now interested in assessing the residual record values $Y_{R(p)} - l$, $l \ge 0$. The survival function of $Y^{0}_{R(p),l} = [\, Y_{R(p)} - l \mid Y_{R(0)} > l \,]$ can be expressed as follows (see [23]):
$$
\bar{K}_{R(p),l}(y) = P\big( Y_{R(p)} - l > y \mid Y_{R(0)} > l \big) = \frac{\mathrm{I}\Gamma\big(p,\, -\ln \bar{K}_l(y)\big)}{\Gamma(p)},
$$
where $\bar{K}_l(y) = \bar{K}(y+l)/\bar{K}(l)$ is the survival function of the residual lifetime.
Consequently, the corresponding pdf is
$$
k_{R(p),l}(y) = \frac{[-\ln \bar{K}_l(y)]^{p-1}}{(p-1)!}\, k_l(y), \quad y \ge 0,\ l \ge 0,
$$
where $k_l(y) = k(y+l)/\bar{K}(l)$.
In the following part, we examine the Sharma–Taneja–Mittal entropy of the random variable $Y^{0}_{R(p),l}$, which quantifies the uncertainty contained in the density of $[\, Y_{R(p)} - l \mid Y_{R(0)} > l \,]$, that is, in the framework's residual record lifetime. A key tool is the probability integral transformation $V = K_l\big(Y^{0}_{R(p),l}\big)$, where $K_l = 1 - \bar{K}_l$; the transformed variable $V$ has the pdf
$$
h_{R(p)}(v) = \frac{(-\ln(1-v))^{p-1}}{(p-1)!}, \quad 0 < v < 1,\ p \ge 1.
$$
In the next theorem, we use the previously described transformations to give an expression for the alternative formulation of the dynamic residual cumulative Sharma–Taneja–Mittal entropy of $Y^{0}_{R(p),l}$.
Theorem 9.
Let $\{Y_j,\ j \ge 1\}$ be a sequence of i.i.d. random variables with cdf $K$ and pdf $k$. The alternative expression of the dynamic residual cumulative Sharma–Taneja–Mittal entropy of $Y^{0}_{R(p),l}$ can be stated as
$$
DRS\Lambda^{*}_{\tau_1,\tau_2}\big(Y^{0}_{R(p),l}\big) = \frac{1}{\tau_1-\tau_2} \left[ \int_0^1 h_{R(p)}^{\tau_1}(v)\, k_l^{\tau_1-1}\big(K_l^{-1}(v)\big)\, dv - \int_0^1 h_{R(p)}^{\tau_2}(v)\, k_l^{\tau_2-1}\big(K_l^{-1}(v)\big)\, dv \right],
$$
for every $\tau_1, \tau_2 > 0$ with $\tau_1 \neq \tau_2$, and $l > 0$.
Proof. 
From (17) and (22), using the change of variable $v = K_l(y)$, we obtain
$$
\begin{aligned}
(\tau_1-\tau_2)\, DRS\Lambda^{*}_{\tau_1,\tau_2}\big(Y^{0}_{R(p),l}\big)
&= \int_0^{\infty} k_{Y^{0}_{R(p),l}}^{\tau_1}(y)\, dy - \int_0^{\infty} k_{Y^{0}_{R(p),l}}^{\tau_2}(y)\, dy \\
&= \int_0^{\infty} \left( \frac{[-\ln \bar{K}_l(y)]^{p-1}}{(p-1)!}\, k_l(y) \right)^{\tau_1} dy - \int_0^{\infty} \left( \frac{[-\ln \bar{K}_l(y)]^{p-1}}{(p-1)!}\, k_l(y) \right)^{\tau_2} dy \\
&= \int_0^1 \left( \frac{[-\ln(1-v)]^{p-1}}{(p-1)!} \right)^{\tau_1} k_l^{\tau_1-1}\big(K_l^{-1}(v)\big)\, dv - \int_0^1 \left( \frac{[-\ln(1-v)]^{p-1}}{(p-1)!} \right)^{\tau_2} k_l^{\tau_2-1}\big(K_l^{-1}(v)\big)\, dv \\
&= \int_0^1 h_{R(p)}^{\tau_1}(v)\, k_l^{\tau_1-1}\big(K_l^{-1}(v)\big)\, dv - \int_0^1 h_{R(p)}^{\tau_2}(v)\, k_l^{\tau_2-1}\big(K_l^{-1}(v)\big)\, dv.
\end{aligned}
$$
The final equality completes the proof since h R ( p ) ( v ) represents the pdf of V stated in (23). □
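Theorem 9 can be checked numerically. The sketch below assumes i.i.d. exponential lifetimes with rate $\lambda$ (a hypothetical choice): by memorylessness $k_l = k$, so $Y^{0}_{R(p),l}$ has a Gamma($p$, $\lambda$) density that does not depend on $l$, and the quantile-domain form of the theorem (with $k_l(K_l^{-1}(v)) = \lambda(1-v)$) can be compared against direct integration of that density.

```python
import math
from math import exp, log, factorial

def trap(f, a, b, n=200_000):
    # composite trapezoidal rule on [a, b]
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

p, lam = 2, 1.0                      # record index and exponential rate (hypothetical choices)
fac = factorial(p - 1)

def direct(tau):
    # by memorylessness k_l = k, and the residual record has the Gamma(p, lam) density
    f = lambda y: ((lam * y) ** (p - 1) * lam * exp(-lam * y) / fac) ** tau
    return trap(f, 0.0, 50.0)

def quantile_form(tau):
    # int_0^1 h_{R(p)}^tau(v) k_l^{tau-1}(K_l^{-1}(v)) dv with k_l(K_l^{-1}(v)) = lam (1 - v);
    # the upper limit is nudged below 1 to avoid log(0) (the omitted tail is negligible)
    f = lambda v: ((-log(1.0 - v)) ** (p - 1) / fac) ** tau * (lam * (1.0 - v)) ** (tau - 1.0)
    return trap(f, 0.0, 1.0 - 1e-10)

t1, t2 = 3.0, 2.0
d_val = (direct(t1) - direct(t2)) / (t1 - t2)
q_val = (quantile_form(t1) - quantile_form(t2)) / (t1 - t2)
print(d_val, q_val)
```

For this Gamma(2, 1) case the direct value also has a closed form, $\big(\Gamma(4)/3^4 - \Gamma(3)/2^3\big)/(\tau_1-\tau_2)$, which the quadrature reproduces.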

5. Sharma–Taneja–Mittal Entropy of Mixed and Coherent Systems

Based on the definition of continuous Sharma–Taneja–Mittal entropy given in Equation (5), we can express the continuous Sharma–Taneja–Mittal entropy of a random variable Y as follows:
Proposition 3.
Consider $Y$ a continuous random variable with probability density function (pdf) $k(y)$. Then, an alternative formulation of the Sharma–Taneja–Mittal entropy in terms of the hazard rate function $H_r(l) = k(l)/\bar{K}(l)$ is
$$
S\Lambda_{\tau_1,\tau_2}(Y) = \frac{1}{\tau_1-\tau_2} \left[ \int_0^{\infty} k^{\tau_1}(y)\, dy - \int_0^{\infty} k^{\tau_2}(y)\, dy \right]
= \frac{1}{\tau_1-\tau_2} \left[ \frac{1}{\tau_1}\, \mathbb{E}\big[ (H_r(Y_{\tau_1}))^{\tau_1-1} \big] - \frac{1}{\tau_2}\, \mathbb{E}\big[ (H_r(Y_{\tau_2}))^{\tau_2-1} \big] \right],
$$
where the pdf of the transformed random variable Y τ i , for i = 1 , 2 , is given by
$$
k_{Y_{\tau_i}}(y) = \tau_i\, \bar{K}^{\tau_i-1}(y)\, k(y).
$$
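The change of density behind Proposition 3 gives $\int_0^{\infty} k^{\tau}(y)\, dy = \frac{1}{\tau}\, \mathbb{E}\big[(H_r(Y_{\tau}))^{\tau-1}\big]$ for each $\tau$, which can be confirmed numerically; note that the $1/\tau_i$ factors follow directly from the stated density of $Y_{\tau_i}$. A minimal sketch for a Rayleigh distribution (a hypothetical choice, with $k(y) = 2y e^{-y^2}$ and hazard $H_r(y) = 2y$):

```python
import math

def trap(f, a, b, n=100_000):
    # composite trapezoidal rule on [a, b]
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

# Rayleigh example: k(y) = 2y e^{-y^2}, survival Kbar(y) = e^{-y^2}, hazard H_r(y) = 2y
k  = lambda y: 2.0 * y * math.exp(-y * y)
sf = lambda y: math.exp(-y * y)
Hr = lambda y: 2.0 * y

def int_k_pow(tau):
    # int_0^inf k^tau(y) dy
    return trap(lambda y: k(y) ** tau, 0.0, 8.0)

def mean_hazard_pow(tau):
    # E[(H_r(Y_tau))^{tau-1}], where Y_tau has pdf tau * Kbar^{tau-1}(y) k(y)
    return trap(lambda y: Hr(y) ** (tau - 1.0) * tau * sf(y) ** (tau - 1.0) * k(y), 0.0, 8.0)

t1, t2 = 3.0, 2.0
stm_direct = (int_k_pow(t1) - int_k_pow(t2)) / (t1 - t2)
stm_hazard = (mean_hazard_pow(t1) / t1 - mean_hazard_pow(t2) / t2) / (t1 - t2)
print(stm_direct, stm_hazard)
```

Both routes give the same Sharma–Taneja–Mittal entropy value, confirming the hazard-rate representation together with its normalizing factors.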
Following the approach of Shaked and Shanthikumar [24], with the addition of Definition 1, we make use of specific stochastic order relations, including the stochastic order ( O S T ), the hazard rate order ( O H R ), and the dispersive order ( O D I S ). Furthermore, the relationships among these orders can be summarized as
  • $O_{HR} \Rightarrow O_{ST}$;
  • $O_{DIS} \Rightarrow O_{ST}$.
Definition 4.
Let $Y_1$ and $Y_2$ be two non-negative continuous random variables with pdfs $k_1(y)$ and $k_2(y)$, respectively. Then, we say that $Y_1$ is smaller than $Y_2$ in the sense of Sharma–Taneja–Mittal entropy, denoted $Y_1 \le_{STM} Y_2$, if
$$
S\Lambda_{\tau_1,\tau_2}(Y_1) \le S\Lambda_{\tau_1,\tau_2}(Y_2),
$$
where $\tau_1, \tau_2 > 0$ with $\tau_1 \neq \tau_2$.
Remark 2.
The following results will be discussed under the conditions $\tau_1 > \tau_2 > 1$ and $k^{\tau_1}(y) > k^{\tau_2}(y)$ for all $y > 0$. These conditions are assumed to hold irrespective of the particular form of the pdf.
Theorem 10.
Suppose that $Y_1$ and $Y_2$ are non-negative continuous random variables, with pdfs $k_1, k_2$ and cdfs $K_1, K_2$, respectively. From (24), if $Y_1\, O_{DIS}\, Y_2$, then $Y_1 \le_{STM} Y_2$, provided the conditions in Remark 2 hold.
Proof. 
From (24) with $\tau_1 > \tau_2 > 1$, if $Y_1\, O_{DIS}\, Y_2$, then
$$
\begin{aligned}
(\tau_1-\tau_2)\, S\Lambda_{\tau_1,\tau_2}(Y_2)
&= \int_0^1 k_2^{\tau_1-1}\big(K_2^{-1}(v)\big)\, dv - \int_0^1 k_2^{\tau_2-1}\big(K_2^{-1}(v)\big)\, dv \\
&\ge \int_0^1 k_1^{\tau_1-1}\big(K_1^{-1}(v)\big)\, dv - \int_0^1 k_1^{\tau_2-1}\big(K_1^{-1}(v)\big)\, dv = (\tau_1-\tau_2)\, S\Lambda_{\tau_1,\tau_2}(Y_1).
\end{aligned}
$$
Then, the result follows. □
This section presents aspects of the Sharma–Taneja–Mittal entropy of coherent (and mixed) structures. A system is said to be coherent if its structure function is monotone and all of its components are relevant. One specific example of a coherent structure is the r-out-of-p system. Additionally, following Samaniego [25], a stochastic mixture of coherent systems is regarded as a mixed system. In the independent and identically distributed case, the reliability function of the mixed system lifetime $L$ is given by
$$
\bar{K}_L(l) = \sum_{r=1}^{p} \delta_r\, \bar{K}_{r:p}(l),
$$
where the survival function of the order statistics $Y_{1:p}, \ldots, Y_{p:p}$ is $\bar{K}_{r:p}(l) = \sum_{i=0}^{r-1} \binom{p}{i} K^{i}(l)\, \bar{K}^{p-i}(l)$, $r = 1, 2, \ldots, p$. The pdf of the mixed system lifetime $L$ is given by
$$
k_L(l) = \sum_{r=1}^{p} \delta_r\, k_{r:p}(l),
$$
where, for $1 \le r \le p$,
$$
k_{r:p}(l) = \frac{\Gamma(p+1)}{\Gamma(r)\, \Gamma(p-r+1)}\, K^{r-1}(l)\, \bar{K}^{p-r}(l)\, k(l).
$$
The vector $\delta = (\delta_1, \ldots, \delta_p)$ represents the system's signature, where $\delta_r = P(L = Y_{r:p})$ and $\sum_{r=1}^{p} \delta_r = 1$, $1 \le r \le p$. The pdf of the probability integral transform of the $r$-th order statistic, $V_{r:p} = K(Y_{r:p})$, $1 \le r \le p$, is
$$
h_r(v) = \frac{\Gamma(p+1)}{\Gamma(r)\, \Gamma(p-r+1)}\, v^{r-1} (1-v)^{p-r}, \quad 0 < v < 1.
$$
Consequently, the pdf of $V = K(L)$ is
$$
h_V(v) = \sum_{r=1}^{p} \delta_r\, h_r(v).
$$
Utilizing the above transformations, the following theorem gives the Sharma–Taneja–Mittal entropy of $L$.
Theorem 11.
The Sharma–Taneja–Mittal entropy of the mixed system lifetime $L$ is
$$
S\Lambda_{\tau_1,\tau_2}(L) = \frac{1}{\tau_1-\tau_2} \left[ \int_0^1 h_V^{\tau_1}(v)\, k^{\tau_1-1}\big(K^{-1}(v)\big)\, dv - \int_0^1 h_V^{\tau_2}(v)\, k^{\tau_2-1}\big(K^{-1}(v)\big)\, dv \right],
$$
where $h_V(v)$ is specified in Equation (26).
Proof. 
After applying the transformation v = K ( l ) to (24), we obtain
$$
\begin{aligned}
S\Lambda_{\tau_1,\tau_2}(L)
&= \frac{1}{\tau_1-\tau_2} \left[ \int_0^{\infty} \bigg( \sum_{r=1}^{p} \delta_r\, k_{r:p}(l) \bigg)^{\tau_1} dl - \int_0^{\infty} \bigg( \sum_{r=1}^{p} \delta_r\, k_{r:p}(l) \bigg)^{\tau_2} dl \right] \\
&= \frac{1}{\tau_1-\tau_2} \Bigg[ \int_0^1 \bigg( \sum_{r=1}^{p} \delta_r\, \frac{\Gamma(p+1)}{\Gamma(r)\Gamma(p-r+1)}\, v^{r-1}(1-v)^{p-r} \bigg)^{\tau_1} k^{\tau_1-1}\big(K^{-1}(v)\big)\, dv \\
&\qquad\qquad - \int_0^1 \bigg( \sum_{r=1}^{p} \delta_r\, \frac{\Gamma(p+1)}{\Gamma(r)\Gamma(p-r+1)}\, v^{r-1}(1-v)^{p-r} \bigg)^{\tau_2} k^{\tau_2-1}\big(K^{-1}(v)\big)\, dv \Bigg] \\
&= \frac{1}{\tau_1-\tau_2} \left[ \int_0^1 h_V^{\tau_1}(v)\, k^{\tau_1-1}\big(K^{-1}(v)\big)\, dv - \int_0^1 h_V^{\tau_2}(v)\, k^{\tau_2-1}\big(K^{-1}(v)\big)\, dv \right]. \qquad \square
\end{aligned}
$$
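Theorem 11 can be checked numerically by comparing its quantile-domain form with direct integration of $k_L$. The sketch below assumes exponential(1) components (for which $k(K^{-1}(v)) = 1-v$) and a hypothetical signature $\delta = (0.2, 0.5, 0.3)$ with $p = 3$:

```python
import math
from math import exp, factorial

def trap(f, a, b, n=50_000):
    # composite trapezoidal rule on [a, b]
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

p = 3
delta = [0.2, 0.5, 0.3]                  # hypothetical signature, sums to 1
K = lambda l: 1.0 - exp(-l)              # exponential(1) component cdf (hypothetical choice)
k = lambda l: exp(-l)

def coef(r):
    # Gamma(p+1) / (Gamma(r) Gamma(p-r+1))
    return factorial(p) / (factorial(r - 1) * factorial(p - r))

def k_order(r, l):
    # pdf of the r-th order statistic Y_{r:p}
    return coef(r) * K(l) ** (r - 1) * (1.0 - K(l)) ** (p - r) * k(l)

def h_V(v):
    # pdf of V = K(L): signature mixture of beta densities h_r(v)
    return sum(d * coef(r) * v ** (r - 1) * (1.0 - v) ** (p - r)
               for r, d in enumerate(delta, start=1))

def stm_direct(t1, t2):
    # 1/(t1-t2) [ int k_L^{t1} dl - int k_L^{t2} dl ], with k_L = sum_r delta_r k_{r:p}
    kL = lambda l: sum(d * k_order(r, l) for r, d in enumerate(delta, start=1))
    return (trap(lambda l: kL(l) ** t1, 0.0, 40.0)
            - trap(lambda l: kL(l) ** t2, 0.0, 40.0)) / (t1 - t2)

def stm_quantile(t1, t2):
    # Theorem 11 form on (0, 1); for exponential(1), k(K^{-1}(v)) = 1 - v
    g = lambda v, t: h_V(v) ** t * (1.0 - v) ** (t - 1.0)
    return (trap(lambda v: g(v, t1), 0.0, 1.0)
            - trap(lambda v: g(v, t2), 0.0, 1.0)) / (t1 - t2)

d_val = stm_direct(3.0, 2.0)
q_val = stm_quantile(3.0, 2.0)
print(d_val, q_val)
```

The agreement reflects that the theorem is an exact change of variables $v = K(l)$, so any signature and any absolutely continuous component distribution should give matching values up to quadrature error.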
Theorem 12.
Assume that two mixed systems, each with $p$ independent and identically distributed component lifetimes, have lifetimes $L_{Y_1}$ and $L_{Y_2}$ and share the same signature. Then the following hold:
1.
If $Y_1\, O_{DIS}\, Y_2$, then $L_{Y_1} \le_{STM} L_{Y_2}$.
2.
Consider the sets $W_1 = \big\{\, 0 < v < 1 : k_2\big(K_2^{-1}(v)\big)/k_1\big(K_1^{-1}(v)\big) < 1 \,\big\}$ and $W_2 = \big\{\, 0 < v < 1 : k_2\big(K_2^{-1}(v)\big)/k_1\big(K_1^{-1}(v)\big) \ge 1 \,\big\}$. If the relation $Y_1 \le_{STM} Y_2$ holds, then it follows that $L_{Y_1} \le_{STM} L_{Y_2}$, provided that either $W_1 = \emptyset$ or $W_2 = \emptyset$, or the inequality $\inf_{v \in W_1} h_V(v) \ge \sup_{v \in W_2} h_V(v)$ is satisfied.
Note that the conditions in Remark 2 hold.
Proof. 
1. Given that $Y_1\, O_{DIS}\, Y_2$, we may infer from Equation (24) that
$$
\begin{aligned}
(\tau_1-\tau_2)\big[ S\Lambda_{\tau_1,\tau_2}(L_{Y_1}) - S\Lambda_{\tau_1,\tau_2}(L_{Y_2}) \big]
&= \int_0^1 h_V^{\tau_1}(v) \Big[ k_1^{\tau_1-1}\big(K_1^{-1}(v)\big) - k_2^{\tau_1-1}\big(K_2^{-1}(v)\big) \Big]\, dv \\
&\quad - \int_0^1 h_V^{\tau_2}(v) \Big[ k_1^{\tau_2-1}\big(K_1^{-1}(v)\big) - k_2^{\tau_2-1}\big(K_2^{-1}(v)\big) \Big]\, dv \le 0,
\end{aligned}
$$
since $\tau_1 > \tau_2 > 1$, which leads to the desired result.
2. Given that $Y_1 \le_{STM} Y_2$, it follows from Equation (24) with $\tau_1 > \tau_2 > 1$ that
$$
\int_0^1 \Big[ k_1^{\tau_1-1}\big(K_1^{-1}(v)\big) - k_2^{\tau_1-1}\big(K_2^{-1}(v)\big) \Big]\, dv - \int_0^1 \Big[ k_1^{\tau_2-1}\big(K_1^{-1}(v)\big) - k_2^{\tau_2-1}\big(K_2^{-1}(v)\big) \Big]\, dv \le 0.
$$
It then follows that
$$
\begin{aligned}
(\tau_1-\tau_2)\big[ S\Lambda_{\tau_1,\tau_2}(L_{Y_1}) - S\Lambda_{\tau_1,\tau_2}(L_{Y_2}) \big]
&= \int_0^1 h_V^{\tau_1}(v) \Big[ k_1^{\tau_1-1}\big(K_1^{-1}(v)\big) - k_2^{\tau_1-1}\big(K_2^{-1}(v)\big) \Big]\, dv \\
&\quad - \int_0^1 h_V^{\tau_2}(v) \Big[ k_1^{\tau_2-1}\big(K_1^{-1}(v)\big) - k_2^{\tau_2-1}\big(K_2^{-1}(v)\big) \Big]\, dv.
\end{aligned}
$$
Therefore, writing $\Delta_j(v) = k_1^{\tau_j-1}\big(K_1^{-1}(v)\big) - k_2^{\tau_j-1}\big(K_2^{-1}(v)\big)$ for $j = 1, 2$, and utilizing (29) together with the condition $\inf_{v \in W_1} h_V(v) \ge \sup_{v \in W_2} h_V(v)$ for $\tau_1 > \tau_2 > 1$, we arrive at
$$
\begin{aligned}
& \int_{W_1} h_V^{\tau_1}(v)\, \Delta_1(v)\, dv + \int_{W_2} h_V^{\tau_1}(v)\, \Delta_1(v)\, dv - \int_{W_1} h_V^{\tau_2}(v)\, \Delta_2(v)\, dv - \int_{W_2} h_V^{\tau_2}(v)\, \Delta_2(v)\, dv \\
&\quad \le \Big( \inf_{v \in W_1} h_V(v) \Big)^{\tau_1} \int_{W_1} \Delta_1(v)\, dv + \Big( \sup_{v \in W_2} h_V(v) \Big)^{\tau_1} \int_{W_2} \Delta_1(v)\, dv \\
&\qquad - \Big( \inf_{v \in W_1} h_V(v) \Big)^{\tau_2} \int_{W_1} \Delta_2(v)\, dv - \Big( \sup_{v \in W_2} h_V(v) \Big)^{\tau_2} \int_{W_2} \Delta_2(v)\, dv \\
&\quad \le \Big( \sup_{v \in W_2} h_V(v) \Big)^{\tau_1} \int_{W_1} \Delta_1(v)\, dv + \Big( \sup_{v \in W_2} h_V(v) \Big)^{\tau_1} \int_{W_2} \Delta_1(v)\, dv \\
&\qquad - \Big( \sup_{v \in W_2} h_V(v) \Big)^{\tau_2} \int_{W_1} \Delta_2(v)\, dv - \Big( \sup_{v \in W_2} h_V(v) \Big)^{\tau_2} \int_{W_2} \Delta_2(v)\, dv \\
&\quad = \Big( \sup_{v \in W_2} h_V(v) \Big)^{\tau_1} \int_0^1 \Delta_1(v)\, dv - \Big( \sup_{v \in W_2} h_V(v) \Big)^{\tau_2} \int_0^1 \Delta_2(v)\, dv \le 0. \qquad \square
\end{aligned}
$$
The following theorem shows that, in the i.i.d. case where each component lifetime has a decreasing failure rate, the series lifetime $Y_{1:p}$ is smaller than, or equal to, the lifetime of any mixed system in the Sharma–Taneja–Mittal entropy order.
Theorem 13.
Assume that the component lifetimes are i.i.d. with a decreasing failure rate. Then, for the mixed system lifetime denoted by $L$, we have $Y_{1:p} \le_{STM} L$, provided the conditions in Remark 2 hold.
Proof. 
As stated by Bagai and Kochar [26], when the lifetime has a decreasing failure rate, $Y_{1:p}\, O_{HR}\, L$ implies $Y_{1:p}\, O_{DIS}\, L$. From Theorem 10, we then conclude that $Y_{1:p} \le_{STM} L$. □
Theorem 14.
Given the assumption that $S\Lambda_{\tau_1,\tau_2}(Y_{r:p}) < \infty$, we obtain from Equation (27)
$$
S\Lambda_{\tau_1,\tau_2}(L) \le \sum_{r=1}^{p} \delta_r\, S\Lambda_{\tau_1,\tau_2}(Y_{r:p}),
$$
where S Λ τ 1 , τ 2 ( Y r : p ) is the Sharma–Taneja–Mittal entropy of the r-th order statistic. Note that the conditions in Remark 2 hold.
Proof. 
Referring back to Equation (28), we obtain
$$
S\Lambda_{\tau_1,\tau_2}(L) = \frac{1}{\tau_1-\tau_2} \left[ \int_0^{\infty} \bigg( \sum_{r=1}^{p} \delta_r\, k_{r:p}(l) \bigg)^{\tau_1} dl - \int_0^{\infty} \bigg( \sum_{r=1}^{p} \delta_r\, k_{r:p}(l) \bigg)^{\tau_2} dl \right].
$$
When we apply Jensen's inequality, we obtain
$$
\bigg( \sum_{r=1}^{p} \delta_r\, k_{r:p}(l) \bigg)^{\tau_i} \le \sum_{r=1}^{p} \delta_r\, k_{r:p}^{\tau_i}(l),
$$
since $x \mapsto x^{\tau_i}$ is convex for $\tau_i > 1$ and $l > 0$, with $i = 1, 2$. Therefore,
$$
\int_0^{\infty} k_L^{\tau_i}(l)\, dl = \int_0^{\infty} \bigg( \sum_{r=1}^{p} \delta_r\, k_{r:p}(l) \bigg)^{\tau_i} dl \le \sum_{r=1}^{p} \delta_r \int_0^{\infty} k_{r:p}^{\tau_i}(l)\, dl;
$$
by multiplying both sides of Equation (30) by $\frac{1}{\tau_1-\tau_2}$ and observing that $\tau_1 > \tau_2 > 1$, the following holds:
$$
\begin{aligned}
S\Lambda_{\tau_1,\tau_2}(L)
&\le \frac{1}{\tau_1-\tau_2} \left[ \sum_{r=1}^{p} \delta_r \int_0^{\infty} k_{r:p}^{\tau_1}(l)\, dl - \sum_{r=1}^{p} \delta_r \int_0^{\infty} k_{r:p}^{\tau_2}(l)\, dl \right] \\
&= \sum_{r=1}^{p} \delta_r\, \frac{1}{\tau_1-\tau_2} \left[ \int_0^{\infty} k_{r:p}^{\tau_1}(l)\, dl - \int_0^{\infty} k_{r:p}^{\tau_2}(l)\, dl \right] = \sum_{r=1}^{p} \delta_r\, S\Lambda_{\tau_1,\tau_2}(Y_{r:p}). \qquad \square
\end{aligned}
$$
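The Jensen step (30) — the only inequality used in the proof — holds unconditionally for $\tau_i > 1$ and can be confirmed numerically. The sketch below assumes exponential(1) components and a hypothetical signature $\delta = (0.2, 0.5, 0.3)$ with $p = 3$:

```python
import math
from math import exp, factorial

def trap(f, a, b, n=50_000):
    # composite trapezoidal rule on [a, b]
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

p = 3
delta = [0.2, 0.5, 0.3]                       # hypothetical signature
K = lambda l: 1.0 - exp(-l)                   # exponential(1) components (hypothetical choice)
k = lambda l: exp(-l)

def k_order(r, l):
    # pdf of the r-th order statistic Y_{r:p}
    c = factorial(p) / (factorial(r - 1) * factorial(p - r))
    return c * K(l) ** (r - 1) * (1.0 - K(l)) ** (p - r) * k(l)

def lhs(tau):
    # int_0^inf ( sum_r delta_r k_{r:p}(l) )^tau dl  -- the mixed-system term
    kL = lambda l: sum(d * k_order(r, l) for r, d in enumerate(delta, start=1))
    return trap(lambda l: kL(l) ** tau, 0.0, 40.0)

def rhs(tau):
    # sum_r delta_r int_0^inf k_{r:p}^tau(l) dl  -- the Jensen upper bound
    return sum(d * trap(lambda l: k_order(r, l) ** tau, 0.0, 40.0)
               for r, d in enumerate(delta, start=1))

checks = {tau: (lhs(tau), rhs(tau)) for tau in (2.0, 3.0)}
print(checks)
```

For each $\tau_i > 1$ the mixed-system integral stays below its signature-weighted bound, as convexity of $x^{\tau_i}$ requires.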

6. Conclusions

This study explores additional properties of Sharma–Taneja–Mittal entropy and its related measures. A new non-parametric estimation method for residual cumulative Sharma–Taneja–Mittal entropy, based on the U-statistic, is presented. The efficiency of this estimator is demonstrated through a simulation study, which also examines its sensitivity to the values of $\tau_1$ and $\tau_2$. Moreover, we investigate the non-positivity, stochastic comparisons, and connections of dynamic residual cumulative Sharma–Taneja–Mittal entropy with the hazard rate function and mean residual function. Additionally, we establish bounds for this measure and examine its relationship with equilibrium random variables. Furthermore, an alternative dynamic form of residual cumulative Sharma–Taneja–Mittal entropy is introduced, along with its properties in the context of record values. Notably, this alternative form does not always guarantee non-positivity. Finally, we analyze the Sharma–Taneja–Mittal entropy of coherent systems, providing bounds and conducting stochastic comparisons. In future work, we aim to conduct a detailed investigation of the alternative dynamic measure, including the identification of distributional classes under which it retains desirable properties and a clearer delineation of the conditions under which it provides meaningful and stable uncertainty quantification. Future work may also examine the relationships between Sharma–Taneja–Mittal entropy and other entropy measures, such as the Shannon, Kaniadakis, and Rényi entropies; see [27,28,29,30,31,32,33].

Author Contributions

Conceptualization, M.S.M. and H.H.S.; Methodology, M.S.M. and H.H.S.; Software, M.S.M. and H.H.S.; Validation, M.S.M. and H.H.S.; Formal analysis, M.S.M. and H.H.S.; Investigation, M.S.M. and H.H.S.; Resources, M.S.M. and H.H.S.; Writing—original draft, M.S.M. and H.H.S.; Writing—review & editing, M.S.M. and H.H.S.; Visualization, M.S.M. and H.H.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Prince Sattam bin Abdulaziz University through the project number (PSAU/2025/02/37509).

Data Availability Statement

All datasets analyzed or generated during this study are included within the article.

Acknowledgments

The authors extend their appreciation to Prince Sattam bin Abdulaziz University for funding this research work through the project number (PSAU/2025/02/37509).

Conflicts of Interest

The authors state that there are no conflicts of interest associated with this work.

References

  1. Shannon, C. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  2. Rao, M.; Chen, Y.; Vemuri, B.; Wang, F. Cumulative residual entropy: A new measure of information. IEEE Trans. Inf. Theory 2004, 50, 1220–1228. [Google Scholar] [CrossRef]
  3. Asadi, M.; Ebrahimi, N. Residual entropy and its characterizations in terms of hazard function and mean residual life function. Stat. Probab. Lett. 2000, 49, 263–269. [Google Scholar] [CrossRef]
  4. Asadi, M.; Zohrevand, Y. On the dynamic cumulative residual entropy. J. Stat. Plan. Inference 2007, 137, 1931–1941. [Google Scholar] [CrossRef]
  5. Navarro, J.; Del Aguila, Y.; Asadi, M. Some new results on the cumulative residual entropy. J. Stat. Plan. Inference 2010, 140, 310–322. [Google Scholar] [CrossRef]
  6. Ebrahimi, N. How to measure uncertainty in the residual lifetime distribution. Sankhya Ser. A 1996, 58, 48–56. [Google Scholar]
  7. Asadi, M.; Ebrahimi, N.; Soofi, E.S. Dynamic generalized information measures. Stat. Probab. Lett. 2005, 71, 85–98. [Google Scholar] [CrossRef]
  8. Nanda, A.K.; Paul, P. Some results on generalized residual entropy. Inf. Sci. 2006, 176, 27–47. [Google Scholar] [CrossRef]
  9. Zhang, Z. Uniform estimates on the Tsallis entropies. Lett. Math. Phys. 2007, 80, 171–181. [Google Scholar] [CrossRef]
  10. Sharma, B.D.; Taneja, I.J. Entropy of type (α,β) and other generalized measures in information theory. Metrika 1975, 22, 205–215. [Google Scholar] [CrossRef]
  11. Mittal, D.P. On some functional equations concerning entropy, directed divergence and inaccuracy. Metrika 1975, 22, 35–45. [Google Scholar] [CrossRef]
  12. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487. [Google Scholar] [CrossRef]
  13. Kattumannil, S.K.; Sreedevi, E.P.; Balakrishnan, N. A Generalized Measure of Cumulative Residual Entropy. Entropy 2022, 24, 444. [Google Scholar] [CrossRef]
  14. Sudheesh, K.K.; Sreedevi, E.P.; Balakrishnan, N. Relationships between cumulative entropy/extropy, Gini mean difference and probability weighted moments. Probab. Eng. Inf. Sci. 2024, 38, 28–38. [Google Scholar] [CrossRef]
  15. Sakr, H.H.; Mohamed, M.S. Sharma-Taneja-Mittal Entropy and Its Application of Obesity in Saudi Arabia. Mathematics 2024, 12, 2639. [Google Scholar] [CrossRef]
  16. Ghaffari, S.; Ziaie, A.H.; Moradpour, H.; Asghariyan, F.; Feleppa, F.; Tavayef, M. Black hole thermodynamics in Sharma-Mittal generalized entropy formalism. Gen. Relativ. Gravit. 2019, 51, 93. [Google Scholar] [CrossRef]
  17. Koltcov, S.; Ignatenko, V.; Koltsova, O. Estimating topic modeling performance with Sharma-Mittal entropy. Entropy 2019, 21, 660. [Google Scholar] [CrossRef] [PubMed]
  18. Paul, J.; Thomas, P.Y. Sharma-Mittal entropy properties on record values. Statistica 2016, 76, 273–287. [Google Scholar]
  19. Sfetcu, R.-C.; Robe-Voinea, E.-G.; Serban, F. An alternate measure of the cumulative residual Sharma-Taneja-Mittal entropy. An. St. Univ. Ovidius Constanta Ser. Mat. 2025, 33, 125–142. [Google Scholar] [CrossRef]
  20. Lehmann, E.L. Consistency and unbiasedness of certain nonparametric tests. Ann. Math. Stat. 1951, 22, 165–179. [Google Scholar] [CrossRef]
  21. Lee, A.J. U-Statistics: Theory and Practice; Routledge: New York, NY, USA, 2019. [Google Scholar]
  22. Arnold, B.C.; Balakrishnan, N.; Nagaraja, H.N. A First Course in Order Statistics; SIAM: Philadelphia, PA, USA, 2008. [Google Scholar]
  23. Raqab, M.Z.; Asadi, M. On the mean residual life of records. J. Stat. Plan. Inference 2008, 138, 3660–3666. [Google Scholar] [CrossRef]
  24. Shaked, M.; Shanthikumar, J.G. Stochastic Orders; Springer: New York, NY, USA, 2007. [Google Scholar]
  25. Samaniego, F.J. System Signatures and Their Applications in Engineering Reliability; Springer Science and Business Media: Berlin/Heidelberg, Germany, 2007; Volume 110. [Google Scholar]
  26. Bagai, I.; Kochar, S.C. On tail-ordering and comparison of failure rates. Commun. Stat.-Theory Methods 1986, 15, 1377–1388. [Google Scholar] [CrossRef]
  27. Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley: Hoboken, NJ, USA, 2012. [Google Scholar]
  28. Furuichi, S.; Yanagi, K.; Kuriyama, K. Fundamental properties of Tsallis relative entropy. J. Math. Phys. 2004, 45, 4868–4877. [Google Scholar] [CrossRef]
  29. Kaniadakis, G. Non-linear kinetics underlying generalized statistics. Phys. A 2001, 296, 405–425. [Google Scholar] [CrossRef]
  30. Mohamed, M.S.; Sakr, H.H. Uniformity Testing and Estimation of Generalized Exponential Uncertainty in Human Health Analytics. Symmetry 2025, 17, 1403. [Google Scholar] [CrossRef]
  31. Mohamed, M.S.; Sakr, H.H. Generalizing Uncertainty Through Dynamic Development and Analysis of Residual Cumulative Generalized Fractional Extropy with Applications in Human Health. Fractal Fract. 2025, 9, 388. [Google Scholar] [CrossRef]
  32. Principe, J.C. Information Theoretic Learning: Rényi's Entropy and Kernel Perspectives; Springer Science+Business Media: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  33. Sfetcu, R.-C. Tsallis and Rényi divergences of generalized Jacobi polynomials. Phys. A 2016, 460, 131–138. [Google Scholar] [CrossRef]
Figure 1. Performance of the residual cumulative Sharma–Taneja–Mittal entropy estimator from 10,000 simulations using an exponential distribution with rate 0.5. Panels: (TL) Histogram with density and theoretical value (−0.25); (TR) QQ-plot; (BL) Running average; (BR) Boxplot.
Figure 2. Performance of the residual cumulative Sharma–Taneja–Mittal entropy estimator from 10,000 simulations using the Gamma(2, 2) distribution. Panels: (TL) Histogram with density and theoretical value (−0.445313); (TR) QQ-plot; (BL) Running average; (BR) Boxplot.
Figure 3. Histogram with fitted pdf (up) and Empirical cdf vs. fitted cdf (down).
Figure 4. Plot of $DRS\Lambda^{*}_{\tau_1,\tau_2}(Y; l)$ (Example 3) with $\tau_1 = 3$ and $\tau_2 = 2$ against $l \in (0, 3)$.
Figure 5. For $\tau_1 = 0.5$, $\tau_2 = 1$ (left) and $\tau_1 = 1$, $\tau_2 = 2$ (right), the exact values of $DRS\Lambda^{*}_{\tau_1,\tau_2}(\check{R}(p); l)$ with respect to $0 < l < 1$.
Table 1. The variance and MSER of the residual cumulative Sharma–Taneja–Mittal entropy estimator using the exponential distribution with rate 0.5.

| (τ1, τ2) | RSΛ(Y) | Variance (p = 10) | MSER (p = 10) | Variance (p = 20) | MSER (p = 20) | Variance (p = 30) | MSER (p = 30) |
| (1, 2) | −1 | 0.1411084 | 0.3756253 | 0.06803551 | 0.260849 | 0.04363882 | 0.2088998 |
| (1, 3) | −0.666667 | 0.05668502 | 0.2380988 | 0.02708766 | 0.1645907 | 0.01740624 | 0.1319349 |
| (2, 3) | −0.333333 | 0.01562275 | 0.1249857 | 0.007080991 | 0.08414882 | 0.004543261 | 0.0674062 |
| (2, 4) | −0.25 | 0.008945401 | 0.09457654 | 0.004069401 | 0.06379144 | 0.002620203 | 0.05119015 |

| (τ1, τ2) | RSΛ(Y) | Variance (p = 50) | MSER (p = 50) | Variance (p = 70) | MSER (p = 70) | Variance (p = 100) | MSER (p = 100) |
| (1, 2) | −1 | 0.02679797 | 0.1636931 | 0.01918941 | 0.138519 | 0.01334832 | 0.1155292 |
| (1, 3) | −0.666667 | 0.01069409 | 0.1034071 | 0.007683631 | 0.08765201 | 0.005359265 | 0.07320337 |
| (2, 3) | −0.333333 | 0.002719683 | 0.05214846 | 0.001949153 | 0.04414716 | 0.001366653 | 0.03696643 |
| (2, 4) | −0.25 | 0.001555784 | 0.03944218 | 0.001114206 | 0.03337828 | 0.0007832065 | 0.02798447 |
Table 2. The variance and MSER of the residual cumulative Sharma–Taneja–Mittal entropy estimator using the Gamma(2, 2) distribution.

| (τ1, τ2) | RSΛ(Y) | Variance (p = 10) | MSER (p = 10) | Variance (p = 20) | MSER (p = 20) | Variance (p = 30) | MSER (p = 30) |
| (1, 2) | −1.5 | 0.2246712 | 0.4740772 | 0.1049496 | 0.3239506 | 0.06833788 | 0.2614115 |
| (1, 3) | −1.03704 | 0.09447327 | 0.3074325 | 0.04419804 | 0.2102305 | 0.02873103 | 0.1694998 |
| (2, 3) | −0.574074 | 0.03160255 | 0.1778131 | 0.01409945 | 0.1187435 | 0.009015945 | 0.09494975 |
| (2, 4) | −0.445313 | 0.01957194 | 0.1399292 | 0.008637679 | 0.09294131 | 0.005517741 | 0.07427972 |

| (τ1, τ2) | RSΛ(Y) | Variance (p = 50) | MSER (p = 50) | Variance (p = 70) | MSER (p = 70) | Variance (p = 100) | MSER (p = 100) |
| (1, 2) | −1.5 | 0.04144159 | 0.2035619 | 0.02979252 | 0.1726007 | 0.02055914 | 0.1433913 |
| (1, 3) | −1.03704 | 0.01746553 | 0.1321506 | 0.01255328 | 0.1120405 | 0.008621775 | 0.09285993 |
| (2, 3) | −0.574074 | 0.005342667 | 0.07309 | 0.003807901 | 0.06171099 | 0.002601418 | 0.05100904 |
| (2, 4) | −0.445313 | 0.003255807 | 0.05705699 | 0.002307225 | 0.04803638 | 0.001577914 | 0.039727 |
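The theoretical column $RS\Lambda_{\tau_1,\tau_2}(Y)$ in Tables 1 and 2 can be reproduced numerically. This sketch assumes the residual cumulative Sharma–Taneja–Mittal entropy is $\frac{1}{\tau_1-\tau_2}\big[\int_0^{\infty} \bar{K}^{\tau_1}(y)\, dy - \int_0^{\infty} \bar{K}^{\tau_2}(y)\, dy\big]$ and reads Gamma(2, 2) as shape 2 with scale 2; both assumptions reproduce the tabulated values.

```python
from math import exp

def trap(f, a, b, n=100_000):
    # composite trapezoidal rule on [a, b]
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

def rs_stm(sf, t1, t2, upper):
    # ( int Kbar^{t1}(y) dy - int Kbar^{t2}(y) dy ) / (t1 - t2)
    return (trap(lambda y: sf(y) ** t1, 0.0, upper)
            - trap(lambda y: sf(y) ** t2, 0.0, upper)) / (t1 - t2)

sf_exp = lambda y: exp(-0.5 * y)                     # exponential, rate 0.5
sf_gam = lambda y: (1.0 + 0.5 * y) * exp(-0.5 * y)   # Gamma, shape 2, scale 2

for t1, t2 in [(1.0, 2.0), (1.0, 3.0), (2.0, 3.0), (2.0, 4.0)]:
    print((t1, t2), round(rs_stm(sf_exp, t1, t2, 120.0), 6),
          round(rs_stm(sf_gam, t1, t2, 160.0), 6))
```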
Table 3. Bootstrap estimates of the proposed estimator for different ( τ 1 , τ 2 ) combinations based on 10,000 resamples.
Table 3. Bootstrap estimates of the proposed estimator for different ( τ 1 , τ 2 ) combinations based on 10,000 resamples.
τ 1 τ 2 OriginalMeanSDBias95% CI (Lower)95% CI (Upper)
12−40.5905−38.053514.98292.5370−65.4670−7.8855
13−24.0000−23.24749.86730.7527−43.8650−5.7365
23−7.4095−8.45715.2065−1.0475−21.9330−2.2886
24−4.8996−5.72733.5175−0.8277−15.0670−1.6549