Article

Uniformity Testing and Estimation of Generalized Exponential Uncertainty in Human Health Analytics

by Mohamed Said Mohamed 1 and Hanan H. Sakr 2,*
1 Mathematics Department, Faculty of Education, Ain Shams University, Cairo 11341, Egypt
2 Department of Management Information Systems, College of Business Administration in Hawtat Bani Tamim, Prince Sattam Bin Abdulaziz University, Al-Kharj 16273, Saudi Arabia
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(9), 1403; https://doi.org/10.3390/sym17091403
Submission received: 9 June 2025 / Revised: 5 August 2025 / Accepted: 6 August 2025 / Published: 28 August 2025
(This article belongs to the Special Issue Symmetric or Asymmetric Distributions and Its Applications)

Abstract

The entropy function, as a measure of information and uncertainty, has been widely applied in various scientific disciplines. One notable extension of entropy is exponential entropy, which finds applications in fields such as optimization, image segmentation, and fuzzy set theory. In this paper, we explore the continuous case of generalized exponential entropy and analyze its behavior under symmetric and asymmetric probability distributions. Particular emphasis is placed on illustrating the role of symmetry through analytical results and graphical representations, including comparisons of entropy curves for symmetric and skewed distributions. Moreover, we investigate the relationship between the proposed entropy model and other information-theoretic measures such as entropy and extropy. Several non-parametric estimation techniques are studied, and their performance is evaluated using Monte Carlo simulations, highlighting asymptotic properties and the emergence of normality, an aspect closely related to distributional symmetry. Furthermore, the consistency and biases of the estimation methods, which rely on kernel estimation with $\rho_{corr}$-mixing dependent data, are presented. Additionally, numerical calculations based on simulated and real medical data are applied. Finally, a test of uniformity using different test statistics is given.
MSC:
62-04; 90C29; 94A17

1. Introduction

Entropy is a quantitative measure of a probabilistic system's information, according to Shannon [1], who used the logarithmic function of the probability distribution to quantify information. The probability that the $l$th state of an $n$-state system occurs is $Pb_l$, where $0 \le Pb_l \le 1$ and $\sum_{l=1}^{n} Pb_l = 1$. The information gain resulting from the appearance of the $l$th state in such a system is $\log(1/Pb_l)$, and the system's entropy is the expected value of this gain function. In research pertaining to information theory and its numerous applications, entropy is an essential and well-recognized measure. Therefore, the Shannon entropy function is expressed as
$$\Omega(Pb) = -\sum_{l=1}^{n} Pb_l \log(Pb_l). \tag{1}$$
In the discrete case, Shannon entropy is non-negative; in the continuous case, however, it may be negative. The literature on information entropy has expanded quickly, and extensions of Shannon entropy such as Rényi [2] entropy, Kapur [3] entropy, Havrda and Charvát [4] entropy, and Tsallis [5] entropy have grown in number. Since the information we expect to gain from an event should fall between finite limits, additional assumptions are required. Based on these considerations, and following Campbell [8], Pal and Pal [6,7] developed an additional measure parallel to Shannon entropy, known among other definitions as exponential entropy, expressed as
$$\Omega_{EX}(Pb) = \sum_{l=1}^{n} Pb_l \left(e^{1-Pb_l} - 1\right). \tag{2}$$
Since it appears only reasonable that any measure of information be set to 0 for the degenerate probability distribution $(0, \dots, 0, 1, 0, \dots, 0)$, the authors included the $-1$ term. According to them, exponential entropy is superior to Shannon entropy; for instance, Shannon entropy has no defined upper bound, whereas exponential entropy does for the uniform probability distribution $Pb_l = \frac{1}{n}$, $l = 1, 2, \dots, n$:
$$\lim_{n\to\infty} \Omega_{EX}\!\left(\frac{1}{n}, \frac{1}{n}, \dots, \frac{1}{n}\right) = e - 1.$$
Additionally, in parallel to Shannon entropy in the continuous and discrete settings, Panjehkeh et al. [9] investigated the characteristics and attributes of exponential entropy, including the asymptotic equipartition property, the chain rule, subadditivity, and invariance under monotone transformations. They introduced continuous exponential entropy by
$$C\Omega_{EX}(Y) = \int_{\mathbb{R}} g(y)\, e^{1-g(y)}\, dy,$$
where Y is a continuous random variable with probability density function (PDF) g and support $\mathbb{R}$. Note that Panjehkeh et al. [9] removed the $-1$ term from their measure to make it always non-negative, in contrast to Shannon entropy. Building on the exponential entropy of Equation (2), Kvalseth [10] introduced a more general form, referred to as generalized exponential entropy, by
$$\Omega_{GEX}(Pb) = \frac{1}{\eta}\sum_{l=1}^{n} Pb_l \left(e^{1-Pb_l^{\eta}} - 1\right), \tag{4}$$
where the parameter $\eta$ is an arbitrary non-zero real value (i.e., $\eta \in \mathbb{R}\setminus\{0\}$).
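As a quick numerical illustration (our own sketch, not part of the original text), the discrete measure in (4) approaches Shannon entropy (1) as $\eta \to 0$, anticipating Proposition 1 below for the continuous case:

```python
import math

def shannon_entropy(p):
    """Discrete Shannon entropy, Eq. (1)."""
    return -sum(q * math.log(q) for q in p if q > 0)

def gen_exp_entropy(p, eta):
    """Discrete generalized exponential entropy of Kvalseth, Eq. (4)."""
    return sum(q * (math.exp(1 - q ** eta) - 1) for q in p) / eta

p = [0.1, 0.2, 0.3, 0.4]
# As eta -> 0 the generalized measure tends to Shannon entropy (~1.2799 here).
print(gen_exp_entropy(p, 1e-6), shannon_entropy(p))
```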
Another measure that has been used in many research studies is the dual of Shannon entropy, also known as extropy, which is presented by Lad et al. [11] as
$$D\Omega(Pb) = -\sum_{l=1}^{n} (1 - Pb_l)\log(1 - Pb_l). \tag{5}$$
Lad et al. [11] expanded the function $(1-Pb_l)\log(1-Pb_l)$ to approximately $-Pb_l + \frac{1}{2}Pb_l^{2}$ for small values of $Pb_l$, based on the Maclaurin series. $D\Omega(Pb)$ may therefore be roughly represented as $1 - \frac{1}{2}\sum_{l=1}^{n} Pb_l^{2}$ for small maximum values of $Pb_l$. Let us also define $g(y_l)\,\Delta y \approx Pb_l$, where $\Delta y \approx (y_n - y_1)/(n-1)$ for every given n. Therefore, the approximation $D\Omega(Pb) \approx 1 - \frac{\Delta y}{2}\sum_{l=1}^{n} g(y_l)^{2}\,\Delta y$ represents a simple scale and location transformation of $-\frac{1}{2}\sum_{l=1}^{n} g(y_l)^{2}\,\Delta y$. Thus, the differential extropy measure of the density g was defined by Lad et al. [11] as
$$D\Omega(Y) = D\Omega(g) = -\frac{1}{2}\int g^{2}(y)\,dy = \lim_{\Delta y \to 0}\frac{D\Omega(Pb) - 1}{\Delta y}.$$
Non-parametric techniques for estimating information measures have been presented by several scholars. Vasicek [12] derived his entropy estimator, for situations where the measure depends on the PDF, by exploiting the alternative representation of the continuous case of Equation (1) as
$$C\Omega(Y) = -\int g(y)\log g(y)\,dy = \int_{0}^{1}\log\left\{\frac{d}{d\xi}G^{-1}(\xi)\right\} d\xi, \tag{7}$$
noting that $G(y)$ is the cumulative distribution function (CDF). The empirical CDF $G_n$ was then used in place of the CDF G, and the difference operator in place of the differential operator. The derivative of $G^{-1}(\xi)$ was then estimated using a function of the order statistics. Formally, given the order statistics $Y_{(1;n)} \le \dots \le Y_{(n;n)}$ from the random sample $Y_1, \dots, Y_n$, Vasicek's estimator is written as
$$C\Omega(G_n) = \frac{1}{n}\sum_{t=1}^{n}\log\left[\frac{n}{2\lambda}\left(Y_{(t+\lambda;n)} - Y_{(t-\lambda;n)}\right)\right], \tag{8}$$
where the positive integer window size $\lambda < \frac{n}{2}$, $Y_{(t;n)} = Y_{(1;n)}$ if $t < 1$, and $Y_{(t;n)} = Y_{(n;n)}$ if $t > n$. The limitations of (8) at its boundaries were pointed out by Ebrahimi et al. [13], who modified it to propose their entropy estimator as
$$C\Omega(G_n) = \frac{1}{n}\sum_{t=1}^{n}\log\left[\frac{n}{\Xi_t \lambda}\left(Y_{(t+\lambda;n)} - Y_{(t-\lambda;n)}\right)\right], \tag{9}$$
noting that
$$\Xi_t = \begin{cases} 1 + \dfrac{t-1}{\lambda}, & 1 \le t \le \lambda, \\ 2, & \lambda + 1 \le t \le n - \lambda, \\ 1 + \dfrac{n-t}{\lambda}, & n - \lambda + 1 \le t \le n. \end{cases} \tag{10}$$
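For concreteness, the two spacing estimators (8)–(10) can be sketched in a few lines (an illustrative implementation, not the authors' code; the window size follows the heuristic $\lambda = [\sqrt{n} + 0.5]$ adopted in Section 4):

```python
import math
import random

def vasicek_entropy(sample, lam):
    """Vasicek spacing estimator of Shannon entropy, Eq. (8)."""
    y = sorted(sample)
    n = len(y)
    def ys(t):  # boundary convention: Y_(t) = Y_(1) if t < 1, Y_(n) if t > n
        return y[min(max(t, 1), n) - 1]
    return sum(math.log(n / (2 * lam) * (ys(t + lam) - ys(t - lam)))
               for t in range(1, n + 1)) / n

def ebrahimi_entropy(sample, lam):
    """Boundary-corrected estimator of Ebrahimi et al., Eqs. (9)-(10)."""
    y = sorted(sample)
    n = len(y)
    def ys(t):
        return y[min(max(t, 1), n) - 1]
    def xi(t):  # the piecewise weights of Eq. (10)
        if t <= lam:
            return 1 + (t - 1) / lam
        if t <= n - lam:
            return 2
        return 1 + (n - t) / lam
    return sum(math.log(n / (xi(t) * lam) * (ys(t + lam) - ys(t - lam)))
               for t in range(1, n + 1)) / n

random.seed(0)
sample = [random.random() for _ in range(500)]
lam = int(math.sqrt(len(sample)) + 0.5)
# True Shannon entropy of U(0,1) is 0; the corrected estimator reduces the
# negative boundary bias of Vasicek's estimator.
print(vasicek_entropy(sample, lam), ebrahimi_entropy(sample, lam))
```

Since $\Xi_t \le 2$, each term of (9) dominates the corresponding term of (8), so the Ebrahimi et al. estimate is never below the Vasicek estimate on the same sample.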
It should be noted that Vasicek’s entropy estimator is used in most of the experiments described above because of its ease of use and particularly high relative accuracy. For example, Qiu and Jia [14] and Noughabi and Jarrahiferiz [15] used the approach that was described to estimate the extropy measure. Additionally, Wachowiak et al. [16] estimated generalized entropies such as the Tsallis and Renyi entropies using sample spacing. Sakr and Mohamed [17] used kernel function, local linear model, and spacing to give a number of non-parametric estimation methods for the Sharma–Taneja–Mittal entropy measurement of a random variable that is continuous with known support. Moreover, the cumulative residual generalized exponential entropy measure was proposed by Sakr and Mohamed [18] using several non-parametric estimating techniques.
Under the mixing coefficients $\alpha_{corr}$-mixing, $\phi_{corr}$-mixing, and $\rho_{corr}$-mixing, described by Rosenblatt [19], Ibragimov [20], and Kolmogorov and Rozanov [21], another kind of non-parametric estimation relies on the kernel density estimator. Bradley [22] discussed the kernel density estimator's asymptotic normality and weak consistency under $\rho_{corr}$-mixing. In the context of $\alpha_{corr}$-mixing, Masry [23] developed a non-parametric recursive density estimator and examined some of its characteristics. The local linear estimation of the residual entropy function of conditional distributions, whose underlying data are assumed to be $\rho_{corr}$-mixing, was covered by Rajesh et al. [24]. The kernel estimation of cumulative residual Tsallis entropy and its dynamic variant under $\rho_{corr}$-mixing dependent data was introduced by Irshad et al. [25].

Work Motivation

Building upon existing research on entropy measures involving exponential functions and their generalizations, it is evident that focusing solely on the discrete case provides an incomplete perspective without a parallel investigation of the continuous case. As previously noted, exponential entropy in discrete settings has been extensively explored across multiple disciplines, for instance, see the works of Ye and Cui [26] and Wang et al. [27]. Consequently, it becomes necessary to extend this exploration to the continuous domain, particularly by elucidating the applications where continuous formulations are relevant. An additional point of interest is the foundational role of the exponential function in the proposed model. It prompts the question: can this functional form address specific challenges related to uncertainty?
Among the various probability distributions, the uniform distribution stands out for its simplicity and utility. It allocates equal likelihood to all outcomes within a specified interval in the continuous setting, making it an ideal benchmark for comparison. Its practical relevance is seen in areas such as random number generation. In the field of economics, for example, patterns of supply and demand often deviate from the traditional normal distribution, prompting the need for alternative models that better reflect observed behaviors. Wanke [28] highlights the effectiveness of the uniform distribution in estimating lead times for inventory control, especially during the initial evaluation phase of a new product. Furthermore, social scientists often adopt the uniform distribution to represent ignorance or lack of prior knowledge, such as when employing simulations in which the actual distribution is unknown. In measurement cases, the uniform model is also used to characterize errors from certain instruments. These diverse applications underline the importance of designing hypothesis tests that are both straightforward and computationally efficient for assessing whether a dataset follows a uniform distribution. As such, a comprehensive discussion on uniformity testing is both necessary and timely.
In addition to this modeling perspective, our work addresses a gap in the literature by comparing multiple estimation techniques for generalized exponential entropy. While entropy-based measures have gained popularity, relatively few studies have examined how different estimators affect the accuracy and performance of entropy-based models in both theoretical and applied settings. Therefore, we introduce and compare four distinct estimators, investigate their asymptotic properties, and evaluate their practical performance through simulations and real data applications. This comparative approach enhances the interpretability and robustness of entropy-based statistical inference.
This suggestion aims to present the concept of generalized exponential entropy in the continuous domain and discuss its properties with various applications, including the test of uniformity. The structure of this study is organized as follows. Section 2 presents the continuous case of generalized exponential entropy, including applications to some well-known distributions. Furthermore, the relationship between the proposed model and entropy and extropy measures is discussed. In Section 3, various procedures for the non-parametric estimation of generalized exponential entropy are investigated. Moreover, the characteristics and consistency of kernel density estimation with ρ c o r r -mixing dependent data are analyzed for generalized exponential entropy. Section 4 applies numerical calculations of the obtained estimators to simulations and real-world medical data. Finally, testing for uniformity using the proposed statistics, along with a power comparison, is conducted in Section 5.

2. Generalized Exponential Entropy and Its Complementary Dual

In this section, we will present the continuous case of generalized exponential entropy. In parallel to the discrete case in (4), let Y be a continuous random variable with PDF g; the continuous generalized exponential entropy can be presented by
$$C\Omega_{GEX}(Y) = \frac{1}{\eta}\int g(y)\left(e^{1-g^{\eta}(y)} - 1\right)dy = \frac{1}{\eta}\left[\int g(y)\,e^{1-g^{\eta}(y)}\,dy - 1\right], \tag{11}$$
where $\eta \in \mathbb{R}\setminus\{0\}$.
Example 1.
With PDF $g(y) = \gamma e^{-\gamma y}$, $y > 0$, $\gamma > 0$, let us assume that the continuously distributed random variable Y follows an exponential distribution, symbolized by $EXP(\gamma)$. The continuous generalized exponential entropy is therefore obtained from (11) by
$$C\Omega_{GEX}(Y) = \frac{1}{\eta}\left[\frac{e\,(\gamma^{\eta})^{-1/\eta}}{\eta}\left(\Gamma\!\left(\frac{1}{\eta}\right) - I_{\Gamma}\!\left(\frac{1}{\eta};\,\gamma^{\eta}\right)\right) - 1\right], \tag{12}$$
where $0 \ne \eta \in \mathbb{R}$, $\Gamma(\cdot)$ is the Euler gamma function, and $I_{\Gamma}(\cdot\,;\cdot)$ is the upper incomplete gamma function.
Figure 1 shows the generalized exponential entropy of exponential distribution with γ = 0.5 , 1 , 1.5 , 2 . Although the parameter η does not equal zero, Figure 1 reveals a near-symmetric pattern of the generalized exponential entropy around η = 0 . This visual symmetry suggests a balanced response of the entropy measure to variations in η on both sides of the origin. Such behavior highlights the structural regularity of the proposed measure and offers useful insights when analyzing distributions that exhibit or deviate from symmetry.
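The closed form in (12) can be cross-checked against direct numerical integration of (11). The sketch below is our own verification, assuming SciPy; the difference $\Gamma(1/\eta) - I_{\Gamma}(1/\eta;\gamma^{\eta})$ is the lower incomplete gamma function, computed here as `special.gamma(a) * special.gammainc(a, x)` (SciPy's `gammainc` is the regularized lower incomplete gamma):

```python
import numpy as np
from scipy import integrate, special

def gee_numeric(g, eta, a, b):
    """Continuous generalized exponential entropy, Eq. (11), by quadrature."""
    val, _ = integrate.quad(lambda y: g(y) * (np.exp(1 - g(y) ** eta) - 1), a, b)
    return val / eta

def gee_exponential(gamma, eta):
    """Closed form of Eq. (12) for the EXP(gamma) distribution."""
    lower = special.gamma(1 / eta) * special.gammainc(1 / eta, gamma ** eta)
    return (np.e / (eta * gamma) * lower - 1) / eta

gamma, eta = 1.0, 2.0
g = lambda y: gamma * np.exp(-gamma * y)
print(gee_numeric(g, eta, 0, np.inf), gee_exponential(gamma, eta))  # both ~0.5150
```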
Example 2.
With PDF $g(y) = \frac{1}{\phi_2 - \phi_1}$, $\phi_1 < y < \phi_2$, where $-\infty < \phi_1 < \phi_2 < \infty$, let us assume that the continuously distributed random variable Y follows a uniform distribution, represented by $U(\phi_1, \phi_2)$. The continuous generalized exponential entropy is therefore obtained from (11) by
$$C\Omega_{GEX}(Y) = \frac{1}{\eta}\left[e^{\,1-\left(\frac{1}{\phi_2 - \phi_1}\right)^{\eta}} - 1\right], \tag{13}$$
where $0 \ne \eta \in \mathbb{R}$. Specifically, the generalized exponential entropy becomes zero when the difference between $\phi_1$ and $\phi_2$ is precisely one.
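Equation (13) is easy to verify numerically as well (again our own sketch; note that the measure vanishes exactly when $\phi_2 - \phi_1 = 1$):

```python
import numpy as np
from scipy import integrate

def gee_uniform_closed(phi1, phi2, eta):
    """Closed form of Eq. (13) for U(phi1, phi2)."""
    return (np.exp(1 - (phi2 - phi1) ** (-eta)) - 1) / eta

def gee_uniform_numeric(phi1, phi2, eta):
    """Direct quadrature of Eq. (11) for the uniform density."""
    g = 1 / (phi2 - phi1)
    val, _ = integrate.quad(lambda y: g * (np.exp(1 - g ** eta) - 1), phi1, phi2)
    return val / eta

print(gee_uniform_closed(0, 1, 2.0))  # 0.0, since phi2 - phi1 = 1
print(gee_uniform_closed(0, 3, 2.0), gee_uniform_numeric(0, 3, 2.0))
```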
The generalized exponential entropy when η goes to 0 is discussed in the following assertion.
Proposition 1.
Let Y be a continuous random variable with PDF $g(y)$. As $\eta$ goes to zero, the continuous generalized exponential entropy in (11) tends to the continuous Shannon entropy, i.e.,
$$\lim_{\eta \to 0} C\Omega_{GEX}(Y) = C\Omega(Y),$$
where $0 \ne \eta \in \mathbb{R}$, and $C\Omega(Y)$ is defined in (7).
Proof. 
Applying L'Hôpital's rule to (11), we obtain
$$\lim_{\eta \to 0} C\Omega_{GEX}(Y) = \lim_{\eta \to 0}\frac{1}{\eta}\left[\int g(y)\,e^{1-g^{\eta}(y)}\,dy - 1\right] = \lim_{\eta \to 0}\left[-\int e^{1-g^{\eta}(y)}\,g^{1+\eta}(y)\log(g(y))\,dy\right] = -\int g(y)\log(g(y))\,dy = C\Omega(Y).$$
 □
Proposition 2.
Let Y and Z be absolutely continuous random variables with $Z = \Psi(Y)$, where $\Psi$ is a strictly increasing function and $|\Psi'(y)| \ge 1$. Then,
$$C\Omega_{GEX}(\Psi(Y)) \ge C\Omega_{GEX}(Y),$$
and equality holds when $|\Psi'(y)| = 1$.
Proof. 
Utilizing Equation (11) with $Z = \Psi(Y)$, we have
$$C\Omega_{GEX}(Z) = \frac{1}{\eta}\left[\int g(\Psi^{-1}(z))\,\frac{1}{|\Psi'(\Psi^{-1}(z))|}\; e^{\,1-\left(g(\Psi^{-1}(z))\,\frac{1}{|\Psi'(\Psi^{-1}(z))|}\right)^{\eta}}\, dz - 1\right].$$
Assume that $\Psi$ is strictly increasing. Setting $z = \Psi(y)$, we get
$$C\Omega_{GEX}(Z) = \frac{1}{\eta}\left[\int g(y)\, e^{\,1-\left(g(y)\,\frac{1}{\Psi'(y)}\right)^{\eta}}\, dy - 1\right] \ge \frac{1}{\eta}\left[\int g(y)\,e^{1-g^{\eta}(y)}\,dy - 1\right] = C\Omega_{GEX}(Y).$$
 □

Complementary Dual of Generalized Exponential Entropy

In this subsection, we will discuss the complementary dual of generalized exponential entropy and its relation to the extropy measure. Inspired by the extropy measure introduced by Lad et al. [11] in the discrete case, as given in (5), we can obtain the complementary dual of generalized exponential entropy by replacing $Pb_l$ with $1 - Pb_l$ in (4), $l = 1, 2, \dots, n$, as follows
$$D\Omega_{GEX}(Pb) = \frac{1}{\eta}\sum_{l=1}^{n}(1-Pb_l)\left(e^{1-(1-Pb_l)^{\eta}} - 1\right). \tag{14}$$
We can expand the function $(1-Pb_l)\left(e^{1-(1-Pb_l)^{\eta}} - 1\right)$ for small values of $Pb_l$ using the Maclaurin series. For $|Pb_l| < 1$, the binomial expansion of $1-(1-Pb_l)^{\eta}$ is:
$$1-(1-Pb_l)^{\eta} = \eta Pb_l - \frac{\eta(\eta-1)}{2}Pb_l^{2} + O(Pb_l^{3}).$$
Substitute $1-(1-Pb_l)^{\eta}$ into the exponential function. Recall the Maclaurin series for $e^{z}$:
$$e^{z} = 1 + z + \frac{z^{2}}{2} + \frac{z^{3}}{6} + O(z^{4}).$$
Here, $z = 1-(1-Pb_l)^{\eta}$, which we approximated as:
$$z = \eta Pb_l - \frac{\eta(\eta-1)}{2}Pb_l^{2} + O(Pb_l^{3}).$$
Now substitute into the exponential series:
$$e^{1-(1-Pb_l)^{\eta}} = 1 + \eta Pb_l - \frac{\eta(\eta-1)}{2}Pb_l^{2} + \frac{\eta^{2}}{2}Pb_l^{2} + O(Pb_l^{3}).$$
Simplify:
$$e^{1-(1-Pb_l)^{\eta}} - 1 = \eta Pb_l + \left(-\frac{\eta(\eta-1)}{2} + \frac{\eta^{2}}{2}\right)Pb_l^{2} + O(Pb_l^{3}).$$
Thus:
$$e^{1-(1-Pb_l)^{\eta}} - 1 = \eta Pb_l + \frac{\eta}{2}Pb_l^{2} + O(Pb_l^{3}). \tag{15}$$
Multiplying (15) by $1-Pb_l$, after simple calculations we get
$$(1-Pb_l)\left(e^{1-(1-Pb_l)^{\eta}} - 1\right) = \eta Pb_l - \frac{\eta}{2}Pb_l^{2} + O(Pb_l^{3}).$$
Therefore, $D\Omega_{GEX}(Pb)$ may be roughly represented as $1 - \frac{1}{2}\sum_{l=1}^{n}Pb_l^{2}$ for small maximum values of $Pb_l$. Let us also define $g(y_l)\,\Delta y \approx Pb_l$, where $\Delta y \approx (y_n - y_1)/(n-1)$ for every given n. Therefore, the approximation $D\Omega_{GEX}(Pb) \approx 1 - \frac{\Delta y}{2}\sum_{l=1}^{n} g(y_l)^{2}\,\Delta y$ represents a simple scale and location transformation of $-\frac{1}{2}\sum_{l=1}^{n} g(y_l)^{2}\,\Delta y$. Then, we see that
$$D\Omega_{GEX}(Y) \approx -\frac{1}{2}\int g^{2}(y)\,dy = \lim_{\Delta y \to 0}\frac{D\Omega_{GEX}(Pb) - 1}{\Delta y}.$$
Thus, we can see that the complementary dual of generalized exponential entropy can be approximated by the differential extropy measure of the density g defined by Lad et al. [11]. Meanwhile, with the complete differential complementary duals of generalized exponential entropy and of entropy given, respectively, by
$$D\Omega_{GEX}(Y) = \frac{1}{\eta}\int (1-g(y))\left(e^{1-(1-g(y))^{\eta}} - 1\right) dy, \tag{17}$$
$$D\Omega(Y) = -\int (1-g(y))\log(1-g(y))\,dy, \tag{18}$$
we will discuss their behaviour using the following example. Under the $EXP(1)$ distribution, Figure 2 shows the plots of (17) and (18); we can see that $D\Omega_{GEX}(Y)$ is equal to $D\Omega(Y)$ when $\eta = 1.295626$ and $0.86261$.

3. Proposed Estimation Procedures

Here we provide a number of non-parametric estimation techniques for generalized exponential entropy and compare their performance with some of the leading alternatives. Throughout, $Y_{(1;n)} \le \dots \le Y_{(n;n)}$ denote the order statistics obtained from a random sample of size n drawn from an unknown continuous CDF G with PDF g.

3.1. First Technique

In the same manner as Vasicek's estimator (8) for the entropy function, we can propose the following estimator of generalized exponential entropy:
$$C_1\Omega_{GEX}(G_n) = \frac{1}{\eta}\left[\frac{1}{n}\sum_{t=1}^{n} e^{\,1-\left(\frac{2\lambda}{n\left(Y_{(t+\lambda;n)} - Y_{(t-\lambda;n)}\right)}\right)^{\eta}} - 1\right], \tag{19}$$
where $0 \ne \eta \in \mathbb{R}$, the positive integer window size $\lambda < \frac{n}{2}$, $Y_{(t;n)} = Y_{(1;n)}$ if $t < 1$, and $Y_{(t;n)} = Y_{(n;n)}$ if $t > n$.
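A direct implementation of (19) might look as follows (an illustrative sketch, not the authors' code; by Example 2 the true value is 0 for the standard uniform distribution, so the estimate should be near 0):

```python
import math
import random

def c1_gee(sample, lam, eta):
    """First-technique estimator of generalized exponential entropy, Eq. (19)."""
    y = sorted(sample)
    n = len(y)
    def ys(t):  # boundary convention: Y_(t) = Y_(1) if t < 1, Y_(n) if t > n
        return y[min(max(t, 1), n) - 1]
    total = 0.0
    for t in range(1, n + 1):
        # spacing-based density value 2*lambda / (n * (Y_(t+lam) - Y_(t-lam)))
        ghat = 2 * lam / (n * (ys(t + lam) - ys(t - lam)))
        total += math.exp(1 - ghat ** eta) - 1
    return total / (n * eta)

random.seed(1)
sample = [random.random() for _ in range(200)]
lam = int(math.sqrt(len(sample)) + 0.5)
print(c1_gee(sample, lam, 2.0))  # close to 0 for U(0,1) data
```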
We then use Monte Carlo simulation to examine asymptotic normality. Our goal is to confirm that
$$\Delta_1 = \frac{C_1\Omega_{GEX}(G_n) - E[C_1\Omega_{GEX}(G_n)]}{\sqrt{Var[C_1\Omega_{GEX}(G_n)]}}$$
follows a standard normal distribution. From the $EXP(1)$ distribution, with $\eta = 2$ and $\eta = 3$, we generate 1000 samples, each of size $n = 50$ and $n = 100$. The histograms in Figure 3 and Figure 4 are used to confirm that the estimator in (19) is asymptotically normal.

3.2. Second Technique

We may propose the estimator of generalized exponential entropy in the same way as the Ebrahimi et al. [13] estimator provided in (9), as follows:
$$C_2\Omega_{GEX}(G_n) = \frac{1}{\eta}\left[\frac{1}{n}\sum_{t=1}^{n} e^{\,1-\left(\frac{\Xi_t\lambda}{n\left(Y_{(t+\lambda;n)} - Y_{(t-\lambda;n)}\right)}\right)^{\eta}} - 1\right], \tag{20}$$
where $0 \ne \eta \in \mathbb{R}$, the positive integer window size $\lambda < \frac{n}{2}$, $Y_{(t;n)} = Y_{(1;n)}$ if $t < 1$, $Y_{(t;n)} = Y_{(n;n)}$ if $t > n$, and $\Xi_t$ is defined in (10).
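The same spacing code adapts to (20) by plugging in the boundary weights $\Xi_t$ of Eq. (10) (again our illustrative sketch, not the authors' implementation):

```python
import math
import random

def c2_gee(sample, lam, eta):
    """Second-technique estimator, Eq. (20), using the weights of Eq. (10)."""
    y = sorted(sample)
    n = len(y)
    def ys(t):
        return y[min(max(t, 1), n) - 1]
    def xi(t):  # piecewise boundary weights of Eq. (10)
        if t <= lam:
            return 1 + (t - 1) / lam
        if t <= n - lam:
            return 2
        return 1 + (n - t) / lam
    total = 0.0
    for t in range(1, n + 1):
        ghat = xi(t) * lam / (n * (ys(t + lam) - ys(t - lam)))
        total += math.exp(1 - ghat ** eta) - 1
    return total / (n * eta)

random.seed(1)
sample = [random.random() for _ in range(200)]
lam = int(math.sqrt(len(sample)) + 0.5)
print(c2_gee(sample, lam, 2.0))  # close to 0 for U(0,1) data
```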
We then use Monte Carlo simulation to examine asymptotic normality. Our goal is to confirm that
$$\Delta_2 = \frac{C_2\Omega_{GEX}(G_n) - E[C_2\Omega_{GEX}(G_n)]}{\sqrt{Var[C_2\Omega_{GEX}(G_n)]}}$$
follows a standard normal distribution. From the $EXP(1)$ distribution, with $\eta = 2$ and $\eta = 3$, we generate 1000 samples, each of size $n = 50$ and $n = 100$. The histograms in Figure 5 and Figure 6 are used to confirm that the estimator in (20) is asymptotically normal.

3.3. Third Technique

The recommended estimator is constructed as follows, based on a local linear framework. Consider the given sample data $(K_n(Y_{(1;n)}), Y_{(1;n)}), (K_n(Y_{(2;n)}), Y_{(2;n)}), \dots, (K_n(Y_{(n;n)}), Y_{(n;n)})$. The estimator $C_1\Omega_{GEX}(G_n)$ from (19) may be expressed as
$$C_1\Omega_{GEX}(G_n) = \frac{1}{\eta}\left[\frac{1}{n}\sum_{t=1}^{n} e^{\,1-\left(\frac{\frac{t+\lambda}{n} - \frac{t-\lambda}{n}}{Y_{(t+\lambda;n)} - Y_{(t-\lambda;n)}}\right)^{\eta}} - 1\right],$$
where $\frac{\frac{t+\lambda}{n} - \frac{t-\lambda}{n}}{Y_{(t+\lambda;n)} - Y_{(t-\lambda;n)}}$ is the slope of the straight line through the points $(K_n(Y_{(t+\lambda;n)}), Y_{(t+\lambda;n)})$ and $(K_n(Y_{(t-\lambda;n)}), Y_{(t-\lambda;n)})$. Consequently, the generalized exponential entropy estimator derived from the least squares approach is given by
$$C_3\Omega_{GEX}(G_n) = \frac{1}{\eta}\left[\frac{1}{n}\sum_{t=1}^{n} e^{\,1-\left(\frac{\sum_{j=t-\lambda}^{t+\lambda}\left(Y_{(j;n)} - \bar{Y}_{(t;n)}\right)(j-t)}{n\sum_{j=t-\lambda}^{t+\lambda}\left(Y_{(j;n)} - \bar{Y}_{(t;n)}\right)^{2}}\right)^{\eta}} - 1\right], \tag{21}$$
where $\bar{Y}_{(t;n)} = \frac{1}{2\lambda+1}\sum_{j=t-\lambda}^{t+\lambda} Y_{(j;n)}$, and the approach is comparable to that described by Correa [29] for estimating Shannon entropy. Additionally, this approach employs the local linear model, based on the $2\lambda + 1$ points,
$$K(y_{(j;n)}) = \theta_1 + \theta_2\, y_{(j;n)} + \text{error}, \qquad j = t-\lambda, \dots, t+\lambda.$$
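The local-linear slope inside (21) is just a least-squares fit over the $2\lambda + 1$ neighboring order statistics, so a compact sketch is possible (our illustration, not the authors' code; the true value is again 0 for U(0,1) data):

```python
import math
import random

def c3_gee(sample, lam, eta):
    """Third-technique (local linear, Correa-type) estimator, Eq. (21)."""
    y = sorted(sample)
    n = len(y)
    def ys(t):
        return y[min(max(t, 1), n) - 1]
    total = 0.0
    for t in range(1, n + 1):
        idx = range(t - lam, t + lam + 1)
        ybar = sum(ys(j) for j in idx) / (2 * lam + 1)
        num = sum((ys(j) - ybar) * (j - t) for j in idx)
        den = n * sum((ys(j) - ybar) ** 2 for j in idx)
        ghat = num / den  # local least-squares slope as a density value
        total += math.exp(1 - ghat ** eta) - 1
    return total / (n * eta)

random.seed(1)
sample = [random.random() for _ in range(200)]
lam = int(math.sqrt(len(sample)) + 0.5)
print(c3_gee(sample, lam, 2.0))  # close to 0 for U(0,1) data
```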
We then use Monte Carlo simulation to examine asymptotic normality. Our goal is to confirm that
$$\Delta_3 = \frac{C_3\Omega_{GEX}(G_n) - E[C_3\Omega_{GEX}(G_n)]}{\sqrt{Var[C_3\Omega_{GEX}(G_n)]}}$$
follows a standard normal distribution. From the $EXP(1)$ distribution, with $\eta = 2$ and $\eta = 3$, we generate 1000 samples, each of size $n = 50$ and $n = 100$. The histograms in Figure 7 and Figure 8 are used to confirm that the estimator in (21) is asymptotically normal.

3.4. Fourth Technique

Assuming that the underlying lifetimes are $\rho_{corr}$-mixing, we address in this subsection the non-parametric estimation of generalized exponential entropy using kernel-type estimation. In the following, we present the concept of $\rho_{corr}$-mixing (as discussed in Kolmogorov and Rozanov [21]).
Definition 1.
For a probability space $(W_s, \mathcal{F}_s, P_s)$, let $\mathcal{F}_{s,i}^{\,t}$ be the $\sigma$-algebra of events generated by the random variables $\{Y_k : i \le k \le t\}$. The stationary process $\{Y_k\}$ is called asymptotically uncorrelated if
$$\sup_{M \in J_2(\mathcal{F}_{-\infty}^{\,i}),\; N \in J_2(\mathcal{F}_{i+t}^{+\infty})} \frac{|\mathrm{cov}(M, N)|}{\sqrt{\mathrm{var}(M)\,\mathrm{var}(N)}} = \rho_{corr}(t) \to 0,$$
as $t \to +\infty$, where $\rho_{corr}(t)$ is the maximal coefficient of correlation, or the $\rho_{corr}$-mixing coefficient, and $J_2(\mathcal{F}_a^b)$ denotes the collection of all second-order random variables measurable with respect to $\mathcal{F}_a^b$.
Consider the strictly stationary process $\{Y_t\}_{t=1}^{n}$, which has a univariate PDF $g(y)$. The lifetimes are considered to be $\rho_{corr}$-mixing, meaning that the $Y_t$ do not necessarily need to be mutually independent. A recursive density estimator of $g(y)$ was presented by Wegman and Davies [30] and is provided by (see also Parzen [31])
$$\hat{g}^{*}(Y_r) = \frac{1}{n\sqrt{\beta_n}}\sum_{t=1}^{n}\frac{1}{\sqrt{\beta_t}}\,\Lambda\!\left(\frac{Y_r - Y_t}{\beta_t}\right), \qquad r = 1, \dots, n. \tag{22}$$
Suppose $\Lambda(y)$ meets the requirements listed below:
  • $\sup_y |\Lambda(y)| < +\infty$;
  • $\int_{-\infty}^{+\infty} |\Lambda(y)|\,dy < +\infty$;
  • $\lim_{|y|\to+\infty} |y\,\Lambda(y)| = 0$;
  • $\int_{-\infty}^{+\infty} \Lambda(y)\,dy = 1$.
The bandwidth parameter $\beta_n$ satisfies $\beta_n \to 0$ and $n\beta_n \to +\infty$ as $n \to +\infty$. Let y be a continuity point of g. Provided that g is $(t+1)$-times continuously differentiable at y, we have
$$\sup_{v}\,|g^{(t+1)}(v)| = Q < +\infty.$$
Suppose that:
$$\int_{-\infty}^{+\infty} |v|^{k}\,|\Lambda(v)|\,dv < +\infty, \qquad k = 1, 2, \dots, t+1.$$
Furthermore, the bandwidth parameter to be used, β n , fulfills:
$$\frac{1}{n}\sum_{l=1}^{n}\left(\frac{\beta_l}{\beta_n}\right)^{j+\frac{1}{2}} \to B_{j+\frac{1}{2}} < +\infty, \quad \text{as } n \to +\infty,\; j = 0, 1, 2, \dots, t+1.$$
Next, as stated by Masry [23], the mean and variance of g ^ * ( y ) are as follows:
$$E(\hat{g}^{*}(y)) \approx B_{0.5}\left[g(y) + \frac{\beta_n^{2}\, d_2}{2 B_{0.5}}\, g^{(2)}(y)\, B_{2.5}\right], \tag{23}$$
and:
$$\mathrm{Var}(\hat{g}^{*}(y)) \approx \frac{g(y)}{n\beta_n}\, D_2,$$
where
$$d_2 = \int_{-\infty}^{+\infty} v^{2}\,\Lambda(v)\,dv \quad \text{and} \quad D_2 = \int_{-\infty}^{+\infty} \Lambda^{2}(v)\,dv,$$
and $g^{(2)}(y)$ is the second derivative of g with respect to y. According to Equation (23), $\hat{g}^{*}(y)$ does not constitute an asymptotically unbiased estimator of $g(y)$. An asymptotically unbiased estimator of $g(y)$ may be obtained via simple scaling of (22) and is provided by:
$$\hat{g}(y) = \frac{\hat{g}^{*}(y)}{B_{0.5}} = \frac{1}{n B_{0.5}\sqrt{\beta_n}}\sum_{t=1}^{n}\frac{1}{\sqrt{\beta_t}}\,\Lambda\!\left(\frac{y - Y_t}{\beta_t}\right). \tag{24}$$
According to Masry [23], the bias and variance of g ^ ( y ) are as follows:
$$\mathrm{Bias}(\hat{g}(y)) \approx \frac{\beta_n^{2}\, d_2}{2 B_{0.5}}\, g^{(2)}(y)\, B_{2.5}, \tag{25}$$
and
$$\mathrm{Var}(\hat{g}(y)) \approx \frac{g(y)}{n\beta_n B_{0.5}^{2}}\, D_2. \tag{26}$$
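To make (22)–(26) concrete, here is a minimal sketch of the scaled recursive estimator $\hat{g}$ of (24) with a Gaussian kernel. The bandwidth schedule $\beta_t = c\,t^{-1/3}$ is our own assumption (one choice satisfying the stated conditions), and $B_{0.5}$ is replaced by its finite-sample analogue $\frac{1}{n}\sum_t \sqrt{\beta_t/\beta_n}$:

```python
import math
import random

def recursive_kde(sample, x, c=1.06):
    """Sketch of the bias-corrected recursive estimator g-hat of Eq. (24):
    Gaussian kernel Lambda, assumed bandwidths beta_t = c * t**(-1/3),
    and B_{1/2} taken as its finite-sample value (1/n) * sum sqrt(beta_t/beta_n)."""
    n = len(sample)
    beta = [c * t ** (-1 / 3) for t in range(1, n + 1)]
    b_half = sum(math.sqrt(b / beta[-1]) for b in beta) / n
    gstar = sum(math.exp(-0.5 * ((x - y) / b) ** 2) / math.sqrt(2 * math.pi) / math.sqrt(b)
                for y, b in zip(sample, beta)) / (n * math.sqrt(beta[-1]))
    return gstar / b_half  # the scaling of Eq. (24) removes the B_{1/2} bias factor

random.seed(0)
sample = [random.gauss(0, 1) for _ in range(2000)]
print(recursive_kde(sample, 0.0))  # true N(0,1) density at 0 is ~0.3989
```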
Since generalized exponential entropy can be written as
$$C_4\Omega_{GEX}(Y) = \frac{1}{\eta}\left[\int g(y)\,e^{1-g^{\eta}(y)}\,dy - 1\right] = \frac{1}{\eta}\left[e\,E\!\left(e^{-g^{\eta}(Y)}\right) - 1\right].$$
Thus, our suggestion involves the estimation of the generalized exponential entropy of an unidentified continuous PDF, g, through
$$C_4\Omega_{GEX}(G_n) = \frac{1}{\eta}\left[\frac{e}{n}\sum_{t=1}^{n} e^{-\hat{g}^{\eta}(Y_{(t;n)})} - 1\right], \tag{27}$$
where $\hat{g}(y)$ is defined in (24).
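Putting the pieces together, (27) can be sketched as follows. For brevity this illustration is our own simplification, not the authors' code: it substitutes a fixed-bandwidth Gaussian kernel density estimate with the rule-of-thumb bandwidth $1.06\,\hat{\Sigma}\,n^{-1/5}$ (used later for the simulations) in place of the recursive form (24):

```python
import math
import random

def kde(sample, x):
    """Fixed-bandwidth Gaussian KDE (a simplification of the recursive
    estimator) with rule-of-thumb bandwidth 1.06 * sd * n**(-1/5)."""
    n = len(sample)
    m = sum(sample) / n
    sd = math.sqrt(sum((y - m) ** 2 for y in sample) / (n - 1))
    b = 1.06 * sd * n ** (-0.2)
    return sum(math.exp(-0.5 * ((x - y) / b) ** 2)
               for y in sample) / (n * b * math.sqrt(2 * math.pi))

def c4_gee(sample, eta):
    """Fourth-technique estimator of Eq. (27)."""
    n = len(sample)
    s = sum(math.exp(-kde(sample, y) ** eta) for y in sample)
    return (math.e * s / n - 1) / eta

random.seed(3)
sample = [random.random() for _ in range(200)]
print(c4_gee(sample, 2.0))  # true value is 0 for U(0,1) data
```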
The consistency of the estimator $e^{-\hat{g}^{\eta}(y)}$ will be covered in the theorems that follow.
Theorem 1.
Assume that $C_4\Omega_{GEX}(G_n)$ is the non-parametric estimator of generalized exponential entropy given in (27). Then $C_4\Omega_{GEX}(G_n)$ is a consistent estimator, with $0 \ne \eta > -\frac{1}{2}$.
Proof. 
We can write $e^{-\hat{g}^{\eta}(y)}$ using the Taylor series expansion in the form
$$e^{-\hat{g}^{\eta}(y)} \approx e^{-g^{\eta}(y)}\left[1 - \eta\, g^{\eta-1}(y)\left(\hat{g}(y) - g(y)\right)\right],$$
where $\hat{g}(y)$ is defined in (24). Therefore, we have
$$e^{-\hat{g}^{\eta}(y)} - e^{-g^{\eta}(y)} \approx -\eta\, g^{\eta-1}(y)\, e^{-g^{\eta}(y)}\left(\hat{g}(y) - g(y)\right). \tag{28}$$
By using the bias and variance in (25) and (26) in (28), we obtain
$$\mathrm{Bias}\!\left(e^{-\hat{g}^{\eta}(y)}\right) \approx -\eta\, g^{\eta-1}(y)\, e^{-g^{\eta}(y)}\,\mathrm{Bias}(\hat{g}(y)) \approx -\frac{\eta\,\beta_n^{2}\, d_2}{2 B_{0.5}}\, g^{(2)}(y)\, B_{2.5}\, g^{\eta-1}(y)\, e^{-g^{\eta}(y)}, \tag{29}$$
$$\mathrm{Var}\!\left(e^{-\hat{g}^{\eta}(y)}\right) \approx \eta^{2}\, g^{2\eta-2}(y)\, e^{-2g^{\eta}(y)}\,\mathrm{Var}(\hat{g}(y)) \approx \frac{\eta^{2} D_2}{n\beta_n B_{0.5}^{2}}\, g^{2\eta-1}(y)\, e^{-2g^{\eta}(y)}. \tag{30}$$
Consequently, the expression for the mean square error (MSE) is
$$MSE\!\left(e^{-\hat{g}^{\eta}(y)}\right) = \left[\mathrm{Bias}\!\left(e^{-\hat{g}^{\eta}(y)}\right)\right]^{2} + \mathrm{Var}\!\left(e^{-\hat{g}^{\eta}(y)}\right) \approx \left[\frac{\eta\,\beta_n^{2}\, d_2}{2 B_{0.5}}\, g^{(2)}(y)\, B_{2.5}\, g^{\eta-1}(y)\, e^{-g^{\eta}(y)}\right]^{2} + \frac{\eta^{2} D_2}{n\beta_n B_{0.5}^{2}}\, g^{2\eta-1}(y)\, e^{-2g^{\eta}(y)},$$
and
$$MSE\!\left(e^{-\hat{g}^{\eta}(y)}\right) \to 0, \quad \text{as } n \to +\infty,$$
which shows the consistency of the estimator
$$C_4\Omega_{GEX}(G_n) = \frac{1}{\eta}\left[\frac{e}{n}\sum_{t=1}^{n} e^{-\hat{g}^{\eta}(Y_{(t;n)})} - 1\right].$$
 □
Theorem 2.
Assume that $C_4\Omega_{GEX}(G_n)$ is the non-parametric estimator of generalized exponential entropy given in (27). Then, the bias and variance of $C_4\Omega_{GEX}(G_n)$, with $0 \ne \eta > -\frac{1}{2}$, are given by
$$\mathrm{Bias}(C_4\Omega_{GEX}(G_n)) \approx -\frac{\beta_n^{2}\, B_{2.5}\, d_2}{2 n B_{0.5}}\sum_{t=1}^{n} g^{(2)}(Y_{(t;n)})\, g^{\eta-1}(Y_{(t;n)})\, e^{1-g^{\eta}(Y_{(t;n)})}, \tag{31}$$
$$\mathrm{Var}(C_4\Omega_{GEX}(G_n)) \approx \frac{D_2}{n^{3}\beta_n B_{0.5}^{2}}\sum_{t=1}^{n} g^{2\eta-1}(Y_{(t;n)})\, e^{2-2g^{\eta}(Y_{(t;n)})}. \tag{32}$$
Proof. 
By using the bias and variance in (29) and (30), the result follows. □
We can now use Monte Carlo simulation to examine asymptotic normality. The standard normal PDF is the chosen kernel function, and the bandwidth $\beta$ is calculated using the normal optimal smoothing formula (also known as rule-of-thumb bandwidth selection) as $1.06\,\hat{\Sigma}\, n^{-1/5}$, where $\hat{\Sigma}$ is the sample standard deviation. Our goal is to confirm that
$$\Delta_4 = \frac{C_4\Omega_{GEX}(G_n) - E[C_4\Omega_{GEX}(G_n)]}{\sqrt{Var[C_4\Omega_{GEX}(G_n)]}}$$
has a standard normal distribution, where $Var[C_4\Omega_{GEX}(G_n)]$ is defined in (32). From the $EXP(1)$ distribution, with $\eta = 2$ and $\eta = 3$, we generate 1000 samples, each of size $n = 50$ and $n = 100$. The histograms in Figure 9 and Figure 10 are used to confirm that the estimator in (27) is asymptotically normal.

4. Numerical Calculation

To illustrate the behavior of our estimators, we provide some numerical findings in this part, covering both real and simulated data. Estimating generalized exponential entropy requires a suitable value of λ for a given n. As recommended by Grzegorzewski and Wieczorkowski [32] for entropy estimation, we use the heuristic formula, which depends on the floor value,
$$\lambda = \left[\sqrt{n} + 0.5\right].$$

4.1. Simulation-Based Estimator Investigation

In the present investigation, the performance of the proposed estimators for generalized exponential entropy is investigated using simulations. For each sample size, one thousand samples are created, and the estimators and their root mean squared errors are assessed. The study uses a variety of distributions, such as standard uniform and exponential. The E X P ( γ ) distribution is implemented in this study using the rate parameter γ = 0.5 . Values of the root mean square error and standard deviation for the four generalized exponential entropy estimators for sample sizes n = 10 , 30 , 50 , 100 for each of the two distributions considered are shown in Table 1 and Table 2 and Figure 11, Figure 12, Figure 13 and Figure 14. Therefore, we can obtain the following conclusions:
  • As n increases with η fixed, the root mean square error and the associated standard deviation decrease for each estimator.
  • As η increases with n fixed, the root mean square error and the associated standard deviation decrease for each estimator.
  • The values of each estimator are close to the true value, see Figure 11, Figure 12, Figure 13 and Figure 14.

4.2. Real Data Study

Reaven and Miller [33] investigated the correlation between insulin and blood chemistry indicators of glucose tolerance in 145 non-obese individuals. After visualizing the data in three dimensions using the PRIM9 technology of the Stanford Linear Accelerator Center, they found an odd pattern that resembled a large blob with two wings pointing in separate directions. The 145 observations were categorized into a normal group, an overt diabetic group, and a sub-clinical (chemical) diabetic group. Five variables were examined for each of the subjects: steady state plasma glucose (sspg), plasma insulin during the test (instest), test plasma glucose level (glutest), fasting plasma glucose level (glufast), and relative weight (relwt). In Figure 15, the covariance ellipses are visualized for exploring these multivariate data and identifying patterns or group differences. Each ellipse illustrates the links between variable pairs by representing the distribution and orientation of data for a particular group. Stronger linear relationships (high covariance) are shown by elongated ellipses. Low covariance, or little to no linear connection, is indicated by circular ellipses. The diversity within the group is represented by the ellipse size: larger ellipses signify greater variability. Therefore, we can see that variances are greatest in the overtly diabetic group and least in the normal group.
In this study, we choose the sspg set of data to discuss the four generalized exponential entropy estimators. The violin plot, a statistical data visualization that combines the characteristics of a box plot and a kernel density plot, is depicted in Figure 16. It is very helpful for comparing different distributions and displaying the data’s distribution. The box plot, which displays the median, quartiles, and potential outliers, is embedded within the violin, and the width of the violin shows the density (or frequency) of the data at various levels. Furthermore, for greater aesthetics, density is frequently mirrored along the center axis; alternatively it can be shown one-sided.
We fitted these data to a gamma distribution with shape parameter 2.880043 and scale parameter 63.95974, whose PDF is given by
$$f(y; a, b) = \frac{y^{a-1} e^{-y/b}}{b^{a}\, \Gamma(a)}, \quad a, b, y > 0.$$
The Kolmogorov–Smirnov (Ko-Sm) test is used to verify that the gamma distribution is a suitable fit for the sspg dataset. The distance between the empirical and fitted distribution functions yields a Ko-Sm statistic of 0.068327 with a corresponding p-value of 0.5076. Consequently, it is reasonable to fit the data with the gamma distribution, as shown in Figure 17. Table 3 shows the four generalized exponential entropy estimators of the sspg data for different values of η together with their absolute biases. As η increases, the absolute biases decrease, and the estimators lie close to the true value.
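The fit-and-test step can be reproduced with SciPy. A sketch under an explicit assumption: since the raw sspg values are not reproduced here, the stand-in sample is drawn from the reported fitted gamma (shape 2.880043, scale 63.95974) rather than taken from the Reaven–Miller data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Stand-in sample of the same size (n = 145) as the real dataset.
sample = rng.gamma(shape=2.880043, scale=63.95974, size=145)

# Maximum-likelihood fit of the two-parameter gamma (location pinned at 0).
a_hat, loc, b_hat = stats.gamma.fit(sample, floc=0)

# Kolmogorov-Smirnov test of the fitted gamma against the sample.
ks_stat, p_value = stats.kstest(sample, "gamma", args=(a_hat, loc, b_hat))
print(f"shape={a_hat:.4f}, scale={b_hat:.4f}, KS={ks_stat:.5f}, p={p_value:.4f}")
```

Note that because the gamma parameters are estimated from the same sample, the nominal Ko-Sm p-value is approximate; a parametric bootstrap would correct it if needed.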

5. Statistics for Evaluating the Uniformity Hypothesis Test

In this section, we examine how the successful estimators of generalized exponential entropy, evaluated through Monte Carlo experiments, can extend the family of entropy-based tests of uniformity. Assessing uniformity is important in many real-world applications, since goodness-of-fit problems can frequently be reduced to testing uniformity. Suppose we have a random sample Z 1 , Z 2 , … , Z n of size n drawn from a population with unknown CDF G. The goodness-of-fit problem above, where G is the CDF of Z t for t = 1 , … , n , is then equivalent to testing the null hypothesis N 0 : G ( z ) = z for all z ∈ ( 0 , 1 ) against N 1 : G ( z ) ≠ z for some z ∈ ( 0 , 1 ) . A detailed review of research on testing uniformity is given by Marhuenda et al. [34], who also provide an extensive list of references (see also Sakr and Mohamed [17] and Qiu and Jia [14]).
Theorem 3.
Assume that the random variable Z follows the standard uniform distribution U ( 0 , 1 ) . Then, from (11), C Ω G E X ( Z ) = 0 .
In line with the generalized exponential entropy estimators C j Ω G E X ( G n ) , j = 1 , 2 , 3 , 4 , given in (19)–(21) and (24), respectively, and under the null hypothesis G ( z ) = z for all z ∈ ( 0 , 1 ) , we get (here Pr. denotes convergence in probability)
$$C_j^{\Omega GEX}(G_n) \xrightarrow{\mathrm{Pr.}} 0, \quad \text{as } n \to \infty.$$
Considering instead an alternative distribution with PDF h and CDF H supported on the interval ( 0 , 1 ) , we get
$$C_j^{\Omega GEX}(H_n) \xrightarrow{\mathrm{Pr.}} C^{\Omega GEX}(Y) \neq 0, \quad \text{as } n \to \infty.$$
To test uniformity, we propose the test statistics C j Ω G E X ( G n ) , j = 1 , 2 , 3 , 4 , following the technique developed by Zamanzade and Arghami [35]. We reject the uniformity hypothesis when the value of C j Ω G E X ( G n ) departs from zero, indicating non-uniformity.

Percentage Critical Points and Power Comparisons

Unfortunately, the exact null distribution of the statistics C j Ω G E X ( G n ) cannot be obtained analytically due to the intricacy of the tests. We therefore determine their critical values by Monte Carlo simulation, using the Monte Carlo method to obtain the percentage points of the generalized exponential entropy estimators. The acceptance region of the uniformity test is described by the interval
$$[\text{lower level}, \text{upper level}] := \left[ C_j^{\Omega GEX}\Big(\frac{\alpha}{2}\Big),\; C_j^{\Omega GEX}\Big(1 - \frac{\alpha}{2}\Big) \right], \quad j = 1, 2, 3, 4,$$
where α denotes the selected significance level, and C j Ω G E X ( α ) represents the α-quantile, under the null hypothesis, of the approximate (asymptotic) distribution of the test statistic.
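The percentile-based critical points can be obtained generically for any statistic by simulating under the null. A sketch with an explicit assumption: `toy_stat` below is a hypothetical placeholder (the mean deviation of the order statistics from their uniform expectations), standing in for the C j Ω G E X ( G n ) estimators, whose formulas are given earlier in the paper:

```python
import numpy as np

def mc_critical_points(stat, n, alpha=0.05, n_rep=1000, seed=0):
    """Monte Carlo (alpha/2, 1 - alpha/2) percentage points of `stat`
    computed on sorted U(0, 1) samples of size n."""
    rng = np.random.default_rng(seed)
    values = np.array([stat(np.sort(rng.uniform(size=n)))
                       for _ in range(n_rep)])
    return np.quantile(values, [alpha / 2, 1 - alpha / 2])

# Placeholder statistic (NOT the paper's estimator): mean deviation of the
# order statistics from their uniform expectations t / (n + 1).
def toy_stat(z):
    n = len(z)
    return np.mean(z - np.arange(1, n + 1) / (n + 1))

lower, upper = mc_critical_points(toy_stat, n=30)
```

A sample is declared non-uniform when its statistic falls outside `[lower, upper]`, mirroring the two-sided rule described above.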
At the 0.05 significance level, the percentage critical points of the test statistics C j Ω G E X ( G n ) , j = 1 , 2 , 3 , 4 , are shown in Table 4 for sample sizes n = 10 , 20 , 30 , 50 , 70 , 100 , derived from 1000-iteration Monte Carlo simulations. We can then observe the following:
  • By increasing the sample size n, the difference in percentage points decreases.
  • For fixed n and increasing η , the difference in percentage points decreases.
  • Most of the four estimators yield percentage point intervals that include the true value (i.e., zero) of the generalized exponential entropy measure of the U ( 0 , 1 ) distribution across different values of n and η , with few exceptions at small n.
The densities of the four generalized exponential entropy estimators for n = 10 , 20 , 30 , 40 , 50 and η = 2 , 4 are displayed in Figure 18 and Figure 19. The test statistics concentrate around the exact value (zero) as n increases, indicating that both bias and variance decrease with increasing n.
We evaluate the efficacy of the proposed tests using Monte Carlo simulations, comparing their power under seven alternative distributions and against other tests. Several researchers have studied a variety of alternatives in uniformity testing, for example Zamanzade and Arghami [35] and Qiu and Jia [14]. Based on the shapes of their densities, these alternatives are divided into three groups as follows:
$$A_{1,\omega}:\; G(z) = 1 - (1 - z)^{\omega}, \quad 0 \le z \le 1, \quad \omega = 1.5, 2,$$
$$A_{2,\omega}:\; G(z) = \begin{cases} 2^{\omega - 1} z^{\omega}, & 0 \le z \le 0.5, \\ 1 - 2^{\omega - 1} (1 - z)^{\omega}, & 0.5 \le z \le 1, \end{cases} \quad \omega = 1.5, 2, 3,$$
$$A_{3,\omega}:\; G(z) = \begin{cases} 0.5 - 2^{\omega - 1} (0.5 - z)^{\omega}, & 0 \le z \le 0.5, \\ 0.5 + 2^{\omega - 1} (z - 0.5)^{\omega}, & 0.5 \le z \le 1, \end{cases} \quad \omega = 1.5, 2.$$
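Each alternative CDF above has a closed-form inverse, so samples can be drawn by inversion of a uniform variate. A sketch (the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def sample_alt(kind, omega, n, rng):
    """Draw n variates from the alternative with CDF G (kind = 1, 2, 3)
    by inverting G at uniform random numbers."""
    u = rng.uniform(size=n)
    if kind == 1:                       # G(z) = 1 - (1 - z)^omega
        return 1.0 - (1.0 - u) ** (1.0 / omega)
    if kind == 2:                       # mass pushed towards the center
        lo = u <= 0.5
        z = np.empty(n)
        z[lo] = 0.5 * (2.0 * u[lo]) ** (1.0 / omega)
        z[~lo] = 1.0 - 0.5 * (2.0 * (1.0 - u[~lo])) ** (1.0 / omega)
        return z
    if kind == 3:                       # mass pushed towards the endpoints
        lo = u <= 0.5
        z = np.empty(n)
        z[lo] = 0.5 - 0.5 * (1.0 - 2.0 * u[lo]) ** (1.0 / omega)
        z[~lo] = 0.5 + 0.5 * (2.0 * u[~lo] - 1.0) ** (1.0 / omega)
        return z
    raise ValueError("kind must be 1, 2 or 3")

rng = np.random.default_rng(3)
z2 = sample_alt(2, 2.0, 10_000, rng)   # lower-variance alternative
```

Consistent with Stephens' interpretation quoted below Table 5, the second family concentrates mass near 0.5 (variance below the uniform's 1/12), while the third pushes mass towards the endpoints.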
We evaluate the effectiveness of our proposed test statistics in comparison with several established statistics under the same alternatives, as follows:
  • The Cramér–von Mises test statistic (Cramér [36]):
$$C\&M = \sum_{t=1}^{n} \left( Z_{(t;n)} - \frac{2t - 1}{2n} \right)^{2} + \frac{1}{12n}.$$
  • The Kolmogorov–Smirnov test statistic (Kolmogorov [37]):
$$K\&S = \max\left\{ \max_{1 \le t \le n} \left( \frac{t}{n} - Z_{(t;n)} \right),\; \max_{1 \le t \le n} \left( Z_{(t;n)} - \frac{t - 1}{n} \right) \right\}.$$
  • The Anderson–Darling test statistic (Anderson and Darling [38]):
$$A\&D = -\frac{2}{n} \sum_{t=1}^{n} \left[ (t - 0.5) \log\!\big(Z_{(t;n)}\big) + (n - t + 0.5) \log\!\big(1 - Z_{(t;n)}\big) \right] - n.$$
  • The extropy test statistic (Qiu and Jia [14]):
$$\chi_{EX} = -\frac{1}{2n} \sum_{t=1}^{n} \frac{\Xi_t \, r / n}{Z_{(t+r;n)} - Z_{(t-r;n)}},$$
    where the window size r is a positive integer with r < n / 2 , Z ( t ; n ) = Z ( 1 ; n ) if t < 1 and Z ( t ; n ) = Z ( n ; n ) if t > n , and Ξ t is defined in (10).
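The three classical EDF statistics can be computed directly from the order statistics. A sketch; the extropy statistic is omitted because its coefficients Ξ t are defined elsewhere in the paper:

```python
import numpy as np

def edf_statistics(z):
    """Cramer-von Mises, Kolmogorov-Smirnov and Anderson-Darling statistics
    for a sample z from (0, 1) under H0: G(z) = z."""
    z = np.sort(np.asarray(z, dtype=float))
    n = len(z)
    t = np.arange(1, n + 1)
    # Cramer-von Mises: squared deviations from the mid-rank plot positions.
    cvm = np.sum((z - (2 * t - 1) / (2 * n)) ** 2) + 1 / (12 * n)
    # Kolmogorov-Smirnov: largest one-sided gap in either direction.
    ks = max(np.max(t / n - z), np.max(z - (t - 1) / n))
    # Anderson-Darling: tail-weighted log discrepancies.
    ad = (-2 / n * np.sum((t - 0.5) * np.log(z)
                          + (n - t + 0.5) * np.log(1 - z)) - n)
    return cvm, ks, ad
```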
Remark 1.
In our simulation study, we focus on evaluating the power of the proposed uniformity tests under small sample sizes ( n = 10 , 20 , 30 ) . This choice reflects practical scenarios where uniformity testing is applied to limited data. Our goal is to assess the finite-sample performance and sensitivity of the tests rather than to explore asymptotic properties. Power analysis in this context provides meaningful insight into the tests' effectiveness in real-world problems with restricted sample sizes. Small samples are justified in power studies for several reasons:
1. Power analysis under small n is crucial: in practice, uniformity tests are often used in settings with limited data (e.g., simulation diagnostics, goodness-of-fit checks in applied sciences), so knowing how a test performs with small samples is highly valuable.
2. Since we are not using these sample sizes to justify theoretical properties such as asymptotic normality, but instead focusing on finite-sample power, small n is appropriate.
3. It demonstrates practical sensitivity: if a test has good power at n = 10 , 20 , 30 , this highlights its sensitivity and usefulness in realistic, data-limited conditions.
Unfortunately, the power of the proposed tests varies with the alternative distribution and the window size, so it is not possible to find a single value of λ that maximizes the test's power against every alternative. To ensure the efficacy of the proposed test, which attains adequate (though not optimal) power over all the alternative distributions described in (33), we determine λ by the associated heuristic formula. To focus on finite-sample behavior rather than the asymptotic regime of the central limit theorem, we restrict the sample size to n = 10 , 20 , 30 . The power comparison estimates at a significance level of 0.05 with η = 2 , 3 are provided in Table 5. It is then evident that
  • In contrast to the other tests, the four generalized exponential entropy estimators behave well under the A 2 , ω alternatives. Stephens [39] interpreted the alternatives A 1 , ω , A 2 , ω , and A 3 , ω as representing a shift in the mean, a shift towards lower variance, and a shift towards greater variance, respectively. Hence, when there is a shift towards lower variance, our tests outperform the competing tests.
  • By increasing the value of η , our four proposed estimators increase in power.
  • Across a range of n values, the fourth statistic, which is based on the kernel estimator, performs better than all other tests under the alternatives A 1 , ω and A 2 , ω .
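The power entries in Table 5 follow the usual Monte Carlo recipe: simulate the null to obtain the two-sided critical points, then count rejections under each alternative. A self-contained sketch; `toy_stat` is a hypothetical placeholder statistic, not one of the paper's C j Ω G E X estimators, and `alt_1_2` samples the alternative A 1 , 2 by inversion:

```python
import numpy as np

def power_estimate(stat, sampler, n, alpha=0.05, n_rep=1000, seed=0):
    """Monte Carlo power of a two-sided percentile test of uniformity."""
    rng = np.random.default_rng(seed)
    null = np.array([stat(np.sort(rng.uniform(size=n)))
                     for _ in range(n_rep)])
    lower, upper = np.quantile(null, [alpha / 2, 1 - alpha / 2])
    alt = np.array([stat(np.sort(sampler(n, rng))) for _ in range(n_rep)])
    return np.mean((alt < lower) | (alt > upper))

# Placeholder statistic: mean deviation of the order statistics from their
# uniform expectations t / (n + 1).
def toy_stat(z):
    n = len(z)
    return np.mean(z - np.arange(1, n + 1) / (n + 1))

# Alternative A_{1,2}: G(z) = 1 - (1 - z)^2, sampled by inversion.
def alt_1_2(n, rng):
    return 1.0 - np.sqrt(1.0 - rng.uniform(size=n))

pw = power_estimate(toy_stat, alt_1_2, n=30)
```

Because A 1 , 2 shifts the mean well below 0.5, even this crude statistic rejects frequently at n = 30, illustrating why mean-shift alternatives are comparatively easy for all the tests in Table 5.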

6. Conclusions

In this article, we introduced the continuous case of generalized exponential entropy, with applications to well-known distributions, and found that this model tends to the classical Shannon entropy as η → 0 . We discussed the complementary dual of generalized exponential entropy and its approximation, and noted that this approximation coincides with the approximation of the complementary dual of entropy, known as extropy. We provided several non-parametric estimation techniques and assessed their asymptotic normality using Monte Carlo simulation. In the fourth estimation technique, we applied kernel estimation under ρ c o r r -mixing dependent data and discussed its consistency and bias. We used simulation and real data to evaluate the proposed estimators. Finally, we analyzed percentage critical points and power comparisons using the four generalized exponential entropy estimators to test uniformity. We observed that the value of the parameter η significantly affects the results.
Despite these contributions, the study has some limitations. First, while the simulation results are promising, the scope of real data analysis is limited, and additional case studies could further validate the estimator’s robustness. Second, the analysis focuses primarily on univariate settings, and extending the model to multivariate contexts remains an open challenge. Third, the kernel-based estimator under ρ corr -mixing assumes specific dependence structures that may not capture all forms of real-world dependency, potentially limiting generalizability. Lastly, some theoretical properties such as the optimal bandwidth selection and finite-sample behavior were not exhaustively explored.
For future research, we plan to conduct more extensive real-data analyses to further highlight the potential of the proposed estimator in applied settings. In particular, we aim to develop and evaluate statistical tests based on this entropy measure to enhance its applicability in inference problems. A more detailed investigation of the covariance structure in real datasets will also be considered to better understand its influence on the performance and interpretation of the proposed model. Furthermore, an explicit report on the empirical Type I error (i.e., the rejection rate under the null hypothesis) would enhance the completeness of the study. Finally, this study can be extended to the dynamic versions of the uncertainty measures; see, for example, Mohamed and Sakr [40].

Author Contributions

Conceptualization, M.S.M. and H.H.S.; Formal analysis, M.S.M. and H.H.S.; Investigation, M.S.M. and H.H.S.; Methodology, M.S.M. and H.H.S.; Software, M.S.M. and H.H.S.; Visualization, M.S.M. and H.H.S.; Resources, M.S.M. and H.H.S.; Writing—original draft, M.S.M. and H.H.S.; Validation, M.S.M. and H.H.S.; Writing—review & editing, M.S.M. and H.H.S. All authors contributed equally to the study’s conception and design. All authors have read and agreed to the published version of the manuscript.

Funding

This study is supported via funding from Prince Sattam bin Abdulaziz University project number (PSAU/2025/02/32941).

Data Availability Statement

The datasets generated and/or analyzed during the current study are available within the manuscript.

Acknowledgments

The authors extend their appreciation to Prince Sattam bin Abdulaziz University for funding this research work through the project number (PSAU/2025/02/32941).

Conflicts of Interest

The authors have no relevant financial or non-financial interests to disclose and declare no conflicts of interest.

References

  1. Shannon, C. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  2. Renyi, A. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 20–30 July 1960; UC Press, Berkeley: Berkeley, CA, USA, 1961; pp. 547–561. [Google Scholar]
  3. Kapur, J.N. Generalized entropy of order α and type β. Math. Semin. 1967, 4, 78–82. [Google Scholar]
  4. Havrda, J.; Charvat, F. Qualification method of classification process: The concept of structural α-entropy. Kybernetika 1967, 3, 30–35. [Google Scholar]
  5. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487. [Google Scholar] [CrossRef]
  6. Pal, N.R.; Pal, S.K. Object background segmentation using new definitions of entropy. IEEE Proc. 1989, 136, 284–295. [Google Scholar] [CrossRef]
  7. Pal, N.R.; Pal, S.K. Entropy: A new definition and its applications. IEEE Trans. Syst. Man Cybern. 1991, 21, 1260–1270. [Google Scholar] [CrossRef]
  8. Campbell, L.L. Exponential entropy as a measure of extent of distribution. Z. Wahrscheinlichkeitstheorie Verwandte Geb. 1966, 5, 217–225. [Google Scholar] [CrossRef]
  9. Panjehkeh, S.M.; Borzadaran, G.R.M.; Amini, M. Results related to exponential entropy. Int. Inf. Coding Theory 2017, 4, 258–275. [Google Scholar] [CrossRef]
  10. Kvalseth, T.O. On exponential entropies. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics 2000, Nashville, TN, USA, 8–11 October 2000; Volume 4, pp. 2822–2826. [Google Scholar]
  11. Lad, F.; Sanfilippo, G.; Agro, G. Extropy: Complementary dual of entropy. Stat. Sci. 2015, 30, 40–58. [Google Scholar] [CrossRef]
  12. Vasicek, O. A Test for Normality based on Sample Entropy. J. R. Stat. Soc. Ser. B 1976, 38, 54–59. [Google Scholar] [CrossRef]
  13. Ebrahimi, N.; Pflughoeft, K.; Soofi, E.S. Two Measures of Sample Entropy. Stat. Probab. Lett. 1994, 20, 225–234. [Google Scholar] [CrossRef]
  14. Qiu, G.; Jia, K. Extropy Estimators with Applications in Testing Uniformity. J. Nonparametric Stat. 2018, 30, 182–196. [Google Scholar] [CrossRef]
  15. Noughabi, H.A.; Jarrahiferiz, J. Extropy of order statistics applied to testing symmetry. Commun. Stat.-Simul. 2020, 55, 3389–3399. [Google Scholar] [CrossRef]
  16. Wachowiak, M.P.; Smolikova, R.; Tourassi, G.D.; Elmaghraby, A.S. Estimation of generalized entropies with sample spacing. Pattern Anal. Appl. 2005, 8, 95–101. [Google Scholar] [CrossRef]
  17. Sakr, H.H.; Mohamed, M.S. Sharma-Taneja-Mittal Entropy and Its Application of Obesity in Saudi Arabia. Mathematics 2024, 12, 2639. [Google Scholar] [CrossRef]
  18. Sakr, H.H.; Mohamed, M.S. On residual cumulative generalized exponential entropy and its application in human health. Electron. Res. Arch. 2025, 33, 1633–1666. [Google Scholar] [CrossRef]
  19. Rosenblatt, M. A central limit theorem and a strong mixing condition. Proc. Natl. Acad. Sci. USA 1956, 42, 43–47. [Google Scholar] [CrossRef] [PubMed]
  20. Ibragimov, I.A. Some limit theorems for stochastic processes stationary in the strict sense. Dokl. Akad. Nauk. SSSR 1959, 125, 711–714. [Google Scholar]
  21. Kolmogorov, A.N.; Rozanov, Y.A. On strong mixing conditions for stationary Gaussian processes. Theory Probab. Appl. 1960, 5, 204–208. [Google Scholar] [CrossRef]
  22. Bradley, R.C. Central limit theorems under weak dependence. J. Multivar. Anal. 1981, 11, 1–16. [Google Scholar] [CrossRef]
  23. Masry, E. Recursive probability density estimation for weakly dependent stationary processes. IEEE Trans. Inf. Theory 1986, 32, 254–267. [Google Scholar] [CrossRef]
  24. Rajesh, G.; Abdul-Sathar, E.I.; Maya, R. Local linear estimation of residual entropy function of conditional distribution. Comput. Stat. Data Anal. 2015, 88, 1–14. [Google Scholar] [CrossRef]
  25. Irshad, M.R.; Maya, R.; Buono, F.; Longobardi, M. Kernel Estimation of Cumulative Residual Tsallis Entropy and Its Dynamic Version under ρ-Mixing Dependent Data. Entropy 2022, 24, 9. [Google Scholar] [CrossRef] [PubMed]
  26. Ye, J.; Cui, W. Exponential Entropy for Simplified Neutrosophic Sets and Its Application in Decision Making. Entropy 2018, 20, 357. [Google Scholar] [CrossRef]
  27. Wang, C.; Shi, G.; Sheng, Y.; Ahmadzade, H. Exponential entropy of uncertain sets and its applications to learning curve and portfolio optimization. J. Ind. Manag. 2025, 21, 1488–1502. [Google Scholar] [CrossRef]
  28. Wanke, P. The uniform distribution as a first practical approach to new product inventory management. Int. J. Prod. Econ. 2008, 114, 811–819. [Google Scholar] [CrossRef]
  29. Correa, J.C. A new Estimator of entropy. Commun. Stat. Theory Methods 1995, 24, 2439–2449. [Google Scholar] [CrossRef]
  30. Wegman, E.J.; Davies, H.I. Remarks on some recursive estimators of a probability density. Ann. Stat. 1979, 7, 316–327. [Google Scholar] [CrossRef]
  31. Parzen, E. On estimation of a probability density function and mode. Ann. Math. Stat. 1962, 33, 1065–1076. [Google Scholar] [CrossRef]
  32. Grzegorzewski, P.; Wieczorkowski, R. Entropy-based Goodness-of-fit Test for Exponentiality. Commun. Stat. Theory Methods 1999, 28, 1183–1202. [Google Scholar] [CrossRef]
  33. Reaven, G.M.; Miller, R.G. An attempt to define the nature of chemical diabetes using a multidimensional analysis. Diabetologia 1979, 16, 17–24. [Google Scholar] [CrossRef]
  34. Marhuenda, Y.; Morales, D.; Pardo, M.C. A Comparison of Uniformity Tests. Statistics 2005, 39, 315–327. [Google Scholar] [CrossRef]
  35. Zamanzade, E.; Arghami, N.R. Goodness-of-Fit Test based on Correcting Moments of Modified Entropy Estimator. J. Stat. Comput. Simul. 2011, 81, 2077–2093. [Google Scholar] [CrossRef]
  36. Cramér, H. On the Composition of Elementary Errors: Second Paper: Statistical Applications. Scand. Actuar. J. 1928, 1, 141–180. [Google Scholar] [CrossRef]
  37. Kolmogorov, A.N. Sulla determinazione empirica di una legge di distribuzione. G. Ist. Ital. Attuari 1933, 4, 83–91. [Google Scholar]
  38. Anderson, T.W.; Darling, D.A. A Test of Goodness of Fit. J. Am. Stat. Assoc. 1954, 49, 765–769. [Google Scholar] [CrossRef]
  39. Stephens, M.A. EDF statistics for goodness of fit and some comparisons. J. Am. Stat. Assoc. 1974, 69, 730–737. [Google Scholar] [CrossRef]
  40. Mohamed, M.S.; Sakr, H.H. Generalizing Uncertainty Through Dynamic Development and Analysis of Residual Cumulative Generalized Fractional Extropy with Applications in Human Health. Fractal Fract. 2025, 9, 388. [Google Scholar] [CrossRef]
Figure 1. Plot of the generalized exponential entropy of exponential distribution with γ = 0.5 , 1 , 1.5 , 2 .
Figure 2. Plot of the complete differential complementary dual of generalized exponential entropy and entropy of E X P ( 1 ) distribution.
Figure 3. Histogram of Δ 1 with η = 2 (left) and η = 3 (right), and n = 50 .
Figure 4. Histogram of Δ 1 with η = 2 (left) and η = 3 (right), and n = 100 .
Figure 5. Histogram of Δ 2 with η = 2 (left) and η = 3 (right), and n = 50 .
Figure 6. Histogram of Δ 2 with η = 2 (left) and η = 3 (right), and n = 100 .
Figure 7. Histogram of Δ 3 with η = 2 (left) and η = 3 (right), and n = 50 .
Figure 8. Histogram of Δ 3 with η = 2 (left) and η = 3 (right), and n = 100 .
Figure 9. Histogram of Δ 4 with η = 2 (left) and η = 3 (right), and n = 50 .
Figure 10. Histogram of Δ 4 with η = 2 (left) and η = 3 (right), and n = 100 .
Figure 11. Generalized exponential entropy of E X P ( 0.5 ) with η = 2 and n = 10 .
Figure 12. Generalized exponential entropy of E X P ( 0.5 ) with η = 2 and n = 100 .
Figure 13. Generalized exponential entropy of U ( 0 , 1 ) with η = 2 and n = 10 .
Figure 14. Generalized exponential entropy of U ( 0 , 1 ) with η = 3 and n = 100 .
Figure 15. Covariance ellipses of the data.
Figure 16. Violin plot of SSPG by group.
Figure 17. Histogram of SSPG with gamma distribution.
Figure 18. The densities of the four generalized exponential entropy estimators for n = 10 , 20 , 30 , 40 , 50 , and η = 2 .
Figure 19. The densities of the four generalized exponential entropy estimators for n = 10 , 20 , 30 , 40 , 50 , and η = 4 .
Table 1. The standard deviation (in parentheses) and root mean square error for the generalized exponential entropy estimators of the E X P ( 0.5 ) distribution.
η = 2; C ΩGEX ( Y ) = 0.753892
n      C1ΩGEX(Gn)              C2ΩGEX(Gn)              C3ΩGEX(Gn)              C4ΩGEX(Gn)
10     0.166619 (0.128746)     0.139548 (0.118047)     0.131408 (0.113455)     0.0676092 (0.0647867)
30     0.0721833 (0.0595626)   0.0693193 (0.0589205)   0.0614541 (0.0556205)   0.0509792 (0.0261327)
50     0.0503906 (0.0430887)   0.0492573 (0.0428833)   0.0444755 (0.0412355)   0.0503707 (0.0180801)
100    0.0322678 (0.0279918)   0.0319324 (0.0279467)   0.0290094 (0.0271723)   0.048996 (0.0128205)
η = 3; C ΩGEX ( Y ) = 0.545428
10     0.123315 (0.107512)     0.0955782 (0.0811191)   0.090431 (0.0779946)    0.0288746 (0.0276784)
30     0.0455141 (0.0374106)   0.0441009 (0.0368886)   0.0389924 (0.0341483)   0.0185159 (0.00760492)
50     0.0302498 (0.0253976)   0.029718 (0.0251962)    0.0268443 (0.0238917)   0.0184828 (0.004876)
100    0.0180671 (0.0151679)   0.0179056 (0.0151121)   0.0163372 (0.0145665)   0.0181694 (0.00331677)
Table 2. The standard deviation (in parentheses) and root mean square error for the generalized exponential entropy estimators of the U ( 0 , 1 ) distribution.
η = 2; C ΩGEX ( Y ) = 0
n      C1ΩGEX(Gn)              C2ΩGEX(Gn)              C3ΩGEX(Gn)              C4ΩGEX(Gn)
10     0.274905 (0.101703)     0.126526 (0.11006)      0.104323 (0.0963832)    0.158303 (0.151793)
30     0.0956154 (0.0578994)   0.0553293 (0.0553003)   0.0598109 (0.0598273)   0.0902685 (0.0799785)
50     0.0659172 (0.0429443)   0.0426288 (0.0423997)   0.0450114 (0.0448721)   0.0827768 (0.0565385)
100    0.0416265 (0.0281866)   0.0287682 (0.0286104)   0.030048 (0.0298927)    0.0804643 (0.0371721)
η = 3; C ΩGEX ( Y ) = 0
10     0.170787 (0.0648594)    0.0728147 (0.0687118)   0.0725207 (0.0652757)   0.111723 (0.108251)
30     0.0647521 (0.04087)     0.0392415 (0.0385893)   0.0481893 (0.0463916)   0.0745462 (0.0668339)
50     0.0535745 (0.0322633)   0.0328033 (0.0316971)   0.0383248 (0.0361227)   0.069018 (0.0498909)
100    0.0431192 (0.0225317)   0.0264828 (0.023074)    0.0280886 (0.0249522)   0.0675841 (0.0355629)
Table 3. The generalized exponential entropy estimators and their absolute biases (in parentheses) of the sspg set of data.
η      C ΩGEX ( Y )   C1ΩGEX(Gn)                C2ΩGEX(Gn)                C3ΩGEX(Gn)                C4ΩGEX(Gn)
0.7    2.39022        2.39562 (0.00549344)      2.39602 (0.00584758)      2.39631 (0.00611249)      2.39382 (0.00359942)
0.9    1.89316        1.8946 (0.00144317)       1.89468 (0.00152767)      1.89479 (0.00163561)      1.89445 (0.00129579)
1      1.71011        1.71085 (0.000742883)     1.7109 (0.000786553)      1.71096 (0.000855934)     1.71087 (0.000765248)
1.5    1.1452         1.14523 (0.0000300966)    1.14523 (0.000032158)     1.14524 (0.0000380813)    1.14525 (0.0000512178)
2      0.859141       0.859128 (0.0000129779)   0.859128 (0.0000129042)   0.859128 (0.0000125223)   0.85913 (0.000011189)
Table 4. At the significance level of 0.05 , the limits of the percentage points of the generalized exponential entropy estimators with η = 2 , 3 .
n; η = 2   C1ΩGEX(Gn)                C2ΩGEX(Gn)                C3ΩGEX(Gn)                C4ΩGEX(Gn)
10         (−0.422986, −0.0150384)   (−0.284684, 0.177716)     (−0.177159, 0.189558)     (−0.22324, 0.210542)
20         (−0.248859, 0.0333156)    (−0.141655, 0.128236)     (−0.123249, 0.146506)     (−0.154934, 0.172644)
30         (−0.183537, 0.0377545)    (−0.104446, 0.105192)     (−0.105972, 0.119938)     (−0.0959191, 0.159991)
50         (−0.132261, 0.0346427)    (−0.0734596, 0.0868403)   (−0.0753921, 0.0944493)   (−0.0301963, 0.146967)
70         (−0.105717, 0.0274583)    (−0.0676648, 0.0664032)   (−0.0706697, 0.0746701)   (−0.0193603, 0.136071)
100        (−0.086409, 0.0198325)    (−0.0565094, 0.0536249)   (−0.056399, 0.0581039)    (−0.00358996, 0.126684)
n; η = 3   C1ΩGEX(Gn)                C2ΩGEX(Gn)                C3ΩGEX(Gn)                C4ΩGEX(Gn)
10         (0.147299, 0.3406)        (0.0325989, 0.33247)      (−0.121961, 0.173553)     (−0.187423, 0.195208)
20         (−0.0827349, 0.0755373)   (−0.0731537, 0.118101)    (−0.115785, 0.120527)     (−0.136084, 0.161355)
30         (−0.105273, 0.0551253)    (−0.077303, 0.0866788)    (−0.103773, 0.0997682)    (−0.0913948, 0.149395)
50         (−0.0974539, 0.032796)    (−0.0648665, 0.0697503)   (−0.0795326, 0.0741954)   (−0.0335902, 0.136741)
70         (−0.0884423, 0.0204563)   (−0.0634696, 0.0511093)   (−0.0700712, 0.055357)    (−0.021843, 0.127675)
100        (−0.0837358, 0.0115276)   (−0.0597903, 0.0369845)   (−0.0654341, 0.0408905)   (−0.00320919, 0.119708)
Table 5. Power value estimations of the various statistics are compared at a significance level of 0.05, using η = 2 , 3 .
Statistics; η = 2
n    Alt.       C1ΩGEX(Gn)  C2ΩGEX(Gn)  C3ΩGEX(Gn)  C4ΩGEX(Gn)  K&S     A&D     C&M     χEX
10   A 1 , 1.5  0.114       0.075       0.23        0.421       0.152   0.166   0.168   0.129
10   A 1 , 2    0.241       0.148       0.465       0.698       0.376   0.409   0.438   0.32
10   A 2 , 1.5  0.247       0.176       0.314       0.416       0.037   0.015   0.026   0.21
10   A 2 , 2    0.585       0.435       0.618       0.688       0.044   0.009   0.022   0.455
10   A 2 , 3    0.946       0.867       0.885       0.94        0.087   0.018   0.051   0.801
10   A 3 , 1.5  0.101       0.087       0.159       0.117       0.113   0.121   0.098   0.082
10   A 3 , 2    0.3         0.209       0.552       0.147       0.199   0.227   0.155   0.131
20   A 1 , 1.5  0.178       0.193       0.305       0.546       0.279   0.309   0.324   0.258
20   A 1 , 2    0.47        0.495       0.644       0.868       0.699   0.756   0.771   0.665
20   A 2 , 1.5  0.443       0.411       0.51        0.607       0.059   0.032   0.042   0.399
20   A 2 , 2    0.878       0.862       0.881       0.912       0.125   0.092   0.100   0.808
20   A 2 , 3    1           1           0.994       1           0.410   0.537   0.513   0.991
20   A 3 , 1.5  0.147       0.09        0.128       0.062       0.150   0.156   0.119   0.155
20   A 3 , 2    0.412       0.188       0.522       0.085       0.315   0.371   0.250   0.363
30   A 1 , 1.5  0.281       0.319       0.336       0.723       0.520   0.592   0.594   0.529
30   A 1 , 2    0.693       0.739       0.742       0.974       0.955   0.978   0.978   0.919
30   A 2 , 1.5  0.608       0.606       0.586       0.794       0.113   0.108   0.093   0.581
30   A 2 , 2    0.963       0.971       0.943       0.992       0.376   0.560   0.445   0.924
30   A 2 , 3    1           1           0.999       1           0.928   0.996   0.986   1
30   A 3 , 1.5  0.131       0.095       0.153       0.049       0.230   0.241   0.175   0.217
30   A 3 , 2    0.248       0.123       0.343       0.07        0.561   0.695   0.557   0.782
Statistics; η = 3
n    Alt.       C1ΩGEX(Gn)  C2ΩGEX(Gn)  C3ΩGEX(Gn)  C4ΩGEX(Gn)  K&S     A&D     C&M     χEX
10   A 1 , 1.5  0.129       0.107       0.259       0.447       0.152   0.166   0.168   0.129
10   A 1 , 2    0.263       0.188       0.576       0.718       0.376   0.409   0.438   0.32
10   A 2 , 1.5  0.297       0.219       0.346       0.43        0.037   0.015   0.026   0.21
10   A 2 , 2    0.589       0.571       0.651       0.7         0.044   0.009   0.022   0.455
10   A 2 , 3    0.949       0.87        0.889       0.95        0.087   0.018   0.051   0.801
10   A 3 , 1.5  0.115       0.112       0.176       0.12        0.113   0.121   0.098   0.082
10   A 3 , 2    0.257       0.122       0.567       0.149       0.199   0.227   0.155   0.131
20   A 1 , 1.5  0.136       0.242       0.329       0.551       0.279   0.309   0.324   0.258
20   A 1 , 2    0.499       0.531       0.684       0.876       0.699   0.756   0.771   0.665
20   A 2 , 1.5  0.447       0.599       0.616       0.612       0.059   0.032   0.042   0.399
20   A 2 , 2    0.813       0.739       0.873       0.917       0.125   0.092   0.100   0.808
20   A 2 , 3    1           1           1           1           0.410   0.537   0.513   0.991
20   A 3 , 1.5  0.156       0.11        0.163       0.062       0.150   0.156   0.119   0.155
20   A 3 , 2    0.417       0.282       0.653       0.091       0.315   0.371   0.250   0.363
30   A 1 , 1.5  0.302       0.343       0.351       0.748       0.520   0.592   0.594   0.529
30   A 1 , 2    0.698       0.788       0.788       0.977       0.955   0.978   0.978   0.919
30   A 2 , 1.5  0.702       0.726       0.608       0.798       0.113   0.108   0.093   0.581
30   A 2 , 2    0.968       0.985       0.966       0.998       0.376   0.560   0.445   0.924
30   A 2 , 3    1           1           1           1           0.928   0.996   0.986   1
30   A 3 , 1.5  0.167       0.17        0.176       0.052       0.230   0.241   0.175   0.217
30   A 3 , 2    0.255       0.266       0.388       0.059       0.561   0.695   0.557   0.782

Share and Cite

MDPI and ACS Style

Mohamed, M.S.; Sakr, H.H. Uniformity Testing and Estimation of Generalized Exponential Uncertainty in Human Health Analytics. Symmetry 2025, 17, 1403. https://doi.org/10.3390/sym17091403
