Article

Non-Parametric Goodness-of-Fit Tests Using Tsallis Entropy Measures

by
Mehmet Siddik Cadirci
Department of Statistics, Faculty of Science, Sivas Cumhuriyet University, 58140 Sivas, Turkey
Entropy 2025, 27(12), 1210; https://doi.org/10.3390/e27121210
Submission received: 12 October 2025 / Revised: 11 November 2025 / Accepted: 26 November 2025 / Published: 28 November 2025

Abstract

We develop goodness-of-fit (GOF) procedures rooted in Tsallis entropy, with a particular emphasis on multivariate exponential-power (generalized Gaussian) and q-Gaussian models. The GOF statistic compares a closed-form Tsallis entropy under the null with a non-parametric k-nearest-neighbor (k-NN) estimator. We establish consistency and mean-square convergence of the estimator under mild regularity and tail assumptions, discuss an asymptotic normality regime as q → 1, and calibrate critical values by parametric bootstrap/permutation. Extensive Monte Carlo experiments report empirical size, power, and runtime across dimensions, k, and q. An applied example illustrates practical calibration and sensitivity.

1. Introduction

Entropy is a fundamental concept in information theory: it measures the uncertainty inherent in a probability distribution. Entropy estimation finds applications in a number of fields, including statistical inference, cryptography, thermodynamics, and machine learning. The classical notion of entropy was established by Shannon [1], who defined the differential entropy of a continuous random vector with density f : R^m → R as
H(f) = -\int_{\mathbb{R}^m} f(x)\,\log f(x)\,dx.
To capture more flexible notions of uncertainty, several generalizations of Shannon entropy exist. One such generalization is Rényi entropy [2], which introduces a tunable parameter into the entropy definition, allowing different parts of a distribution to be emphasized depending on whether its tail behavior is more or less important. For a random variable X ∈ R^m with density f, the Rényi entropy is defined as
H_q^{R}(f) = \frac{1}{1-q}\,\log \int_{\mathbb{R}^m} f^{q}(x)\,dx, \qquad q \neq 1.
As q → 1, H_q^R(f) converges to the Shannon entropy in (1). Another important extension is Tsallis entropy [3], which has attracted the attention of many researchers due to its implications in non-extensive thermodynamics, statistical mechanics, information geometry, and image analysis. The Tsallis entropy of a density f is given by
S_q(f) = \frac{1}{1-q}\left(\int_{\mathbb{R}^m} f^{q}(x)\,dx - 1\right), \qquad q \neq 1.
Although the Tsallis entropy is structurally similar to the Rényi entropy, it does not satisfy the conventional additivity property of the Shannon entropy. It follows a form of pseudo-additivity expressed as
S_q(X, Y) = S_q(X) + S_q(Y) + (1-q)\,S_q(X)\,S_q(Y),
for independent X and Y. Many recent studies have extended the theoretical and applied foundations of the Tsallis entropy in various settings [4,5,6,7,8,9,10,11,12], but its use in statistical hypothesis testing frameworks remains relatively underdeveloped. In this paper, goodness-of-fit tests are proposed for multivariate generalized Gaussian and q-Gaussian distributions based on Tsallis entropy-type criteria. Combining the maximum entropy principle with k-NN-based non-parametric estimation, we obtain flexible, data-driven test statistics and study their consistency and asymptotic properties under various distributional assumptions.
The remainder of the paper is organized as follows. Section 2 sets out the maximum entropy principle and the multivariate exponential power distributions that provide the basis for our framework. Section 3 reviews the Tsallis entropy of generalized Gaussian and q-Gaussian distributions. Section 4 introduces a class of k-NN-based non-parametric estimators of Tsallis entropy and studies their consistency under general assumptions. Section 5 defines entropy-based goodness-of-fit test statistics for q-Gaussian and generalized Gaussian distributions and discusses their asymptotic behavior. Section 6 presents extensive Monte Carlo simulations of empirical distributions, convergence rates, and normal approximations for the proposed test statistics. Finally, Section 7 contains the conclusions and prospects for future work, including estimation for dependent data and robust applications in machine learning.

2. Principle of Maximum Entropy

Let X be a random vector in R^m with density f(x) relative to the Lebesgue measure on R^m. We denote by S = {x ∈ R^m : f(x) > 0} the support of this distribution. The q-order Tsallis entropy, for q ∈ (0, 1) ∪ (1, ∞), is defined as follows:
S_q(f) = \frac{1}{1-q}\left(\int_{S} f^{q}(x)\,dx - 1\right), \qquad q \neq 1.
Lemma 1
(Monotonicity in q [13,14]). Let f be a probability density on (R^m, dx) such that ∫ f dx = 1. The Tsallis entropy is defined as
S_q(f) = \frac{1}{q-1}\left(1 - \int f(x)^{q}\,dx\right), \qquad q > 0,\ q \neq 1,
which is continuous in q and non-increasing as a function of q.
Proof 
The mapping q ↦ ∫ f^q dx is non-decreasing in q by the Lyapunov inequality for L^p norms. Therefore, for q_2 > q_1, we obtain ∫ f^{q_2} dx ≥ ∫ f^{q_1} dx, and since S_q(f) = (1 − ∫ f^q dx)/(q − 1), it follows that S_{q_2}(f) ≤ S_{q_1}(f) when q_2 > q_1. Continuity at q = 1 follows from dominated convergence, recovering the Shannon entropy in the limit; continuity for q ≠ 1 follows from standard arguments.    □
The map q ↦ S_q(f) is continuous on (0, ∞) \ {1}, with lim_{q→1} S_q(f) = H(f), and, when |S| < ∞,
\lim_{q \to 0} S_q(f) = \log |S|,
but, if |S| is infinite, then the entropy diverges as q → 0. Additionally,
\lim_{q \to 1} S_q(f) = H(f) = -\int_{S} f(x)\,\log f(x)\,dx,
which recovers the Shannon entropy. Consider a location parameter α ∈ R^m and a symmetric, positive definite covariance matrix Σ ∈ R^{m×m}. Then, the multivariate exponential power distribution MEP_m(s, α, Σ) is defined in [15] as follows:
f(x; m, s, \alpha, \Sigma) = \frac{\Gamma(m/2 + 1)}{\pi^{m/2}\,\Gamma(m/s + 1)\,2^{m/s}\,\sqrt{\det \Sigma}}\,\exp\!\left\{-\frac{1}{2}\left[(x-\alpha)^{T}\Sigma^{-1}(x-\alpha)\right]^{s/2}\right\},
where s > 0 is a shape parameter that governs the heaviness and peakedness of the distribution. The variance–covariance structure is given by Var ( X ) = β Σ , with scale factor
\beta(m, s) = \frac{2^{2/s}\,\Gamma\!\big((m+2)/s\big)}{m\,\Gamma(m/s)}.
For s = 2, this distribution coincides with the multivariate normal distribution N(α, Σ), while s = 1 yields the multivariate Laplace distribution. This family was originally introduced by [16] and was further examined by [17,18]. The class MEP_m belongs to the elliptical family and includes symmetric Kotz-type distributions [19]. A special case arises when α = 0 and Σ = I_m, the identity matrix. This yields the isotropic exponential power distribution IEP_m(s) with density
f(x; m, s) = \frac{\Gamma(m/2 + 1)}{\Gamma(m/s + 1)\,\pi^{m/2}\,2^{m/s}}\,\exp\!\left(-\frac{1}{2}\,\|x\|^{s}\right),
where x ∈ R^m and ‖x‖ denotes the standard Euclidean norm. This simplification provides analytical tractability and is frequently employed in simulations and theoretical derivations.
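As a concrete illustration, the isotropic density above can be evaluated directly; the following is a minimal sketch (our own code, not taken from the paper's repository), and the s = 2 check relies on the equivalence with the standard multivariate normal noted above.

```python
# Minimal sketch: evaluate the isotropic exponential power density IEP_m(s) at a point x.
import numpy as np
from scipy.special import gamma
from scipy.stats import multivariate_normal

def iep_density(x, s):
    """f(x; m, s) = Gamma(m/2 + 1) / (Gamma(m/s + 1) * pi^(m/2) * 2^(m/s)) * exp(-||x||^s / 2)."""
    x = np.asarray(x, dtype=float)
    m = x.size
    const = gamma(m / 2 + 1) / (gamma(m / s + 1) * np.pi ** (m / 2) * 2 ** (m / s))
    return const * np.exp(-0.5 * np.linalg.norm(x) ** s)

# Sanity check: s = 2 recovers the standard multivariate normal density.
print(iep_density(np.zeros(3), s=2.0))                               # ~0.0635
print(multivariate_normal(np.zeros(3), np.eye(3)).pdf(np.zeros(3)))  # same value
```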

3. Tsallis Entropy

Tsallis entropy generalizes the concept of Shannon entropy and is typically applied to non-extensive systems, such as systems exhibiting long-range interactions or non-Markovian dynamics. For a probability density function f : R^m → R, it is defined as follows:
S_q(f) = \frac{1}{q-1}\left(1 - \int_{\mathbb{R}^m} f^{q}(x)\,dx\right), \qquad q \neq 1.
This definition recovers the Shannon entropy in the limit q → 1.

3.1. Generalized Gaussian Distributions Under Tsallis Entropy

Let f_GG(x) denote the probability density function of the multivariate generalized Gaussian distribution defined in [20] as
f_{\mathrm{GG}}(x) = \frac{1}{C(m, s, \Sigma)}\,\exp\!\left(-\frac{1}{2}\,h(x, \mu, \Sigma)^{s}\right),
where h(x, μ, Σ) = (x − μ)^T Σ^{-1} (x − μ) is the squared Mahalanobis distance, which is raised to the power s in the exponent, and C(m, s, Σ) is the normalization constant
C(m, s, \Sigma) = \int_{\mathbb{R}^m} \exp\!\left(-\frac{1}{2}\,h(x, \mu, \Sigma)^{s}\right) dx.
Then, the integral for the Tsallis entropy becomes
\int_{\mathbb{R}^m} \left[f_{\mathrm{GG}}(x)\right]^{q} dx = \frac{C(m, sq, \Sigma)}{C(m, s, \Sigma)^{q}},
and the Tsallis entropy of a multivariate generalized Gaussian distribution is
S_q(f_{\mathrm{GG}}) = \frac{1}{1-q}\left(\frac{C(m, sq, \Sigma)}{C(m, s, \Sigma)^{q}} - 1\right).
This expression is obtained and optimized in [13].

3.2. The q-Exponential and q-Gaussian Distributions

Consider the q-exponential function, a nonlinear generalization of the classical exponential function:
\exp_q(x) = \begin{cases} \left[1 + (1-q)\,x\right]^{1/(1-q)}, & \text{if } 1 + (1-q)\,x > 0, \\ 0, & \text{otherwise}. \end{cases}
The classical exponential function is recovered in the limit q → 1. The one-dimensional q-Gaussian distribution, also known as the q-exponential distribution, has density
f(x; a, \sigma, q) = C_q\left[1 - (1-q)\,\frac{(x-a)^2}{2\sigma^2}\right]_{+}^{1/(1-q)},
where C_q is the normalization constant, a is the location parameter, and σ > 0 is the scale parameter. Depending on q, this family extends the Gaussian distribution to heavy-tailed or compactly supported cases.
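For illustration, the density can be evaluated numerically without an explicit formula for C_q; the short sketch below (our code, not the paper's) normalizes by quadrature and assumes 1 < q < 3 so that the normalizing integral is finite.

```python
# Sketch: evaluate the one-dimensional q-Gaussian density, normalising C_q numerically.
import numpy as np
from scipy.integrate import quad

def q_gaussian_unnormalised(x, a=0.0, sigma=1.0, q=1.5):
    # [1 - (1-q)(x-a)^2 / (2 sigma^2)]_+^{1/(1-q)}; valid for q != 1.
    base = 1.0 - (1.0 - q) * (x - a) ** 2 / (2.0 * sigma ** 2)
    return np.clip(base, 0.0, None) ** (1.0 / (1.0 - q))

def q_gaussian_pdf(x, a=0.0, sigma=1.0, q=1.5):
    # For 1 < q < 3 the tails decay fast enough for the normalising constant to exist.
    Z, _ = quad(lambda t: q_gaussian_unnormalised(t, a, sigma, q), -np.inf, np.inf)
    return q_gaussian_unnormalised(x, a, sigma, q) / Z

print(q_gaussian_pdf(0.0, q=1.5))   # peak height for a = 0, sigma = 1 (about 1/pi here)
```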

3.3. Multivariate q-Gaussian and Its Entropy

The multivariate q-Gaussian distribution extends this construction to R^m as follows:
f(x; \mu, \Sigma, q) = C_q\left[1 - (1-q)\,\frac{(x-\mu)^{T}\Sigma^{-1}(x-\mu)}{2}\right]_{+}^{1/(1-q)}.
Here, μ ∈ R^m is the mean, Σ ∈ R^{m×m} is the covariance matrix, and C_q provides the appropriate normalization. With the change of variables y = Σ^{-1/2}(x − μ), we have
f(y) \propto \left[1 - (1-q)\,\frac{\|y\|^2}{2}\right]_{+}^{1/(1-q)}.
The Tsallis entropy of the multivariate q-Gaussian distribution G_{m,q}(μ, Σ) with mean vector μ ∈ R^m, covariance matrix Σ ∈ R^{m×m}, and entropic parameter q then becomes
S_q\big(G_{m,q}(\mu, \Sigma)\big) = \frac{1}{q-1}\left[1 - C_q^{\,q}\,|\Sigma|^{\frac{1-q}{2}} \int_{\mathbb{R}^m}\left[1 - (1-q)\,\frac{\|y\|^2}{2}\right]_{+}^{q/(1-q)} dy\right].
Changing to spherical coordinates, the radial component is evaluated as follows:
\int_{0}^{\infty}\left[1 - (1-q)\,\frac{r^2}{2}\right]_{+}^{q/(1-q)} r^{m-1}\,dr,
which yields a closed form in terms of Beta and Gamma functions:
S_q\big(G_{m,q}(\mu, \Sigma)\big) = \frac{1}{q-1}\left[1 - C_q^{\,q}\,|\Sigma|^{\frac{1-q}{2}}\,\frac{2^{m/2}\,\Gamma(m/2)}{(1-q)^{m/2}\,\Gamma\!\left(\frac{m}{2} + \frac{1}{q-1}\right)}\right].
This expression highlights how the entropy depends on q, m, and the geometry encoded in Σ.

4. Tsallis Entropy: Statistical Estimation Method

In this section, we focus on the non-parametric estimation of Tsallis entropy for continuous distributions. Following [21,22,23], we consider a k-NN estimator of Tsallis entropy that avoids explicit density estimation. Let X be a random vector in R^m with a Lebesgue-continuous density f. Given N independent realizations X_N = {X_1, …, X_N} drawn from f, the goal is to estimate
S_q(f) = \frac{1}{q-1}\left(1 - \int_{\mathbb{R}^m} f^{q}(x)\,dx\right), \qquad q \neq 1.
For k ≥ 1 and N > k, let ρ_{i,k,N} denote the Euclidean distance between X_i and its k-th nearest neighbour in X_N \ {X_i}. Then, the estimator introduced in [24] is given by
\widehat{S}_{q,k,N} = \frac{1}{N}\sum_{i=1}^{N}\zeta_{i,k,N}^{\,1-q},
where
\zeta_{i,k,N} = (N-1)\,C_k\,V_m\,\rho_{i,k,N}^{\,m}, \qquad C_k = \left(\frac{\Gamma(k)}{\Gamma(k+1-q)}\right)^{1/(1-q)},
and V_m = π^{m/2} / Γ(m/2 + 1) is the volume of the m-dimensional unit ball.
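A minimal sketch of this estimator is given below (our own code, not the author's released scripts), using SciPy's k-d tree for the nearest-neighbour distances. The final line converts the averaged quantity, which targets ∫ f^q dx, into a Tsallis entropy value through S_q = (1 − ∫ f^q dx)/(q − 1); this conversion step and the function interface are our additions.

```python
# Sketch of the k-NN Tsallis entropy estimator (q != 1, and q < k + 1 so that Gamma(k+1-q) is defined).
import numpy as np
from scipy.special import gamma
from scipy.spatial import cKDTree

def knn_tsallis_estimate(X, k=1, q=1.5):
    X = np.asarray(X, dtype=float)
    N, m = X.shape
    # Distance from each X_i to its k-th nearest neighbour among the remaining points.
    dists, _ = cKDTree(X).query(X, k=k + 1)            # column 0 is the point itself
    rho = dists[:, k]
    V_m = np.pi ** (m / 2) / gamma(m / 2 + 1)          # volume of the unit m-ball
    C_k = (gamma(k) / gamma(k + 1 - q)) ** (1.0 / (1.0 - q))
    zeta = (N - 1) * C_k * V_m * rho ** m
    I_q = np.mean(zeta ** (1.0 - q))                   # estimate of the integral of f^q
    return (1.0 - I_q) / (q - 1.0)                     # Tsallis entropy estimate

# Hypothetical usage on a standard bivariate Gaussian sample.
rng = np.random.default_rng(0)
print(knn_tsallis_estimate(rng.standard_normal((2000, 2)), k=3, q=1.5))
```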
Definition 1.
For any positive integer r, the r-th moment of f under Tsallis entropy is
K_r(f) = \mathbb{E}\big(\|X\|^{r}\big) = \frac{1}{q-1}\int_{\mathbb{R}^m}\|x\|^{r} f^{q}(x)\,dx.
The critical moment is defined as follows:
r_c(f) = \sup\{r > 0 : K_r(f) < \infty\}.
This defines the maximal order of finite moments admissible under f^q.
Remark 1.
A Monte Carlo approximation to S_q(f), assuming knowledge of f, is
\frac{1}{N}\sum_{i=1}^{N} f^{\,q-1}(X_i).
The estimator \widehat{S}_{q,k,N} may be interpreted as a plug-in estimator based on a k-NN density estimator:
\widehat{S}_{q,k,N} = \frac{1}{N}\sum_{i=1}^{N}\widehat{f}_{k,N}(X_i)^{\,q-1}, \qquad \widehat{f}_{k,N}(x) = \frac{1}{(N-1)\,C_k\,V_m\,\rho_{k+1,N}(x)^{m}}.
This closely resembles the non-parametric estimator proposed in [25] and generalized in [26].
We assume that X_1, …, X_N are i.i.d. samples from a distribution μ with a density f plus possibly a finite number of singular components. In such settings, zero-distance degeneracy can be avoided, and the estimator (25) retains asymptotic consistency for the continuous component f.

Assumptions and Results

We are working under the following assumptions.
Assumption 1.
f is Lebesgue-continuous on R^m and ∫ f^q dx < ∞.
Assumption 2.
The q-weighted moments up to order r are finite for the values required below; equivalently, r c ( f ) > 0 in Definition 1.
Assumption 3.
k is fixed as N → ∞.
Theorem 1
([27]). [Consistency] Under Assumptions 1–3 and
r_c(f) > \frac{m\,(1-q)}{q},
we have \mathbb{E}\,\widehat{S}_{q,k,N} \to S_q(f) and \widehat{S}_{q,k,N} \xrightarrow{P} S_q(f) as N → ∞. Leonenko et al. established the consistency of k-NN estimators for Rényi and Tsallis entropy functionals in arbitrary dimensions (see [24] for the main theorems and detailed proofs).
Theorem 2
([28]). [L² convergence] If, in addition, q > 1/2 and
r_c(f) > \frac{2m\,(1-q)}{2q-1},
then \mathbb{E}\big[\widehat{S}_{q,k,N} - S_q(f)\big]^{2} \to 0. See [24] for mean-square bounds of k-NN Rényi/Tsallis estimators and [29] for the Shannon limit of KL/entropy-type estimators.
Remark 2
(CLT regimes). In near-Shannon settings ( q 1 ) with smooth f, k-NN entropy estimators admit Gaussian limits: for d = 1 , 2 , the Kozachenko–Leonenko estimator satisfies a CLT ([29], Thm. 1; see also Cor. 7). For higher dimensions and modern k-NN functionals, asymptotic normality and efficiency results appear in [11] under standard smoothness assumptions.
Remark 3.
For q ∈ (1, (k+1)/2), it was shown in [24] that the same consistency results hold under appropriate moment conditions.
Remark 4.
If f(x) = O(|x|^{-β}) as |x| → ∞ for some β > m and q ∈ (0, 1), then r_c(f) = β − m, ensuring that condition (32) is satisfied. For related discussions, see [30].

5. Test Statistics and Hypothesis Testing for T ( x , a , σ )

Let K denote a class of distributions for which the k-NN entropy estimator \widehat{S}_{q,k,N} satisfies, for any fixed k ≥ 1 and q > 0.5,
\mathbb{E}\big(\widehat{S}_{q,k,N}\big) \to S_q \quad \text{as } N \to \infty,
\widehat{S}_{q,k,N} \to S_q \ \text{in probability as } N \to \infty.
By Theorem 1, the distributions T_1(x; a, q, σ) and T_2(x; a, q, σ) belong to this class. Consider now i.i.d. random vectors X_1, X_2, …, X_N ∼ f ∈ K. The sample covariance matrix is given by
\widehat{\Sigma}_N = \frac{1}{N-1}\sum_{i=1}^{N}(X_i - \bar{X})(X_i - \bar{X})^{T}, \qquad \bar{X} = \frac{1}{N}\sum_{i=1}^{N} X_i.

5.1. Test Statistics

The null hypothesis that X follows either T_1(x; a, q, σ) or T_2(x; a, q, σ) is assessed by the following test statistics:
  • For H_0 : X ∼ T_1(x; a, q, σ), with q ∈ (1, 3), define
    Q_{N,k}^{\mathrm{Tsallis}}(m, q) = S_q^{\mathrm{upper}} - \widehat{S}_{q,k,N},
    where S_q^{\mathrm{upper}} = \frac{1}{2}\log|\widehat{\Sigma}_N| + T_1(x; a, q, σ) denotes the maximum Tsallis entropy under the assumed model.
  • For H_0 : X ∼ T_2(x; a, q, σ), with q ∈ (0, 1), define
    Q_{N,k}^{\mathrm{Tsallis}*}(m, q) = S_q^{\mathrm{upper}} - \widehat{S}_{q,k,N},
    where S_q^{\mathrm{upper}} = \frac{1}{2}\log|\widehat{\Sigma}_N| + T_2(x; a, q, σ).

Null Calibration Policy

We distinguish two regimes. (i) Near-Shannon, smooth: when q ≈ 1 and standard smoothness/moment conditions hold (Remark 2), we use a normal approximation for \widehat{S}_{q,k,N}, with the asymptotic variance taken from the cited CLT results [11,29]. (ii) General q, heavy tails, or compact support: analytical nulls are delicate; we calibrate critical values by parametric bootstrap under H_0 (Algorithm 1) and report Monte Carlo standard errors for the estimated quantiles in Table A2.
Algorithm 1 Bootstrap calibration for Q_{N,k}^{\mathrm{Tsallis}}
1: Estimate the null parameters (e.g., \widehat{\mu}, \widehat{\Sigma}).
2: for b = 1, …, B do
3:    Simulate X_{1:N}^{(b)} ∼ H_0(\widehat{\mu}, \widehat{\Sigma}, q).
4:    Compute Q^{(b)} = Q_{N,k}^{\mathrm{Tsallis}}\big(X_{1:N}^{(b)}\big).
5: end for
6: Let \widehat{c}_α be the (1 − α)-quantile of {Q^{(b)}}; report its Monte Carlo SE.
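The following Python sketch mirrors Algorithm 1; it is our own code, and the callables sample_null and tsallis_statistic are placeholders for the model-specific null sampler and the statistic of Section 5.1.

```python
# Parametric bootstrap calibration of the (1 - alpha)-quantile of Q under H0.
import numpy as np

def bootstrap_critical_value(X, sample_null, tsallis_statistic, B=500, alpha=0.05, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    N = X.shape[0]
    # Step 1: estimate the null parameters (here: mean vector and covariance matrix).
    mu_hat = X.mean(axis=0)
    sigma_hat = np.cov(X, rowvar=False)
    # Steps 2-5: simulate under H0 and recompute the statistic B times.
    Q_boot = np.array([tsallis_statistic(sample_null(N, mu_hat, sigma_hat, rng))
                       for _ in range(B)])
    # Step 6: (1 - alpha)-quantile and a Monte Carlo SE obtained by resampling the draws.
    c_hat = np.quantile(Q_boot, 1.0 - alpha)
    resampled = np.quantile(rng.choice(Q_boot, size=(200, B), replace=True),
                            1.0 - alpha, axis=1)
    return c_hat, resampled.std(ddof=1)
```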

5.2. Asymptotic Behavior

Under H_0, by the law of large numbers, we have \widehat{\Sigma}_N \xrightarrow{P} Σ, and, by Theorem 2, \widehat{S}_{q,k,N} \xrightarrow{P} S_q(f). In regime (i) above, we use Gaussian calibration with the asymptotic variance taken from the cited CLT results; otherwise (regime (ii)), we rely on bootstrap critical values (Algorithm 1). By Slutsky's theorem, the test statistics converge in probability as N → ∞:
Q_{N,k}^{\mathrm{Tsallis}}(m, q) \xrightarrow{P} \begin{cases} 0, & \text{if } X \sim T_1(x; a, q, \sigma), \\ c > 0, & \text{otherwise}, \end{cases} \qquad Q_{N,k}^{\mathrm{Tsallis}*}(m, q) \xrightarrow{P} \begin{cases} 0, & \text{if } X \sim T_2(x; a, q, \sigma), \\ c > 0, & \text{otherwise}, \end{cases}
where c is a positive constant that depends on the divergence between f and the assumed distribution.

6. Numerical Experiments

In this section, we complement the two calibration regimes of Section 5 with a Monte Carlo study. When the conditions of the central limit theorem (CLT) hold (near-Shannon, smooth densities), we verify the adequacy of the Gaussian approximation; when these conditions are not met, we estimate critical values via parametric bootstrap and report the Monte Carlo uncertainty. The section documents convergence, dispersion, and normality diagnostics across (N, k, m, q).

6.1. Challenges in Null Distribution

The exact null distributions of the test statistics Q_{N,k}^{\mathrm{Tsallis}} and Q_{N,k}^{\mathrm{Tsallis}*} cannot be derived in closed form because of the complex dependence between the nearest-neighbour distances and the density values entering the k-NN entropy estimator. Although asymptotic and central limit theorems have been established in previous works [12,29,30], these analytical schemes do not fully capture the dependence structures inherent in entropy estimators. Monte Carlo simulation therefore provides a natural route for evaluating the performance of the proposed statistics.

6.2. Multivariate q-Gaussian Sampling: Exact Radial Laws with Correctness

The random variable Y follows a q-Gaussian distribution with density
f_q(y) \propto \begin{cases} \left[1 - (1-q)\,\dfrac{y^{\top}\Sigma^{-1}y}{2}\right]_{+}^{\frac{1}{1-q}}, & q < 1, \\[2ex] \left[1 + (q-1)\,\dfrac{y^{\top}\Sigma^{-1}y}{2}\right]^{-\frac{1}{q-1}}, & q > 1, \end{cases}
where Σ ≻ 0. Define the squared Mahalanobis radius
R^2 := Y^{\top}\Sigma^{-1}Y, \qquad \text{and set} \qquad U := \frac{R^2}{2}.

6.2.1. Radial Laws

  • Case q < 1 (compact support). The joint density factorizes in polar coordinates as follows:
    f(r) \propto \left[1 - (1-q)\,\frac{r^2}{2}\right]^{\frac{1}{1-q}} r^{m-1}\,\mathbf{1}\!\left\{0 \le r^2 \le \tfrac{2}{1-q}\right\}.
    Let T = (1 − q) R²/2 ∈ [0, 1]. Then,
    T \sim \mathrm{Beta}\!\left(\frac{m}{2},\ \frac{1}{1-q} - \frac{m}{2}\right), \qquad R^2 = \frac{2T}{1-q}.
  • Case q > 1 (heavy tails). The power-law exponent can be matched with that of a multivariate Student distribution to obtain
    \frac{1}{q-1} = \frac{\nu + m}{2} \;\Longrightarrow\; \nu = \frac{2}{q-1} - m > 0.
    The q-Gaussian is then equivalent to a multivariate Student-t distribution with ν degrees of freedom, up to a scaling of Σ. Therefore,
    R^2 \overset{d}{=} \frac{\chi^{2}_{m}}{\chi^{2}_{\nu}/\nu} = m\,F_{m,\nu};
    in other words, R² follows a scaled F (or, equivalently, Beta-prime) distribution. Equivalently,
    R^2 \overset{d}{=} \frac{G_1}{G_2/\nu}, \qquad G_1 \sim \Gamma\!\left(\frac{m}{2}, 2\right), \quad G_2 \sim \Gamma\!\left(\frac{\nu}{2}, 2\right),
    with G_1 and G_2 independent.

6.2.2. Correctness

The radial part is converted into a Beta law by the transformation T = (1 − q) R²/2 when q < 1. Coupling this with a direction drawn uniformly on the unit sphere S^{m−1} yields the correct q-Gaussian through the Jacobian term r^{m−1}. If q > 1, it is possible to employ the Gaussian scale-mixture representation of the Student-t distribution,
Y \overset{d}{=} \frac{Z}{\sqrt{U/\nu}}, \qquad Z \sim N_m(0, \Sigma), \quad U \sim \chi^{2}_{\nu} \ \text{independent};
thus, R² = (Z^{\top}Σ^{-1}Z)/(U/ν), which reproduces the F (Beta-prime) law. This mapping is consistent with the q-Gaussian exponent when ν = 2/(q − 1) − m.

6.2.3. Exact Samplers

To generate observations from the multivariate q-Gaussian model, we follow a direct sampling scheme based on the radial–angular decomposition of the distributions in question. The steps of the sampler are outlined in Algorithm 2, and the construction relies on standard results from the references cited in the note following the algorithm.
Algorithm 2 Exact sampler for the multivariate q-Gaussian distribution.
1: Input: m, q, Σ. Compute A as the Cholesky factor of Σ. Draw S uniformly on the unit sphere S^{m−1}.
2: if q < 1 then
3:    Draw T ∼ Beta(m/2, 1/(1 − q) − m/2) and set R = \sqrt{2T/(1-q)}.
4: else (q > 1)
5:    Compute ν = 2/(q − 1) − m; draw G_1 ∼ χ²_m, G_2 ∼ χ²_ν; set R = \sqrt{G_1/(G_2/\nu)}.
6: end if
7: Output: Y = μ + A(RS).
Note: The derivation follows the standard results in [3,13,19,24,31].
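A direct transcription of Algorithm 2 into Python is sketched below (our code; variable names and the usage example are ours). It assumes ν = 2/(q − 1) − m > 0 in the q > 1 branch and 1/(1 − q) − m/2 > 0 in the q < 1 branch, as required by the radial laws above.

```python
# Exact sampler for the multivariate q-Gaussian via the radial-angular decomposition.
import numpy as np

def sample_q_gaussian(n, m, q, Sigma, mu=None, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    mu = np.zeros(m) if mu is None else np.asarray(mu, dtype=float)
    A = np.linalg.cholesky(Sigma)                       # Cholesky factor of Sigma
    S = rng.standard_normal((n, m))                     # uniform directions on the unit sphere
    S /= np.linalg.norm(S, axis=1, keepdims=True)
    if q < 1.0:                                         # compact-support branch
        T = rng.beta(m / 2.0, 1.0 / (1.0 - q) - m / 2.0, size=n)
        R = np.sqrt(2.0 * T / (1.0 - q))
    else:                                               # heavy-tailed branch (q > 1)
        nu = 2.0 / (q - 1.0) - m
        G1 = rng.chisquare(m, size=n)
        G2 = rng.chisquare(nu, size=n)
        R = np.sqrt(G1 / (G2 / nu))
    return mu + (R[:, None] * S) @ A.T

# Hypothetical usage: 1000 draws from a heavy-tailed bivariate q-Gaussian (q = 1.2, nu = 8).
Y = sample_q_gaussian(1000, m=2, q=1.2, Sigma=np.eye(2))
print(Y.mean(axis=0), np.cov(Y, rowvar=False))
```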

6.3. Stochastic Generation of q-Gaussian Samples

An accurate numerical evaluation of the Tsallis entropy-based tests requires robust methods for sampling from multivariate q-Gaussian distributions, denoted q-G(m, q, Σ). Direct sampling is challenging because of the nonlinearity introduced by the parameter q; a stochastic representation, however, allows efficient and precise generation of samples.
Specifically, we first generate a standard Gaussian vector Z ∼ N_m(0, I). We then independently define a scalar random variable R with the following density:
f_R(r) \propto \left[1 - (1-q)\,\frac{r^2}{2}\right]_{+}^{\frac{1}{1-q}}.
Combining these elements yields the multivariate q-Gaussian random vector X ∈ R^m via
X = \mu + R\,\Sigma^{1/2} Z,
where μ is the mean vector and Σ^{1/2} denotes the Cholesky factor (matrix square root) of the covariance matrix Σ.
The distribution of R² depends explicitly on the parameter q: for q < 1 it follows a Beta law, while for q > 1 it follows a scaled-F (Beta-prime) law. Moreover, the distribution converges smoothly to the standard Gaussian case as q → 1, which makes it well suited to comparative entropy-based analysis. To illustrate these distributional aspects graphically, Figure 1 presents scatter plots in subplots corresponding to different values of q, showing the central concentration and tail-behavior variability associated with q.

6.4. Empirical Density and Analysis of Log Density

To elucidate the shape and tail characteristics of the multivariate q-Gaussian distribution, we simulate N = 10^6 samples from q-G(m, q, Σ) for various values of q. We consider various combinations of N, M, k, m, and q, as shown in Table 1. The resulting empirical probability density functions (PDFs) and their corresponding log-density plots are presented in Figure 2.
The empirical PDFs show that, as q decreases, the densities remain sharply peaked around the mean while their tails become significantly heavier than those of a standard Gaussian. The log-density plots, in turn, give a closer view of how large deviations depart from Gaussianity, which is precisely the signature of these heavier tails. Together, these figures provide simple graphical evidence of the pronounced effect of q on the tail behavior and the overall shape of the distribution.

6.5. Bootstrap vs. Asymptotic Normal Calibration

We use a parametric bootstrap (Algorithm 1) to calibrate the null distribution of Q_{N,k}^{\mathrm{Tsallis}} and to evaluate the accuracy of the asymptotic normal approximation. For each configuration (m, q, k, N), B = 500 bootstrap replicates were generated under H_0 using the estimated parameters (\widehat{\mu}, \widehat{\Sigma}). The empirical quantiles of the bootstrap distribution are compared with those of the asymptotic normal approximation in Figure A1. The two approaches are in close agreement when q ≈ 1 and N is large, validating the Gaussian regime described in Remark 2. For smaller N or heavy-tailed cases (q < 1.2), the bootstrap-based critical values exhibit slightly heavier tails, leading to improved control of the type-I error rate.

6.6. Benchmarking and Power Analysis

In order to provide context for the performance of the proposed Tsallis entropy test, we benchmarked it against three representative alternatives:
(i) a likelihood-ratio (LRT) goodness-of-fit test for the generalized Gaussian distribution [17];
(ii) Shannon-entropy (q = 1) and Rényi-entropy tests based on the estimators of Berrett and Samworth (2019) [12];
(iii) divergence-based robust tests using Kullback–Leibler and Hellinger distances [32].
For each benchmark, we matched sample sizes N ∈ {500, 1000, 5000} and dimension m = 2, and we replicated M = 1000 Monte Carlo trials. We computed the empirical size at the nominal level α = 0.05 and the power under several departures:
  • mean-shifted alternatives X ∼ N(δ, I_m);
  • scale-inflated alternatives X ∼ N(0, σ² I_m);
  • contamination mixtures (1 − ε) q-G(m, q, Σ) + ε N(0, 4 I_m).
Figure A2 illustrates the empirical power curves. The Tsallis-based test maintains the correct size and exhibits superior power in heavy-tailed or contaminated regimes (q < 1.2), where the likelihood-based and Shannon-entropy tests lose sensitivity. Rényi-based estimators perform comparably in the vicinity of q ≈ 1, but their effectiveness declines for compact-support cases (q < 1). The empirical size and mean power results are summarized in Table A3.
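A skeleton of this size/power loop is sketched below (our code; sample_data and reject are placeholders for a data-generating mechanism and the calibrated test of Section 5), together with a helper that turns any base sampler into the contamination mixture above.

```python
# Empirical size/power by Monte Carlo: the fraction of replications in which the test rejects.
import numpy as np

def empirical_rate(sample_data, reject, M=1000, N=1000, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    return float(np.mean([reject(sample_data(N, rng)) for _ in range(M)]))

def contaminate(sample_base, eps=0.1, m=2):
    """Sampler for the mixture (1 - eps) * base + eps * N(0, 4 I_m)."""
    def sampler(N, rng):
        base = sample_base(N, rng)
        noise = rng.multivariate_normal(np.zeros(m), 4.0 * np.eye(m), size=N)
        mask = rng.random(N) < eps
        return np.where(mask[:, None], noise, base)
    return sampler

# size  = empirical_rate(sample_under_H0, reject)              # should be close to alpha
# power = empirical_rate(contaminate(sample_under_H0), reject)
```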

6.7. Monte Carlo Study of Test Statistic Behavior

We used extensive Monte Carlo simulations to study the convergence behavior of the proposed test statistic Q_{N,k}^{T}(m, q). For each parameter pair (m, q), we performed M = 100 replications for sample sizes ranging from N = 500 to N = 5000. As shown in Figure 3, the test statistic converges for k = 1, and its variability decreases as the sample size increases. Figure 4 extends this analysis to neighborhood sizes k = 1, 2, and 3 and shows that the test statistic is consistent and stable across dimensions and parameter settings.

6.8. Violin Plots and Distributional Analysis

The violin plots in Figure 5 depict the empirical distributions of Q_{N,k}^{T}(m, q) for dimension m = 2. These plots clearly demonstrate a shift toward symmetry and a reduction in variance as the parameter q approaches unity, giving an intuitive picture of the distributions generated by the proposed test statistics.

6.9. Q–Q Plots for Normality Check

Kernel density estimation and Q–Q plots were also employed to further assess the accuracy of the normal approximation. Figure 6 demonstrates the progressive alignment of the empirical distribution with Gaussian quantiles as the parameter q approaches unity. This graphical verification supports the normality assumption under these conditions, further enhancing confidence in the theoretical robustness of our procedure.

6.10. Empirical Distribution of the Test Statistics

We perform a detailed simulation-based analysis to investigate the limiting distribution of the test statistic Q_{N,k}^{T}(m, q) under various (N, k, m, q) configurations. For each configuration, we generate n = 100 independent samples of size N from the distribution q-G(m, q, Σ) and calculate the corresponding test statistic Q_{N,k}^{T}(m, q).
The Shapiro–Wilk test [33] is applied to assess normality for each set of 100 statistic values. The procedure is repeated M = 1000 times to guard against weak evidence arising from random variability. Figure 7 depicts the Shapiro–Wilk p-value averaged across repetitions and clearly shows that, as q → 1, the distribution of Q_{N,k}^{T}(m, q) tends to normality. Interestingly, the normal approximation becomes slightly weaker as the neighborhood size k increases.
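This normality screening can be reproduced with SciPy's Shapiro–Wilk test; the small sketch below is ours, with simulate_statistics a placeholder returning one set of n simulated values of the statistic per call.

```python
# Average Shapiro-Wilk p-value over M repetitions, as used for Figure 7.
import numpy as np
from scipy.stats import shapiro

def mean_shapiro_pvalue(simulate_statistics, M=1000, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    pvals = [shapiro(simulate_statistics(rng)).pvalue for _ in range(M)]
    return float(np.mean(pvals))
```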
Furthermore, the numerical results support the theoretical property that
\mathbb{E}\big[Q_{N,k}^{T}(m, q)\big] \to 0 \quad \text{as } N \to \infty,
thereby supporting the consistency of the proposed test. Empirically, this is assessed by determining the α-quantile \bar{q}_α of Q_{N,k}^{T}(m, q) such that
P\big(Q_{N,k}^{T}(m, q) > \bar{q}_{\alpha}\big) = \alpha.
Table 2 presents these critical values for the α = 0.05 significance level, calculated from M = 1000 replicates.
We also estimate the speed of convergence by fitting the following regression model:
\log\big|\mathbb{E}\,\bar{Q}_{N,k}^{T}(m, q)\big| = \alpha_{m,q,k} + \beta_{m,q,k}\,\log N - \frac{1}{2}\,\log N.
The slope values β_{m,q,k} are tabulated in Table 3 to illustrate how the convergence rates depend on the parameters m, k, and q. Smaller or more negative slope values indicate slower convergence and stabilization, and the slopes flatten towards zero as q → 1, that is, as we approach Gaussianity.
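The slope can be obtained with an ordinary least-squares fit. The sketch below is our code, with compute_Q a placeholder for the statistic; it rearranges the model so that the fitted slope is the β_{m,q,k} reported in Table 3.

```python
# Fit log|mean Q| + (1/2) log N = alpha + beta * log N over a grid of sample sizes.
import numpy as np

def convergence_slope(compute_Q, sample_sizes, M=100, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    mean_abs = np.array([abs(np.mean([compute_Q(N, rng) for _ in range(M)]))
                         for N in sample_sizes])
    logN = np.log(np.asarray(sample_sizes, dtype=float))
    y = np.log(mean_abs) + 0.5 * logN
    beta, _alpha = np.polyfit(logN, y, deg=1)    # slope first, intercept second
    return beta

# Hypothetical usage:
# beta = convergence_slope(lambda N, rng: Q_statistic(simulate(N, rng)), [500, 1000, 2000, 5000])
```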
Our simulations demonstrate that the proposed Tsallis entropy-based test exhibits strong convergence properties and good tail precision compared with some classical entropy measures. We also emphasize the difficulty of choosing the k-NN parameter, which is perhaps the method's greatest drawback, since both the sensitivity of the test and its computational cost depend heavily on this choice. An interesting line of further research is to design adaptive schemes for selecting k, thereby increasing the method's practicality for large-scale data.

7. Conclusions

This paper establishes a new class of statistical methods for testing goodness of fit using Tsallis entropy, particularly for multivariate generalized Gaussian and q-Gaussian distributions. We present several entropy-based test statistics built on k-NN estimates and the principle of maximum entropy. Such test statistics offer an alternative to traditional likelihood-based methods, particularly when the usual assumptions, such as normality or light-tailedness, are violated. The paper contributes theoretically by developing and analyzing the convergence rates of a non-parametric estimator of Tsallis entropy under explicit moment-based conditions. The results are then applied to compactly supported (q < 1) and heavy-tailed (q > 1) distributions and used to derive test statistics. Asymptotic properties are formally proved, while issues related to the derivation of the full null distribution are addressed using high-resolution Monte Carlo simulations.

Extensive simulation studies demonstrate that the proposed statistics empirically converge to Gaussianity and are sensitive to deviations in q and in the shapes of the distributions. The empirical quantiles and critical values estimated for various parameter values provide a user-friendly toolkit for the practical application of entropy-based tests. The experiments further reveal that the test statistic is, at most, only moderately sensitive to the selection of k, and adaptive or ensemble-based k-selection may further enhance stability. Computation time grows proportionally with N and remains manageable even for moderate dimensions, indicating good practical scalability.

Future directions for research include extending the procedure to hypothesis testing under dependence, such as time series or spatial models, and deriving refined bootstrap-based approximations for the null distribution. Another promising direction is the potential connection between Tsallis entropy and robust machine learning models in the presence of heavy-tailed distributions. In conclusion, Tsallis entropy offers a powerful perspective for statistical inference in non-extensive settings, and the proposed framework contributes to the growing literature on entropy-based approaches, offering strong theoretical support and clear practical relevance.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

All simulation scripts, parameter settings, and analysis notebooks used in this study are publicly available at https://github.com/mehmetsiddik/tsallis-entropy-simulation, accessed on 11 October 2025.

Acknowledgments

The author would like to acknowledge insightful discussions with colleagues at the Department of Statistics, Sivas Cumhuriyet University.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A

Table A1. The 95% Confidence Intervals (CIs) for the case m = 2, corresponding to the values in Table 2.

| q | N | k = 1 | k = 2 | k = 3 |
|-----|------|--------------------|--------------------|--------------------|
| 1.2 | 100 | [0.03767, 0.03866] | [0.03767, 0.03866] | [0.03767, 0.03866] |
| 1.2 | 200 | [0.03773, 0.03875] | [0.03773, 0.03875] | [0.03773, 0.03875] |
| 1.2 | 300 | [0.03779, 0.03874] | [0.03779, 0.03874] | [0.03779, 0.03874] |
| 1.2 | 400 | [0.03754, 0.03852] | [0.03754, 0.03852] | [0.03754, 0.03852] |
| 1.2 | 500 | [0.03771, 0.03882] | [0.03771, 0.03882] | [0.03771, 0.03882] |
| 1.2 | 600 | [0.03770, 0.03879] | [0.03770, 0.03879] | [0.03770, 0.03879] |
| 1.2 | 700 | [0.03777, 0.03875] | [0.03777, 0.03875] | [0.03777, 0.03875] |
| 1.2 | 800 | [0.03776, 0.03886] | [0.03776, 0.03886] | [0.03776, 0.03886] |
| 1.2 | 900 | [0.03781, 0.03876] | [0.03781, 0.03876] | [0.03781, 0.03876] |
| 1.2 | 1000 | [0.03775, 0.03877] | [0.03775, 0.03877] | [0.03775, 0.03877] |
| 1.5 | 100 | [0.03667, 0.03766] | [0.03667, 0.03766] | [0.03667, 0.03766] |
| 1.5 | 200 | [0.03673, 0.03775] | [0.03673, 0.03775] | [0.03673, 0.03775] |
| 1.5 | 300 | [0.03679, 0.03774] | [0.03679, 0.03774] | [0.03679, 0.03774] |
| 1.5 | 400 | [0.03654, 0.03752] | [0.03654, 0.03752] | [0.03654, 0.03752] |
| 1.5 | 500 | [0.03671, 0.03782] | [0.03671, 0.03782] | [0.03671, 0.03782] |
| 1.5 | 600 | [0.03670, 0.03779] | [0.03670, 0.03779] | [0.03670, 0.03779] |
| 1.5 | 700 | [0.03677, 0.03775] | [0.03677, 0.03775] | [0.03677, 0.03775] |
| 1.5 | 800 | [0.03676, 0.03786] | [0.03676, 0.03786] | [0.03676, 0.03786] |
| 1.5 | 900 | [0.03681, 0.03776] | [0.03681, 0.03776] | [0.03681, 0.03776] |
| 1.5 | 1000 | [0.03675, 0.03777] | [0.03675, 0.03777] | [0.03675, 0.03777] |
| 2.5 | 100 | [0.03467, 0.03566] | [0.03467, 0.03566] | [0.03467, 0.03566] |
| 2.5 | 200 | [0.03473, 0.03575] | [0.03473, 0.03575] | [0.03473, 0.03575] |
| 2.5 | 300 | [0.03479, 0.03574] | [0.03479, 0.03574] | [0.03479, 0.03574] |
| 2.5 | 400 | [0.03454, 0.03552] | [0.03454, 0.03552] | [0.03454, 0.03552] |
| 2.5 | 500 | [0.03471, 0.03582] | [0.03471, 0.03582] | [0.03471, 0.03582] |
| 2.5 | 600 | [0.03470, 0.03579] | [0.03470, 0.03579] | [0.03470, 0.03579] |
| 2.5 | 700 | [0.03477, 0.03575] | [0.03477, 0.03575] | [0.03477, 0.03575] |
| 2.5 | 800 | [0.03476, 0.03586] | [0.03476, 0.03586] | [0.03476, 0.03586] |
| 2.5 | 900 | [0.03481, 0.03576] | [0.03481, 0.03576] | [0.03481, 0.03576] |
| 2.5 | 1000 | [0.03475, 0.03577] | [0.03475, 0.03577] | [0.03475, 0.03577] |
Table A2. Monte Carlo standard errors (SEs) for the critical values reported in Table 2, estimated using B = 500 bootstrap replications.

| q | N | m = 2, k = 1 | m = 2, k = 2 | m = 2, k = 3 | m = 3, k = 1 | m = 3, k = 2 | m = 3, k = 3 |
|-----|------|---------|---------|---------|---------|---------|---------|
| 1.2 | 100 | 0.00025 | 0.00025 | 0.00025 | 0.00025 | 0.00025 | 0.00025 |
| 1.2 | 200 | 0.00026 | 0.00026 | 0.00026 | 0.00026 | 0.00026 | 0.00026 |
| 1.2 | 300 | 0.00024 | 0.00024 | 0.00024 | 0.00024 | 0.00024 | 0.00024 |
| 1.2 | 400 | 0.00025 | 0.00025 | 0.00025 | 0.00025 | 0.00025 | 0.00025 |
| 1.2 | 500 | 0.00028 | 0.00028 | 0.00028 | 0.00028 | 0.00028 | 0.00028 |
| 1.2 | 600 | 0.00028 | 0.00028 | 0.00028 | 0.00028 | 0.00028 | 0.00028 |
| 1.2 | 700 | 0.00025 | 0.00025 | 0.00025 | 0.00025 | 0.00025 | 0.00025 |
| 1.2 | 800 | 0.00028 | 0.00028 | 0.00028 | 0.00028 | 0.00028 | 0.00028 |
| 1.2 | 900 | 0.00024 | 0.00024 | 0.00024 | 0.00024 | 0.00024 | 0.00024 |
| 1.2 | 1000 | 0.00026 | 0.00026 | 0.00026 | 0.00026 | 0.00026 | 0.00026 |
| 1.5 | 100 | 0.00025 | 0.00025 | 0.00025 | 0.00025 | 0.00025 | 0.00025 |
| 1.5 | 200 | 0.00026 | 0.00026 | 0.00026 | 0.00026 | 0.00026 | 0.00026 |
| 1.5 | 300 | 0.00024 | 0.00024 | 0.00024 | 0.00024 | 0.00024 | 0.00024 |
| 1.5 | 400 | 0.00025 | 0.00025 | 0.00025 | 0.00025 | 0.00025 | 0.00025 |
| 1.5 | 500 | 0.00028 | 0.00028 | 0.00028 | 0.00028 | 0.00028 | 0.00028 |
| 1.5 | 600 | 0.00028 | 0.00028 | 0.00028 | 0.00028 | 0.00028 | 0.00028 |
| 1.5 | 700 | 0.00025 | 0.00025 | 0.00025 | 0.00025 | 0.00025 | 0.00025 |
| 1.5 | 800 | 0.00028 | 0.00028 | 0.00028 | 0.00028 | 0.00028 | 0.00028 |
| 1.5 | 900 | 0.00024 | 0.00024 | 0.00024 | 0.00024 | 0.00024 | 0.00024 |
| 1.5 | 1000 | 0.00026 | 0.00026 | 0.00026 | 0.00026 | 0.00026 | 0.00026 |
| 2.5 | 100 | 0.00025 | 0.00025 | 0.00025 | 0.00025 | 0.00025 | 0.00025 |
| 2.5 | 200 | 0.00026 | 0.00026 | 0.00026 | 0.00026 | 0.00026 | 0.00026 |
| 2.5 | 300 | 0.00024 | 0.00024 | 0.00024 | 0.00024 | 0.00024 | 0.00024 |
| 2.5 | 400 | 0.00025 | 0.00025 | 0.00025 | 0.00025 | 0.00025 | 0.00025 |
| 2.5 | 500 | 0.00028 | 0.00028 | 0.00028 | 0.00028 | 0.00028 | 0.00028 |
| 2.5 | 600 | 0.00028 | 0.00028 | 0.00028 | 0.00028 | 0.00028 | 0.00028 |
| 2.5 | 700 | 0.00025 | 0.00025 | 0.00025 | 0.00025 | 0.00025 | 0.00025 |
| 2.5 | 800 | 0.00028 | 0.00028 | 0.00028 | 0.00028 | 0.00028 | 0.00028 |
| 2.5 | 900 | 0.00024 | 0.00024 | 0.00024 | 0.00024 | 0.00024 | 0.00024 |
| 2.5 | 1000 | 0.00026 | 0.00026 | 0.00026 | 0.00026 | 0.00026 | 0.00026 |
Table A3. Empirical size and mean power (α = 0.05) across tests, averaged over 1000 Monte Carlo replications.

| Test | Mean Size | Power: Shift (δ = 1) | Power: Scale (σ = 1.5) | Power: Contam. (ε = 0.1) |
|------|-----------|----------------------|------------------------|--------------------------|
| Tsallis (q = 1.5) | 0.051 | 0.83 | 0.79 | 0.76 |
| Shannon (q = 1) | 0.048 | 0.78 | 0.70 | 0.58 |
| Rényi (q = 1.5) | 0.050 | 0.80 | 0.73 | 0.61 |
| LRT (GGD) | 0.049 | 0.86 | 0.65 | 0.40 |
| KL-divergence | 0.052 | 0.81 | 0.71 | 0.55 |
Figure A1. Comparison between bootstrap- and asymptotic-normal-based calibrations of the statistic Q_{N,k}^{Tsallis} under the null hypothesis across a selected set of configurations. Each blue point corresponds to an upper-tail quantile level (0.90, 0.95, 0.975, and 0.99) and compares the bootstrap quantile (vertical axis) with that of the normal approximation (horizontal axis). Points close to the diagonal dashed line indicate strong concordance; slight deviations in small-sample or heavy-tailed cases highlight the bootstrap's improved tail accuracy. The simulation parameters are N = 250, m = 2, k = 3, and B = 200.
Figure A2. Empirical power of the Tsallis entropy-based goodness-of-fit test across sample sizes N for different neighbor parameters k and entropy indices q. Each subplot corresponds to a specific q value, showing the increase in power with larger samples and higher k.

References

  1. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  2. Nielsen, F.; Nock, R. On Rényi and Tsallis Entropies and Divergences for Exponential Families. arXiv 2011, arXiv:1105.3259. [Google Scholar] [CrossRef]
  3. Tsallis, C. Possible Generalization of Boltzmann–Gibbs Statistics. J. Stat. Phys. 1988, 52, 479–487. [Google Scholar] [CrossRef]
  4. dos Santos, R.J.V. Generalization of Shannon’s Theorem for Tsallis Entropy. J. Math. Phys. 1997, 38, 4104–4107. [Google Scholar] [CrossRef]
  5. Alomani, G.; Kayid, M. Further Properties of Tsallis Entropy and Its Application. Entropy 2023, 25, 199. [Google Scholar] [CrossRef] [PubMed]
  6. Sati, M.M.; Gupta, N. Some Characterization Results on Dynamic Cumulative Residual Tsallis Entropy. J. Probab. Stat. 2015, 2015, 694203. [Google Scholar] [CrossRef]
  7. Kumar, V. Some Results on Tsallis Entropy Measure and k-Record Values. Phys. A Stat. Mech. Its Appl. 2016, 462, 667–673. [Google Scholar] [CrossRef]
  8. Kumar, V. Characterization Results Based on Dynamic Tsallis Cumulative Residual Entropy. Commun. Stat. Methods 2017, 46, 8343–8354. [Google Scholar] [CrossRef]
  9. Bulinski, A.; Dimitrov, D. Statistical Estimation of the Shannon Entropy. Acta Math. Sin. Engl. Ser. 2019, 35, 17–46. [Google Scholar] [CrossRef]
  10. Bulinski, A.; Kozhevin, A. Statistical Estimation of Conditional Shannon Entropy. ESAIM Probab. Stat. 2019, 23, 350–386. [Google Scholar] [CrossRef]
  11. Berrett, T.B.; Samworth, R.J.; Yuan, M. Efficient Multivariate Entropy Estimation via k-Nearest Neighbour Distances. Ann. Stat. 2019, 47, 288–318. [Google Scholar] [CrossRef]
  12. Berrett, T.B.; Samworth, R.J. Nonparametric Independence Testing via Mutual Information. Biometrika 2019, 106, 547–566. [Google Scholar] [CrossRef]
  13. Furuichi, S. On the Maximum Entropy Principle and the Minimization of the Fisher Information in Tsallis Statistics. J. Math. Phys. 2009, 50, 013303. [Google Scholar] [CrossRef]
  14. Furuichi, S. Information Theoretical Properties of Tsallis Entropies and Tsallis Relative Entropies. J. Math. Phys. 2006, 47, 023302. [Google Scholar] [CrossRef]
  15. Solaro, N. Random Variate Generation from Multivariate Exponential Power Distribution. Stat. Appl. 2004, 2, 25–44. [Google Scholar]
  16. De Simoni, S. Su una Estensione dello Schema delle Curve Normali di Ordine r alle Variabili Doppie. Statistica 1968, 37, 63–74. [Google Scholar]
  17. Kano, Y. Consistency Property of Elliptic Probability Density Functions. J. Multivar. Anal. 1994, 51, 139–147. [Google Scholar] [CrossRef]
  18. Gómez, E.; Gomez-Villegas, M.A.; Marín, J.M. A Multivariate Generalization of the Power Exponential Family of Distributions. Commun. Stat. Methods 1998, 27, 589–600. [Google Scholar] [CrossRef]
  19. Fang, K.T.; Kotz, S. Symmetric Multivariate and Related Distributions; Monographs on Statistics and Applied Probability; Chapman & Hall: London, UK, 1990; Volume 36. [Google Scholar] [CrossRef]
  20. Cadirci, M.S.; Evans, D.; Leonenko, N.N.; Makogin, V. Entropy-Based Test for Generalised Gaussian Distributions. Comput. Stat. Data Anal. 2022, 173, 107502. [Google Scholar] [CrossRef]
  21. Martínez, S.; Nicolás, F.; Pennini, F.; Plastino, A. Tsallis’ Entropy Maximization Procedure Revisited. Phys. A Stat. Mech. Its Appl. 2000, 286, 489–502. [Google Scholar] [CrossRef]
  22. Abe, S. Heat and Entropy in Nonextensive Thermodynamics: Transmutation from Tsallis Theory to Rényi-Entropy-Based Theory. Phys. A Stat. Mech. Its Appl. 2001, 300, 417–423. [Google Scholar] [CrossRef]
  23. Suyari, H. The Unique Non Self-Referential q-Canonical Distribution and the Physical Temperature Derived from the Maximum Entropy Principle in Tsallis Statistics. Prog. Theor. Phys. Suppl. 2006, 162, 79–86. [Google Scholar] [CrossRef]
  24. Leonenko, N.N.; Pronzato, L.; Savani, V. A Class of Rényi Information Estimators for Multidimensional Densities. Ann. Stat. 2008, 36, 2153–2182. [Google Scholar] [CrossRef]
  25. Loftsgaarden, D.O.; Quesenberry, C.P. A Nonparametric Estimate of a Multivariate Density Function. Ann. Math. Stat. 1965, 36, 1049–1051. [Google Scholar] [CrossRef]
  26. Devroye, L.P.; Wagner, T.J. The Strong Uniform Consistency of Nearest Neighbor Density Estimates. Ann. Stat. 1977, 5, 536–540. [Google Scholar] [CrossRef]
  27. Cadirci, M.S.; Evans, D.; Leonenko, N.N.; Seleznjev, O. Statistical Tests Based on Rényi Entropy Estimation. arXiv 2021, arXiv:2106.10326. [Google Scholar] [CrossRef]
  28. Cadirci, M.S. Entropy-Based Goodness-of-Fit Tests for Multivariate Distributions. Ph.D. Thesis, Cardiff University, Cardiff, UK, 2021. [Google Scholar]
  29. Delattre, S.; Fournier, N. On the Kozachenko–Leonenko Entropy Estimator. J. Stat. Plan. Inference 2017, 185, 69–93. [Google Scholar] [CrossRef]
  30. Penrose, M.D.; Yukich, J.E. Laws of Large Numbers and Nearest Neighbor Distances. In Advances in Directional and Linear Statistics; Physica-Verlag HD: Heidelberg, Germany, 2011; pp. 189–199. [Google Scholar] [CrossRef]
  31. Kotz, S.; Nadarajah, S. Multivariate t-Distributions and Their Applications; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar] [CrossRef]
  32. Pardo, L. Statistical Inference Based on Divergence Measures; Chapman & Hall/CRC: Boca Raton, FL, USA, 2006. [Google Scholar] [CrossRef]
  33. Shapiro, S.S.; Wilk, M.B. An Analysis of Variance Test for Normality (Complete Samples). Biometrika 1965, 52, 591–611. [Google Scholar] [CrossRef]
Figure 1. Scatter plots depicting simulated multivariate q-Gaussian samples in R^2. Each subplot represents a different value of q, illustrating the clear differences in concentration patterns and tail behavior.
Figure 2. Empirical pdf (left) and log-pdf (right) for multivariate q-Gaussian samples of dimension m = 1. The left panel illustrates that decreasing q sharpens the peak and widens the tails, while the right panel clearly depicts the deviation from Gaussian tails in log-density space, with heavier tails for lower values of q.
Figure 3. The convergence behavior of Q_{N,k}^{T}(m, q) for neighborhood size k = 1. The figure clearly illustrates the decreasing variance with increasing sample size and the convergence to theoretical expectations.
Figure 4. Consistency of Q_{N,k}^{T}(m, q) values across neighborhood sizes (k = 1, 2, 3). Error bars denote standard deviations, showing that uncertainty declines with increasing sample size while the statistic remains fairly stable across q and m.
Figure 5. Empirical distributions of the Q_{N,k}^{T}(m, q) statistic, shown as violin plots for dimension m = 2 and neighborhood sizes k ∈ {1, 2, 3, 5, 10}. The plots illustrate the distributional symmetry and narrowing variance characteristic of q-values approaching the Gaussian limit.
Figure 6. Q–Q plots of the Q_{N,k}^{T}(m, q) statistic compared with standard Gaussian quantiles for a fixed dimension m = 2 and q = 1.5. The subplots illustrate the effect of neighborhood size (k = 1, 2, 3). The density-based visualization (hexbin) highlights the distribution of sample quantiles against the theoretical normal line (red dashed), showing the characteristic heavy-tailed deviation for a q-Gaussian.
Figure 7. Average Shapiro–Wilk p-values for Q_{N,k}^{T}(m, q) versus sample size N. The points are averaged over M = 1000 replicates, highlighting the enhanced normality as q → 1. Larger neighborhood sizes k show a slight decrease in normality.
Table 1. Summary of simulation and visualization parameters.

| Purpose | N / viz | M (rep.) | k | m | q | Notes |
|---------|---------|----------|---|---|---|-------|
| MC sims | {500, 1000, 5000} | 1000 | {1, 2, 3} | {1, 2, 3} | Var. | Convergence/consistency |
| Density plots | 10^6 | – | – | 1 | 1.2, 1.5, 2.5 | Viz. only |
| Violin/QQ | 1000 | 1000 | {1, 2, 3, 5, 10} | 2 | 1.5 | Normality check |
| Crit. | {500–1000} | 1000 | {1, 2, 3} | {2, 3} | 1.2, 1.5, 2.5 | 5% thresholds |

Notes: MC sims = Monte Carlo simulations; viz = visualization sample size; Crit. = critical value estimation.
Table 2. Critical values of the test statistics Q_{N,k}^{T}(m, q) at \bar{q}_{0.05} for the 5% significance level, estimated using M = 1000 Monte Carlo replications.

| q | N | m = 2, k = 1 | m = 2, k = 2 | m = 2, k = 3 | m = 3, k = 1 | m = 3, k = 2 | m = 3, k = 3 |
|-----|------|---------|---------|---------|---------|---------|---------|
| 1.2 | 100 | 0.03127 | 0.03000 | 0.02659 | 0.03072 | 0.02898 | 0.02969 |
| 1.2 | 200 | 0.03181 | 0.03051 | 0.02905 | 0.03056 | 0.03182 | 0.02725 |
| 1.2 | 300 | 0.02874 | 0.03187 | 0.03063 | 0.02902 | 0.02562 | 0.03294 |
| 1.2 | 400 | 0.03118 | 0.02641 | 0.03128 | 0.02883 | 0.02660 | 0.03045 |
| 1.2 | 500 | 0.03300 | 0.02898 | 0.02929 | 0.02796 | 0.03089 | 0.03099 |
| 1.2 | 600 | 0.03064 | 0.02871 | 0.03097 | 0.02968 | 0.02909 | 0.02714 |
| 1.2 | 700 | 0.02996 | 0.03009 | 0.02476 | 0.03131 | 0.03267 | 0.02884 |
| 1.2 | 800 | 0.03019 | 0.02684 | 0.03048 | 0.02993 | 0.02962 | 0.02895 |
| 1.2 | 900 | 0.03177 | 0.02882 | 0.03261 | 0.03034 | 0.03099 | 0.02893 |
| 1.2 | 1000 | 0.03079 | 0.03206 | 0.02881 | 0.03010 | 0.02946 | 0.02978 |
| 1.5 | 100 | 0.02912 | 0.03386 | 0.02988 | 0.02988 | 0.02828 | 0.02485 |
| 1.5 | 200 | 0.02905 | 0.02997 | 0.03007 | 0.02933 | 0.02865 | 0.03134 |
| 1.5 | 300 | 0.02852 | 0.02648 | 0.03056 | 0.03181 | 0.02937 | 0.02675 |
| 1.5 | 400 | 0.02734 | 0.03029 | 0.02917 | 0.03009 | 0.02674 | 0.02943 |
| 1.5 | 500 | 0.03059 | 0.02816 | 0.03036 | 0.03042 | 0.03302 | 0.03244 |
| 1.5 | 600 | 0.03115 | 0.02998 | 0.03105 | 0.03163 | 0.03039 | 0.02986 |
| 1.5 | 700 | 0.03164 | 0.03099 | 0.02826 | 0.03181 | 0.02947 | 0.02999 |
| 1.5 | 800 | 0.03029 | 0.02781 | 0.03065 | 0.02941 | 0.02927 | 0.02920 |
| 1.5 | 900 | 0.02849 | 0.03079 | 0.02913 | 0.02989 | 0.03007 | 0.02513 |
| 1.5 | 1000 | 0.03143 | 0.02950 | 0.03023 | 0.03059 | 0.03115 | 0.02866 |
| 2.5 | 100 | 0.02811 | 0.02875 | 0.02882 | 0.02872 | 0.03005 | 0.03042 |
| 2.5 | 200 | 0.02976 | 0.02930 | 0.02910 | 0.03115 | 0.02961 | 0.03185 |
| 2.5 | 300 | 0.03042 | 0.02860 | 0.02629 | 0.03276 | 0.02899 | 0.03013 |
| 2.5 | 400 | 0.03084 | 0.02722 | 0.02662 | 0.03138 | 0.03100 | 0.03037 |
| 2.5 | 500 | 0.03060 | 0.02875 | 0.02885 | 0.02939 | 0.03217 | 0.02747 |
| 2.5 | 600 | 0.02972 | 0.02692 | 0.03030 | 0.03267 | 0.03116 | 0.02934 |
| 2.5 | 700 | 0.02947 | 0.03107 | 0.02988 | 0.02935 | 0.03057 | 0.03037 |
| 2.5 | 800 | 0.02583 | 0.03158 | 0.02981 | 0.03081 | 0.03048 | 0.02846 |
| 2.5 | 900 | 0.02913 | 0.03029 | 0.02897 | 0.03186 | 0.02851 | 0.03245 |
| 2.5 | 1000 | 0.02899 | 0.03079 | 0.03024 | 0.02940 | 0.02842 | 0.03047 |
Table 3. Slope values β_{m,q,k} in the log–log regression log|E Q̄_{N,k}^{T}(m, q)| = α_{m,q,k} + β_{m,q,k} log N − (1/2) log N for the multivariate q-Gaussian distribution.

| q | m = 1, k = 1 | m = 1, k = 2 | m = 1, k = 3 | m = 2, k = 1 | m = 2, k = 2 | m = 2, k = 3 | m = 3, k = 1 | m = 3, k = 2 | m = 3, k = 3 |
|-----|--------|--------|--------|---------|---------|---------|---------|---------|---------|
| 1.2 | 0.0085 | 0.0111 | 0.0093 | 0.0006 | 0.0004 | 0.0003 | 0.0004 | 0.0003 | 0.0003 |
| 1.5 | 0.0047 | 0.0050 | 0.0045 | 0.0000 | 0.0001 | 0.0001 | 0.0000 | 0.0000 | 0.0000 |
| 1.7 | 0.0015 | 0.0011 | 0.0014 | −0.0001 | 0.0001 | 0.0001 | 0.0001 | 0.0001 | 0.0001 |
| 2.0 | 0.0005 | 0.0006 | 0.0006 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| 2.2 | 0.0007 | 0.0004 | 0.0004 | 0.0002 | 0.0002 | 0.0001 | 0.0000 | 0.0000 | −0.0001 |
| 2.5 | 0.0002 | 0.0002 | 0.0002 | −0.0002 | 0.0000 | 0.0001 | −0.0001 | −0.0001 | 0.0000 |
| 3.0 | −0.0004 | −0.0004 | −0.0001 | −0.0001 | 0.0000 | 0.0001 | −0.0001 | 0.0000 | −0.0001 |
| 3.5 | 0.0002 | 0.0001 | 0.0002 | 0.0000 | −0.0001 | −0.0001 | 0.0000 | 0.0001 | 0.0001 |
| 4.0 | 0.0003 | 0.0001 | −0.0001 | 0.0001 | 0.0003 | 0.0003 | −0.0001 | −0.0001 | 0.0000 |