Article

Parameterized Kolmogorov–Smirnov Test for Normality

Institute of Exact and Technical Sciences, Pomeranian University, Slupsk, Arciszewskiego 22a, 76-200 Słupsk, Poland
*
Author to whom correspondence should be addressed.
Appl. Sci. 2026, 16(1), 366; https://doi.org/10.3390/app16010366
Submission received: 8 November 2025 / Revised: 3 December 2025 / Accepted: 15 December 2025 / Published: 29 December 2025

Abstract

The first (main) aim of the article is to define and practically apply the parameterized Kolmogorov–Smirnov (PKS) goodness-of-fit test for normality. The second contribution is to expand the family of empirical distribution functions with four new proposals. The third contribution is to construct a family of alternative distributions that includes both older and newer distributions. The fourth contribution is to calculate the power of the analyzed tests under alternative distributions with parameters chosen to be similar to the normal distribution in various ways. The new proposal stands out for left-skewed alternative distributions and for symmetric distributions characterized by negative excess kurtosis. The effectiveness of the PKS test is also illustrated by the analysis of thirty real data sets.

1. Introduction

Goodness-of-fit tests (GoFTs) are applied in many scientific fields, including economics and finance (to analyze market behavior, assess market efficiency, identify deviations, and analyze stochastic processes), demography (regarding the fertility curve distribution), and econometrics (to check if regression errors are normally distributed for proper model evaluation).
The Kolmogorov–Smirnov (KS) test is among the most widely utilized normality testing procedures implemented in statistical software [1,2]. This method belongs to the class of tests based on the empirical distribution function (EDF). Other tests in this class are the Cramér–von Mises (CM) test [3,4], the Lilliefors (LF) test [5], the Kuiper (K) test [6], the Watson (W) test [7], and the Anderson–Darling (AD) test [8].
Let x_1, x_2, …, x_n be random values observed in a sample of size n drawn from some general population. The values are independent, as understood in probability theory, and all x_i follow the same probability distribution. As a rule, analysts do not know the actual probability distribution; all they can do is set up a hypothesis H_0 stating that the population distribution has the form F(x), and this hypothesis must pass a goodness-of-fit test to be accepted. Very often, analysts test H_0: F(x) = Φ(x) against H_1: F(x) ≠ Φ(x), where Φ(x) is the CDF of the normal distribution. The EDF has the form F_n(x) = (1/n) Σ_{i=1}^n θ(x − x_i), where θ(u) is the Heaviside step function.
The δ-corrected KS test [9], which was studied by Khamis [10,11,12], involves redefining the EDF value at each observation and comparing it against the CDF at those same points. Let the EDF at the i-th data point be defined as follows:
F_δ(x_i) = (i − δ)/(n − 2δ + 1),  0 ≤ δ ≤ 1.  (1)
In his study, Harter [9] examined the specific values δ = 0, 0.5, 1.
Bloom [13] introduced the (α, β) transformation
F_{α,β}(x_i) = (i − α)/(n − α − β + 1),  0 ≤ α, β ≤ 1,  (2)
to decrease the mean square error (MSE) of certain statistics. Note that F_{δ,δ}(x) = F_δ(x). The transformation (2) was used to create the GoFTs.
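The plotting positions generated by (2) are simple to evaluate. The following minimal R sketch (our own illustration; the helper name F_ab does not come from the paper) computes them for a few (α, β) pairs used later in the article.

# Bloom-type plotting positions F_{alpha,beta}(x_(i)) = (i - alpha)/(n - alpha - beta + 1)
F_ab = function(n, alpha, beta) (seq_len(n) - alpha) / (n - alpha - beta + 1)

F_ab(5, 0, 1)      # i/n         -- occurs in the KS statistic
F_ab(5, 1, 0)      # (i - 1)/n   -- occurs in the KS statistic
F_ab(5, 0.5, 0.5)  # (i - 0.5)/n -- appears in the CM statistic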
Using Bloom’s formula, Sulewski [14] proposed a one-component LF GoFT with the statistic defined as
LF_{α,β} = max_{1≤i≤n} |F_{α,β}(x_i) − Φ(x_i)|.  (3)
It is well-established that the maximum discrepancy between theoretical and empirical distribution functions can occur at various points within a series. The probability of this discrepancy occurring at a specific positional statistic r decreases as r reaches more extreme values. This observation led to the development of the two-component test statistic described in [15]. The first component, consistent with the original LF test, represents the absolute value of the maximum difference between the sample and population distributions, while the second component identifies the position within the ordered sample where this maximum discrepancy occurs.
Sulewski and Stoltmann [16] used Bloom’s formula to create the modified CM (MCM) GoFT with the statistic
MCM_{α,β} = 1/(12n) + Σ_{i=1}^n (F_{α,β}(x_i) − Φ(x_i))^2.  (4)
Simulation studies evaluating the MCM test, alongside the one- and two-component LF tests, were conducted using the following methods of calculating F_{α,β}(x_i), 0 ≤ α, β ≤ 1:
  • F_{0,1}(x_i) = i/n – occurs in the KS statistic,
  • F_{1,0}(x_i) = (i − 1)/n – occurs in the KS statistic,
  • F_{0.5,0.5}(x_i) = (i − 0.5)/n = (2i − 1)/(2n) – appears in the CM statistic, expressed as the sum,
  • F_{0,0}(x_i) = i/(n + 1) – the mean of the i-th order statistic of the beta distribution,
  • F_{0.3,0.3}(x_i) = (i − 0.3)/(n + 0.4) – the median of the i-th order statistic of the beta distribution,
  • F_{0.375,0.375}(x_i) = (i − 0.375)/(n + 0.25) – the mean of the i-th order statistic of the Gaussian distribution,
  • F_{0.3175,0.3175}(x_i) = (i − 0.3175)/(n + 0.365) – proposed by Filliben [17],
  • F_{1,1}(x_i) = (i − 1)/(n − 1) – proposed by Harter [9].
Six of the EDF definitions listed above (except F_{0,1} and F_{1,0}) have α = β.
Recently, numerous studies have focused on goodness-of-fit tests (GoFTs) for normality. Table A1 (see Appendix A) summarizes publications from the 21st century alongside the sample sizes they analyzed, with values of n ≤ 50 highlighted in bold. These small sample sizes, which predominate in Table A1, are particularly relevant in fields such as experimental economics, where studies involving only a dozen or so participants per group are frequently published. Powerful tests are especially valuable there: a hypothesis accepted in the original study may be rejected when a more powerful test is applied.
The first (main) aim of the article is to define and practically apply the parameterized Kolmogorov–Smirnov goodness-of-fit test for normality. These modifications consist of varying the formula used to compute the empirical distribution function (EDF). The second contribution is to expand the EDF family with four new proposals. The third contribution is to construct a family of alternative distributions, comprising both older and newer distributions, which, owing to their flexibility, span all combinations of skewness and kurtosis signs. Critical values are obtained using 10^6 order statistics for sample sizes n = 10, 20 at a significance level of 0.05. The fourth contribution is to compute the power of the analyzed tests under alternative distributions, based on 10^5 test-statistic values, with parameters selected to be similar to the normal distribution in various ways. The effectiveness of the analyzed tests is demonstrated using thirty real datasets. The new proposal stands out for left-skewed alternative distributions and symmetric distributions characterized by negative excess kurtosis.
The rest of this article is structured as follows. In Section 2, we define the parameterized KS GoFT test for normality. Section 3 discusses the similarity measures between the alternatives and the normal distribution. In Section 4, we introduce the alternative distributions, categorized into nine groups based on their skewness and excess kurtosis. Section 5 presents the power study, followed by real-world data applications in Section 6. Finally, Section 7 offers concluding remarks, while other material is provided in Appendix A.

2. Parameterized Kolmogorov–Smirnov Test for Normality

Before we present the parameterized KS test (the first contribution of the paper), we would like to expand the EDF family (the second contribution of the paper) with four new variants, F_{0.1,0.1}, F_{0.9,0.1}, F_{0.9,0.9} and F_{0.1,0.9}, given by the following:
F_{0.1,0.1}(x_i) = (i − 0.1)/(n + 0.8),
F_{0.9,0.1}(x_i) = (i − 0.9)/n,  F_{0.9,0.9}(x_i) = (i − 0.9)/(n − 0.8),  F_{0.1,0.9}(x_i) = (i − 0.1)/n.
Thus, eight of the EDF definitions listed earlier lie on the line β = α (blue line), and five of them lie on the line β = 1 − α (green line) (see Figure 1). The previously analyzed values are unevenly distributed along the β = α line in the interval (0, 1). Four values belong to the interval (0.3, 0.5), the value 0.1 represents the interval (0, 0.3), and the value 0.9 represents the interval (0.5, 1). The new representatives of the β = 1 − α line, also located at the corners of the square, are the EDFs with α = 0.9, β = 0.1 and α = 0.1, β = 0.9.
Let x̄ = Σ_{i=1}^n x_i / n, s^2 = Σ_{i=1}^n (x_i − x̄)^2/(n − 1), and z_i = (x_i − x̄)/s. Let us remember that the KS test statistic is given by the following formula:
KS = max{ max_{1≤i≤n} [ i/n − Φ(z_i) ], max_{1≤i≤n} [ Φ(z_i) − (i − 1)/n ] }.  (5)
Our idea is to parameterize the EDF in (5) using Bloom’s formula (2). The parameterized KS (PKS) test statistic is defined as follows:
PKS_{α,β} = max{ max_{1≤i≤n} [ (i − α)/(n − α − β + 1) − Φ(z_i) ], max_{1≤i≤n} [ Φ(z_i) − (i − α − 1)/(n − α − β + 1) ] }.  (6)
Note that PKS_{0,1} = KS, i.e., the PKS_{0,1} statistic is equivalent to the KS statistic.
Sulewski and Stoltmann [16] as well as Sulewski [14,18] showed that noteworthy parameterized tests are defined using F_{α,β}(x_i) with α, β ∈ {0, 1}, so the values of (α, β) chosen for the simulation study, except for the new proposals, are as follows: (0, 0), (1, 0), (1, 1), (0, 1).
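As a quick illustration (our own sketch, not part of the simulation code given in Appendix A.12), the following R fragment evaluates PKS_{α,β} for a sample and checks numerically that PKS_{0,1} coincides with the classic KS statistic returned by ks.test with estimated mean and standard deviation.

# Parameterized KS statistic PKS_{alpha,beta}; a compact version of the
# PKS() function listed in Appendix A.12
pks_stat = function(x, a, b) {
  x = sort(x); n = length(x)
  z = (x - mean(x)) / sd(x)        # standardized order statistics
  CDF = pnorm(z)                   # theoretical CDF at the standardized points
  i = seq_len(n)
  max((i - a) / (n - a - b + 1) - CDF, CDF - (i - a - 1) / (n - a - b + 1))
}

set.seed(1)
x = rnorm(20)
pks_stat(x, 0, 1)                                          # PKS_{0,1}
as.numeric(ks.test(x, "pnorm", mean(x), sd(x))$statistic)  # classic KS statistic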

3. Similarity Measure

Let us assume that m_k = (1/n) Σ_{i=1}^n (x_i − x̄)^k, γ1 = m_3/s^3 and γ̄2 = m_4/s^4 − 3. The Malakhov inequality is γ̄2 ≥ γ1^2 − 2 [19]. This formula not only presents the relationship between the two characteristics, but also helps to choose the parameter values of the Edgeworth series and Pearson distribution.
A review of the current statistical literature reveals that cases characterized by low skewness γ1 and low excess kurtosis γ̄2 are not predominant in normality testing. Consequently, it is of particular interest to examine how GoFTs perform when applied to samples coming from alternative distributions that closely resemble the normal distribution.
Consider an alternative distribution with the PDF f(x; θ), where θ denotes the vector of parameters. The similarity measure M between the alternative distribution (A) and the normal distribution is given by [14]
M_A(θ; μ, σ) = ∫ min{ f(x; θ), φ(x; μ, σ) } dx,  (7)
where φ(x; μ, σ) denotes the PDF of the normal distribution. The measure M_A(θ; μ, σ) is bounded within the interval [0, 1], reaching unity only when the PDFs are identical. Calculated as a definite integral, this measure represents the shared area beneath the probability density curves.
Figure 2 shows values of the similarity measure M_A(θ; μ, σ) when the alternative distribution is the Student t distribution with v degrees of freedom. Note that if v → +∞, then obviously M(t_v; 0, 1) → 1.
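The measure (7) is straightforward to evaluate numerically. The sketch below is our own illustration (the function name similarity_M is not from the paper); it integrates the pointwise minimum of the two densities, using the Student t alternative as an example.

# Similarity measure M (Eq. (7)): common area under the alternative PDF
# and the normal PDF, obtained by numerical integration
similarity_M = function(falt, mu = 0, sigma = 1) {
  integrand = function(x) pmin(falt(x), dnorm(x, mu, sigma))
  integrate(integrand, -Inf, Inf)$value
}
# Student t alternative with v degrees of freedom; M tends to 1 as v grows
similarity_M(function(x) dt(x, df = 5))
similarity_M(function(x) dt(x, df = 100))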

4. Alternative Distributions

As previously noted, a vast body of literature is dedicated to normality testing. These studies employ a wide range of alternative distributions, encompassing both symmetric and asymmetric forms. Notably, the Cauchy and slash distributions serve as examples of symmetric alternatives for which γ1 and γ̄2 remain undefined.
Alternative distributions can be classified into four primary groups based on their support and the shape of their density functions (see, e.g., [20,21]). These categories comprise symmetric and asymmetric distributions with support on (−∞, ∞), as well as those with support on (0, ∞) and (0, 1). Alternative distributions are also classified into five groups based on symmetry and tail behavior (asymmetric short/long-tailed, symmetric short/long-tailed, and symmetric close-to-normal), as proposed in [21,22,23]. Sulewski [14,15] divided alternatives into twelve groups, A1–F2, based on the signs of γ1 and γ̄2 as well as bimodality.
We propose a classification of the alternative distributions into nine distinct groups based on the signs of their skewness γ1 and excess kurtosis γ̄2 [16]. These categories, designated as Group 0 and Groups A–H, are formally defined in Table 1.
The primary criterion for selecting alternative distributions for Monte Carlo simulations is that the skewness γ1 and excess kurtosis γ̄2, calculated from their parameters, must fall within the designated groups. Five distributions defined on an infinite domain satisfy this requirement: the Edgeworth series (ES) and Pearson (P) distributions (both monolithic models with γ1 and γ̄2 as parameters) as well as the normal mixture (NM). Additionally, this includes the normal distribution with a plasticizing component (NDPC), a mixture of normal and non-normal distributions, and the plasticizing component mixture (PCM), comprising two identical non-normal distributions.
The second criterion for selecting an alternative distribution for Monte Carlo simulation is that the γ1 and γ̄2 values, calculated based on the distribution’s parameters, must fall within all analyzed groups with the exception of Group 0. This criterion is fulfilled by the Laplace mixture (LM) distribution, which belongs to groups A–H and is defined on an infinite domain.
The third criterion for selecting an alternative for Monte Carlo simulation is that the γ1 and γ̄2 values, calculated from the distribution’s parameters, must fall within the analyzed groups, with the exception of two specific categories. These alternatives can be very similar to the normal distribution. This criterion is fulfilled by the Johnson SB distribution (except groups 0 and C) and the Johnson SU distribution (except groups 0 and D).
The fourth criterion for selecting an alternative for Monte Carlo simulation is that the γ1 and γ̄2 values, calculated based on the distribution’s parameters, must fall within Groups C and D. These symmetric alternatives can be very similar to the normal distribution. This criterion is satisfied by the extended easily changeable kurtosis (EECK) distribution defined on a finite domain and the exponential power (EP) distribution defined on an infinite domain.
PDFs of selected alternatives and their special cases are presented in Appendix A.
Let (γ1, γ̄2) be the coordinates of a point described by skewness and excess kurtosis, respectively. For every alternative, values of (γ1, γ̄2) are calculated for 10^4 randomly determined values of the parameters influencing γ1 and γ̄2 in the Malakhov area (MA) γ̄2 ≥ γ1^2 − 2 [19]. If γ̄2 ∈ (−2, 14), then γ1 ∈ (−4, 4). The parameter ranges of the alternatives are selected to maximize MA filling (see Table 2). Figure 3, Figure 4, Figure 5 and Figure 6 present sets of points (γ1, γ̄2) located in the MA for non-symmetric alternatives. Interestingly, the MAs for SB and SU are separate; they complement each other.
Additionally, we employ the skewness–kurtosis–square (SKS) measure to evaluate the flexibility of the alternative distributions. For this purpose, circles of diameter δ are mapped within the MA space, with their center coordinates determined by the respective values of γ 1 and γ ¯ 2 . Subsequently, the fraction of the colored area is determined. Utilizing squares with sides of length δ serves as a practical alternative to circles, as this approach simplifies the computation of the total colored area. In cases where squares overlap, the area is counted only once to avoid double-counting. The SKS measure is thus defined as follows [15]:
SKS_δ = SS/TT,  (8)
where TT denotes the total number of squares within the MA region, and SS represents the number of squares containing at least one (γ1, γ̄2) point. The SKS_δ measure is bounded within the interval [0, 1], where the maximum value signifies an ideal dispersal of the (γ1, γ̄2) coordinates throughout the MA space.
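A minimal R sketch of the SKS computation is given below. It is our own illustration (the function name SKS_measure is not from the paper) and assumes that the squares tile the rectangle γ1 ∈ [−4, 4], γ̄2 ∈ [−2, 14] enclosing the MA, with membership of a square in the MA judged by its center.

# SKS measure (Eq. (8)): fraction of delta x delta squares inside the Malakhov
# area that contain at least one (skewness, excess kurtosis) point
SKS_measure = function(g1, g2, delta) {
  # centers of the squares tiling the rectangle enclosing the MA
  cx = seq(-4 + delta / 2, 4, by = delta)
  cy = seq(-2 + delta / 2, 14, by = delta)
  centers = expand.grid(x = cx, y = cy)
  TT = sum(centers$y >= centers$x^2 - 2)          # squares in the MA (by center)
  # square indices of the (g1, g2) points
  ix = floor((g1 + 4) / delta); iy = floor((g2 + 2) / delta)
  SS = nrow(unique(cbind(ix, iy)))                # non-empty squares
  SS / TT
}
# example: points scattered uniformly inside the MA
g1 = runif(10^4, -4, 4); g2 = runif(10^4, -2, 14)
keep = g2 >= g1^2 - 2
SKS_measure(g1[keep], g2[keep], delta = 0.5)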
Table 2 shows that the numerical ranges of γ1 and γ̄2 for asymmetric alternatives in the MA (14 ≥ γ̄2 ≥ γ1^2 − 2), owing to the appropriately randomly selected parameters of the alternatives, are similar, except for the SU. The range alone is not the most important aspect; the coverage of the interior also matters, and therefore we also present values of the SKS measure obtained for square sides δ = 0.5, 0.1, 0.075, 0.05 (see Table 3). Figure 3 and Table 3 confirm what could be expected, namely that the most flexible distributions are P and ES.
We select specific values for the alternative parameters to achieve the appropriate similarity measure M. This measure is predominantly analyzed using values of M = 0.5 , 0.75 , 0.9 .
Appendix A contains Table A2, Table A3, Table A4, Table A5, Table A6, Table A7, Table A8, Table A9, Table A10, Table A11, which provide the alternative parameter vectors θ , along with the corresponding means μ a , standard deviations σ a , skewness γ 1 , excess kurtosis γ ¯ 2 , and similarity measures M for all considered distributions. The skewness and excess kurtosis tend to zero, while the similarity measure tends to unity. Often, the mean tends to zero, and the standard deviation tends to unity, while the similarity measure tends to unity. The PDF formulas and their corresponding curves (see Figure A1, Table A2, Table A3, Table A4, Table A5, Table A6, Table A7, Table A8, Table A9, Table A10), associated with the alternative parameter vectors θ , are also included in the Appendix.
As illustrated in Figure A1, the ES distribution is unsuitable for simulation studies concerning groups D–H; despite satisfying the normalization condition, the density function exhibits invalid negative values.
Figure A2 shows bathtub shapes. Figure A3, Figure A4, and Figure A6 show unimodal and bimodal shapes. In Figure A5 we can see very interesting multimodal shapes. Unimodal shapes dominate in Figure A7, and Figure A8 shows only unimodal shapes. In Figure A9, we observe flat modes, and in Figure A10, very flat modes with table-like shapes.

5. Power Study

In [24], a sample of the most recent comparisons (since 1990) has been used to rank 55 different normality tests. The overall winner of this analysis is the regression-based Shapiro–Wilk (SW) test of normality.
The parameterized KS (PKS) test with the statistic (6) was compared with the one-component LF test with the statistic (3) and with the Shapiro–Wilk (SW) [25], Shapiro–Francia (SF) [26], AD, and CM tests. To study the power of the tests, critical values cv_{0.05} (the type I error equals α = 0.05) were calculated using m = 10^6 order statistics. The power of the tests (PoTs) was calculated based on rep = 10^5 values of the test statistics. Table 4 shows critical values (CVs) and test sizes (TSs) for sample sizes n = 10, 20 as representative of values not exceeding 50 (see Table A1 and [14,15,16,21,27,28]). The TS values are approximately 0.05, confirming the validity of the simulation procedures.
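The critical values and power are obtained with the simulation code listed in Appendix A.12. The short sketch below is our own wrapper around the appendix PKS() function (it assumes a critical value cv has already been simulated as in the appendix); it estimates the empirical test size, which should be close to 0.05.

# Empirical test size: fraction of N(0,1) samples whose PKS statistic exceeds
# the simulated critical value cv (PKS() and cv as in Appendix A.12)
test_size = function(n, a, b, cv, rep = 10^5) {
  mean(replicate(rep, PKS(sort(rnorm(n)), a, b) > cv))
}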
The R codes related to the pseudo-random number generators (PRNGs) for all analyzed alternatives, as well as the R codes for calculating the P K S α , β statistics, critical values, and power analysis are provided in Appendix A.
Complete simulation results with power values are presented in a table with 20 columns (20 tests) and 60 rows (10 alternatives, 2 sample sizes, 3 similarity measures). Such a large table cannot be presented in full because of the size of the article. Therefore, the conclusions refer to the full results, and only the most interesting results (tests with the highest power) are shown in Table 5, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11 and Table 12. The alternative distributions are indexed such that a higher index value corresponds to a closer resemblance to the normal distribution (e.g., index three in most cases denotes the similarity measure 0.9). The highest values are in bold.
It is of course expected that the power of the GoFTs increases as the sample size increases. This basic expectation is not met for the SB (group D) and SU (group E) alternatives; in these cases, the power is close to the significance level. Group D for the SB distribution, due to the choice of parameter values needed to obtain an appropriate similarity measure, is characterized by excess kurtosis values close to zero and is very similar to group C, for which SB is not defined. Group E for the SU distribution, due to the choice of parameter values needed to obtain an appropriate similarity measure, is characterized by skewness and excess kurtosis values close to zero and is very similar to group D, for which SU is not defined. It is also expected that the power of the GoFTs decreases as the value of the similarity measure (7) increases. This expectation is not met for the P (groups A–H), PCM (groups C, D, and H), LM (groups D and H), SB (groups C, D, and H), SU (groups C, D, E, F, and H), and EECK (group C) alternatives.
The average PoT is highest for group B alternatives, followed by groups E and F. This indicates that the GoFTs best detect samples from asymmetric distributions with positive excess kurtosis. As might be expected, samples from symmetric distributions are the hardest to detect. The GoFTs perform best on samples from the Pearson distribution and worst on samples from the EECK distribution.
Based on the results from Table 5, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11 and Table 12, we can conclude that the LF_{0,1} and SF tests are the most powerful for group A of alternatives; the PKS_{0.9,0.1} and SF tests are the most powerful for group B; the SF test is the most powerful for group C; the PKS_{0,0} and LF_{0,0} tests are the most powerful for group D; the LF_{0,1} test is the most powerful for group E; the PKS_{0,0} and SW tests are the most powerful for group F; the LF_{0,1} test is the most powerful for group G; and the PKS_{0,0} test is the most powerful for group H.

6. Real Data Examples

In this section, we apply the analyzed GoFTs to empirical datasets to demonstrate their practical utility. Comprehensive details for examples I–XXX are summarized in Table 13.
When fitting the normal distribution to the data, we compute p-values for the analyzed GoFTs using 10^5 simulated statistic values (see Table 14, Table 15 and Table 16). The lowest p-value among the analyzed tests is in bold. The non-normality is most strongly indicated by the parameterized GoFTs, namely LF_{0,0} (example V), LF_{0,1} (examples I–III, V–VIII, XII, XIV, XV, XVII, XVIII, XXIV–XXVI, XXVIII and XXIX), LF_{0.1,0.1} (example V), LF_{0.1,0.9} (example V), PKS_{0,0} (examples IV, XIII, XVI and XX), PKS_{1,0} (examples IX, X, XI, XXII, XXIII, XXVII and XXX), PKS_{1,1} (examples XIX and XXIII) and PKS_{0.9,0.1} (examples X, XI and XXI). The obtained results indicate that the new proposal is very universal and can handle various data regarding different groups of skewness and kurtosis signs and sample sizes from 16 to 153.
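For a single real dataset, the Monte Carlo p-value can be obtained as sketched below. This is our own illustration, reusing the PKS() statistic function from Appendix A.12; the helper name pks_pvalue is not from the paper.

# Monte Carlo p-value of the PKS test for a data vector x: proportion of
# statistics simulated under normality that are at least as large as the
# observed one
pks_pvalue = function(x, a, b, rep = 10^5) {
  n = length(x)
  stat_obs = PKS(sort(x), a, b)
  stat_null = replicate(rep, PKS(sort(rnorm(n)), a, b))
  mean(stat_null >= stat_obs)
}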

7. Conclusions

The analyzed GoFTs detect samples from asymmetric distributions with positive excess kurtosis most effectively, and those from symmetric distributions with positive excess kurtosis least effectively. The mentioned tests detect samples from a Pearson distribution most effectively and those from an EP distribution least effectively.
The parameterized GoFT, called LF_{0,1}, stands out in the alternative groups A, E and G. The SW test is the best test for the analyzed groups when the alternative distribution is the P distribution (except for group C). The SF test performs excellently for the alternative distributions in group C. The AD test has the highest power only very rarely, for example, when the alternative NDPC distribution is for groups B and C, and the similarity measures are 0.5 and 0.9, respectively.
The new parameterized KS GoFT with parameters α = 0.9, β = 0.1 stands out among the alternative distributions from group B, characterized by negative skewness and positive excess kurtosis. The PKS test with α = 0, β = 0 stands out for symmetric alternative distributions with negative excess kurtosis (group D), left-skewed alternative distributions with negative excess kurtosis (group F), and left-skewed alternative distributions with zero excess kurtosis (group H). Among the mentioned distributions, as can be seen in the Appendix, there are alternative distributions that are asymmetric short-tailed and long-tailed, symmetric short-tailed and long-tailed, and symmetric close to the Gaussian distribution, i.e., they meet the division proposed by Gan and Koehler [22], Krauczi [23], and Torabi et al. [21].
The good performance of the parameterized GoFTs, including the new proposal, was demonstrated by analyzing thirty real datasets.
In our opinion, future research should focus on alternative distributions which, thanks to appropriately selected parameter values, are close to the normal distribution, i.e., their similarity measure to the Gaussian distribution is at least 0.9.

Author Contributions

Conceptualization, D.S. and P.S.; methodology, P.S.; software, D.S. and P.S.; validation, D.S. and P.S.; formal analysis, P.S.; investigation, D.S. and P.S.; resources, D.S. and P.S.; data curation, D.S. and P.S.; writing—original draft preparation, D.S. and P.S.; writing—review and editing, D.S. and P.S.; visualization, D.S. and P.S.; supervision, P.S.; project administration, D.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

    The following abbreviations are used in this manuscript:
A: Alternative distribution
AD: Anderson–Darling test
ALT: Alternative distribution
CDF: Cumulative distribution function
CM: Cramér–von Mises test
CV: Critical value
EDF: Empirical distribution function
EECK: Extended easily changeable kurtosis distribution
EP: Exponential power distribution
ES: Edgeworth series
GoFT: Goodness-of-fit test
K: Kuiper test
KS: Kolmogorov–Smirnov test
LCN: Location contaminated normal distribution
LF: Lilliefors test
LM: Laplace mixture
M: Similarity measure
MA: Malakhov area
MCM: Modified Cramér–von Mises test
MP: Malakhov parabola
MSE: Mean square error
N: Normal distribution
NDPC: Normal distribution with plasticizing component
n: Sample size
NM: Normal mixture distribution
P: Pearson distribution
PCM: Plasticizing component mixture distribution
PDF: Probability density function
PKS: Parameterized KS test
PoT: Power of test
SB: Johnson SB distribution
SCN: Scale contaminated normal distribution
SF: Shapiro–Francia test
SKS: Skewness–kurtosis–square measure
SS: Number of non-empty squares
SU: Johnson SU distribution
SW: Shapiro–Wilk test
TS: Test size
TT: Total number of squares within the MA
W: Watson test
γ1: Skewness
γ̄2: Excess kurtosis
δ: Diameter of circle, side of square

Appendix A

Appendix A.1. Literature Review

21st-century literature on normality GoFTs.
Table A1. Literature devoted to normal GoFTs created in the 21st century [16].
Bonett and Seier [29]: 10, 20, …, 50, 100
Afeez et al. [30]: 10, 30, 50, 100, 300, 500, 1000
Aliaga et al. [31]: -
Marange and Qin [32]: 15, 30, 50, 80, 100, 150, 200
Bontemps and Meddahi [33]: 100, 250, 500, 1000
Sulewski [34] (2019): 10, 12, …, 30, 40, 50
Luceno [35]: 100
Tavakoli et al. [36]: 5, 6, …, 15, 20, 25, 30, 40, 50, …, 100
Yazici and Yolacan [37]: 20, 30, 40, 50
Mishra et al. [38]: n < 30, n > 30
Gel et al. [39]: 20, 50, 100
Kellner and Celisse [40]: 50, 75, 100, 200, 300, 400
Coin [41]: 20, 50, 200
Wijekularathna et al. [42]: 5, 10, 20, 30, 50, 75, 100, 200, 500, 1000, 2000
Brys et al. [43]: 100, 1000
Sulewski [14]: 10, 14, 20
Gel and Gastwirth [44]: 30, 50, 100
Hernandez [24]: 5, 10, …, 30
Romao et al. [45]: 25, 50, 100
Khatun [46]: 10, 20, 25, 30, 40, 50, 100, 200, 300
Razali and Wah [47]: 20, 30, 50, 100, 200, …, 500, 1000, 2000
Arnastauskaitė et al. [48]: 2^5, 2^6, …, 2^10
Noughabi and Arghami [49]: 10, 20, 30, 50
Bayoud [50]: 10, 20, …, 50, 60, 80, 100
Yap and Sim [51]: 10, 20, 30, 50, 100, 300, 500, 1000, 2000
Uhm and Yi [52]: 10, 20, 30, 100, 200, 300
Chernobai et al. [53]: -
Sulewski [18]: 20, 50, 100
Ahmad and Khan [54]: 10, 20, …, 50, 100, 200, 500
Desgagné et al. [55]: 20, 50, 100, 200
Mbah and Paothong [56]: 10, 20, 30, 50, 100, 200, …, 500, 1000, 2500, 5000
Uyanto [27]: 10, 30, 50, 70, 100
Torabi et al. [21]: 10, 20
Ma et al. [28]: 10, 30, 50
Feuerverger [57]: 200
Giles [58]: 10, 25, 50, 100, 250, 500, 1000
Nosakhare and Bright [59]: 5, 10, …, 50, 100
Borrajo et al. [60]: 50, 100, 200, 500
Desgagné and Lafaye de Micheaux [61]: 10, 12, …, 20, 50, 100, 200
Terán-García and Pérez-Fernández [62]: 25, 900

Appendix A.2. Edgeworth Series Distribution

The PDF of the Edgeworth series (ES) distribution with parameters γ1 and γ̄2 is given by [63]:
f_ES(x; γ1, γ̄2) = φ(x; 0, 1) [ 1 + (γ1/3!)(x^3 − 3x) + (γ̄2/4!)(x^4 − 6x^2 + 3) ],  x ∈ R,
where γ1 ∈ R and γ̄2 ≥ γ1^2 − 2. We have φ(x; 0, 1) = f_ES(x; 0, 0), obviously.
Table A2. Vectors of the ES parameter θ, mean μ_a, standard deviation σ_a, skewness γ1, excess kurtosis γ̄2, and similarity measure M. Groups 0, A–H.
Group | θ = (γ1, γ̄2) | μ_a | σ_a | γ1 | γ̄2 | M(θ; 0, 1)
0 | (0, 0) | 0 | 1 | 0 | 0 | 1
A | (0.4, 3.33) | 0 | 1 | 0.4 | 3.33 | 0.8
A | (0.3, 2.499) | 0 | 1 | 0.3 | 2.499 | 0.85
A | (0.2, 1.666) | 0 | 1 | 0.2 | 1.666 | 0.9
B | (−0.4, 3.33) | 0 | 1 | −0.4 | 3.33 | 0.8
B | (−0.3, 2.499) | 0 | 1 | −0.3 | 2.499 | 0.85
B | (−0.2, 1.666) | 0 | 1 | −0.2 | 1.666 | 0.9
C | (0, 3.428) | 0 | 1 | 0 | 3.428 | 0.8
C | (0, 2.571) | 0 | 1 | 0 | 2.571 | 0.85
C | (0, 1.71) | 0 | 1 | 0 | 1.71 | 0.9
D | (0, −3.428) | 0 | 1 | 0 | −3.428 | 0.8
D | (0, −2.571) | 0 | 1 | 0 | −2.571 | 0.85
D | (0, −1.71) | 0 | 1 | 0 | −1.71 | 0.9
E | (1.39, −0.067) | 0 | 1 | 1.39 | −0.067 | 0.825
E | (1.175, −0.46) | 0 | 1 | 1.175 | −0.46 | 0.85
E | (0.775, −0.408) | 0 | 1 | 0.775 | −0.408 | 0.9
F | (−1.39, −0.067) | 0 | 1 | −1.39 | −0.067 | 0.825
F | (−1.175, −0.46) | 0 | 1 | −1.175 | −0.46 | 0.85
F | (−0.775, −0.408) | 0 | 1 | −0.775 | −0.408 | 0.9
G | (1.391, 0) | 0 | 1 | 1.391 | 0 | 0.825
G | (1.19, 0) | 0 | 1 | 1.19 | 0 | 0.85
G | (0.795, 0) | 0 | 1 | 0.795 | 0 | 0.9
H | (−1.391, 0) | 0 | 1 | −1.391 | 0 | 0.825
H | (−1.19, 0) | 0 | 1 | −1.19 | 0 | 0.85
H | (−0.795, 0) | 0 | 1 | −0.795 | 0 | 0.9
Figure A1. PDF curves of the ES distribution for parameter values presented in Table A2.

Appendix A.3. Pearson Distribution

Let
a = (2γ̄2 − 3γ1^2)/(10γ̄2 − 5γ1^2 + 12),  b = γ1(γ̄2 + 6)/(10γ̄2 − 5γ1^2 + 12),  c = (4γ̄2 − 3γ1^2 + 12)/(10γ̄2 − 5γ1^2 + 12),  Δ = b^2 − 4ac,
then the PDF of the Pearson (P) distribution is given by [64]:
f P x ; γ 1 , , γ ¯ 2 = e x p 2 a b b a 2 a x + b C 1 2 a x + b 1 / a Δ = 0 e x p b 2 a b a 4 a c b 2 t a n 1 2 a x + b 4 a c b 2 C 2 a x 2 + b x + c 1 / ( 2 a ) Δ < 0 2 a x + b b 2 4 a c 2 a x + b + b 2 4 a c b 2 a b 2 a b 2 4 a c C 3 a x 2 + b x + c 1 / ( 2 a ) Δ > 0
where x ∈ R, γ1 ∈ R, γ̄2 ≥ γ1^2 − 2, and C_1, C_2, C_3 are normalizing constants defined as follows:
C 1 = e x p 2 a b b a 2 a x + b 2 a x + b 1 / a d x , C 2 = e x p b 2 a b a 4 a c b 2 t a n 1 2 a x + b 4 a c b 2 a x 2 + b x + c 1 / ( 2 a ) d x ,
C 3 = 2 a x + b Δ 2 a x + b + Δ b 2 a b 2 a Δ C 8 a x 2 + b x + c 1 / ( 2 a ) d x .
A special case of the P distribution is the normal N(0, 1) for γ1 = γ̄2 = 0.
Table A3. Vectors of the P parameter θ, mean μ_a, standard deviation σ_a, skewness γ1, excess kurtosis γ̄2, and similarity measure M. Groups 0, A–H.
Group | θ = (γ1, γ̄2) | μ_a | σ_a | γ1 | γ̄2 | M(θ; 0, 1)
0 | (0, 0) | 0 | 1 | 0 | 0 | 1
A | (2.04, 4.1) | 0 | 1 | 2.04 | 4.1 | 0.5
A | (1.62, 3.845) | 0 | 1 | 1.62 | 3.845 | 0.75
A | (0.9, 2) | 0 | 1 | 0.9 | 2 | 0.9
B | (−2.04, 4.1) | 0 | 1 | −2.04 | 4.1 | 0.5
B | (−1.62, 3.845) | 0 | 1 | −1.62 | 3.845 | 0.75
B | (−0.9, 2) | 0 | 1 | −0.9 | 2 | 0.9
C | (0, 11.2) | 0 | 1 | 0 | 11.2 | 0.9
C | (0, 3.65) | 0 | 1 | 0 | 3.65 | 0.925
C | (0, 1.521) | 0 | 1 | 0 | 1.521 | 0.95
D | (0, −1.695) | 0 | 1 | 0 | −1.695 | 0.5
D | (0, −1.315) | 0 | 1 | 0 | −1.315 | 0.75
D | (0, −0.89) | 0 | 1 | 0 | −0.89 | 0.9
E | (0.985, −0.5) | 0 | 1 | 0.985 | −0.5 | 0.5
E | (0.715, −0.475) | 0 | 1 | 0.715 | −0.475 | 0.75
E | (0.515, −0.2) | 0 | 1 | 0.515 | −0.2 | 0.9
F | (−0.985, −0.5) | 0 | 1 | −0.985 | −0.5 | 0.5
F | (−0.715, −0.475) | 0 | 1 | −0.715 | −0.475 | 0.75
F | (−0.515, −0.2) | 0 | 1 | −0.515 | −0.2 | 0.9
G | (1.164, 0) | 0 | 1 | 1.164 | 0 | 0.5
G | (0.879, 0) | 0 | 1 | 0.879 | 0 | 0.75
G | (0.578, 0) | 0 | 1 | 0.578 | 0 | 0.9
H | (−1.164, 0) | 0 | 1 | −1.164 | 0 | 0.5
H | (−0.879, 0) | 0 | 1 | −0.879 | 0 | 0.75
H | (−0.578, 0) | 0 | 1 | −0.578 | 0 | 0.9
Figure A2. PDF curves of the P distribution for parameter values presented in Table A3.

Appendix A.4. Normal Mixture Distribution

The PDF of the normal mixture (NM) distribution is given by the following:
f_NM(x; θ) = ω φ(x; μ1, σ1) + (1 − ω) φ(x; μ2, σ2),  x ∈ R,
where θ = (μ1, σ1, μ2, σ2, ω) and μ1, μ2 ∈ R, σ1, σ2 > 0, ω ∈ [0, 1].
Special cases of the NM distribution are as follows:
  • normal N(μ1, σ1) for ω = 1 and N(μ2, σ2) for ω = 0,
  • location contaminated normal (LCN): f_LCN(x; μ1, ω) = f_NM(x; μ1, 1, 0, 1, ω),
  • scale contaminated normal (SCN): f_SCN(x; σ1, ω) = f_NM(x; 0, σ1, 0, 1, ω).
Table A4. Vectors of the NM parameter θ, mean μ_a, standard deviation σ_a, skewness γ1, excess kurtosis γ̄2, and similarity measure M. Groups 0, A–H.
Group θ = μ 1 , σ 1 , μ 2 , σ 2 , ω μ a σ a γ 1 γ ¯ 2 M θ ; μ , σ
0 μ 1 , σ 1 , μ 2 , σ 2 , 1 0100 M θ ; μ 1 , σ 1 = 1
μ 1 , σ 1 , μ 2 , σ 2 , 0 0100 M θ ; μ 2 , σ 2 = 1
(0.572, 2.472, 5.614, 3.454, 0.787)1.6463.4080.6850.755 M ( θ ; 0 , 1 ) = 0.5
A(−0.215, 1.254, 1.979, 1.99, 0.639)0.5771.8830.6450.502 M ( θ ; 0 , 1 ) = 0.75
(0.497, 1.376, −0.268, 0.884, 0.612)0.21.2650.2870.249 M ( θ ; 0 , 1 ) = 0.9
(0.502, 2.019, 1.708, 0.953, 0.36)1.2741.544−0.7481.502 M ( θ ; 0 , 1 ) = 0.5
B(0.06, 1.437, 1.004, 0.609, 0.634)0.4061.285−0.50.499 M ( θ ; 0 , 1 ) = 0.75
(0.709, 0.368, −0.072, 1.115, 0.193)0.0791.06−0.3010.15 M ( θ ; 0 , 1 ) = 0.9
(0.519, 6.599, 0.519, 1.058, 0.665)0.5195.41601.398 M ( θ ; 0 , 1 ) = 0.5
C(0.137, 0.581, 0.137, 2.391, 0.294)0.1372.03401.054 M ( θ ; 0 , 1 ) = 0.75
(0.1, 0.988, 0.1, 1.543, 0.532)0.11.27800.554 M ( θ ; 0 , 1 ) = 0.9
(−0.511, 1.353, 4.293, 1.021, 0.551)1.6452.6810−1.28 M ( θ ; 0 , 1 ) = 0.5
D(2.707, 0.013, 0.017, 1.125, 0.238)0.6571.5090−1.001 M ( θ ; 0 , 1 ) = 0.75
(1.243, 0.621, −0.39, 0.811, 0.347)0.1111.090−0.63 M ( θ ; 0 , 1 ) = 0.9
(−0.475, 2.22, 5.318, 2.427, 0.721)1.1413.4570.5−0.204 M ( θ ; 0 , 1 ) = 0.5
E(−0.019, 1.369, 2.979, 1.15, 0.829)0.4941.7480.339−0.1 M ( θ ; 0 , 1 ) = 0.75
(2.635, 0.35, −0.015, 1.166, 0.038)0.0861.2530.137−0.075 M ( θ ; 0 , 1 ) = 0.9
(−0.692, 0.705, 2.1, 0.679, 0.324)1.1951.476−0.542−0.852 M ( θ ; 0 , 1 ) = 0.5
F(−0.055, 1.277, 1.781, 0.443, 0.775)0.3581.377−0.3−0.5 M ( θ ; 0 , 1 ) = 0.75
(−0.09, 1.08, −1.581, 0.92, 0.9)−0.2391.155−0.071−0.042 M ( θ ; 0 , 1 ) = 0.9
(2.686, 3.099, −0.964, 2.217, 0.471)0.7553.2320.40 M ( θ ; 0 , 1 ) = 0.5
G(−0.56, 1.465, 1.411, 1.45, 0.8)−0.1661.6610.1510 M ( θ ; 0 , 1 ) = 0.75
(−0.286, 1.114, 0.984, 1.105, 0.801)−0.0331.2220.1010 M ( θ ; 0 , 1 ) = 0.9
(2.425, 1.101, 0.272, 1.693, 0.526)1.4041.775−0.4990 M ( θ ; 0 , 1 ) = 0.5
H(0.864, 1.125, −1.339, 1.241, 0.735)0.281.511−0.3860 M ( θ ; 0 , 1 ) = 0.75
(0.429, 1.078, −0.364, 1.228, 0.434)−0.021.23−0.10 M ( θ ; 0 , 1 ) = 0.9
Figure A3. PDF curves of the NM distribution for parameter values presented in Table A4.

Appendix A.5. Normal Distribution with Plasticizing Component

The PDF of the normal distribution with plasticizing component (NDPC) is given by [65]
f_NDPC(x; θ) = ω φ(x; μ1, σ1) + (1 − ω) (c2/σ2) |(x − μ2)/σ2|^(c2 − 1) φ(|(x − μ2)/σ2|^(c2); 0, 1),  x ∈ R,
where θ = (μ1, σ1, μ2, σ2, c2, ω) and μ1, μ2 ∈ R, σ1, σ2 > 0, c2 ≥ 1, ω ∈ [0, 1].
Special cases of the NDPC distribution are as follows: N(μ1, σ1) for ω = 1; N(μ2, σ2) for c2 = 1, ω = 0; and the plasticizing component for ω = 0.
Table A5. Vectors of the NDPC parameter θ, mean μ_a, standard deviation σ_a, skewness γ1, excess kurtosis γ̄2, and similarity measure M. Groups 0, A–H.
Group θ = μ 1 , σ 1 , μ 2 , σ 2 , c 2 , ω μ a σ a γ 1 γ ¯ 2 M θ ; μ , σ
0( μ 1 , σ 1 , μ 2 , σ 2 , c 2 , 1 )0100 M θ ; μ 1 , σ 1 = 1
( μ 1 , σ 1 , μ 2 , σ 2 , 1 , 0 )0100 M θ ; μ 2 , σ 2 = 1
(1.194, 0.601, 2.186, 2.592, 2,0.666)1.5261.51.0021.001 M ( θ ; 0 , 1 ) = 0.5
A(0.265, 0.415, 0.996, 1.541, 1.16, 0.313)0.7671.2880.4260.152 M ( θ ; 0 , 1 ) = 0.75
(0.173, 0.358, 0.289, 1.268, 1.132, 0.198)0.2661.1040.0560.071 M ( θ ; 0 , 1 ) = 0.9
(−1.321, 1.842, 0.741, 0.459, 2.56, 0.287)0.151.4−1.7643.3 M ( θ ; 0 , 1 ) = 0.5
B(0.539, 0.632, −1.078, 2.061, 1.174, 0.741)0.121.34−1.4992.986 M ( θ ; 0 , 1 ) = 0.75
(−0.966, 1.824, 0.259, 0.889, 1.1, 0.26)−0.0591.305−0.8991.999 M ( θ ; 0 , 1 ) = 0.9
(1.308, 0.656, 1.308, 3.261, 2, 0.613)1.3081.88400.504 M ( θ ; 0 , 1 ) = 0.5
C(0.571, 1.023, 0.571, 1.962, 1.15, 0.505)0.5711.50800.325 M ( θ ; 0 , 1 ) = 0.75
(−0.097, 1.332, −0.097, 1.058, 1.1, 0.614)−0.0971.22300.101 M ( θ ; 0 , 1 ) = 0.9
(−0.692, 2.203, −0.692, 2.544, 1.759, 0.25)−0.6922.2650−1 M ( θ ; 0 , 1 ) = 0.5
D(0.323, 1.312, 0.605, 1.335, 1.2, 0.01)0.6021.2660−0.587 M ( θ ; 0 , 1 ) = 0.75
(0.179, 0.494, 0.179, 1.163, 1.426, 0.443)0.1790.8620−0.202 M ( θ ; 0 , 1 ) = 0.9
(0.675, 0.284, 2.122, 1.968, 2.104, 0.374)1.5811.5650.749−0.849 M ( θ ; 0 , 1 ) = 0.5
E(0.423, 1.032, 1.058, 2.077, 1.815, 0.494)0.7441.5440.311−0.667 M ( θ ; 0 , 1 ) = 0.75
(−0.134, 0.993, 0.671, 1.211, 1.479, 0.583)0.2021.1150.115−0.4 M ( θ ; 0 , 1 ) = 0.9
(1.609, 0.59, 0.322, 2.194, 1.609, 0.309)0.721.784−0.491−0.728 M ( θ ; 0 , 1 ) = 0.5
F(0.617, 0.737, 0.129, 1.752, 1.465, 0.332)0.2911.395−0.239−0.526 M ( θ ; 0 , 1 ) = 0.75
(−0.046, 1.156, 1.261, 0.799, 1.87, 0.876)0.1161.191−0.1−0.2 M ( θ ; 0 , 1 ) = 0.9
(1.88, 2.736, −0.848, 1.122, 6.437, 0.679)1.0052.6560.5240 M ( θ ; 0 , 1 ) = 0.5
G(2.419, 1.56, 0.237, 1.384, 1.476, 0.074)0.3981.4090.350 M ( θ ; 0 , 1 ) = 0.75
(0.055, 0.702, 0.474, 1.586, 1.328, 0.473)0.2761.1910.310 M ( θ ; 0 , 1 ) = 0.9
(1.642, 1.247, 0.202, 2.681, 1.428, 0.554)12.018−0.5940 M ( θ ; 0 , 1 ) = 0.5
H(−1.246, 1.326, 0.858, 1.103, 1.242, 0.313)0.21.496−0.50 M ( θ ; 0 , 1 ) = 0.75
(−0.115, 1.286, 0.306, 1.091, 1.093, 0.465)0.111.189−0.10 M ( θ ; 0 , 1 ) = 0.9
Figure A4. PDF curves of the NDPC for parameter values presented in Table A5.

Appendix A.6. Plasticizing Component Mixture Distribution

The PDF of the plasticizing component mixture (PCM) distribution is given by [65]:
f_PCM(x; θ) = ω f_PC(x; μ1, σ1, c1) + (1 − ω) f_PC(x; μ2, σ2, c2),  x ∈ R,
where f_PC(x; μ, σ, c) = (c/σ) |(x − μ)/σ|^(c − 1) φ(|(x − μ)/σ|^c; 0, 1), x ∈ R, and θ = (μ1, σ1, c1, μ2, σ2, c2, ω),
μ1, μ2 ∈ R, σ1, σ2 > 0, c1, c2 ≥ 1, ω ∈ [0, 1].
Special cases of the PCM distribution are as follows: N(μ1, σ1) for c1 = 1, ω = 1; N(μ2, σ2) for c2 = 1, ω = 0; and the plasticizing components PC(μ1, σ1, c1) and PC(μ2, σ2, c2) for ω = 1 and ω = 0, respectively.
Table A6. Vectors of the PCM parameter θ, mean μ_a, standard deviation σ_a, skewness γ1, excess kurtosis γ̄2, and similarity measure M. Groups 0, A–H.
Group θ = μ 1 , σ 1 , c 1 , μ 2 , σ 2 , c 2 , ω μ a σ a γ 1 γ ¯ 2 M θ ; μ , σ
0 ( μ 1 , σ 1 , 1 , μ 2 , σ 2 , c 2 , 1 )0100 M θ ; μ 1 , σ 1 = 1
( μ 1 , σ 1 , c 1 , μ 2 , σ 2 , 1 , 0 )0100 M θ ; μ 2 , σ 2 = 1
(1.415, 1.684, 2.194, 11.252, 5.474, 2.331, 0.9)2.3993.6222.6477.663 M ( θ ; 0 , 1 ) = 0.5
A(0.444, 0.899, 1.602, 1.653, 2.506, 1.876, 0.64)0.8791.6040.9130.412 M ( θ ; 0 , 1 ) = 0.75
(−0.076, 1.056, 1.1, 0.701, 1.646, 1.095, 0.71)0.1491.2680.3740.374 M ( θ ; 0 , 1 ) = 0.9
(1.366, 0.572, 1.11, 0.502, 1.669, 1.253, 0.658)1.0711.099−0.9781.565 M ( θ ; 0 , 1 ) = 0.5
B(0.67, 0.425, 1.576, -0.323, 1.696, 1.05, 0.349)0.0241.444−0.5690.606 M ( θ ; 0 , 1 ) = 0.75
(−0.204, 2.209, 1.205, 0.133, 1.139, 1.05, 0.076)0.1071.224−0.1220.457 M ( θ ; 0 , 1 ) = 0.9
(1.597, 2.518, 1.263, 1.596, 0.856, 1.285, 0.526)1.5971.79700.601 M ( θ ; 0 , 1 ) = 0.5
C(0.012, 0.274, 1.256, 0.012, 2.046, 1.01, 0.183)0.0121.84600.598 M ( θ ; 0 , 1 ) = 0.75
(0.127, 1.089, 1.01, 0.127, 0.183, 1.01, 0.863)0.1271.0100.401 M ( θ ; 0 , 1 ) = 0.9
(1.631, 0.893, 1.05, 1.632, 2.104, 1.554, 0.498)1.6321.4880−0.268 M ( θ ; 0 , 1 ) = 0.5
D(0.639, 1.576, 1.167, 0.64, 1.085, 1.199, 0.163)0.641.120−0.251 M ( θ ; 0 , 1 ) = 0.75
(0.666, 1.123, 4.041, 0.233, 1.069, 1.05, 0.01)0.2371.0520−0.198 M ( θ ; 0 , 1 ) = 0.9
(1.472, 0.782, 1.11, 0.236, 0.291, 3.203, 0.692)1.0910.8610.38−0.8 M ( θ ; 0 , 1 ) = 0.5
E(−0.196, 0.341, 1.064, 0.613, 0.758, 1.204, 0.153)0.4890.7340.201−0.7 M ( θ ; 0 , 1 ) = 0.75
(0.722, 0.703, 1.304, −0.57, 0.598, 1.05, 0.455)0.0180.8930.179−0.617 M ( θ ; 0 , 1 ) = 0.9
(0.261, 1.419, 1.909, 3.099, 0.744, 1.567, 0.57)1.4811.757−0.3−1.107 M ( θ ; 0 , 1 ) = 0.5
F(0.037, 1.295, 1.076, 1.316, 1.171, 1.654, 0.485)0.6961.326−0.204−0.4 M ( θ ; 0 , 1 ) = 0.75
(0.201, 0.121, 1.573, 0.184, 1.177, 1.161, 0.066)0.1851.087−0.003−0.331 M ( θ ; 0 , 1 ) = 0.9
(1.088, 0.894, 3.782, 1.969, 2.71, 1.792, 0.55)1.4841.7930.60 M ( θ ; 0 , 1 ) = 0.5
G(1.515, 2.553, 3.55, 0.07, 1.328, 1.619, 0.07)0.1711.3590.5010 M ( θ ; 0 , 1 ) = 0.75
(−0.034, 1.072, 1.159, 1.146, 1.51, 1.301, 0.756)0.2541.2380.4010 M ( θ ; 0 , 1 ) = 0.9
(0.816, 1.867, 1.24, 1.787, 1.272, 1.05, 0.278)1.5171.475−0.3020 M ( θ ; 0 , 1 ) = 0.5
H(−0.364, 1.889, 1.057, 0.29, 1.413, 1.05, 0.527)−0.0551.682−0.1540 M ( θ ; 0 , 1 ) = 0.75
(0.286, 0.405, 1.27, -0.263, 1.261, 1.05, 0.112)−0.2021.188−0.1280 M ( θ ; 0 , 1 ) = 0.9
Figure A5. PDF curves of the PCM distribution for parameter values presented in Table A6.

Appendix A.7. Laplace Mixture Distribution

The PDF of the Laplace mixture (LM) distribution is given by the following:
f_LM(x; θ) = ω (1/(2σ1)) exp(−|x − μ1|/σ1) + (1 − ω) (1/(2σ2)) exp(−|x − μ2|/σ2),
where θ = (μ1, σ1, μ2, σ2, ω) and x ∈ R, μ1, μ2 ∈ R, σ1, σ2 > 0, ω ∈ [0, 1].
Special cases of the LM distribution are the Laplace (L) distributions L(μ1, σ1) for ω = 1 and L(μ2, σ2) for ω = 0.
Table A7. Vectors of the LM parameter θ, mean μ_a, standard deviation σ_a, skewness γ1, excess kurtosis γ̄2, and similarity measure M. Groups A–H.
Group θ = μ 1 , σ 1 , μ 2 , σ 2 , ω μ a σ a γ 1 γ ¯ 2 M θ ; μ , σ
(4.521, 7.174, −0.757, 1.959, 0.313)0.8956.5941.1729.074 M ( θ ; 0 , 1 ) = 0.5
A(1.169, 1.491, −0.019, 0.849, 0.56)0.6461.8630.43.454 M ( θ ; 0 , 1 ) = 0.75
(0.452, 0.818, −0.947, 0.482, 0.762)0.1191.2190.2241.644 M ( θ ; 0 , 1 ) = 0.9
(−0.358, 0.405, −2.549, 2.309, 0.234)−2.0363.018−0.4073.5 M ( θ ; 0 , 1 ) = 0.5
B(0.94, 0.335, −0.571, 1.585, 0.122)−0.3872.164−0.2023.136 M ( θ ; 0 , 1 ) = 0.75
(−0.736, 0.911, 0.04, 0.878, 0.132)−0.0621.275−0.0342.773 M ( θ ; 0 , 1 ) = 0.9
(1.445, 1.571, −2.516, 1.87, 1)1.4452.22203 M ( θ ; 0 , 1 ) = 0.5
C(0.246, 0.844, −0.59, 0.905, 0.043)−0.5541.28702.894 M ( θ ; 0 , 1 ) = 0.75
(0.319, 0.86, −0.21, 0.874, 0.222)−0.0921.25102.815 M ( θ ; 0 , 1 ) = 0.9
(−6.131, 0.945, −0.386, 1.54, 0.366)−2.4873.3640−0.648 M ( θ ; 0 , 1 ) = 0.5
D(−4.898, 0.343, −0.415, 1.234, 0.29)−1.7162.5230−0.597 M ( θ ; 0 , 1 ) = 0.6
(2.115, 0.07, −-0.512, 0.822, 0.208)0.0341.4860−0.005 M ( θ ; 0 , 1 ) = 0.7
(7.186, 1.509, -0.869, 0.58, 0.309)1.623.9661.005−0.403 M ( θ ; 0 , 1 ) = 0.5
E(−1.711, 0.177, 0.773, 0.823, 0.421)−0.2741.5220.5−0.32 M ( θ ; 0 , 1 ) = 0.6
(1.023, 0.358, −0.118, 0.348, 0.428)0.370.7530.15−0.014 M ( θ ; 0 , 1 ) = 0.75
(−3.863, 0.348, 1.522, 1.359, 0.248)0.1842.872−0.18−0.556 M ( θ ; 0 , 1 ) = 0.4
F(0.006, 0.065, 0.703, 0.189, 0.227)0.5450.378−0.17−0.286 M ( θ ; 0 , 1 ) = 0.45
(−0.466, 0.161, 0.08, 0.159, 0.48)−0.1820.354−0.05−0.2 M ( θ ; 0 , 1 ) = 0.5
(2.309, 1.022, −1.1, 0.418, 0.391)0.2331.9490.850 M ( θ ; 0 , 1 ) = 0.5
G(−0.208, 1.335, 7.917, 1.899, 0.712)2.1324.2610.8390 M ( θ ; 0 , 1 ) = 0.6
(0.679, 0.702, −1.434, 0.642, 0.532)−0.311.4220.0360 M ( θ ; 0 , 1 ) = 0.75
(−9.234, 0.124, 1.581, 2.321, 0.161)−0.1594.983−0.5560 M ( θ ; 0 , 1 ) = 0.4
H(−1.322, 0.83, 2.398, 1.181, 0.291)1.3172.287−0.10 M ( θ ; 0 , 1 ) = 0.45
(0.81, 0.479, 2.254, 0.229, 0.736)1.1910.878−0.0320 M ( θ ; 0 , 1 ) = 0.5
Figure A6. PDF curves of the LM distribution for parameter values presented in Table A7.

Appendix A.8. Johnson SB Distribution

The PDF of the Johnson SB (SB) distribution is given by [66]:
f_SB(x; a, b, c, d) = (b d)/((x − c)(c + d − x)) φ(a + b ln((x − c)/(c + d − x)); 0, 1),  x ∈ (c, c + d),
where a, c ∈ R and b, d > 0.
Table A8. Vectors of the SB parameter θ, mean μ_a, standard deviation σ_a, skewness γ1, excess kurtosis γ̄2, and similarity measure M. Groups A–B, D–H.
Group θ = (a, b, c, d) μ_a σ_a γ1 γ̄2 M(θ; μ, σ)
(1.972, 1.819, −0.45, 4)0.6130.4110.6490.3 M ( θ ; 0 , 1 ) = 0.5
A(2.482, 2.23, −1.665, 7.423)0.2370.6180.5840.298 M ( θ ; 0 , 1 ) = 0.75
(3.092, 2.702, −2.908, 12.271)0.1320.8320.5180.267 M ( θ ; 0 , 1 ) = 0.9
(−4.086, 2.097, −5.424, 6.348)0.0740.351−11.488 M ( θ ; 0 , 1 ) = 0.5
B(−2.614, 2.258, −5.722, 7.58)−0.0210.611−0.60.341 M ( θ ; 0 , 1 ) = 0.75
(−1.992, 2.198, −6.446, 8.974)−0.1290.823−0.4850.099 M ( θ ; 0 , 1 ) = 0.9
(0, 3.149, −2.116, 4.115)−0.0590.3190−0.176 M ( θ ; 0 , 1 ) = 0.5
D(0, 3.958, −4.707, 9.414)00.5850−0.117 M ( θ ; 0 , 1 ) = 0.75
(0, 4.304, −8.154, 15.856)−0.2270.9090−0.1 M ( θ ; 0 , 1 ) = 0.9
(0.664, 0.45, −0.027, 4.679)1.3771.380.856−0.558 M ( θ ; 0 , 1 ) = 0.5
E(0.834, 0.754, −0.727, 3.258)0.260.7260.788−0.25 M ( θ ; 0 , 1 ) = 0.75
(0.867, 2.297, −4.627, 10.828)−0.181.0950.2−0.227 M ( θ ; 0 , 1 ) = 0.9
(−0.716, 0.448, −0.622, 1.618)0.5340.47−0.931−0.4 M ( θ ; 0 , 1 ) = 0.5
F(−1.044, 1.22, −4.394, 5.493)−0.6650.88−0.603−0.145 M ( θ ; 0 , 1 ) = 0.75
(−1.202, 1.515, −4.252, 6.217)−0.0650.837−0.522−0.1 M ( θ ; 0 , 1 ) = 0.9
(1.64, 2.044, −3.761, 8.045)−1.1990.8190.4520 M ( θ ; 0 , 1 ) = 0.5
G(1.825, 2.345, −1.984, 6.623)0.1450.5960.4010 M ( θ ; 0 , 1 ) = 0.75
(2.952, 4.082, −-5.487, 16.27)−0.1350.870.240 M ( θ ; 0 , 1 ) = 0.9
(−1.357; 1.565; −1.601; 3.202)0.6050.41−0.5630 M ( θ ; 0 , 1 ) = 0.5
H(−2.046; 2.695; −5.081; 7.468)−0.0320.592−0.3540 M ( θ ; 0 , 1 ) = 0.75
(−2.068; 2.73; −7.098; 10.398)−0.070.814−0.350 M ( θ ; 0 , 1 ) = 0.9
Figure A7. PDF curves of the SB distribution for parameter values presented in Table A8.

Appendix A.9. Johnson SU Distribution

The PDF of the Johnson SU (SU) distribution is given by [66]:
f_SU(x; a, b, c, d) = b/√((x − c)^2 + d^2) φ(a + b sinh^(−1)((x − c)/d); 0, 1),  x ∈ R,
where b > 0, a, c ∈ R, d ≠ 0.
Table A9. Vectors of the SU parameter θ, mean μ_a, standard deviation σ_a, skewness γ1, excess kurtosis γ̄2, and similarity measure M. Groups A–C, E–H.
Group θ = a , b , c , d μ a σ a γ 1 γ ¯ 2 M θ ; μ , σ
(−1.246, 2.021, 0.257, 0.731)0.8000.5011.0142.911 M ( θ ; 0 , 1 ) = 0.5
A(−0.569, 2.063, −1.301, 2.625)−0.4771.4990.4931.720 M ( θ ; 0 , 1 ) = 0.75
(−0.11, 2.762, −0.069, 3.319)0.0721.2860.0490.648 M ( θ ; 0 , 1 ) = 0.9
(2.502, 2.889, 2.029, 3.828)−1.9492−0.81.455 M ( θ ; 0 , 1 ) = 0.5
B(2.564, 3.308, 2.137, 1.902)0.4350.8−0.6360.926 M ( θ ; 0 , 1 ) = 0.75
(2.296, 5.558,2.36,6.031)−0.2461.2−0.2180.2 M ( θ ; 0 , 1 ) = 0.9
(0, 1.821, −1.617, 3.096)−1.6171.99202 M ( θ ; 0 , 1 ) = 0.5
C(0, 1.829, −0.205, 2.967)−0.2051.89701.97 M ( θ ; 0 , 1 ) = 0.75
(0, 3.372, 0.204, 2.935)0.2040.9100.403 M ( θ ; 0 , 1 ) = 0.9
(−22.518, 45.262, −11.095, 19.766)−0.8480.4920.031−0.007 M ( θ ; 0 , 1 ) = 0.5
E(1.29, 40.539, −0.294, −17.564)0.1550.4840.491−0.784 M ( θ ; 0 , 1 ) = 0.6
(0.244, 21.027, −0.134, −12.383)0.010.590.002−0.007 M ( θ ; 0 , 1 ) = 0.75
(0.861, 18.997, −1.158, 14.674)−1.8240.774−0.011−0.005 M ( θ ; 0 , 1 ) = 0.3
F(0.756, 3.676, 0.166, 0.819)−0.0100.236−0.359−0.450 M ( θ ; 0 , 1 ) = 0.4
(13.843, 36.36, 4.174, 11.623)−0.3600.343−0.030−0.077 M ( θ ; 0 , 1 ) = 0.5
(−9.342, 11.021, −1.575, 1.981)0.320.250.2070 M ( θ ; 0 , 1 ) = 0.4
G(−23.944, 18.041, −8.486, 5.409)1.0090.6060.150 M ( θ ; 0 , 1 ) = 0.5
(−9.349, 85.071, −3.763, 35.754)0.1740.4230.0040 M ( θ ; 0 , 1 ) = 0.6
(0.738, 49.723, 3.6, 73.029)2.5161.469−0.0010 M ( θ ; 0 , 1 ) = 0.3
H(2.547, 7.276, 0.835, 1.65)0.240.243−0.1410 M ( θ ; 0 , 1 ) = 0.4
(4.211, 10.507, 1.959, 3.55)0.4910.367−0.110 M ( θ ; 0 , 1 ) = 0.5
Figure A8. PDF curves of the SU distribution for parameter values presented in Table A9.

Appendix A.10. Extended Easily Changeable Kurtosis Distribution

The PDF of the extended easily changeable kurtosis (EECK) distribution is given by [67]:
f_EECK(x; a, b) = (1 − |x|^b)^a Γ(a + 1/b + 1) / (2 Γ(a + 1) Γ(1/b + 1)),  x ∈ [−1, 1] for a ≥ 0 and x ∈ (−1, 1) for −1 < a < 0,
where a > −1, b > 0. Special cases of the EECK distribution are the uniform U(−1, 1) = EECK(0, b), the triangular distribution for a = b = 1, and the easily changeable kurtosis distribution for b = 2 [68].
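A quick numerical check of the uniform special case, using the dEECK() function listed in Appendix A.12 (our own illustration):

# Check of the special case EECK(0, b) = U(-1, 1): with a = 0 the density
# reduces to 1/2 on [-1, 1] (dEECK() and H() as defined in Appendix A.12)
dEECK(c(-0.5, 0, 0.7), 0, 2)   # all equal 0.5
dunif(c(-0.5, 0, 0.7), -1, 1)  # uniform density for comparison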
Table A10. Vectors of the EECK parameter θ, mean μ_a, standard deviation σ_a, skewness γ1, excess kurtosis γ̄2, and similarity measure M. Groups C–D.
Group | θ = (a, b) | μ_a | σ_a | γ1 | γ̄2 | M(θ; 0, 0.096)
C | (46.018, 1.043) | 0 | 0.032 | 0 | 2.256 | 0.5
C | (40.914, 1.366) | 0 | 0.06 | 0 | 0.921 | 0.75
C | (10.676, 1.184) | 0 | 0.128 | 0 | 0.912 | 0.9
D | (60.495, 4.846) | 0 | 0.244 | 0 | −0.921 | 0.5
D | (48.76, 2.738) | 0 | 0.15 | 0 | −0.51 | 0.75
D | (48.76, 2.211) | 0 | 0.115 | 0 | −0.238 | 0.9
Figure A9. PDF curves of the EECK distribution for parameter values presented in Table A10.

Appendix A.11. Exponential Power Distribution

The PDF of the exponential power (EP) distribution is given by [69,70]:
f_EP(x; a, b, c) = 1/(2 b c^(1/c) Γ(1 + 1/c)) exp(−(1/c) |(x − a)/b|^c),  x ∈ R, a ∈ R, b, c > 0.
A special case of the EP distribution is the normal N(a, b) for c = 2.
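As a quick numerical check of this special case (our own illustration, assuming the LaplacesDemon parameterization of the power exponential distribution used by rEP in Appendix A.12 matches the PDF above):

# Check of the special case N(a, b) for c = 2
library(LaplacesDemon)
dpe(0.7, mu = 1, sigma = 2, kappa = 2)  # EP density with c = 2
dnorm(0.7, mean = 1, sd = 2)            # normal density for comparison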
Table A11. Vectors of the EP parameter θ, mean μ_a, standard deviation σ_a, skewness γ1, excess kurtosis γ̄2, and similarity measure M. Groups C–D.
Group | θ = (a, b, c) | μ_a | σ_a | γ1 | γ̄2 | M(θ; 0, 1)
C | (−0.796, 2.985, 1.609) | −0.796 | 3.257 | 0 | 0.536 | 0.5
C | (0.611, 1.385, 1.695) | 0.611 | 1.478 | 0 | 0.386 | 0.75
C | (0.251, 1.033, 1.785) | 0.251 | 1.079 | 0 | 0.253 | 0.9
D | (−0.611, 3.71, 28.792) | −0.611 | 2.368 | 0 | −1.188 | 0.5
D | (−0.673, 1.198, 3.828) | −0.673 | 0.994 | 0 | −0.783 | 0.75
D | (−0.05, 1.272, 3.117) | −0.05 | 1.11 | 0 | −0.619 | 0.9
Figure A10. PDF curves of the EP distribution for parameter values presented in Table A11.

Appendix A.12. More Important R Codes

# Pseudo-random numbers generators (PRNGs)
# 1) Edgeworth series (ES)
# PDF as auxiliary function
 
dES=function(x,a,b) {
  if(b>=a*a-2)  return(dnorm(x,0,1)*(1+a*(x^3-3*x)/6+b*(x^4-6*x^2+3)/24))
  else return("error")
}
# PRNG
rES=function(n,a,b){
  if(b>=a*a-2){
    wyn=numeric(n)
    e=optimize(function(x) dES(x,a,b),interval=c(-5,5), maximum=1)$maximum
    d=dES(e,a,b)
    for (i in 1:n){
      R1 = runif(1,-5,5); R2 = runif(1,0,d);  w = dES(R1,a,b)
      while(w<R2){
        R1 = runif(1,-5,5); R2 = runif(1,0,d); w = dES(R1,a,b)
      }
      wyn[i]=R1
    }
    return(sort(wyn))
  }
  else return("error")
}
 
# 2) PRNG of Pearson (P)
library(PearsonDS)
rP = function(n, a, b) return(sort(rpearson(n,moments=c(0,1,a,b+3))))
 
# 3) PRNG of normal mixture (NM)
rNM = function(n, a1, b1, a2, b2, w) {
  x = ifelse(runif(n, 0, 1) < w, rnorm(n, a1, b1), rnorm(n, a2, b2))
  return(sort(x))
}
 
# 4) PRNG of normal distribution with plasticizing component (NDPC)
library(PSDistr)
rNDPC=function(n, a1, b1, a2, b2, c, w) {
  x=ifelse(runif(n, 0, 1)< w,rnorm(n, a1, b1),rpc(n,a2,b2,c))
  return(sort(x))
}
 
# 5) PRNG of plasticizing component mixture (PCM)
library(PSDistr)
rPCM = function(n, a1, b1, c1, a2, b2, c2, w) {
  x = ifelse(runif(n, 0, 1) < w, rpc(n, a1, b1, c1), rpc(n, a2, b2, c2))
  return(sort(x))
}
 
# 6) PRNG of Laplace mixture (LM)
library(LaplacesDemon)
rLM=function(n, a1, b1, a2, b2, w) {
  x=ifelse(runif(n, 0, 1)< w,rlaplace(n, a1, b1),rlaplace(n, a2, b2))
  return(sort(x))
}
 
# 7) PRNG of Johnson SB (SB)
library(ExtDist)
rSB=function(n, a, b, c, d) return(sort(rJohnsonSB(n,a,b,c,d)))
 
# 8) PRNG of Johnson SU (SU)
library(ExtDist)
rSU=function(n, a, b, c, d) return(sort(rJohnsonSU(n,a,b,c,d)))
 
# 9) extended easily changeable kurtosis (EECK)
# auxiliary functions
H=function(p,q) return(2*gamma(p+1)*gamma(1+1/q)/gamma(1+p+1/q))
dEECK = function(x,p,q) ifelse(abs(x)<=1,((1-abs(x)^q)^p)/H(p,q),0)
# PRNG of EECK
rEECK=function(n,p,q){
  wyn=numeric(n)
  e=optimize(function(x) dEECK(x,p,q),interval=c(-1,1), maximum=1)$maximum
  d=dEECK(0,p,q)
  for (i in 1:n){
    R1 = runif(1,-1,1); R2 = runif(1,0,d);  w = dEECK(R1,p,q)
    while(w<R2){
      R1 = runif(1,-1,1); R2 = runif(1,0,d); w = dEECK(R1,p,q)
    }
    wyn[i]=R1
  }
  return(sort(wyn))
}
 
# 10) exponential power (EP)
library(LaplacesDemon)
rEP =function(n,a,b,c) return(sort(rpe(n,a,b,c)))
# Parameterized Kolmogorov-Smirnov (PKS) test statistic with parameters a, b
PKS = function(x,a,b) {
  n=length(x)
  z = (x - mean(x)) / (sd(x))   # standardized (ordered) sample
  CDF = pnorm(z,0,1)            # theoretical CDF at the standardized points
  ad = max((seq(z) - a) / (n - a - b + 1) - CDF)      # EDF above the CDF
  ag = max(CDF - (seq(z) - a - 1) / (n - a - b + 1))  # CDF above the EDF
  return(max(ad, ag))
}
#critical values
 
n = 10 # sample size
alpha = 0.05 # significance level
rep1 = 10 ^ 6 # number of repeats
res = numeric(rep1) # statistic values
numer = (rep1 - alpha * rep1) # appropriate quantile
a=0; b=1 # parameters of the PKS statistic
 
# critical value (cv)
for (i in 1:rep1) {
  print(i)
  data = sort(rnorm(n, 0, 1))
  res[i]=PKS(data, a, b)
}
res=sort(res)
cv=res[numer] # cv
 
# power study for a given alternative
rep2 = 10 ^ 5 # number of repeats
pow = 0
for (i in 1:rep2){
  print(i)
  # generate sample from alternative distribution
  # data=sort(rnorm(n,0,1)) # test size
    data = sort(rNM(n, 0.572, 2.472, 5.614, 3.454, 0.787))
    if (PKS(data, a, b) > cv) pow = pow + 1
  }
power = pow / rep2
power

References

  1. Kolmogorov, A.N. Sulla Determinazione Empirica di una Legge di Distribuzione. G. Dell’Istituto Ital. Degli Attuari 1933, 4, 83–91.
  2. Smirnov, N.V. Tabled distribution of the Kolmogorov–Smirnov statistic (sample). Ann. Math. Stat. 1948, 19, 279–281.
  3. Cramér, H. On the Composition of Elementary Errors. Scand. Actuar. J. 1928, 1, 13–74.
  4. von Mises, R.E. Wahrscheinlichkeit, Statistik und Wahrheit; Julius Springer: Berlin, Germany, 1931.
  5. Lilliefors, H.W. On the Kolmogorov–Smirnov test for normality with mean and variance unknown. J. Am. Stat. Assoc. 1967, 62, 399–402.
  6. Kuiper, N.H. Tests concerning random points on a circle. Proc. K. Ned. Akad. Van Wet. Ser. A 1960, 63, 38–47.
  7. Watson, G.S. Goodness-of-fit tests on a circle (Part II). Biometrika 1962, 49, 57–63.
  8. Anderson, T.W.; Darling, D.A. Asymptotic Theory of Certain “Goodness-of-Fit” Criteria Based on Stochastic Processes. Ann. Math. Stat. 1952, 23, 193–212.
  9. Harter, H.L.; Khamis, H.J.; Lamb, R.E. Modified Kolmogorov–Smirnov Tests of Goodness of Fit. Commun. Stat. Simul. Comput. 1984, 13, 293–323.
  10. Khamis, H.J. The δ-Corrected Kolmogorov–Smirnov Test for Goodness of Fit. J. Stat. Plan. Inference 1990, 24, 317–335.
  11. Khamis, H.J. The δ-Corrected Kolmogorov–Smirnov Test with Estimated Parameters. J. Nonparametric Stat. 1992, 2, 17–27.
  12. Khamis, H.J. A Comparative Study of the δ-Corrected Kolmogorov–Smirnov Test. J. Appl. Stat. 1993, 20, 401–421.
  13. Bloom, G. Statistical Estimates and Transformed Beta Variables; Wiley: New York, NY, USA, 1958.
  14. Sulewski, P. Modified Lilliefors goodness-of-fit test for normality. Commun. Stat. Simul. Comput. 2020, 51, 1199–1219.
  15. Sulewski, P. Two component modified Lilliefors test for normality. Equilib. Q. J. Econ. Econ. Policy 2021, 16, 429–455.
  16. Sulewski, P.; Stoltmann, D. Modified Cramér–von Mises goodness-of-fit test for normality. Stat. Rev. 2023, 70, 1–36.
  17. Filliben, J.J. The Probability Plot Correlation Coefficient Test for Normality. Technometrics 1975, 17, 111–117.
  18. Sulewski, P. Two-piece power normal distribution. Commun. Stat. Theory Methods 2021, 50, 2619–2639.
  19. Malachov, A.N. A Cumulant Analysis of Random Non-Gaussian Processes and Their Transformations; Soviet Radio: Moscow, Russia, 1978.
  20. Esteban, M.D.; Castellanos, M.E.; Morales, D.; Vajda, I. Monte Carlo Comparison of Four Normality Tests Using Different Entropy Estimates. Commun. Stat. Simul. Comput. 2001, 30, 285–761.
  21. Torabi, H.; Montazeri, N.H.; Grane, A. A test of normality based on the empirical distribution function. SORT 2016, 40, 55–88.
  22. Gan, F.F.; Koehler, K.J. Goodness-of-Fit Tests Based on P–P Probability Plots. Technometrics 1990, 32, 289–303.
  23. Krauczi, E. A study of the quantile correlation test of normality. Test Off. J. Span. Soc. Stat. Oper. Res. 2009, 18, 156–165.
  24. Hernandez, H. Testing for Normality: What Is the Best Method. Forsch. Res. Rep. 2021, 6, 1–38.
  25. Shapiro, S.S.; Wilk, M.B. An analysis of variance test for normality. Biometrika 1965, 52, 591–611.
  26. Shapiro, S.S.; Francia, R.S. An approximate analysis of variance test for normality. J. Am. Stat. Assoc. 1972, 67, 215–216.
  27. Uyanto, S.S. An extensive comparison of 50 univariate goodness-of-fit tests for normality. Austrian J. Stat. 2022, 51, 45–97.
  28. Ma, Y.; Kitani, M.; Murakami, H. On modified Anderson–Darling test statistics with asymptotic properties. Commun. Stat. Theory Methods 2024, 53, 1420–1439. [Google Scholar] [CrossRef]
  29. Bonett, D.G.; Seier, E. A Test of Normality with High Uniform Power. Comput. Stat. Data Anal. 2002, 40, 435–445. [Google Scholar] [CrossRef]
  30. Afeez, B.M.; Maxwell, O.; Otekunrin, O.; Happiness, O. Selection and Validation of Comparative Study of Normality Test. Am. J. Math. Stat. 2018, 8, 190–201. [Google Scholar]
  31. Aliaga, A.M.; Martínez-González, E.; Cayón, L.; Argüeso, F.; Sanz, J.L.; Barreiro, R.B. Goodness-of-Fit Tests of Gaussianity: Constraints on the Cumulants of the MAXIMA Data. New Astron. Rev. 2003, 47, 821–826. [Google Scholar] [CrossRef]
  32. Marange, C.; Qin, Y. An empirical likelihood ratio based comparative study on tests for normality of residuals in linear models. Adv. Methodol. Stat. 2019, 16, 1–16. [Google Scholar] [CrossRef]
  33. Bontemps, C.; Meddahi, N. Testing Normality: A GMM Approach. J. Econom. 2005, 124, 149–186. [Google Scholar] [CrossRef]
  34. Sulewski, P. Modification of Anderson–Darling goodness-of-fit test for normality. Afinidad 2019, 76, 588. [Google Scholar]
  35. Luceno, A. Fitting the generalized Pareto distribution to data using maximum goodness-of-fit estimators. Comput. Stat. Data Anal. 2006, 51, 904–917. [Google Scholar] [CrossRef]
  36. Tavakoli, M.; Arghami, N.; Abbasnejad, M. A goodness-of-fit test for normality based on Balakrishnan–Sanghvi information. J. Iran. Stat. Soc. 2019, 18, 177–190. [Google Scholar] [CrossRef]
  37. Yazici, B.; Yolacan, S.A. Comparison of various tests of normality. J. Stat. Comput. Simul. 2007, 77, 175–183. [Google Scholar] [CrossRef]
  38. Mishra, P.; Pandey, C.M.; Singh, U.; Gupta, A.; Sahu, C.; Keshri, A. Descriptive statistics and normality tests for statistical data. Ann. Card. Anaesth. 2019, 22, 67–74. [Google Scholar] [CrossRef]
  39. Gel, Y.R.; Miao, W.; Gastwirth, J.L. Robust Directed Tests of Normality against Heavy-Tailed Alternatives. Comput. Stat. Data Anal. 2007, 51, 2734–2746. [Google Scholar] [CrossRef]
  40. Kellner, J.; Celisse, A. A One-Sample Test for Normality with Kernel Methods. Bernoulli 2019, 25, 1816–1837. [Google Scholar] [CrossRef]
  41. Coin, D. A Goodness-of-Fit Test for Normality Based on Polynomial Regression. Comput. Stat. Data Anal. 2008, 52, 2185–2198. [Google Scholar] [CrossRef]
  42. Wijekularathna, D.K.; Manage, A.B.; Scariano, S.M. Power analysis of several normality tests: A Monte Carlo simulation study. Commun. Stat. Simul. Comput. 2020, 51, 757–773. [Google Scholar] [CrossRef]
  43. Brys, G.; Hubert, M.; Struyf, A. Goodness-of-Fit Tests Based on a Robust Measure of Skewness. Comput. Stat. 2008, 23, 429–442. [Google Scholar] [CrossRef]
  44. Gel, Y.R.; Gastwirth, J.L. A Robust Modification of the Jarque–Bera Test of Normality. Econ. Lett. 2008, 99, 30–32. [Google Scholar] [CrossRef]
  45. Romao, X.; Delgado, R.; Costa, A. An empirical power comparison of univariate goodness-of-fit tests of normality. J. Stat. Comput. Simul. 2010, 80, 545–591. [Google Scholar] [CrossRef]
  46. Khatun, N. Applications of Normality Test in Statistical Analysis. Open J. Stat. 2021, 11, 113–123. [Google Scholar] [CrossRef]
  47. Razali, N.M.; Wah, Y.B. Power comparisons of Shapiro–Wilk, Kolmogorov–Smirnov, Lilliefors and Anderson–Darling tests. J. Stat. Model. Anal. 2011, 2, 21–33. [Google Scholar]
  48. Arnastauskaitė, J.; Ruzgas, T.; Bražėnas, M. An Exhaustive Power Comparison of Normality Tests. Mathematics 2021, 9, 788. [Google Scholar] [CrossRef]
  49. Noughabi, H.A.; Arghami, N.R. Monte Carlo comparison of seven normality tests. J. Stat. Comput. Simul. 2011, 81, 965–972. [Google Scholar] [CrossRef]
  50. Bayoud, H.A. Tests of Normality: New Test and Comparative Study. Commun. Stat. Simul. Comput. 2021, 50, 4442–4463. [Google Scholar] [CrossRef]
  51. Yap, B.W.; Sim, C.H. Comparisons of various types of normality tests. J. Stat. Comput. Simul. 2011, 81, 2141–2155. [Google Scholar] [CrossRef]
  52. Uhm, T.; Yi, S. A comparison of normality testing methods by empirical power and distribution of p-values. Commun. Stat. Simul. Comput. 2023, 52, 4445–4458. [Google Scholar] [CrossRef]
  53. Chernobai, A.; Rachev, S.T.; Fabozzi, F.J. Composite Goodness-of-Fit Tests for Left-Truncated Loss Samples. In Handbook of Financial Econometrics and Statistics; Lee, C.-F., Ed.; Springer: New York, NY, USA, 2012; pp. 575–596. [Google Scholar]
  54. Ahmad, F.; Khan, R.A. A Power Comparison of Various Normality Tests. Pak. J. Stat. Oper. Res. 2015, 11, 331–345. [Google Scholar] [CrossRef]
  55. Desgagné, A.; Lafaye de Micheaux, P.; Ouimet, F. Goodness-of-Fit Tests for Laplace, Gaussian and Exponential Power Distributions Based on λ-th Power Skewness and Kurtosis. Statistics 2022, 56, 1–29. [Google Scholar] [CrossRef]
  56. Mbah, A.K.; Paothong, A. Shapiro–Francia test compared to other normality tests using expected p-value. J. Stat. Comput. Simul. 2015, 85, 3002–3016. [Google Scholar] [CrossRef]
  57. Feuerverger, A. On Goodness of Fit for Operational Risk. Int. Stat. Rev. 2016, 84, 434–455. [Google Scholar] [CrossRef]
  58. Giles, D.E. New Goodness-of-Fit Tests for the Kumaraswamy Distribution. Stats 2024, 7, 373–389. [Google Scholar] [CrossRef]
  59. Nosakhare, U.H.; Bright, A.F. Evaluation of techniques for univariate normality test using Monte Carlo simulation. Am. J. Theor. Appl. Stat. 2017, 6, 51–61. [Google Scholar]
  60. Borrajo, M.I.; González-Manteiga, W.; Martínez-Miranda, M.D. Goodness-of-Fit Test for Point Processes First-Order Intensity. Comput. Stat. Data Anal. 2024, 194, 107929. [Google Scholar] [CrossRef]
  61. Desgagné, A.; Lafaye de Micheaux, P. A Powerful and Interpretable Alternative to the Jarque–Bera Test of Normality Based on 2nd-Power Skewness and Kurtosis, Using the Rao’s Score Test on the APD Family. J. Appl. Stat. 2018, 45, 2307–2327. [Google Scholar] [CrossRef]
  62. Terán-García, E.; Pérez-Fernández, R. A robust alternative to the Lilliefors test of normality. J. Stat. Comput. Simul. 2024, 94, 1494–1512. [Google Scholar] [CrossRef]
  63. Stuart, A.; Kendall, M.G. The Advanced Theory of Statistics; Hafner Publishing Company: New York, NY, USA, 1968. [Google Scholar]
  64. Pearson, K. Memoir on skew variation in homogeneous material. Philos. Trans. R. Soc. Lond. Ser. A 1895, 186, 343–414. [Google Scholar]
  65. Sulewski, P. Normal distribution with plasticizing component. Commun. Stat. Theory Methods 2022, 51, 3806–3835. [Google Scholar] [CrossRef]
  66. Johnson, N.L. Systems of Frequency Curves Generated by Methods of Translation. Biometrika 1949, 36, 149–176. [Google Scholar] [CrossRef] [PubMed]
  67. Sulewski, P. Extended Easily Changeable Kurtosis Distribution. REVSTAT Stat. J. 2023, 23, 463–489. [Google Scholar]
  68. Sulewski, P. Easily Changeable Kurtosis Distribution. Austrian J. Stat. 2023, 52, 1–24. [Google Scholar] [CrossRef]
  69. Lunetta, G. Di una generalizzazione dello schema della curva normale. Ann. Della Fac. Econ. Commer. Palermo 1963, 17, 237–244. [Google Scholar]
  70. Subbotin, M.T. On the law of frequency of errors. Mat. Sb. 1923, 31, 296–301. [Google Scholar]
Figure 1. Visual representation of EDF definitions.
Figure 2. Similarity measure M(t_v; 0, 1) for Student's t-distribution with v degrees of freedom.
Figure 3. Graphical range of γ1 and γ̄2. The ES and P distributions.
Figure 4. Graphical range of γ1 and γ̄2. The NM and NDPC distributions.
Figure 5. Graphical range of γ1 and γ̄2. The PCM and LM distributions.
Figure 6. Graphical range of γ1 and γ̄2. The SB and SU distributions.
Table 1. Classification of alternatives based on the signs (values) of γ1 and γ̄2.
Group | γ1 | γ̄2
0 | zero | zero
A | positive | positive
B | negative | positive
C | zero | positive
D | zero | negative
E | positive | negative
F | negative | negative
G | positive | zero
H | negative | zero
Table 2. A numerical range of γ1 and γ̄2 for alternatives in the MA: γ1^2 − 2 ≤ γ̄2 ≤ 14.
Alternative | Parameter ranges | γ1 | γ̄2
ES | γ1 ∈ (−4, 4), γ̄2 ∈ (−2, 14) | (−3.986, 3.972) | (−1.991, 13.998)
P | γ1 ∈ (−4, 4), γ̄2 ∈ (−2, 14) | (−3.972, 3.924) | (−1.984, 13.997)
NM | μ1, μ2 ∈ (−100, 100), σ1, σ2 ∈ (0, 50), ω ∈ (0, 1) | (−3.926, 3.779) | (−1.999, 13.979)
NDPC | μ1, μ2 ∈ (−100, 100), c2 ∈ (1, 100), σ1, σ2 ∈ (0, 50), ω ∈ (0, 1) | (−3.999, 3.983) | (−2.000, 14.000)
PCM | μ1, μ2 ∈ (−100, 100), c1, c2 ∈ (1, 100), σ1, σ2 ∈ (0, 50), ω ∈ (0, 1) | (−3.982, 3.936) | (−1.999, 13.993)
LM | μ1, μ2 ∈ (−100, 100), σ1, σ2 ∈ (0, 50), ω ∈ (0, 1) | (−3.891, 3.814) | (−1.987, 13.992)
SB | a ∈ (−5, 5), b ∈ (0, 10) | (−3.728, 3.483) | (−1.999, 13.964)
SU | a ∈ (−5, 5), b ∈ (0, 10) | (−2.584, 2.586) | (0.041, 13.994)
EECK | a ∈ (1, 100), b ∈ (0, 50) | 0 | (−1.977, 13.728)
EP | a ∈ (−50, 50), b, c ∈ (0, 50) | 0 | (−2, 13.592)
Table 3. The SKS measure for alternatives in the MA (γ1^2 − 2 ≤ γ̄2 ≤ 14) for the square side δ.
Alternative | SKS (δ = 0.5) | SKS (δ = 0.1) | SKS (δ = 0.075) | SKS (δ = 0.05)
ES | 0.9764 | 0.6733 | 0.4744 | 0.2507
P | 0.9712 | 0.6791 | 0.4754 | 0.2524
NM | 0.8874 | 0.3727 | 0.2779 | 0.1701
NDPC | 0.9529 | 0.3845 | 0.2823 | 0.1677
PCM | 0.9607 | 0.3644 | 0.2620 | 0.1546
LM | 0.9005 | 0.3697 | 0.2770 | 0.1698
SB | 0.4162 | 0.1052 | 0.0736 | 0.0432
SU | 0.3953 | 0.0945 | 0.0687 | 0.0428
EECK | 0 | 0 | 0 | 0
EP | 0 | 0 | 0 | 0
Table 4. Critical values (CVs) and test sizes (TSs) of the analyzed GoFTs for sample sizes n = 10, 20.
No | GoFT | CV (n = 10) | CV (n = 20) | TS (n = 10) | TS (n = 20)
1 | LF(0, 0) | 0.2010 | 0.1622 | 0.049 | 0.050
2 | LF(1, 0) | 0.2417 | 0.1784 | 0.051 | 0.050
3 | LF(1, 1) | 0.2316 | 0.1748 | 0.051 | 0.049
4 | LF(0, 1) | 0.2419 | 0.1785 | 0.049 | 0.050
5 | LF(0.1, 0.1) | 0.2026 | 0.163 | 0.049 | 0.050
6 | LF(0.9, 0.1) | 0.2325 | 0.1746 | 0.051 | 0.050
7 | LF(0.9, 0.9) | 0.2268 | 0.173 | 0.050 | 0.049
8 | LF(0.1, 0.9) | 0.2327 | 0.1747 | 0.050 | 0.049
9 | PKS(0, 0) | 0.2741 | 0.1971 | 0.052 | 0.050
10 | PKS(1, 0) | 0.3413 | 0.2268 | 0.051 | 0.050
11 | PKS(1, 1) | 0.3211 | 0.2133 | 0.051 | 0.050
12 | PKS(0, 1) | 0.2619 | 0.192 | 0.050 | 0.050
13 | PKS(0.1, 0.1) | 0.2769 | 0.1981 | 0.051 | 0.050
14 | PKS(0.9, 0.1) | 0.3313 | 0.2218 | 0.051 | 0.050
15 | PKS(0.9, 0.9) | 0.3141 | 0.211 | 0.051 | 0.050
16 | PKS(0.1, 0.9) | 0.2635 | 0.1925 | 0.050 | 0.050
17 | CM | 0.1194 | 0.1232 | 0.050 | 0.050
18 | AD | 0.6867 | 0.7227 | 0.050 | 0.050
19 | SF | 0.8424 | 0.9034 | 0.050 | 0.051
20 | SW | 0.8445 | 0.9044 | 0.050 | 0.051
Table 5. The power of numbered tests (see Table 4) for group A of alternatives (ALTs). The highest values are in bold.
ALTn192018173n192018173
E S 1 100.2850.2540.2480.2340.220200.5700.5020.4730.4190.371
E S 2 100.2010.1750.1730.1630.159200.3960.3330.3110.2750.248
E S 3 100.1340.1170.1150.1080.110200.2420.2000.1790.1590.149
ALTn201819174n2019181711
P 1 100.8460.8150.8040.7880.758200.9970.9930.9930.9860.976
P 2 100.8460.8120.8040.7830.754200.9970.9930.9930.9860.977
P 3 100.8450.8140.8050.7850.757200.9970.9940.9930.9870.977
ALTn48192018n48192018
N M 1 100.1350.1330.1110.1040.103200.1990.1930.1940.1880.171
N M 2 100.1360.1330.1050.1020.100200.2000.1940.1770.1770.167
N M 3 100.0820.0810.0660.0640.064200.0980.0940.0850.0810.079
ALTn48171819n4817183
N D P C 1 100.3870.3840.3400.3390.334200.6800.6730.6620.6650.621
N D P C 2 100.1800.1780.1330.1290.119200.2990.2920.2440.2270.222
N D P C 3 100.0630.0630.0570.0560.058200.0690.0690.0620.0590.064
ALTn20481817n201819417
P C M 1 100.5940.5490.5500.5890.578200.8960.8950.8630.8310.871
P C M 2 100.2270.2590.2550.2190.204200.4930.4790.4650.4580.434
P C M 3 100.0670.0790.0780.0640.062200.0940.0820.0970.0980.076
ALTn191817208n191820173
L M 1 100.4090.3930.3930.3760.378200.7040.6910.6590.6890.638
L M 2 100.1740.1520.1480.1540.143200.3160.2670.2720.2500.230
L M 3 100.1000.0920.0900.0940.079200.1610.1300.1430.1190.115
ALTn48201918n48201918
S B 1 100.1170.1150.0890.0870.084200.1720.1660.1650.1520.143
S B 2 100.1070.1050.0790.0790.076200.1460.1410.1320.1250.116
S B 3 100.0960.0950.0710.0710.069200.1270.1220.1120.1080.100
ALTn48192018n19204188
S U 1 100.1500.1470.1370.1290.124200.2500.2400.2250.2140.219
S U 2 100.1000.1000.1000.0910.089200.1650.1460.1350.1310.132
S U 3 100.0610.0610.0710.0660.065200.0960.0830.0680.0760.068
Table 6. The power of numbered tests (see Table 4) for group B of alternatives (ALTs). The highest values are in bold.
ALTn1920181711n1920181711
E S 1 100.2820.2490.2490.2330.225200.5720.5010.4720.4180.382
E S 2 100.2020.1770.1740.1630.163200.3940.3310.3060.2700.258
E S 3 100.1320.1150.1170.1110.115200.2400.1980.1810.1620.164
ALTn201819179n201918179
P 1 100.8460.8150.8040.7870.780200.9970.9940.9930.9860.970
P 2 100.8450.8140.8040.7860.778200.9970.9930.9930.9860.971
P 3 100.8470.8130.8060.7840.777200.9970.9940.9930.9870.970
ALTn151410112n141015112
N M 1 100.1700.1640.1640.1710.163200.2610.2610.2660.2690.253
N M 2 100.1410.1430.1430.1410.142200.2230.2230.2150.2140.214
N M 3 100.1020.1070.1070.1010.106200.1520.1520.1410.1390.146
ALTn151114102n1920181115
N D P C 1 100.6820.6780.6890.6890.689200.9500.9490.9600.9520.952
N D P C 2 100.4010.4030.3900.3900.389200.7100.6890.6820.6760.672
N D P C 3 100.1650.1660.1570.1570.156200.3020.2820.2470.2600.257
ALTn151114102n141011152
P C M 1 100.2690.2700.2580.2580.257200.4540.4540.4640.4610.444
P C M 2 100.2370.2340.2430.2430.242200.4310.4310.4160.4180.421
P C M 3 100.0600.0610.0600.0600.060200.0680.0680.0680.0680.067
ALTn1920181711n1920181711
L M 1 100.2020.1890.1860.1790.180200.3770.3400.3380.3120.293
L M 2 100.1670.1480.1460.1410.141200.3050.2600.2470.2270.217
L M 3 100.1620.1400.1440.1420.131200.2840.2370.2390.2300.204
ALTn14102156n141022013
S B 1 100.1670.1670.1660.1650.163200.2740.2740.2630.2930.260
S B 2 100.1110.1110.1100.1080.108200.1580.1580.1510.1400.150
S B 3 100.0970.0970.0960.0940.094200.1300.1300.1230.1040.123
ALTn151114102n141011152
S U 1 100.1300.1300.1300.1300.129200.1920.1920.1890.1880.184
S U 2 100.1120.1120.1120.1120.111200.1560.1560.1520.1520.149
S U 3 100.0700.0700.0680.0680.068200.0780.0780.0770.0770.075
Table 7. The power of numbered tests (see Table 4) for group C of alternatives (ALTs). The highest values are in bold.
ALTn192018173n192018173
E S 1 100.2840.2490.2450.2300.220200.5710.4950.4700.4130.366
E S 2 100.2010.1740.1730.1610.159200.3970.3300.3080.2720.245
E S 3 100.1350.1170.1160.1090.111200.2400.1960.1790.1590.149
ALTn191820317n192018173
P 1 100.1370.1240.1210.1190.119200.2430.2090.1930.1780.169
P 2 100.1370.1230.1220.1190.117200.2420.2100.1900.1740.166
P 3 100.1380.1200.1220.1160.115200.2400.2060.1900.1740.166
ALTn37191718n17371819
N M 1 100.2220.2190.2040.2160.202200.3910.3870.3800.3570.325
N M 2 100.1380.1350.1350.1300.127200.2140.2130.2080.2030.205
N M 3 100.0640.0640.0720.0630.064200.0700.0720.0700.0750.095
ALTn31771819n17318711
N D P C 1 100.1810.1790.1780.1690.166200.3180.3020.2920.2950.279
N D P C 2 100.0590.0590.0590.0600.064200.0630.0640.0650.0640.062
N D P C 3 100.0520.0530.0520.0540.052200.0520.0510.0540.0510.050
ALTn37191117n37111715
P C M 1 100.0850.0830.0930.0820.082200.1080.1050.1040.1090.101
P C M 2 100.1190.1170.1030.1070.108200.1750.1720.1610.1590.158
P C M 3 100.0840.0820.0790.0790.077200.1060.1040.1020.0970.099
ALTn19183177n191820173
L M 1 100.1770.1590.1590.1570.156200.3120.2710.2600.2650.253
L M 2 100.1700.1520.1510.1500.148200.3000.2570.2510.2490.238
L M 3 100.1640.1460.1450.1420.141200.2900.2420.2400.2320.221
ALTn19201837n192018173
S U 1 100.1040.0930.0910.0900.088200.1690.1440.1310.1200.115
S U 2 100.1060.0940.0920.0910.089200.1690.1430.1300.1200.116
S U 3 100.0630.0590.0580.0590.059200.0790.0710.0650.0630.063
ALTn19371817n191820173
E E C K 1 100.1550.1400.1370.1370.135200.2630.2230.2170.2170.210
E E C K 2 100.0910.0840.0820.0820.080200.1340.1070.1110.1030.104
E E C K 3 100.0960.0880.0860.0850.084200.1410.1160.1160.1130.115
ALTn19371811n19203187
E P 1 100.0720.0670.0660.0650.065200.0930.0790.0760.0760.074
E P 2 100.0660.0630.0620.0600.060200.0790.0690.0670.0660.066
E P 3 100.0600.0570.0560.0570.057200.0690.0620.0600.0590.059
Table 8. The power of numbered tests (see Table 4) for group D of alternatives (ALTs). The highest values are in bold.
ALTn201817191n201819171
P 1 100.6670.6010.5280.4810.481200.9810.9490.9170.8880.802
P 2 100.6660.5980.5240.4830.480200.9810.9490.9190.8880.801
P 3 100.6640.5970.5230.4830.477200.9800.9510.9180.8900.804
ALTn9113520n18920131
N M 1 100.1990.2080.1890.1990.184200.4710.4160.4130.4020.426
N M 2 100.2370.2180.2180.2060.213200.4320.4700.4790.4480.413
N M 3 100.0550.0580.0530.0560.047200.0600.0640.0540.0620.071
ALTn1951317n1175918
N D P C 1 100.1260.1190.1200.1140.111200.2270.2370.2190.2120.231
N D P C 2 100.0630.0640.0620.0620.052200.0840.0700.0810.0820.066
N D P C 3 100.0490.0490.0490.0500.047200.0490.0460.0480.0490.045
ALTn911352n195134
P C M 1 100.0490.0490.0490.0480.048200.0500.0490.0490.0490.049
P C M 2 100.0610.0610.0590.0600.054200.0770.0760.0750.0740.066
P C M 3 100.0500.0500.0500.0490.048200.0530.0520.0520.0510.049
ALTn11851720n18171520
L M 1 100.2150.1970.2080.2010.177200.4220.4320.4120.4010.341
L M 2 100.2700.2700.2600.2600.250200.5720.5390.5140.5020.507
L M 3 100.2130.2240.2050.2060.212200.4580.3960.3840.3740.424
ALTn195134n9131514
S B 1 100.0490.0490.0480.0490.048200.0480.0480.0480.0470.047
S B 2 100.0490.0490.0490.0490.049200.0510.0510.0500.0490.050
S B 3 100.0510.0500.0500.0490.049200.0480.0480.0480.0480.047
ALTn9113514n9113518
E E C K 1 100.0650.0630.0620.0600.052200.0860.0850.0820.0800.078
E E C K 2 100.0530.0520.0520.0510.048200.0570.0550.0550.0530.046
E E C K 3 100.0500.0490.0500.0490.047200.0510.0500.0500.0490.044
ALTn9113518n20189113
E P 1 100.0830.0810.0780.0770.074200.1880.1630.1300.1290.124
E P 2 100.0590.0580.0570.0560.047200.0560.0610.0720.0700.069
E P 3 100.0550.0540.0530.0530.045200.0450.0500.0610.0600.059
Table 9. The power of numbered tests (see Table 4) for group E of alternatives (ALTs). The highest values are in bold.
ALTn201817419n201819174
P 1 100.7750.7400.7010.6840.685200.9900.9790.9700.9610.936
P 2 100.7780.7410.7030.6870.685200.9910.9790.9700.9600.935
P 3 100.7760.7410.7040.6880.686200.9910.9800.9700.9620.937
ALTn48181720n48181720
N M 1 100.1370.1350.0930.0920.089200.2120.2050.1660.1630.157
N M 2 100.0930.0910.0640.0650.063200.1220.1170.0870.0870.086
N M 3 100.0650.0640.0530.0530.051200.0710.0700.0570.0560.052
ALTn4817181n4817181
N D P C 1 100.6930.6890.6590.6450.628200.9630.9620.9740.9690.951
N D P C 2 100.1070.1050.0720.0730.077200.1650.1590.1270.1270.124
N D P C 3 100.0810.0790.0580.0560.062200.1020.0990.0750.0730.077
ALTn481518n481185
P C M 1 100.1530.1510.1300.1250.116200.2780.2700.2440.2440.236
P C M 2 100.0870.0860.0760.0740.066200.1350.1310.1200.1090.117
P C M 3 100.0730.0710.0560.0550.051200.0900.0870.0670.0690.065
ALTn148518n148518
L M 1 100.8640.8740.8730.8590.873200.9970.9970.9970.9970.998
L M 2 100.4340.4110.4090.4170.395200.7410.7320.7280.7280.739
L M 3 100.1250.1290.1280.1220.109200.2110.2150.2130.2060.191
ALTn20481817n201819174
S B 1 100.4950.4420.4370.4580.427200.8920.8440.7990.7910.746
S B 2 100.2130.2320.2290.1970.186200.5020.4410.4010.3940.410
S B 3 100.0480.0630.0620.0480.048200.0490.0510.0420.0510.069
ALTn481512n41982018
S U 1 100.0520.0520.0500.0500.049200.0520.0520.0520.0510.050
S U 2 100.0510.0510.0510.0510.051200.0490.0500.0490.0500.050
S U 3 100.0500.0510.0520.0520.051200.0490.0490.0490.0490.048
Table 10. The power of numbered tests (see Table 4) for group F of alternatives (ALTs). The highest values are in bold.
ALTn201891317n201819179
P 1 100.7780.7380.7350.7260.701200.9900.9800.9690.9610.953
P 2 100.7770.7410.7360.7270.703200.9900.9800.9700.9620.954
P 3 100.7780.7390.7350.7260.701200.9900.9790.9690.9610.954
ALTn91314102n91314102
N M 1 100.3220.3210.3120.3120.310200.5970.5950.5890.5890.577
N M 2 100.1050.1010.0890.0890.089200.1520.1470.1320.1320.127
N M 3 100.0560.0560.0570.0570.057200.0580.0580.0600.0600.059
ALTn13914102n91410132
N D P C 1 100.2950.2950.2890.2890.287200.5500.5490.5490.5490.536
N D P C 2 100.1010.1020.0990.0990.098200.1450.1440.1440.1440.137
N D P C 3 100.0540.0540.0530.0530.053200.0550.0560.0550.0560.054
ALTn91314102n201891317
P C M 1 100.1430.1410.1300.1300.130200.3400.3180.2530.2490.269
P C M 2 100.0670.0640.0580.0580.058200.0630.0670.0840.0800.066
P C M 3 100.0510.0500.0480.0480.048200.0410.0440.0520.0510.045
ALTn91318176n18179131
L M 1 100.2760.2770.2990.2870.275200.6240.5890.5200.5190.529
L M 2 100.2340.2360.2250.2240.237200.4750.4640.4290.4300.397
L M 3 100.1310.1270.1120.1150.111200.2040.2160.2260.2200.229
ALTn91314102n209131410
S B 1 100.5240.5180.4880.4880.485200.9160.8140.8080.7920.792
S B 2 100.1290.1290.1260.1260.125200.1810.2000.2000.2020.202
S B 3 100.1060.1060.1060.1060.105200.1230.1490.1490.1530.153
ALTn111514102n1114101519
S U 1 100.0510.0510.0510.0510.051200.0510.0520.0510.0510.051
S U 2 100.0670.0660.0650.0650.065200.0780.0770.0770.0760.078
S U 3 100.0520.0520.0510.0510.051200.0540.0540.0540.0540.050
Table 11. The power of numbered tests (see Table 4) for group G of alternatives (ALTs). The highest values are in bold.
ALTn201817194n201819174
P 1 100.8010.7670.7330.7270.715200.9920.9850.9780.9710.950
P 2 100.8000.7650.7320.7270.716200.9920.9850.9780.9710.949
P 3 100.8010.7660.7330.7260.715200.9930.9840.9780.9700.950
ALTn48181720n48201819
N M 1 100.0950.0940.0660.0650.066200.1240.1190.0940.0910.087
N M 2 100.0630.0620.0520.0520.051200.0670.0660.0560.0540.056
N M 3 100.0570.0570.0510.0510.050200.0610.0600.0520.0510.052
ALTn481518n4818201
N D P C 1 100.1740.1720.1280.1270.122200.3040.2960.2380.2440.237
N D P C 2 100.0950.0950.1040.1010.094200.1550.1550.1680.1640.170
N D P C 3 100.0950.0940.0650.0650.068200.1310.1260.0940.0880.087
ALTn481518n4818171
P C M 1 100.2730.2700.2080.2070.206200.5140.5040.4070.4030.405
P C M 2 100.1370.1370.1390.1350.130200.2410.2390.2580.2500.244
P C M 3 100.0860.0840.0610.0610.063200.1050.1010.0800.0740.072
ALTn4817181n1748181
L M 1 100.5730.5690.5180.5120.517200.8700.8810.8770.8680.859
L M 2 100.5130.5090.4560.4480.422200.8340.8350.8300.8200.778
L M 3 100.0890.0900.1020.1000.117200.1720.1430.1470.1650.186
ALTn48201817n48201819
S B 1 100.0900.0890.0640.0620.061200.1170.1130.0960.0870.087
S B 2 100.0830.0810.0590.0590.058200.1030.0990.0840.0770.077
S B 3 100.0680.0670.0530.0530.053200.0770.0740.0610.0580.059
ALTn4819175n48192018
S U 1 100.0640.0630.0540.0530.053200.0710.0690.0600.0600.057
S U 2 100.0590.0590.0510.0510.051200.0660.0640.0540.0550.053
S U 3 100.0500.0500.0500.0500.050200.0500.0500.0500.0490.049
Table 12. The power of numbered tests (see Table 4) for group H of alternatives (ALTs). The highest values are in bold.
ALTn201891317n201819179
P 1 100.8010.7640.7570.7500.732200.9920.9840.9780.9710.963
P 2 100.8010.7660.7590.7520.731200.9930.9840.9790.9700.962
P 3 100.8000.7670.7570.7510.735200.9930.9850.9790.9710.962
ALTn14102136n14102139
N M 1 100.1180.1180.1170.1150.114200.1750.1750.1670.1680.168
N M 2 100.0980.0980.0970.0950.095200.1310.1310.1250.1230.123
N M 3 100.0580.0580.0580.0580.058200.0630.0630.0610.0600.060
ALTn141021315n141021315
N D P C 1 100.1650.1650.1640.1570.163200.2800.2800.2690.2640.269
N D P C 2 100.0950.0950.0940.0960.092200.1310.1310.1240.1260.122
N D P C 3 100.0550.0550.0550.0570.054200.0600.0600.0580.0590.057
ALTn14102139n14102139
P C M 1 100.0800.0800.0790.0780.078200.1010.1010.0970.0960.095
P C M 2 100.0630.0630.0630.0630.063200.0710.0710.0690.0690.069
P C M 3 100.0740.0740.0740.0740.074200.0920.0920.0880.0880.088
ALTn181720136n181720913
L M 1 100.3870.3610.3730.3480.366200.7440.6930.7070.6320.638
L M 2 100.1280.1320.1170.1660.159200.2390.2520.1930.2780.277
L M 3 100.1330.1350.1220.0920.080200.2560.2600.2060.1880.181
ALTn14102139n14109132
S B 1 100.1100.1100.1090.1100.110200.1650.1650.1600.1600.157
S B 2 100.0790.0790.0780.0770.077200.0990.0990.0940.0950.094
S B 3 100.0790.0790.0780.0770.076200.0990.0990.0950.0940.094
ALTn111410215n141011152
S U 1 100.0510.0510.0510.0510.051200.0510.0510.0510.0510.050
S U 2 100.0600.0600.0600.0600.060200.0680.0670.0660.0660.066
S U 3 100.0580.0580.0580.0580.058200.0620.0620.0610.0610.060
Table 13. Real data examples with R sources, sample size, γ1 and γ̄2 values.
Ex | Description | R source | n | γ1 | γ̄2
I | Socio-economic data (percentage of draftees receiving the highest mark on the army examination) for 47 French-speaking provinces of Switzerland. | swiss [3] | 47 | 0.461 | 0.011
II | The data give the distances taken to stop. | cars [2] | 50 | 0.782 | 0.248
III | Socio-economic data (draftees receiving the highest mark on the army examination) for 47 French-speaking provinces of Switzerland. | swiss [3] | 47 | 0.461 | −0.011
IV | Measurements of the height of timber in 31 felled black cherry trees. | trees [2] | 31 | −0.375 | −0.569
V | Displacement of 32 cars (1973–74 models). | mtcars [3] | 32 | 0.400 | −1.090
VI | Gross horsepower of 32 cars (1973–74 models). | mtcars [4] | 32 | 0.761 | 0.052
VII | Rear axle ratio of 32 cars (1973–74 models). | mtcars [5] | 32 | 0.279 | −0.565
VIII | The weight (1000 lbs) of 32 cars (1973–74 models). | mtcars [6] | 32 | 0.444 | 0.172
IX | Lawyers' ratings of state judges in the US Superior Court (preparation for trial). | USJudgeRatings [7] | 43 | −0.681 | 0.141
X | Lawyers' ratings of state judges in the US Superior Court (judicial integrity). | USJudgeRatings [2] | 43 | −0.843 | 0.414
XI | Lawyers' ratings of state judges in the US Superior Court (demeanor). | USJudgeRatings [3] | 43 | −0.948 | 0.432
XII | Daily air quality measurements in New York (wind in mph). | airquality [3] | 153 | 0.344 | 0.069
XIII | Statistics in arrests per 100,000 residents for the percent urban population in each of the 50 US states. | USArrests [3] | 50 | −0.219 | −0.784
XIV | A regular time series giving the luteinizing hormone in blood samples at 10 min intervals from a human female, 48 samples. | lh | 48 | 0.284 | −0.746
XV | Statistics in arrests per 100,000 residents for assault in each of the 50 US states. | USArrests [2] | 50 | 0.227 | −1.069
XVI | From a survey of the clerical employees of a large financial organization, the data are aggregated from the questionnaires of approximately 35 employees for each of 30 (randomly selected) departments. The numbers give the percentage of favorable responses to questions in each department (variable: "does not allow special privileges"). | attitude [2] | 30 | −0.227 | −0.514
XVII | As in example XVI (variable: "too critical"). | attitude [5] | 30 | 0.208 | −0.431
XVIII | A set of macroeconomic data that provides information on the number of unemployed. | longley [3] | 16 | 0.158 | −1.065
XIX | A set of macroeconomic data that provides information on the number of people in the armed forces. | longley [4] | 16 | −0.404 | −0.949
XX | A set of macroeconomic data that provides information on the number of people employed. | longley [7] | 16 | −0.094 | −1.351
XXI | Daily air quality measurements in New York (temperature in degrees F). | airquality [4] | 153 | −0.374 | −0.429
XXII | Measurements on 48 rock samples from a petroleum reservoir (area of pore space, in pixels out of 256 by 256). | rock [1] | 48 | −0.304 | −0.262
XXIII | As in example XVI (variable: "handling of employee complaints"). | attitude [1] | 30 | −0.377 | −0.609
XXIV | A set of macroeconomic data that provides information on the number of unemployed. | longley [3] | 16 | 0.158 | −1.065
XXV | The data give the distances taken to stop. | cars [2] | 50 | 0.782 | 0.248
XXVI | Measurements in centimeters of the sepal length for 50 flowers from each of 3 species of iris (Iris setosa, versicolor, and virginica). | iris [1] | 150 | 0.312 | −0.574
XXVII | An experiment to compare yields (as measured by dried weight of plants). | PlantGrowth [1] | 30 | −0.153 | −0.659
XXVIII | The data consist of five experiments, each consisting of 20 consecutive runs. The response is the speed of light measurement, suitably coded (km/sec, with 299,000 subtracted). | morley [3] | 100 | −0.018 | 0.263
XXIX | The mean annual temperature in degrees Fahrenheit in New Haven, Connecticut. | nhtemp | 60 | −0.074 | 0.499
XXX | A classical N, P, K (nitrogen, phosphate, potassium) factorial experiment on the growth of peas in pounds/plot (the plots were 1/70 acre). | npk [5] | 24 | 0.261 | −0.290
Table 14. The p-values for the GoFTs related to examples I–X.
GoFT | I | II | III | IV | V | VI | VII | VIII | IX | X
LF(0, 0) | 0.29 | 0.039 | 0.298 | 0.216 | 0.002 | 0.020 | 0.024 | 0.212 | 0.217 | 0.036
LF(1, 0) | 0.507 | 0.102 | 0.517 | 0.172 | 0.016 | 0.084 | 0.115 | 0.336 | 0.148 | 0.019
LF(1, 1) | 0.29 | 0.049 | 0.294 | 0.364 | 0.005 | 0.030 | 0.055 | 0.091 | 0.254 | 0.032
LF(0, 1) | 0.194 | 0.025 | 0.193 | 0.574 | 0.002 | 0.013 | 0.020 | 0.082 | 0.444 | 0.089
LF(0.1, 0.1) | 0.287 | 0.039 | 0.296 | 0.226 | 0.002 | 0.020 | 0.026 | 0.194 | 0.219 | 0.035
LF(0.9, 0.1) | 0.457 | 0.084 | 0.459 | 0.182 | 0.011 | 0.064 | 0.089 | 0.278 | 0.157 | 0.020
LF(0.9, 0.9) | 0.286 | 0.047 | 0.292 | 0.344 | 0.004 | 0.028 | 0.050 | 0.098 | 0.248 | 0.032
LF(0.1, 0.9) | 0.204 | 0.027 | 0.205 | 0.495 | 0.002 | 0.014 | 0.021 | 0.087 | 0.387 | 0.072
PKS(0, 0) | 0.533 | 0.095 | 0.531 | 0.132 | 0.012 | 0.073 | 0.084 | 0.472 | 0.139 | 0.020
PKS(1, 0) | 0.515 | 0.269 | 0.520 | 0.150 | 0.084 | 0.299 | 0.337 | 0.378 | 0.126 | 0.017
PKS(1, 1) | 0.493 | 0.116 | 0.496 | 0.238 | 0.022 | 0.102 | 0.165 | 0.248 | 0.166 | 0.018
PKS(0, 1) | 0.278 | 0.042 | 0.290 | 0.274 | 0.003 | 0.023 | 0.035 | 0.137 | 0.230 | 0.033
PKS(0.1, 0.1) | 0.527 | 0.096 | 0.526 | 0.139 | 0.012 | 0.075 | 0.089 | 0.440 | 0.140 | 0.020
PKS(0.9, 0.1) | 0.527 | 0.223 | 0.528 | 0.152 | 0.061 | 0.235 | 0.306 | 0.383 | 0.128 | 0.017
PKS(0.9, 0.9) | 0.503 | 0.113 | 0.512 | 0.223 | 0.021 | 0.098 | 0.152 | 0.263 | 0.161 | 0.018
PKS(0.1, 0.9) | 0.306 | 0.049 | 0.323 | 0.240 | 0.004 | 0.029 | 0.044 | 0.162 | 0.205 | 0.028
CM | 0.37 | 0.049 | 0.354 | 0.438 | 0.023 | 0.054 | 0.050 | 0.166 | 0.274 | 0.072
AD | 0.379 | 0.051 | 0.364 | 0.439 | 0.022 | 0.059 | 0.054 | 0.106 | 0.233 | 0.048
SF | 0.291 | 0.044 | 0.284 | 0.520 | 0.052 | 0.057 | 0.124 | 0.106 | 0.157 | 0.029
SW | 0.265 | 0.039 | 0.257 | 0.405 | 0.021 | 0.050 | 0.109 | 0.093 | 0.171 | 0.022
Table 15. The p-values for the GoFTs related to examples XI–XX.
GoFT | XI | XII | XIII | XIV | XV | XVI | XVII | XVIII | XIX | XX
LF(0, 0) | 0.009 | 0.016 | 0.442 | 0.220 | 0.028 | 0.484 | 0.449 | 0.525 | 0.128 | 0.411
LF(1, 0) | 0.005 | 0.022 | 0.411 | 0.397 | 0.089 | 0.398 | 0.728 | 0.819 | 0.050 | 0.338
LF(1, 1) | 0.009 | 0.011 | 0.694 | 0.205 | 0.050 | 0.691 | 0.363 | 0.888 | 0.077 | 0.723
LF(0, 1) | 0.030 | 0.009 | 0.804 | 0.135 | 0.022 | 0.784 | 0.256 | 0.471 | 0.159 | 0.439
LF(0.1, 0.1) | 0.009 | 0.015 | 0.463 | 0.217 | 0.029 | 0.499 | 0.435 | 0.553 | 0.119 | 0.431
LF(0.9, 0.1) | 0.005 | 0.020 | 0.431 | 0.347 | 0.073 | 0.420 | 0.648 | 0.836 | 0.053 | 0.355
LF(0.9, 0.9) | 0.009 | 0.012 | 0.665 | 0.204 | 0.047 | 0.666 | 0.367 | 0.847 | 0.079 | 0.679
LF(0.1, 0.9) | 0.023 | 0.010 | 0.751 | 0.144 | 0.023 | 0.805 | 0.271 | 0.492 | 0.168 | 0.459
PKS(0, 0) | 0.005 | 0.027 | 0.314 | 0.416 | 0.071 | 0.326 | 0.702 | 0.793 | 0.069 | 0.249
PKS(1, 0) | 0.004 | 0.042 | 0.351 | 0.632 | 0.089 | 0.352 | 0.781 | 0.794 | 0.048 | 0.314
PKS(1, 1) | 0.005 | 0.019 | 0.541 | 0.395 | 0.118 | 0.515 | 0.710 | 0.780 | 0.042 | 0.531
PKS(0, 1) | 0.009 | 0.013 | 0.557 | 0.208 | 0.037 | 0.573 | 0.393 | 0.690 | 0.094 | 0.536
PKS(0.1, 0.1) | 0.005 | 0.026 | 0.331 | 0.411 | 0.074 | 0.338 | 0.726 | 0.791 | 0.065 | 0.262
PKS(0.9, 0.1) | 0.004 | 0.037 | 0.358 | 0.640 | 0.091 | 0.356 | 0.786 | 0.795 | 0.048 | 0.315
PKS(0.9, 0.9) | 0.005 | 0.020 | 0.512 | 0.394 | 0.111 | 0.487 | 0.709 | 0.823 | 0.043 | 0.478
PKS(0.1, 0.9) | 0.007 | 0.015 | 0.518 | 0.234 | 0.043 | 0.523 | 0.444 | 0.767 | 0.075 | 0.470
CM | 0.022 | 0.052 | 0.590 | 0.316 | 0.064 | 0.588 | 0.645 | 0.712 | 0.136 | 0.485
AD | 0.015 | 0.054 | 0.544 | 0.351 | 0.053 | 0.569 | 0.738 | 0.665 | 0.107 | 0.398
SF | 0.011 | 0.111 | 0.595 | 0.447 | 0.102 | 0.589 | 0.848 | 0.677 | 0.175 | 0.452
SW | 0.006 | 0.117 | 0.439 | 0.271 | 0.040 | 0.554 | 0.897 | 0.481 | 0.112 | 0.260
Table 16. The p-values for the GoFTs related to examples XXI–XXX.
GoFT | XXI | XXII | XXIII | XXIV | XXV | XXVI | XXVII | XXVIII | XXIX | XXX
LF(0, 0) | 0.017 | 0.484 | 0.116 | 0.522 | 0.038 | 0.005 | 0.738 | 0.100 | 0.398 | 0.824
LF(1, 0) | 0.011 | 0.340 | 0.056 | 0.818 | 0.102 | 0.011 | 0.535 | 0.145 | 0.332 | 0.694
LF(1, 1) | 0.015 | 0.481 | 0.085 | 0.888 | 0.048 | 0.008 | 0.727 | 0.074 | 0.244 | 0.868
LF(0, 1) | 0.026 | 0.734 | 0.265 | 0.473 | 0.026 | 0.004 | 0.964 | 0.058 | 0.215 | 0.769
LF(0.1, 0.1) | 0.017 | 0.481 | 0.111 | 0.551 | 0.038 | 0.005 | 0.732 | 0.096 | 0.378 | 0.843
LF(0.9, 0.1) | 0.012 | 0.358 | 0.060 | 0.835 | 0.084 | 0.010 | 0.559 | 0.129 | 0.350 | 0.718
LF(0.9, 0.9) | 0.015 | 0.478 | 0.087 | 0.848 | 0.047 | 0.008 | 0.723 | 0.076 | 0.256 | 0.864
LF(0.1, 0.9) | 0.023 | 0.674 | 0.213 | 0.492 | 0.027 | 0.005 | 0.927 | 0.062 | 0.228 | 0.790
PKS(0, 0) | 0.012 | 0.349 | 0.068 | 0.791 | 0.095 | 0.009 | 0.562 | 0.167 | 0.399 | 0.742
PKS(1, 0) | 0.009 | 0.292 | 0.050 | 0.793 | 0.267 | 0.022 | 0.478 | 0.265 | 0.281 | 0.644
PKS(1, 1) | 0.010 | 0.347 | 0.050 | 0.776 | 0.115 | 0.014 | 0.552 | 0.128 | 0.284 | 0.714
PKS(0, 1) | 0.016 | 0.474 | 0.096 | 0.688 | 0.042 | 0.006 | 0.717 | 0.085 | 0.310 | 0.866
PKS(0.1, 0.1) | 0.012 | 0.346 | 0.064 | 0.789 | 0.096 | 0.009 | 0.554 | 0.163 | 0.384 | 0.728
PKS(0.9, 0.1) | 0.009 | 0.297 | 0.051 | 0.794 | 0.223 | 0.020 | 0.483 | 0.235 | 0.288 | 0.648
PKS(0.9, 0.9) | 0.010 | 0.345 | 0.051 | 0.819 | 0.112 | 0.014 | 0.544 | 0.131 | 0.293 | 0.703
PKS(0.1, 0.9) | 0.015 | 0.437 | 0.082 | 0.764 | 0.048 | 0.007 | 0.668 | 0.094 | 0.340 | 0.823
CM | 0.022 | 0.698 | 0.227 | 0.710 | 0.046 | 0.049 | 0.959 | 0.223 | 0.217 | 0.871
AD | 0.015 | 0.713 | 0.250 | 0.663 | 0.048 | 0.022 | 0.971 | 0.256 | 0.274 | 0.892
SF | 0.026 | 0.659 | 0.353 | 0.674 | 0.043 | 0.028 | 0.971 | 0.306 | 0.347 | 0.816
SW | 0.010 | 0.557 | 0.257 | 0.480 | 0.038 | 0.010 | 0.885 | 0.513 | 0.598 | 0.860