Article

A New Index for Measuring the Non-Uniformity of a Probability Distribution

Hening Huang
Teledyne RD Instruments, San Diego, CA 92127, USA (retired)
AppliedMath 2025, 5(3), 102; https://doi.org/10.3390/appliedmath5030102
Submission received: 27 June 2025 / Revised: 29 July 2025 / Accepted: 4 August 2025 / Published: 8 August 2025

Abstract

This paper proposes a new index, the “distribution non-uniformity index (DNUI)”, for quantitatively measuring the non-uniformity or unevenness of a probability distribution relative to a baseline uniform distribution. The proposed DNUI is a normalized, distance-based metric ranging between 0 and 1, with 0 indicating perfect uniformity and 1 indicating extreme non-uniformity. It satisfies our axioms for an effective non-uniformity index and is applicable to both discrete and continuous probability distributions. Several examples are presented to demonstrate its application and to compare it with two distance measures, namely, the Hellinger distance (HD) and the total variation distance (TVD), and two classical evenness measures, namely, Simpson’s evenness and Buzas and Gibson’s evenness.

1. Introduction

Non-uniformity, or unevenness, is an inherent characteristic of probability distributions, as outcomes or values from a probability system are typically not distributed uniformly or evenly. Although the shape of a distribution can offer an intuitive sense of its non-uniformity, researchers often require a quantitative measure to assess this property. Such a measure is valuable for constructing distribution models and for comparing the non-uniformity across different distributions in a consistent and interpretable way.
A probability distribution is considered uniform when all outcomes have equal probability, in the discrete case, or when the probability density is constant, in the continuous case. Therefore, the uniform distribution serves as the natural baseline for assessing the non-uniformity of any given distribution, and non-uniformity refers to the degree to which a distribution deviates from this uniform benchmark. It is essential to ensure that the distribution being evaluated and the baseline uniform distribution share the same support. This requirement is especially important in the continuous case, where a fixed and clearly defined support is crucial for meaningful comparison.
The Kullback–Leibler (KL) divergence or the $\chi^2$ divergence may be used as a metric for measuring the non-uniformity of a given distribution by quantifying how different the distribution is from a baseline uniform distribution. For a discrete random variable X with probability mass function (PMF) $P(x)$ and n possible outcomes, the KL divergence relative to the uniform distribution with PMF 1/n is given by
$$\mathrm{KL} = \sum_{i=1}^{n} P(x_i)\log\frac{P(x_i)}{1/n} = \log(n) + \sum_{i=1}^{n} P(x_i)\log P(x_i) \quad (1)$$
The $\chi^2$ divergence is given by
$$\chi^2\ \mathrm{divergence} = \frac{\sum_{i=1}^{n} [P(x_i)]^2}{1/n} - 1 = n\sum_{i=1}^{n} [P(x_i)]^2 - 1 \quad (2)$$
While a KL or χ2 divergence value of zero indicates perfect uniformity, there is no natural upper bound that allows us to specify how non-uniform a distribution is. Furthermore, as shown in Equations (1) and (2), the KL or χ2 divergence will tend to infinity as the number of possible outcomes (n) goes to infinity, regardless of the distribution (except for the uniform distribution). The lack of an upper bound can make interpretation difficult, especially when comparing different distributions or when the scale of the divergence matters. Therefore, we will not discuss the KL and χ2 divergence further in this paper.
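For example, for the degenerate distribution that places all probability on a single outcome, Equations (1) and (2) reduce to
$$\mathrm{KL} = \log(n) + 1\cdot\log(1) = \log(n), \qquad \chi^2\ \mathrm{divergence} = n\cdot 1 - 1 = n - 1,$$
so both divergences grow without bound as the number of possible outcomes n increases.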
The Hellinger distance (HD) and the total variation distance (TVD), two well-known distance measures, may be used to measure the non-uniformity of a given distribution relative to a baseline uniform distribution. For the discrete case, the HD as a non-uniformity measure (relative to a uniform distribution with PMF 1/n) is given by
$$\mathrm{HD} = \sqrt{\frac{1}{2}\sum_{i=1}^{n}\left(\sqrt{P(x_i)} - \sqrt{\frac{1}{n}}\right)^2} \quad (3)$$
The TVD as a non-uniformity measure is given by
$$\mathrm{TVD} = \frac{1}{2}\sum_{i=1}^{n}\left|P(x_i) - \frac{1}{n}\right| \quad (4)$$
The HD and TVD range between 0 and 1 and do not require standardization or normalization. This is a desirable property for non-uniformity metrics. However, to the best of the author’s knowledge, the HD and TVD have not been used to measure distribution non-uniformity. Therefore, their performance is unknown.
In recent work, Rajaram et al. [1,2] proposed a measure called the “degree of uniformity (DOU)” to quantify how evenly the probability mass or density is distributed across available outcomes or support. Specifically, they defined the DOU for a partial distribution on a fixed interval as the ratio of the exponential of the Shannon entropy to the coverage probability of that interval [1,2].
$$\mathrm{DOU} = \frac{D_P}{c_P} = \frac{1}{c_P}\exp(H_P) \quad (5)$$
where the subscript "P" denotes "part", referring to the partial distribution on the fixed interval, $c_P$ is the coverage probability of the interval, $H_P$ is the entropy of the partial distribution, and $D_P = \exp(H_P)$ is the entropy-based diversity of the partial distribution. When the entire distribution is considered, $c_P = 1$, and thus the DOU equals the entropy-based diversity $\exp(H)$. It should be noted that the DOU is neither standardized nor normalized and does not explicitly measure the deviation of a given distribution from a uniform benchmark. Therefore, we will not discuss the DOU further in this paper.
Classical evenness measures, such as Simpson's evenness and Buzas and Gibson's evenness, are essentially diversity ratios. For a discrete random variable X with PMF $P(x)$ and n possible outcomes, Simpson's evenness is defined as (e.g., [3])
$$E_{S2} = \frac{1/\sum_{i=1}^{n} [P(x_i)]^2}{n} \quad (6)$$
where $1/\sum_{i=1}^{n} [P(x_i)]^2$ is called Simpson's diversity, representing the effective number of distinct elements in the probability system $\{X, P(x)\}$, and n is the maximum diversity, which corresponds to a uniform distribution with PMF 1/n. The concept of effective number is the core of diversity measures in biology [4].
Buzas and Gibson's evenness is defined as [5]
$$E_{BG} = \frac{\exp[H(X)]}{\exp[\ln(n)]} = \frac{\exp[H(X)]}{n} \quad (7)$$
where $H(X)$ is the Shannon entropy of X, $H(X) = -\sum_{i=1}^{n} P(x_i)\ln P(x_i)$, and $\ln(n)$ is the entropy of the baseline uniform distribution. The exponential of the Shannon entropy, $\exp[H(X)]$, is the entropy-based diversity and is also considered to be an effective number of elements in the probability system $\{X, P(x)\}$.
Both $E_{S2}$ and $E_{BG}$ are normalized by n, the maximum diversity corresponding to the baseline uniform distribution. Therefore, these indices range between 0 and 1, with 0 indicating extreme unevenness and 1 indicating perfect evenness. Since evenness is negatively correlated with unevenness, we consider the complements of $E_{S2}$ and $E_{BG}$ as unevenness (i.e., non-uniformity) indices. That is, we denote $(1 - E_{S2})$ as Simpson's unevenness and $(1 - E_{BG})$ as Buzas and Gibson's unevenness, with 0 indicating perfect evenness (uniformity) and 1 indicating extreme unevenness (non-uniformity).
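For readers who wish to reproduce the comparisons presented later, the following Python sketch computes the four comparison measures from Equations (3), (4), (6), and (7). The function name and structure are illustrative, not from the original paper; NumPy is assumed.

```python
import numpy as np

def comparison_indices(p):
    """HD, TVD, Simpson's unevenness (1 - E_S2), and Buzas and Gibson's
    unevenness (1 - E_BG) of a discrete PMF p, relative to the uniform
    baseline with PMF 1/n (Equations (3), (4), (6), and (7))."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    u = 1.0 / n
    hd = np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(u)) ** 2))
    tvd = 0.5 * np.sum(np.abs(p - u))
    simpson = 1.0 - (1.0 / np.sum(p ** 2)) / n          # 1 - E_S2
    h = -np.sum(p[p > 0] * np.log(p[p > 0]))            # Shannon entropy H(X)
    bg = 1.0 - np.exp(h) / n                            # 1 - E_BG
    return hd, tvd, simpson, bg

# A uniform PMF yields (0, 0, 0, 0), i.e., perfect evenness.
print(comparison_indices([0.25, 0.25, 0.25, 0.25]))
```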
However, as Gregorius and Gillet [6] pointed out, "Diversity-based methods of assessing evenness cannot provide information on unevenness, since measures of diversity generally do not produce characteristic values that are associated with states of complete unevenness." This limitation arises because diversity measures are primarily designed to capture internal distribution characteristics, such as concentration and relative abundance within the distribution. For example, the quantity $\sum_{i=1}^{n} [P(x_i)]^2$ is often called the "repeat rate" [7] or Simpson concentration [4]; it has historically been used as a measure of concentration [7]. Moreover, since diversity metrics are not constructed within a comparative distance framework, they inherently lack the ability to quantify deviations from uniformity in a meaningful or interpretable way. This limitation significantly diminishes their effectiveness when the goal is specifically to detect or describe high degrees of non-uniformity.
The aim of this study is to develop a new normalized, distance-based index that can effectively quantify the non-uniformity or unevenness of a probability distribution. In the following sections, Section 2 describes the proposed distribution non-uniformity index (DNUI). Section 3 presents several examples to compare the proposed DNUI with the Hellinger distance (HD), the total variation distance (TVD), Simpson's unevenness, and Buzas and Gibson's unevenness. Sections 4 and 5 provide the discussion and conclusions, respectively.

2. The Proposed Distribution Non-Uniformity Index (DNUI)

The mathematical formulation of the proposed distribution non-uniformity index (DNUI) differs for discrete and continuous random variables.

2.1. Discrete Cases

Consider a discrete random variable X with PMF $P(x)$ and n possible outcomes. Let $X_U$ denote the uniform distribution over the same possible outcomes, so that its PMF is $P_U(x) = 1/n$ for all x. We use this uniform distribution as the baseline for measuring the non-uniformity of the distribution of X.
The difference between the two PMFs $P(x)$ and $P_U(x)$ is given by
$$\Delta P(x) = P(x) - P_U(x) = P(x) - \frac{1}{n} \quad (8)$$
Thus, $P(x)$ can be written as
$$P(x) = \Delta P(x) + \frac{1}{n} \quad (9)$$
Squaring both sides of Equation (9) yields
$$[P(x)]^2 = [\Delta P(x)]^2 + \frac{2}{n}\Delta P(x) + \frac{1}{n^2} \quad (10)$$
Then, taking the expectation on both sides of Equation (10) yields
$$E[P(x)^2] = E[\Delta P(x)^2] + \frac{2}{n}E[\Delta P(x)] + \frac{1}{n^2} = \omega_{P(x)}^2 + \frac{1}{n^2} \quad (11)$$
In Equation (11), the second moment $E[P(x)^2]$ is expressed as the sum of the total variance $\omega_{P(x)}^2$ and the baseline term $1/n^2$, where $\omega_{P(x)}$ is called the total deviation, given by
$$\omega_{P(x)} = \sqrt{E[\Delta P(x)^2] + \frac{2}{n}E[\Delta P(x)]} \quad (12)$$
where $E[\Delta P(x)^2]$ is the variance of $P(x)$ relative to the baseline uniform distribution, given by
$$E[\Delta P(x)^2] = E\{[P(x) - P_U(x)]^2\} = \sum_{i=1}^{n} P(x_i)\left[P(x_i) - \frac{1}{n}\right]^2 \quad (13)$$
and $E[\Delta P(x)]$ is the bias of $P(x)$ relative to the baseline uniform distribution, given by
$$E[\Delta P(x)] = E[P(x) - P_U(x)] = \sum_{i=1}^{n} [P(x_i)]^2 - \frac{1}{n} = \beta_X - \frac{1}{n} \quad (14)$$
where $\beta_X = \sum_{i=1}^{n} [P(x_i)]^2$ is called the (discrete) informity of X in the theory of informity proposed by Huang [8]; it is the expectation of the PMF $P(x)$. The informity of the baseline uniform distribution $X_U$ is $\beta_{X_U} = E[P_U(x)] = 1/n$. Therefore, $E[\Delta P(x)]$ is the difference between the two discrete informities.
Definition 1. 
The proposed DNUI (denoted by $\rho(X)$) for the distribution of X is given by
$$\rho(X) = \frac{\omega_{P(x)}}{\sqrt{E[P(x)^2]}} = \sqrt{\frac{E[P(x)^2] - \frac{1}{n^2}}{E[P(x)^2]}} = \sqrt{\frac{E[\Delta P(x)^2] + \frac{2}{n}E[\Delta P(x)]}{E[\Delta P(x)^2] + \frac{2}{n}E[\Delta P(x)] + \frac{1}{n^2}}} \quad (15)$$
where $\sqrt{E[P(x)^2]}$ is the root mean square (RMS) of $P(x)$. The second moment $E[P(x)^2]$ can be calculated as
$$E[P(x)^2] = \sum_{i=1}^{n} P(x_i)[P(x_i)]^2 = \sum_{i=1}^{n} [P(x_i)]^3 \quad (16)$$
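A minimal Python sketch of Definition 1 (NumPy assumed; the function name dnui_discrete is ours, introduced for illustration):

```python
import numpy as np

def dnui_discrete(p):
    """DNUI of a discrete PMF p per Definition 1 (Equations (15) and (16))."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    m2 = np.sum(p ** 3)                   # second moment E[P(x)^2], Eq. (16)
    return np.sqrt((m2 - 1.0 / n ** 2) / m2)

print(dnui_discrete([0.25, 0.25, 0.25, 0.25]))  # 0.0 for a uniform PMF
print(dnui_discrete([1.0, 0.0, 0.0, 0.0]))      # sqrt(1 - 1/16) ~ 0.968
```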

2.2. Continuous Cases

Consider a continuous random variable Y with probability density function (PDF) $p(y)$ defined on an unbounded support, such as $(-\infty, \infty)$. Since there is no baseline uniform distribution defined over an unbounded support, we cannot measure the non-uniformity of the entire distribution. Instead, we examine parts of the distribution on a fixed interval $[y_1, y_2]$, which allows us to assess local non-uniformity.
According to Rajaram et al. [1], the PDF of the partial distribution on $[y_1, y_2]$, denoted $p_P(y)$, is given by renormalizing the original PDF:
$$p_P(y) = \frac{p(y)}{P(y_1, y_2)} \quad (17)$$
where $P(y_1, y_2) = \int_{y_1}^{y_2} p(y)\,dy$ is the coverage probability of the interval $[y_1, y_2]$.
Let $Y_U$ denote the uniform distribution on $[y_1, y_2]$ with PDF $p_U(y) = 1/(y_2 - y_1)$. We use this uniform distribution as the baseline for measuring the non-uniformity of the partial distribution.
Similar to the discrete case, the difference between the two PDFs $p_P(y)$ and $p_U(y)$ is given by
$$\Delta p(y) = p_P(y) - p_U(y) = p_P(y) - \frac{1}{y_2 - y_1} \quad (18)$$
Thus, $p_P(y)$ can be written as
$$p_P(y) = \Delta p(y) + \frac{1}{y_2 - y_1} \quad (19)$$
Squaring both sides of Equation (19) yields
$$[p_P(y)]^2 = [\Delta p(y)]^2 + \frac{2}{y_2 - y_1}\Delta p(y) + \frac{1}{(y_2 - y_1)^2} \quad (20)$$
Then, taking the expectation on both sides of Equation (20) yields
$$E[p_P(y)^2] = E[\Delta p(y)^2] + \frac{2}{y_2 - y_1}E[\Delta p(y)] + \frac{1}{(y_2 - y_1)^2} = \omega_{p(y)}^2 + \frac{1}{(y_2 - y_1)^2} \quad (21)$$
The total deviation $\omega_{p(y)}$ is given by
$$\omega_{p(y)} = \sqrt{E[\Delta p(y)^2] + \frac{2}{y_2 - y_1}E[\Delta p(y)]} \quad (22)$$
where $E[\Delta p(y)^2]$ is the variance of $p_P(y)$ relative to $p_U(y)$, given by
$$E[\Delta p(y)^2] = E\{[p_P(y) - p_U(y)]^2\} = \int_{y_1}^{y_2} \frac{p(y)}{P(y_1, y_2)}\left[\frac{p(y)}{P(y_1, y_2)} - \frac{1}{y_2 - y_1}\right]^2 dy \quad (23)$$
and $E[\Delta p(y)]$ is the bias of $p_P(y)$ relative to $p_U(y)$, given by
$$E[\Delta p(y)] = E[p_P(y) - p_U(y)] = \int_{y_1}^{y_2} \frac{p(y)}{P(y_1, y_2)}\left[\frac{p(y)}{P(y_1, y_2)} - \frac{1}{y_2 - y_1}\right] dy \quad (24)$$
Definition 2. 
The proposed DNUI for the partial distribution on $[y_1, y_2]$ (denoted by $\rho(y_1, y_2)$) is given by
$$\rho(y_1, y_2) = \frac{\omega_{p(y)}}{\sqrt{E[p_P(y)^2]}} = \sqrt{\frac{E[p_P(y)^2] - \frac{1}{(y_2 - y_1)^2}}{E[p_P(y)^2]}} = \sqrt{\frac{E[\Delta p(y)^2] + \frac{2}{y_2 - y_1}E[\Delta p(y)]}{E[\Delta p(y)^2] + \frac{2}{y_2 - y_1}E[\Delta p(y)] + \frac{1}{(y_2 - y_1)^2}}} \quad (25)$$
where $E[p_P(y)^2]$ is the second moment of $p_P(y)$, given by
$$E[p_P(y)^2] = \int_{y_1}^{y_2} p_P(y)[p_P(y)]^2\,dy = \int_{y_1}^{y_2} \left[\frac{p(y)}{P(y_1, y_2)}\right]^3 dy \quad (26)$$
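Definition 2 can be evaluated by numerical quadrature. A minimal sketch, assuming NumPy and SciPy are available (the function name dnui_partial is ours):

```python
import numpy as np
from scipy.integrate import quad

def dnui_partial(pdf, y1, y2):
    """DNUI of the partial distribution of `pdf` on [y1, y2] per Definition 2
    (Equations (25) and (26)), evaluated by numerical quadrature."""
    cov, _ = quad(pdf, y1, y2)                           # coverage probability P(y1, y2)
    m2, _ = quad(lambda y: (pdf(y) / cov) ** 3, y1, y2)  # E[p_P(y)^2], Eq. (26)
    return np.sqrt((m2 - 1.0 / (y2 - y1) ** 2) / m2)

# Example: a standard normal PDF restricted to [0, 2] gives a DNUI of about 0.64.
normal_pdf = lambda y: np.exp(-y ** 2 / 2) / np.sqrt(2 * np.pi)
print(dnui_partial(normal_pdf, 0.0, 2.0))
```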
Definition 3. 
If the continuous distribution is defined on the fixed support $[-a, a]$, then $P(-a, a) = 1$ and $(y_2 - y_1) = 2a$, and the proposed DNUI for the entire distribution of Y (denoted by $\rho(Y)$) is given by
$$\rho(Y) = \frac{\omega_{p(y)}}{\sqrt{E[p(y)^2]}} = \sqrt{\frac{E[p(y)^2] - \frac{1}{4a^2}}{E[p(y)^2]}} = \sqrt{\frac{E[\Delta p(y)^2] + \frac{1}{a}E[\Delta p(y)]}{E[\Delta p(y)^2] + \frac{1}{a}E[\Delta p(y)] + \frac{1}{4a^2}}} \quad (27)$$
where $E[p(y)^2]$ is the second moment of $p(y)$, given by
$$E[p(y)^2] = \int_{-a}^{a} [p(y)]^3\,dy \quad (28)$$
the variance $E[\Delta p(y)^2]$ is given by
$$E[\Delta p(y)^2] = \int_{-a}^{a} p(y)\left[p(y) - \frac{1}{2a}\right]^2 dy = \int_{-a}^{a} [p(y)]^3\,dy - \frac{1}{a}\int_{-a}^{a} [p(y)]^2\,dy + \frac{1}{4a^2} \quad (29)$$
and the bias $E[\Delta p(y)]$ is given by
$$E[\Delta p(y)] = E[p(y) - p_U(y)] = \int_{-a}^{a} p(y)\left[p(y) - \frac{1}{2a}\right] dy = \int_{-a}^{a} [p(y)]^2\,dy - \frac{1}{2a} \quad (30)$$
The quantity $\int_{-a}^{a} [p(y)]^2\,dy$ is denoted by $\beta_Y$ and is called the continuous informity of Y in the theory of informity [8]. The continuous informity of the baseline uniform distribution $Y_U$ is $\beta_{Y_U} = E[p_U(y)] = 1/(2a)$. Therefore, $E[\Delta p(y)]$ is the difference between the two continuous informities.
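For a distribution on the fixed support $[-a, a]$, the decomposition in Equations (28)-(30) can likewise be checked numerically. A sketch under the same NumPy/SciPy assumption (dnui_components is our illustrative helper name):

```python
import numpy as np
from scipy.integrate import quad

def dnui_components(pdf, a):
    """Variance, bias, second moment, and DNUI of a PDF on [-a, a]
    per Definition 3 (Equations (27)-(30))."""
    m2, _ = quad(lambda y: pdf(y) ** 3, -a, a)       # E[p(y)^2], Eq. (28)
    beta, _ = quad(lambda y: pdf(y) ** 2, -a, a)     # continuous informity beta_Y
    bias = beta - 1.0 / (2 * a)                      # E(dp(y)), Eq. (30)
    var = m2 - beta / a + 1.0 / (4 * a ** 2)         # E(dp(y)^2), Eq. (29)
    rho = np.sqrt((m2 - 1.0 / (4 * a ** 2)) / m2)    # Eq. (27)
    return var, bias, m2, rho

# Consistency check: var + bias/a + 1/(4a^2) equals the second moment E[p(y)^2].
var, bias, m2, rho = dnui_components(lambda y: 1.0 - abs(y), 1.0)  # triangular, a = 1
print(np.isclose(var + bias / 1.0 + 0.25, m2), rho)  # True 0.7071...
```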

3. Examples

3.1. Coin Tossing

Consider tossing a coin, the simplest two-state probability system: $\{X; P(x)\} = \{\text{head}, \text{tail}; P(\text{head}), P(\text{tail})\}$, where $P(\text{tail}) = 1 - P(\text{head})$. The DNUI for the distribution of X is given by
$$\rho(X) = \sqrt{\frac{E[P(x)^2] - \frac{1}{2^2}}{E[P(x)^2]}} \quad (31)$$
where the second moment $E[P(x)^2]$ can be calculated as
$$E[P(x)^2] = [P(\text{head})]^3 + [P(\text{tail})]^3 \quad (32)$$
Figure 1 shows the DNUI for the distribution of X as a function of the bias, represented by $P(\text{head})$. The HD, TVD, $(1 - E_{S2})$, and $(1 - E_{BG})$ are also shown in Figure 1 for comparison.
As shown in Figure 1, when the coin is fair (i.e., $P(\text{head}) = P(\text{tail}) = 0.5$), the DNUI, HD, TVD, $(1 - E_{S2})$, and $(1 - E_{BG})$ are all 0, indicating perfect uniformity or evenness. As the coin becomes increasingly biased toward either heads or tails, all indices increase. In the extreme case where $P(\text{tail}) = 1$ or $P(\text{head}) = 1$, the DNUI reaches a maximum value of $\rho(X) = 0.866$, reflecting a high degree of non-uniformity. However, the HD reaches a maximum value of only 0.541, and the TVD, $(1 - E_{S2})$, and $(1 - E_{BG})$ reach a maximum value of only 0.5, significantly smaller than 1, indicating that these indices fail to capture the high degree of non-uniformity.
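The endpoint values reported above can be reproduced directly from Equations (31) and (32); a minimal sketch (NumPy assumed):

```python
import numpy as np

# DNUI of a coin with P(head) = q, per Equations (31) and (32).
for q in (0.5, 0.75, 1.0):
    m2 = q ** 3 + (1 - q) ** 3               # E[P(x)^2], Eq. (32)
    print(q, np.sqrt((m2 - 0.25) / m2))      # 0.0, 0.655, 0.866 (= sqrt(3)/2)
```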

3.2. Three Frequency Data Series

JJC [9] posted a question on Cross Validated about quantifying distribution non-uniformity. He supplied three frequency datasets (Series A, B, and C), each containing 10 values (Table 1). Visually, Series A is almost perfectly uniform, Series B is nearly uniform, and Series C is heavily skewed by a single outlier (0.6). Table 1 lists these datasets alongside the corresponding DNUI, HD, TVD, ( 1 E S 2 ) , and ( 1 E B G ) values.
From Table 1, we can see that the DNUI value for Series A is 0.1864, confirming its high uniformity, while the DNUI value for Series B is 0.2499, indicating near-uniformity. In contrast, the DNUI value for Series C is 0.9767 (close to 1), signaling extreme non-uniformity. These results align well with intuitive expectations. The HD, TVD, $(1 - E_{S2})$, and $(1 - E_{BG})$ values range from 0.0060 to 0.04 for Series A and from 0.0109 to 0.06 for Series B, which may be considered to reflect the uniformity of these two series fairly well. However, for Series C these values range only from 0.4121 to 0.7375, which is too low to adequately reflect the severity of the non-uniformity.
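The DNUI column of Table 1 can be reproduced directly from Equation (16) and Definition 1, treating each frequency series as a PMF; a minimal sketch (NumPy assumed):

```python
import numpy as np

series = {
    "A": [0.1, 0.11, 0.1, 0.09, 0.09, 0.11, 0.1, 0.1, 0.12, 0.08],
    "B": [0.1, 0.1, 0.1, 0.08, 0.12, 0.12, 0.09, 0.09, 0.12, 0.08],
    "C": [0.03, 0.02, 0.6, 0.02, 0.03, 0.07, 0.06, 0.05, 0.05, 0.07],
}
for name, p in series.items():
    p = np.asarray(p)
    m2 = np.sum(p ** 3)                                  # E[P(x)^2], Eq. (16)
    print(name, np.sqrt((m2 - 1 / len(p) ** 2) / m2))    # 0.1864, 0.2499, 0.9767
```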

3.3. Five Continuous Distributions with Fixed Support $[-a, a]$

Consider five continuous distributions with fixed support $[-a, a]$: uniform, triangular, quadratic, raised cosine, and half-cosine. Table 2 summarizes their PDFs, variances, biases, second moments, and DNUIs.
As shown in Table 2, the DNUI is independent of the scale parameter a, which is a desirable property for a measure of distribution non-uniformity. By definition, the DNUI for the uniform distribution is 0. In contrast, the DNUI values for the other four distributions range from 0.5932 to 0.7746, indicating moderate to high non-uniformity. These results align well with intuitive expectations. Notably, the raised cosine distribution has the highest DNUI value among the five distributions, suggesting it exhibits the greatest non-uniformity.
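The DNUI column of Table 2 can be checked by numerical quadrature of Equations (27) and (28). A minimal sketch (NumPy and SciPy assumed; the dictionary layout is illustrative):

```python
import numpy as np
from scipy.integrate import quad

a = 1.0  # the DNUI is independent of the scale parameter a

pdfs = {
    "triangular":    lambda y: (a - abs(y)) / a ** 2,
    "quadratic":     lambda y: 3 / (4 * a) * (1 - (y / a) ** 2),
    "raised cosine": lambda y: (1 + np.cos(np.pi * y / a)) / (2 * a),
    "half-cosine":   lambda y: np.pi / (4 * a) * np.cos(np.pi * y / (2 * a)),
}
for name, pdf in pdfs.items():
    m2, _ = quad(lambda y: pdf(y) ** 3, -a, a)           # E[p(y)^2], Eq. (28)
    print(name, np.sqrt((m2 - 1 / (4 * a ** 2)) / m2))   # 0.7071, 0.5932, 0.7746, 0.6262
```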

3.4. Exponential Distribution

The PDF of the exponential distribution with support $[0, \infty)$ is
$$p(y) = \lambda e^{-\lambda y} \quad (33)$$
where $\lambda$ is the rate parameter.
We consider a partial exponential distribution on the interval $[0, b]$ (i.e., $y_1 = 0$ and $y_2 = b$), where b is the length of the interval. Thus, the DNUI for the partial exponential distribution is given by
$$\rho(0, b) = \sqrt{\frac{E[p_P(y)^2] - \frac{1}{b^2}}{E[p_P(y)^2]}} \quad (34)$$
where the second moment $E[p_P(y)^2]$ is given by
$$E[p_P(y)^2] = \frac{1}{[P(0, b)]^3}\int_0^b [p(y)]^3\,dy \quad (35)$$
The coverage probability of the interval $[0, b]$ is given by
$$P(0, b) = \int_0^b \lambda\exp(-\lambda y)\,dy = 1 - e^{-\lambda b} \quad (36)$$
The integral $\int_0^b [p(y)]^3\,dy$ can be solved as
$$\int_0^b [p(y)]^3\,dy = \int_0^b [\lambda e^{-\lambda y}]^3\,dy = \lambda^3\int_0^b e^{-3\lambda y}\,dy = \frac{\lambda^2}{3}\left(1 - e^{-3\lambda b}\right) \quad (37)$$
Figure 2 shows the plot of the DNUI for the partial exponential distribution with λ = 1 as a function of the interval length b. It also shows the PDF of the original exponential distribution, Equation (33) with λ = 1 , as a function of y.
As shown in Figure 2, when the interval length b is very small (approaching 0), the DNUI is close to 0, reflecting the high local uniformity within small intervals. As the interval length b increases, the DNUI also increases, indicating the growing local non-uniformity with larger intervals. When the interval length b becomes very large, the DNUI approaches 1, indicating that the distribution over a large interval is extremely non-uniform. These observations align well with intuitive expectations.
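The curve in Figure 2 follows from substituting Equations (35)-(37) into Equation (34). A minimal sketch with $\lambda = 1$ (NumPy assumed; dnui_exponential is our illustrative name):

```python
import numpy as np

def dnui_exponential(b, lam=1.0):
    """DNUI of the partial exponential distribution on [0, b],
    per Equations (34)-(37)."""
    cov = 1.0 - np.exp(-lam * b)                                   # Eq. (36)
    m2 = (lam ** 2 / 3) * (1 - np.exp(-3 * lam * b)) / cov ** 3    # Eqs. (35) and (37)
    return np.sqrt((m2 - 1.0 / b ** 2) / m2)

for b in (0.1, 1.0, 5.0, 20.0):
    print(b, dnui_exponential(b))   # rises from about 0.05 toward 1 as b grows
```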

4. Discussion

4.1. Axioms for an Effective Non-Uniformity Index

It is important to note that non-uniformity indices require an axiomatic foundation to ensure their validity and meaningful interpretation. This foundation should be built upon a set of axioms that any acceptable non-uniformity index should satisfy. We propose the following four axioms for an effective non-uniformity index:
  • Normalization: The index should range between 0 and 1 (or approximately), with 0 indicating perfect uniformity and 1 (or near 1) indicating extreme non-uniformity.
  • Sensitivity to Deviations: The index should be sensitive to any deviations from a baseline uniform distribution, producing a value that reflects the extent of non-uniformity.
  • Consistency and Comparability: The index should yield consistent results when applied to similar distributions and enable comparisons across different distributions.
  • Intuitive Interpretation: The index should be easy to understand and interpret, providing a clear indication of how close a distribution is to perfect uniformity.
Of the eight non-uniformity measures evaluated in this paper, the DOU, KL divergence, and $\chi^2$ divergence fail to meet Axiom 1 (normalization), as noted in the Introduction. The Hellinger distance (HD), total variation distance (TVD), Simpson's unevenness $(1 - E_{S2})$, and Buzas and Gibson's unevenness $(1 - E_{BG})$ do not satisfy Axiom 2 (sensitivity to deviations), as demonstrated in Examples 3.1 and 3.2. Only the proposed DNUI satisfies all four axioms, making it a robust and effective measure.

4.2. Normalization, Benchmarks for Defining Non-Uniformity Levels, and Invariance to Probability Permutations

The definition of the proposed DNUI is both mathematically sound and intuitively interpretable. It is a normalized, distance-based metric derived from the total deviation defined in Equations (11) and (21). Importantly, this total deviation incorporates two components, namely, variance and bias, both measured relative to the baseline uniform distribution. The normalization by the second moment of the PMF or PDF provides a natural and robust scaling factor, which ensures that the DNUI consistently reflects deviations from uniformity across diverse distributions while maintaining a normalized range of [0, 1], as demonstrated in the presented examples.
The proposed DNUI ranges between 0 and 1, with 0 indicating perfect uniformity and 1 indicating extreme non-uniformity. Lower DNUI values (near 0) suggest a more uniform or flatter distribution, while higher values (near 1) suggest a greater degree of non-uniformity or unevenness. Since there are no universally accepted benchmarks for defining levels of non-uniformity, we tentatively propose DNUI values of 0.25, 0.5, and 0.75 to represent low, moderate, and high non-uniformity, respectively. These thresholds follow from the DNUI's normalized [0, 1] range, dividing it approximately into quartiles, and align with the empirical DNUI values observed in Examples 3.1 and 3.2, where values near 0.25 indicate minor deviations, values near 0.5 indicate moderate deviations, and values of 0.75 or higher indicate significant deviations from uniformity.
Note that the DNUI (similar to other indices) depends solely on the probability values and not on the associated outcomes (or scores) or their order. This property can be illustrated using the frequency data from Series C in Section 3.2: {0.03, 0.02, 0.6, 0.02, 0.03, 0.07, 0.06, 0.05, 0.05, 0.07}. If, for example, the second and third values are swapped, the DNUI value remains unchanged. Therefore, the DNUI is not a one-to-one function of the distribution; it can “collapse” different distributions into the same value. This property is analogous to how different distributions can share the same mean or variance. The invariance of the DNUI to probability permutations implies that it may not distinguish distributions with identical probability sets but different arrangements, suggesting that in applications like clustering or anomaly detection, the DNUI should be complemented with order-sensitive metrics when structural differences are critical.

4.3. Upper Bounds of Non-Uniformity Indices in the Discrete Case

In the discrete case, when X follows a uniform distribution, the DNUI, HD, TVD, $(1 - E_{S2})$, and $(1 - E_{BG})$ are all 0 regardless of the number of possible outcomes. However, in the extreme case, where all outcomes have probability 0 except one with probability 1, the upper bound of these indices depends on the number of possible outcomes. The upper bound of the DNUI is given by
$$\rho(X)_{\mathrm{upper\ bound}} = \sqrt{1 - \frac{1}{n^2}} \quad (38)$$
The upper bound of the HD is given by
$$\mathrm{HD}_{\mathrm{upper\ bound}} = \sqrt{\frac{1}{2}\sum_{i=1}^{n-1}\frac{1}{n} + \frac{1}{2}\left(1 - \frac{1}{\sqrt{n}}\right)^2} = \sqrt{\frac{n-1}{2n} + \frac{1}{2}\left(1 - \frac{1}{\sqrt{n}}\right)^2} \quad (39)$$
The upper bound of the TVD is given by
$$\mathrm{TVD}_{\mathrm{upper\ bound}} = \frac{1}{2}\sum_{i=1}^{n-1}\frac{1}{n} + \frac{1}{2}\left(1 - \frac{1}{n}\right) = 1 - \frac{1}{n} \quad (40)$$
The upper bound of $(1 - E_{S2})$ is given by
$$(1 - E_{S2})_{\mathrm{upper\ bound}} = 1 - \frac{1}{n} \quad (41)$$
The upper bound of $(1 - E_{BG})$ is given by
$$(1 - E_{BG})_{\mathrm{upper\ bound}} = 1 - \frac{1}{n} \quad (42)$$
Note that the TVD, $(1 - E_{S2})$, and $(1 - E_{BG})$ have the same upper bound. Figure 3 shows plots of the upper bounds of the five indices as functions of the number of possible outcomes. Among the five indices, the DNUI has the largest upper bound at n = 2 (where every index's upper bound attains its minimum), and its bound increases rapidly to 1 as n increases. In contrast, the other indices have very low upper bounds at n = 2, no greater than 0.541, which increase only slowly to 1 as n increases. Intuitively, this extreme case represents a very high degree of non-uniformity and should be assigned an index value of 1 or close to 1. Therefore, the DNUI performs best among the five non-uniformity indices.
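The curves in Figure 3 follow directly from Equations (38)-(42); a minimal sketch (NumPy assumed):

```python
import numpy as np

# Upper bounds of the five indices (all mass on one outcome), Eqs. (38)-(42).
for n in (2, 5, 10, 100):
    dnui = np.sqrt(1 - 1 / n ** 2)
    hd = np.sqrt((n - 1) / (2 * n) + 0.5 * (1 - 1 / np.sqrt(n)) ** 2)
    tvd = 1 - 1 / n   # shared by the TVD, (1 - E_S2), and (1 - E_BG)
    print(n, dnui, hd, tvd)
# At n = 2: DNUI = 0.866, HD = 0.541, TVD = 0.5 -- only the DNUI is near 1.
```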

5. Conclusions

Four axioms for an effective non-uniformity index are proposed: normalization, sensitivity to deviations, consistency and comparability, and intuitive interpretation. Among the eight non-uniformity measures evaluated in this paper, the degree of uniformity (DOU), KL divergence, and $\chi^2$ divergence fail to satisfy Axiom 1 (normalization). The Hellinger distance (HD), total variation distance (TVD), Simpson's unevenness $(1 - E_{S2})$, and Buzas and Gibson's unevenness $(1 - E_{BG})$ do not satisfy Axiom 2 (sensitivity to deviations). Only the proposed DNUI satisfies all four axioms.
The proposed DNUI provides an effective metric for quantifying the non-uniformity or unevenness of probability distributions. It is applicable to any distribution, discrete or continuous, defined on a fixed support. It can also be applied to partial distributions on fixed intervals to examine local non-uniformity, even when the overall distribution has unbounded support. The presented examples have demonstrated the effectiveness of the proposed DNUI in capturing and quantifying distribution non-uniformity.
It is important to emphasize that the DNUI, as a normalized and axiomatically grounded measure of non-uniformity, could be applied in fields such as ecological modeling, information theory, and machine learning. For example, the DNUI's sensitivity to deviations and intuitive interpretation could support its use as an alternative to diversity-based evenness measures in evenness/unevenness analysis in ecology. The scope of application of the DNUI needs to be further studied and expanded.

Funding

This research received no external funding.

Data Availability Statement

The data are contained within this article.

Acknowledgments

The author would like to thank three anonymous reviewers for their valuable comments that helped to improve the quality of this article.

Conflicts of Interest

Author Hening Huang was employed by the company Teledyne RD Instruments and retired in February 2022. The author declares that this study received no funding from the company. The company was not involved in the study design; the collection, analysis, or interpretation of data; the writing of this article; or the decision to submit it for publication.

References

  1. Rajaram, R.; Ritchey, N.; Castellani, B. On the mathematical quantification of inequality in probability distributions. J. Phys. Commun. 2024, 8, 085002.
  2. Rajaram, R.; Ritchey, N.; Castellani, B. On the degree of uniformity measure for probability distributions. J. Phys. Commun. 2024, 8, 115003.
  3. Roy, S.; Bhattacharya, K.R. A theoretical study to introduce an index of biodiversity and its corresponding index of evenness based on mean deviation. World J. Adv. Res. Rev. 2024, 21, 22–32.
  4. Jost, L. Entropy and diversity. Oikos 2006, 113, 363–375.
  5. Buzas, M.A.; Gibson, T.G. Species diversity: Benthonic foraminifera in western North Atlantic. Science 1969, 163, 72–75.
  6. Gregorius, H.R.; Gillet, E.M. The concept of evenness/unevenness: Less evenness or more unevenness? Acta Biotheor. 2021, 70, 3.
  7. Rousseau, R. The repeat rate: From Hirschman to Stirling. Scientometrics 2018, 116, 645–653.
  8. Huang, H. The theory of informity: A novel probability framework. Bull. Taras Shevchenko Natl. Univ. Kyiv Phys. Math. 2025, 80, 53–59.
  9. JJC. How does one measure the non-uniformity of a distribution? Available online: https://stats.stackexchange.com/q/25827 (accessed on 20 March 2025).
Figure 1. The DNUI for the distribution of X as a function of the bias represented by the probability of heads, compared with the Hellinger distance (HD), the total variation distance (TVD), Simpson's unevenness $(1 - E_{S2})$, and Buzas and Gibson's unevenness $(1 - E_{BG})$.
Figure 2. Plots of the DNUI for the partial exponential distribution with λ = 1 and the PDF of the original exponential distribution.
Figure 3. Plots of the upper bounds of the five indices as functions of the number of possible outcomes.
Table 1. Three frequency data series and the corresponding DNUI, HD, TVD, $(1 - E_{S2})$, and $(1 - E_{BG})$ values.
| Series | DNUI $\rho(X)$ | HD | TVD | $1 - E_{S2}$ | $1 - E_{BG}$ |
|---|---|---|---|---|---|
| A: {0.1, 0.11, 0.1, 0.09, 0.09, 0.11, 0.1, 0.1, 0.12, 0.08} | 0.1864 | 0.0389 | 0.04 | 0.0119 | 0.0060 |
| B: {0.1, 0.1, 0.1, 0.08, 0.12, 0.12, 0.09, 0.09, 0.12, 0.08} | 0.2499 | 0.0524 | 0.06 | 0.0215 | 0.0109 |
| C: {0.03, 0.02, 0.6, 0.02, 0.03, 0.07, 0.06, 0.05, 0.05, 0.07} | 0.9767 | 0.4121 | 0.5 | 0.7375 | 0.5455 |
Table 2. The PDF $p(y)$, variance $E[\Delta p(y)^2]$, bias $E[\Delta p(y)]$, second moment $E[p(y)^2]$, and DNUI $\rho(Y)$ for five continuous distributions with fixed support $[-a, a]$.
| Distribution | $p(y)$ | $E[\Delta p(y)^2]$ | $E[\Delta p(y)]$ | $E[p(y)^2]$ | $\rho(Y)$ |
|---|---|---|---|---|---|
| Uniform | $\frac{1}{2a}$ | 0 | 0 | $\frac{1}{4a^2}$ | 0 |
| Triangular | $\frac{y+a}{a^2}$ for $-a \le y \le 0$; $\frac{a-y}{a^2}$ for $0 \le y \le a$ | $\frac{1}{12a^2}$ | $\frac{1}{6a}$ | $\frac{1}{2a^2}$ | 0.7071 |
| Quadratic | $\frac{3}{4a}\left[1 - \left(\frac{y}{a}\right)^2\right]$ | $\frac{1}{28a^2}$ | $\frac{1}{10a}$ | $\frac{27}{70a^2}$ | 0.5932 |
| Raised cosine | $\frac{1}{2a}\left[1 + \cos\left(\frac{\pi}{a}y\right)\right]$ | $\frac{1}{8a^2}$ | $\frac{1}{4a}$ | $\frac{5}{8a^2}$ | 0.7746 |
| Half-cosine | $\frac{\pi}{4a}\cos\left(\frac{\pi}{2a}y\right)$ | $\left(\frac{1}{4} - \frac{\pi^2}{48}\right)\frac{1}{a^2}$ | $\left(\frac{\pi^2}{16} - \frac{1}{2}\right)\frac{1}{a}$ | $\frac{\pi^2}{24a^2}$ | 0.6262 |
